Measuring and imaging nanomechanical motion with laser light
Part of a collection:
"Enlightening the World with the Laser" - Honoring T. W. Hänsch
Andreas Barg1,
Yeghishe Tsaturyan1,
Erik Belhage1,
William H. P. Nielsen1,
Christoffer B. Møller1 &
Albert Schliesser ORCID: orcid.org/0000-0003-4317-50301
Applied Physics B volume 123, Article number: 8 (2017) Cite this article
An Erratum to this article was published on 11 April 2017
We discuss several techniques based on laser-driven interferometers and cavities to measure nanomechanical motion. With increasing complexity, they achieve sensitivities reaching from thermal displacement amplitudes, typically at the picometer scale, all the way to the quantum regime, in which radiation pressure induces motion correlated with the quantum fluctuations of the probing light. We show that an imaging modality is readily provided by scanning laser interferometry, reaching a sensitivity on the order of \(10\, {\mathrm {fm/Hz^{1/2}}}\), and a transverse resolution down to \(2\,\upmu {\hbox {m}}\). We compare this approach with a less versatile, but faster (single-shot) dark-field imaging technique.
Lasers are indispensable tools in science and technology today. They heal eyes, power the Internet, and print objects in 3D. They have also revolutionized atomic physics: Techniques such as laser cooling and optical frequency metrology have enabled the creation of new states of matter, precision tests of fundamental physical laws, and the construction of clocks more accurate than ever before. The lasers' key feature—high spatial and temporal coherence of the emitted light—is a unique asset, too, for the measurement of distance and motion. The laser interferometric gravitational wave observatory (LIGO) has provided the most recent, spectacular demonstration of this fact, with the direct detection of gravity waves [1].
While LIGO is concerned with the apparent displacement of kg-scale test masses, laser-based techniques are also an excellent choice to track the motion of micro- and nanoscale objects. Indeed, lasers have been used to measure a microcantilever's motion induced by the magnetic force of a single electron spin [2], providing only one example of the force and mass sensing capabilities of laser-transduced mechanical devices. The interaction of laser light and nanomechanical motion, which lies at the heart of any such measurement scheme, has, itself, moved to the center of attention recently. Research in the field of cavity optomechanics [3] explores the fundamental mechanisms—governed by the laws of quantum mechanics, of course—and the limitations and opportunities for mechanical measurements that they imply. Without even making an attempt at a comprehensive review of the vast activity in this field, we illustrate recent progress through a selection of our own results below.
For this research, it is often crucial to understand not only the spectral properties of the mechanical resonators, such as their eigenmodes' frequency and lifetime, but also the modes' spatial displacement patterns, as these determine the effective mass \(m_{\mathrm {eff}}\), and therefore the optomechanical interaction strength. The pattern can also strongly affect the modes' coherence properties. Both are particularly important for the development of new resonator systems. For example, the full knowledge of the mode shape has allowed us to design resonators with a "soft" phononic crystal clamping that enables unprecedented room-temperature quality factors \(Q>10^8\) at \({\mathrm {MHz}}\) frequencies [4]. While finite-element simulations of mechanical modes become ever more powerful and accurate, they often miss fabrication imperfections and substrate effects that can lead to broken symmetries or mode hybridization, among others. For this reason, we have developed several laser-based imaging techniques of micro- and nanomechanical devices. In this article, we provide a description of these highly useful tools.
Probing mechanical motion \(\delta x(t)\) by laser interferometry. a Simple two-path interferometer, involving reflection off the mechanical device (top). The thermal motion (blue trace) of a high-Q membrane is readily resolved above the measurement imprecision background (gray). b Cavity-enhanced measurement, here of a radial-breathing mode of an optical whispering-gallery-mode resonator (top). Thermal motion (red trace) is far above the imprecision background (gray), which is itself below the resonant standard quantum limit (SQL) for this mechanical mode (from [10]). c Cavity-based measurements of highly coherent mechanical resonators, here a high-Q silicon nitride membrane placed inside a Fabry–Perot resonator (top). Quantum backaction starts to dominate over the thermal motion of the device, inducing correlations that lead to squeezing of the output light (violet trace) below the vacuum noise (gray), among others (from [20]). d Comparison of the relative levels of measurement imprecision, backaction, and thermomechanical noise in the measurement regimes depicted in the examples (a)–(c)
Laser interferometry and spectroscopy
A simple two-path interferometer (Fig. 1a) constitutes the most straightforward approach to measuring mechanical displacements. One arm's path involves the reflection off the mechanical device's surface, so that its motion modulates the path length difference between the two arms. If the interferometer is biased to the optimum point, it can detect displacement (double-sided) spectral densities \(S_{xx}\) down to a level of [5]
$$\begin{aligned} S_{xx}^{1/2}=\frac{\lambda }{2\pi }\frac{1}{ \sqrt{\eta _{\mathrm {d}} P/\hbar \omega }}. \end{aligned}$$
It is limited by the quantum phase uncertainty of the coherent state that the laser emits, referred to as the measurement imprecision. Here, \(\lambda\), \(\omega\), and P are the wavelength, angular frequency, and power of the employed laser light, respectively. \(\eta _{\mathrm {d}}\) is the detection efficiency, which also absorbs penalties in the sensitivity due to optical losses, insufficient interference contrast, etc. Equation (1) implies that within a bandwidth \({\mathrm {BW}}\), the smallest displacements that can be recovered with unity signal-to-noise ratio are given by \(\delta x_{\mathrm {min}}/\sqrt{{\mathrm {BW}} }=\sqrt{S_{xx}}\).
Our instrument (detailed below) employs a near-infrared laser and mW-scale probing powers and typically achieves a \(S_{xx}^{1/2}\sim 10\,{\mathrm {fm/\sqrt{Hz}}}\) displacement sensitivity, consistent with Eq. (1). This compares favorably with the picometer-scale thermal root-mean-square (RMS) displacement \(\delta x_{\mathrm {th}}=\sqrt{{k_{\mathrm {B}} T}/{m_{\mathrm {eff}} \varOmega _{\mathrm {m}}^2}}\) of the mechanical resonators we employ [4, 6], with nanogram mass \(m_{\mathrm {eff}}\) and MHz frequency \(\varOmega _{\mathrm {m}}/2\pi\) at room temperature T. In the Fourier domain, the spectral density of the thermal motion is spread over the mechanical linewidth \(\varGamma _{\mathrm {m}}=\varOmega _{\mathrm {m}}/Q\). Correspondingly, a nearly four-order-of-magnitude signal-to-noise ratio \(S_{xx}^{\mathrm {th}}(\varOmega _m)/S_{xx}\) between the peak thermal displacement spectral density \(S_{xx}^{\mathrm {th}}(\varOmega _m)=\frac{\delta x_{\mathrm {th}}^2}{{\varGamma _{\mathrm {m}}}/2}\) and the noise background \(S_{xx}\) can be reached already with quality factors in the millions. An example for such a measurement is shown in Fig. 1a.
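To make these orders of magnitude concrete, the following short Python sketch evaluates Eq. (1) together with the thermal RMS displacement and peak thermal spectral density; the parameter values (1 mW of probe power, a 34 ng, 1 MHz, \(Q=10^6\) resonator at room temperature) are illustrative assumptions, not the exact settings of the measurement in Fig. 1a.

```python
# Sketch of Eq. (1) and of the thermal motion it is compared against.
# All parameter values below are illustrative assumptions.
import numpy as np

hbar, kB, c = 1.054571817e-34, 1.380649e-23, 2.998e8
lam, P, eta_d = 1064e-9, 1e-3, 0.8         # wavelength [m], power [W], detection efficiency
omega = 2*np.pi*c/lam                      # optical angular frequency [rad/s]

# Eq. (1): shot-noise-limited imprecision of the two-path interferometer
Sxx_imp = (lam/(2*np.pi))/np.sqrt(eta_d*P/(hbar*omega))        # [m/sqrt(Hz)]

# Thermal RMS displacement and peak thermal spectral density of the resonator
m_eff, Omega_m, Q, T = 34e-12, 2*np.pi*1e6, 1e6, 300.0         # kg, rad/s, -, K
Gamma_m = Omega_m/Q
dx_th   = np.sqrt(kB*T/(m_eff*Omega_m**2))                     # [m]
Sxx_th  = dx_th**2/(Gamma_m/2)                                 # peak value [m^2/Hz]

print(f"imprecision : {Sxx_imp*1e15:.1f} fm/sqrt(Hz)")
print(f"thermal RMS : {dx_th*1e12:.2f} pm")
print(f"peak SNR    : {10*np.log10(Sxx_th/Sxx_imp**2):.0f} dB")
```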
This sensitivity is insufficient, however, for the detection of displacements at the level of the mechanical RMS zero-point fluctuations \(\delta x_{\mathrm {zpf}}=\sqrt{{\hbar }/{2 m_{\mathrm {eff}} \varOmega _{\mathrm {m}}}}\), which are at the femtometer level for the parameters discussed above. An optical cavity is needed to enhance the interaction between light and motion, recycling the light for a number of roundtrips that is commensurate with the finesse \(\mathcal {F}\) of the cavity. The phase shift of the light emerging from the cavity is multiplied correspondingly, allowing more sensitive detection with the same amount of laser light. In the simplest case of resonant probing (\(\omega =\omega _{\mathrm {c}}\), the cavity resonance frequency), the quantum imprecision noise is equivalent to displacement spectral densities of [7]
$$\begin{aligned} S_{xx}^{1/2}(\varOmega )=\frac{\lambda }{16 \eta _{\mathrm {c}} \mathcal {F}}\frac{1}{ \sqrt{\eta _{\mathrm {d}} P/\hbar \omega }}\sqrt{1+\left( \frac{\varOmega }{\kappa /2}\right) ^2}, \end{aligned}$$
for a Fabry–Perot resonator with a moving end mirror (in the case of a whispering-gallery-mode resonator whose radius is measured, \(\lambda \rightarrow \lambda /\pi\)). Note that the sensitivity now acquires a dependence on the Fourier frequency \(\varOmega\), here a simple cutoff behavior for frequencies larger than the cavity half linewidth \(\kappa /2\), as well as the degree of cavity overcoupling \(\eta _{\mathrm {c}}\).
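A minimal sketch of Eq. (2) is given below, evaluating the cavity-enhanced imprecision as a function of Fourier frequency; the finesse, linewidth, coupling, and power are assumptions chosen for illustration and are not the parameters of the device in Fig. 1b.

```python
# Sketch of Eq. (2): cavity-enhanced imprecision vs Fourier frequency.
# Finesse, linewidth, coupling, and power are illustrative assumptions.
import numpy as np

hbar, c = 1.054571817e-34, 2.998e8
lam, P  = 1064e-9, 1e-3
omega   = 2*np.pi*c/lam
F, eta_c, eta_d = 4e4, 1.0, 1.0
kappa   = 2*np.pi*10e6                                    # cavity linewidth [rad/s]

Omega = 2*np.pi*np.linspace(0.1, 100, 1000)*1e6           # Fourier frequency [rad/s]
Sxx_cav = (lam/(16*eta_c*F))/np.sqrt(eta_d*P/(hbar*omega)) \
          * np.sqrt(1 + (Omega/(kappa/2))**2)             # [m/sqrt(Hz)]

print(f"imprecision well below the cavity cutoff: {Sxx_cav[0]*1e18:.3f} am/sqrt(Hz)")
```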
Figure 1b shows an example of such a measurement, in this case performed on the radial-breathing mode of a silica whispering-gallery-mode resonator [7], with the help of a polarization spectroscopy technique [8]. It resolves not only thermal motion with a large signal-to-noise ratio (here, about \(58\,{\mathrm {dB}}\)), but also achieves an imprecision noise below that at the resonant standard quantum limit (SQL), \(S_{xx}^{\mathrm {SQL}}(\varOmega _{\mathrm {m}})=\frac{\delta x_{\mathrm {zpf}}^2}{{\varGamma _{\mathrm {m}}}/2}\). Note that this coincides with the peak spectral density of ground-state fluctuations [9], for this device with \(\varOmega _{\mathrm {m}}/2\pi =40{.}6\,{\mathrm {MHz}}\), \(\varGamma _{\mathrm {m}}=1{.}3\,{\mathrm {kHz}}\) and \(m_{\mathrm {eff}}=10\,{\mathrm {ng}}\) at the level of \(S_{xx}^{\mathrm {SQL}}(\varOmega _{\mathrm {m}})=(2{.}2\,{\mathrm {am}})^2/{\mathrm {Hz}}\) [10].
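The quoted value of the resonant SQL can be checked directly from the device parameters given above; interpreting the quoted linewidth as \(\varGamma _{\mathrm {m}}/2\pi =1.3\,{\mathrm {kHz}}\) is an assumption of this worked check.

```python
# Worked check of the resonant SQL for the whispering-gallery-mode device:
# Omega_m/2pi = 40.6 MHz, Gamma_m/2pi = 1.3 kHz (assumed), m_eff = 10 ng.
import numpy as np

hbar    = 1.054571817e-34
m_eff   = 10e-12                                  # 10 ng in kg
Omega_m = 2*np.pi*40.6e6
Gamma_m = 2*np.pi*1.3e3

x_zpf   = np.sqrt(hbar/(2*m_eff*Omega_m))         # zero-point amplitude [m]
Sxx_SQL = x_zpf**2/(Gamma_m/2)                    # peak SQL spectral density [m^2/Hz]

print(f"x_zpf = {x_zpf*1e18:.0f} am, sqrt(S_SQL) = {np.sqrt(Sxx_SQL)*1e18:.1f} am/sqrt(Hz)")
```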
Cavity-enhanced laser interferometry has also been applied to nanomechanical resonators all the way down to the molecular scale. For example, it was shown that a fiber-based optical microcavity can resolve the thermal motion of carbon nanotubes [11]. Another successful sensing scheme consists in introducing nanomechanical resonators in the near field of optical whispering-gallery-mode resonators. It achieves imprecision well below that at the SQL of stressed silicon nitride nanostrings with picogram masses and \(Q\sim 10^6\) [12–14]. It is also expected that optical cavities suppress diffraction losses through preferential scattering into the cavity mode.
To track or steer coherent dynamics of mechanical resonators at the level of their vacuum fluctuations, yet higher sensitivities are required [14]. In particular, it is necessary to resolve the ground state—which entails averaging for a time \(4S_{xx}/x_{\mathrm {zpf}}^2\)—before it decoheres, e.g., by heating. The latter happens at a rate \(n_{\mathrm {th}} \varGamma _{\mathrm {m}}\), where \(n_{\mathrm {th}}=k_{\mathrm {B}} T/\hbar \varOmega _{\mathrm {m}}\gg 1\) is the mean occupation of the dominant thermal bath at temperature T. It follows from Eq. (2) that a resolution at the level of the zero-point-fluctuations is acquired at the measurement rate [9] \(\varGamma _{\mathrm {opt}}=4g^2/\kappa\), where \(g=x_{\mathrm {zpf}} (\partial \omega _{\mathrm {c}}/\partial x) a\), and \(|a|^2\) the number of photons in the cavity (assuming \(\eta _{\mathrm {c}}\eta _{\mathrm {d}}=1\), \(\varOmega \ll \kappa\)). The above-mentioned requirement can then be written as \(\varGamma _{\mathrm {opt}}\gtrsim n_{\mathrm {th}}\varGamma _{\mathrm {m}}\).
Interestingly, a completely new effect becomes relevant in this regime as well: the quantum fluctuations of radiation pressure linked to the quantum amplitude fluctuations of the laser light, representing the quantum backaction of this measurement [15]. And indeed the ratio of radiation pressure to thermal Langevin force fluctuations is given by \(\frac{S_{FF}^{\mathrm {qba}}(\varOmega _{\mathrm {m}})}{S_{FF}^{\mathrm {th}}(\varOmega _{\mathrm {m}})}=\frac{\varGamma _{\mathrm {opt}}}{n_{\mathrm {th}}\varGamma _{\mathrm {m}}}\). While these force fluctuations induce random mechanical motion that can mask a signal to be measured, it is important to realize that motion and light become correlated, at the quantum level, via this mechanism. As a consequence, the mere interaction of cavity light with a nanomechanical device can induce optical phase–amplitude quantum correlations, which squeeze the optical quantum fluctuations, in a particular quadrature, below the level of the vacuum noise. This effect is referred to as ponderomotive squeezing [16–19].
An example of this phenomenon is shown in Fig. 1c [20]. A 1.928-MHz nanomechanical membrane resonator of dimensions \((544\,{\mathrm {\mu m}})^2\times 60 \,{\mathrm {nm}}\) is placed in a laser-driven high-finesse optical cavity and thereby measured at a rate of \(\varGamma _{\mathrm {opt}}/2\pi =96\,{\mathrm {kHz}}\). Its decoherence rate is reduced to \(n\varGamma _{\mathrm {m}}/2\pi \approx 20\,{\mathrm {kHz}}\), by cooling it in a simple cryostat to \(T=10\,{\mathrm {K}}\). A slight detuning of the laser field with respect to the optical resonator (\(\Delta =\omega -\omega _{\mathrm {c}}=-2\pi \times 1{.}4\,{\mathrm {MHz}}\)) leads to further cooling of the mechanical mode [21–24], akin to Doppler cooling of atomic gases [25]—here to a mean occupation of \(n_{\mathrm {eff}}\sim 5\). It also allows direct observation of the squeezing in the amplitude fluctuations of the light emerging from the resonator: Its normalized spectral density assumes the form
$$\begin{aligned} S_{XX}^{\mathrm {out}}(\varOmega )\approx\;& 1-2\frac{8 \Delta }{\kappa }\varGamma _{\mathrm {opt}}\,\mathrm {Re}\left\{ \chi _{\mathrm {eff}}(\varOmega ) \right\} \\ &+\left( \frac{8 \Delta }{\kappa }\right) ^2 \varGamma _{\mathrm {opt}} \left| \chi _{\mathrm {eff}}(\varOmega )\right| ^2 \left( \varGamma _{\mathrm {opt}} +n_{\mathrm {th}} \varGamma _{\mathrm {m}} \right) . \end{aligned}$$
Note that the second term represents the correlations, which can assume negative values and thus lead to noise below the vacuum level \(S_{XX}^{\mathrm {out}}=1\) (\(\chi _{\mathrm {eff}}\) is the effective mechanical susceptibility [3]). Ponderomotive squeezing down to \(-2{.}4\,{\mathrm {dB}}\) has been observed, the strongest value so far, and simultaneous squeezing in a multitude of mechanical modes [20]. Schemes that exploit such quantum correlations for sub-SQL measurements of displacement and forces are subject of ongoing research [26–28].
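As a quick consistency check of the regime reached in this experiment, the force-noise ratio introduced above can be evaluated directly from the rates quoted for the membrane measurement of Fig. 1c.

```python
# Backaction-to-thermal force-noise ratio, S_FF^qba/S_FF^th = Gamma_opt/(n_th*Gamma_m),
# using the rates quoted in the text for the experiment of Fig. 1c.
Gamma_opt_Hz   = 96e3     # Gamma_opt / 2pi
decoherence_Hz = 20e3     # n_th * Gamma_m / 2pi at T = 10 K
print(f"quantum backaction / thermal force noise ~ {Gamma_opt_Hz/decoherence_Hz:.1f}")
# -> ~4.8: radiation-pressure quantum fluctuations dominate the thermal Langevin force
```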
The above examples show that laser-based measurements resolve the motion of nanomechanical oscillators all the way to the level of their vacuum fluctuations. In a simple classification (Fig. 1d), basic interferometers can readily resolve thermal motion, as required in many sensing and characterization experiments. Cavity-enhanced approaches achieve imprecision below the resonant SQL. To measure and control motion at the quantum level, displacements at the scale of the vacuum fluctuations must be resolved within the coherence time of the mechanical resonator. Then the imprecision (of an ideal setup) is more than \(n_{\mathrm {th}}\) times below the resonant SQL, and quantum backaction exceeds thermal force fluctuations and induces quantum correlations [3, 9].
While the above-described techniques can be considered variants of laser interferometry, there are a number of techniques to characterize mechanical devices that are laser spectroscopic in nature. A prominent example is optomechanically induced transparency (OMIT), first described in Refs. [29, 30]. It consists in the observation that a laser-driven cavity containing a dispersively coupled mechanical device will have a modified transmission spectrum for a second "probe" laser beam at the frequency \(\omega _{\mathrm {p}}=\omega +\varOmega _{\mathrm {m}}+\Delta '\), where \(|\Delta '|\lesssim \kappa\) is the two-photon detuning. The intracavity probe field,
$$\begin{aligned} a_{\mathrm {p}} \propto \frac{\sqrt{\kappa }}{(-i\Delta '+\kappa /2)+\frac{g^2}{-i \Delta '+\varGamma _{\mathrm {m}}/2}} \end{aligned}$$
in the simplest case \(-\Delta =\varOmega _{\mathrm {m}}\ll \kappa\), encodes the coupling strength g. It is thus possible to derive g, for example, from probe transmission measurements [31, 32].
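The following sketch evaluates the intracavity probe field of Eq. (4) as a function of the two-photon detuning; the parameter values are assumptions chosen only to make the narrow OMIT feature, of width \(\sim \varGamma _{\mathrm {m}}+4g^2/\kappa\), clearly visible.

```python
# Sketch of Eq. (4): intracavity probe response vs two-photon detuning Delta'.
# kappa, Gamma_m, and g below are illustrative assumptions.
import numpy as np

kappa   = 2*np.pi*10e6
Gamma_m = 2*np.pi*100.0
g       = 2*np.pi*50e3

Delta_p = 2*np.pi*np.linspace(-5e3, 5e3, 2001)       # two-photon detuning [rad/s]
a_p = np.sqrt(kappa)/((-1j*Delta_p + kappa/2) + g**2/(-1j*Delta_p + Gamma_m/2))
response = np.abs(a_p)**2/np.max(np.abs(a_p)**2)     # normalized probe buildup

# The contrast and width of the feature around Delta' = 0 encode g:
print(f"relative intracavity probe power on two-photon resonance: {response[1000]:.3f}")
```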
Laser-based imaging
As already indicated, it can be of great interest to also spatially resolve mechanical displacement patterns. With laser light, this can be accomplished in an extremely sensitive and virtually non-perturbing manner [33–37]. In the following, we present two methods that we have implemented for characterizing nano- and micromechanical resonators with micrometer transverse resolution, sufficient for resolving the spatial patterns of MHz mechanical modes.
Scanning laser interferometry
Setup for interferometric imaging of mechanical motion. a Probe head with microscope objective mounted on a motorized 3-axis translation stage to position a focused laser spot on the sample. The sample is imaged simultaneously onto a CMOS camera via a beam splitter (BS) and a lens. b Sample placed on top of a piezo (PZT1) inside a high vacuum chamber. c Main part of the Michelson interferometer. A balanced receiver (detectors D1 and D2) measures the relative phase between the light returned from the sample and the reference arm. Electronic feedback to a piezomounted mirror (PZT2) stabilizes this phase with a low (\(\lesssim 10\,{\mathrm {kHz}}\)) bandwidth. d Signal from the balanced receiver as a function of time while scanning (blue) and actively stabilizing (purple) the relative phase
The first setup, shown in Fig. 2, is a Michelson interferometer based on a Nd:YAG laser at \(\lambda =1064\) nm. A polarizing beam splitter (PBS1) splits its output into two interferometer arms. In one arm, a single-mode fiber guides light to a probe head mounted on a motorized 3-axis translation stage. The probe head (Fig. 2a) consists of a microscope objective focusing the laser light to a spot of diameter \(\sim 2\,{\mathrm { \mu m}}\) on the sample and a CMOS camera capturing images of the sample in real time. To reduce viscous (gas) damping of the nanomechanical motion, the sample is placed inside a high vacuum chamber at a pressure of \(<10^{-5} \, {\mathrm {mbar}}\). A piezoelectric shaker (PZT1) can excite mechanical eigenmodes (Fig. 2b).
Nanomechanical modes of a stoichiometric SiN membrane measured with the raster-scan interferometer. a–e Measurements of thermal motion on a 22\(\,\times \,\)22 point grid. f–j Calculated displacement for mode numbers (n, m), accounting for hybridization between mode (1, 2) and (2, 1)
Light reflected off the sample is spatially overlapped with the local oscillator from the other interferometer arm in PBS1 (Fig. 2c). Projection on a common polarization basis subsequently enforces interference in a second polarizing beam splitter (PBS2), whose outputs are monitored with a high-bandwidth (\(0-75\) MHz) InGaAs-balanced receiver. This configuration ensures shot-noise-limited detection of the reflected light when a typical \(\sim 800\,{\mathrm {\mu W}}\) beam is sent to the sample. In the correct polarization basis, one obtains a receiver signal \(V_{\text {ff}} \cos (\phi )\), where \(\phi\) is the relative phase between the two beams and \(V_{\text {ff}}\) is the full fringe voltage, which we check with an oscilloscope (Fig. 2d). For maximal transduction, \(\phi\) is actively stabilized to the mid-fringe position by means of a mirror mounted on a piezoelectric transducer in the local oscillator arm (PZT2) and a proportional-integral (PI) feedback control.
Measurements of patterned SiN membrane with the raster-scan interferometer. a Micrograph of a SiN membrane patterned with a phononic crystal structure. b Localized nanomechanical mode imaged on 100\(\,\times \,\)100 grid in the scan area indicated by a green square in (a). Holes are detected by disappearance of the calibration peak and shown as white pixels. c Snapshot of an animation provided as supplementary material. It shows the displacement pattern (left) corresponding to a particular frequency bin (green line) of the averaged spectrum (right) (from [4])
In this case, small measured voltages \(\delta V(t) \ll V_{\text {ff}}\) convert to displacement via \(\delta x(t) \approx \pm \delta V(t) \lambda / 4 \pi V_{\text {ff}}\). Modulating PZT2 continuously with known frequency and amplitude generates a reference displacement and provides an independent calibration tone (CT) in the spectra.
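A minimal sketch of this small-signal conversion is given below; the full-fringe voltage and the example signal are placeholder assumptions, not measured values.

```python
# Mid-fringe conversion delta_x ~ delta_V * lambda / (4*pi*V_ff).
# V_ff and the example voltage are placeholder assumptions.
import numpy as np

lam  = 1064e-9        # laser wavelength [m]
V_ff = 2.0            # full fringe voltage [V], as read off the oscilloscope

def volts_to_displacement(dV):
    """Small-signal conversion at the mid-fringe lock point."""
    return dV*lam/(4*np.pi*V_ff)

print(f"1 mV corresponds to {volts_to_displacement(1e-3)*1e12:.0f} pm")
```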
As a first example, Fig. 3 shows a raster scan of a stoichiometric silicon nitride (SiN) membrane with side length \(l = 1\,{\mathrm {mm}}\). We scan the membrane surface with the probe head using stepper motor actuation and record traces \(\delta x(t)\) at each of the \(22 \times 22\) positions. The traces are spectrally filtered around the peaks of several mechanical modes via digital post-processing. In this manner, we extract RMS displacements of each mechanical eigenmode in each scan pixel. Figure 3 shows the corresponding displacement maps for the modes, which are thermally excited at room temperature (PZT1 off). The measured mode patterns compare well with the hybridized eigenmodes of a square membrane:
$$\begin{aligned} w_{n,m} \propto \sin {(k_n u)}\sin {(k_m v)} + \beta \sin {(k_m u)}\sin {(k_n v)} , \end{aligned}$$
where \(k_n = \pi n/ l\), \(k_m = \pi m/ l\), and \(n, m \ge 1\) denote the number of antinodes along in-plane coordinates u and v, respectively, and \(|\beta |<1\) quantifies the degree of hybridization between degenerate mode pairs. We find that the measured maximum RMS displacements, as calibrated by the CT, are in good agreement with the expected thermal motion (Fig. 3). Here, we have assumed a mass \(m_{\text {eff}} = \rho l^2 h/4 \sim 34\,{\mathrm {ng}}\), given the thickness \(h = 50\) nm and density \(\rho = 2.7\) g/cm\(^3\) of the membrane. Note that the modes \((n,m)=(1,2)\) and (2, 1) show hybridization with \(|\beta | \sim 0.2\).
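For reference, a short sketch that evaluates the hybridized mode shapes of Eq. (5) on the same 22 × 22 grid used in Fig. 3; the normalization and the choice \(\beta =0.2\) for the (1, 2)/(2, 1) pair follow the values quoted above, and the wavenumber convention \(k_n=\pi n/l\) (n antinodes between the clamped edges) is assumed.

```python
# Sketch of Eq. (5): hybridized mode shapes of a square membrane on a 22x22 grid.
import numpy as np

l = 1e-3                                    # membrane side length [m]
u = v = np.linspace(0, l, 22)
U, V = np.meshgrid(u, v, indexing="ij")

def mode_shape(n, m, beta=0.0):
    kn, km = np.pi*n/l, np.pi*m/l           # n, m antinodes between clamped edges
    w = np.sin(kn*U)*np.sin(km*V) + beta*np.sin(km*U)*np.sin(kn*V)
    return w/np.max(np.abs(w))              # normalized displacement map

w_12 = mode_shape(1, 2, beta=0.2)           # hybridized (1,2) mode as in Fig. 3
print(w_12.shape)                           # (22, 22)
```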
Scanning laser interferometry is particularly useful to characterize complex mode structures, such as SiN membranes patterned with phononic crystal structures [4] (Fig. 4). A scan measured on a grid of \(100\times 100\) points with a \(5\,{\upmu {\hbox {m}}}\) spacing resolves also the \(9{.}3\,{\upmu {\hbox {m}}}\)-wide tethers in between two holes, as Fig. 4b shows. At the expense of measurement time, the grid spacing could be further reduced; however, the spatial resolution of the obtained image is eventually limited to the \(\sim 2\,{\upmu {\hbox {m}}}\) diameter of the laser spot. Figure 4c shows another mode of the same device imaged over a larger area. At a distance of \(500\,{\upmu {\hbox {m}}}\) from the center, the mode's amplitude has decayed to the measurement noise level, illustrating the localization of the mode to the defect.
An advantage of measuring thermally excited modes is that information on all modes within the detector bandwidth is acquired simultaneously. This large set of data can be processed and represented in different ways. As an example, Fig. 4c shows an average spectrum of 400 measurement points on the defect. It clearly reveals a phononic bandgap between about 1.41 and \(1{.}68\,{\mathrm {MHz}}\), containing five defect mode peaks, as well as the calibration peak at \(1{.}52\,{\mathrm {MHz}}\). The left panel shows a displacement map corresponding to a specific frequency bin of this spectrum. We can also create an animation that composes the displacement maps for each of the frequency bins in the spectrum. It is provided as electronic supplementary material to this article (see supplementary material). It delivers an instructive illustration of the effect of the phononic crystal structure, contrasting the small number of localized modes inside the bandgap with a "forest" of distributed modes at frequencies outside the bandgap.
A disadvantage of the scanning laser interferometer is its long measurement time. For instance, a high-resolution scan, such as the one shown in Fig. 4b, takes more than 8 hours. This is because for each pixel of the image we probe thermal motion during several seconds, averaging over timescales longer than \(\varGamma _{\mathrm {m}}^{-1}\). Some acceleration is possible by either artificially increasing \(\varGamma _{\mathrm {m}}\), e.g., by controlled gas damping, or by driving the modes coherently using PZT1. The latter can furthermore provide information about the mechanical phase at each position, if mechanical frequency drifts are properly accounted for.
Dark-field imaging
A powerful approach to single-shot characterization of mechanical modes is provided by dark-field imaging [35]. Figure 5a shows the setup which we have implemented to this end. It directly captures the squared displacement patterns of two-dimensional resonators such as membranes or cantilevers on a CCD camera. Its functional principle is described with simple Fourier optics [38]. A collimated laser beam with a wavelength \(\lambda = 1064\,{\mathrm {nm}}\) and beam diameter of \(2.4\,{\mathrm {mm}}\) impinges perpendicularly on the sample, here a SiN membrane with side length \(l = 1\,{\mathrm {mm}}\). The reflected electric field \(E_{\text {r}}\) at transverse position (u, v) is subject to a phase shift proportional to the membrane displacement w(u, v, t). We assume that the incident electric field \(E_{0} e^{i\omega t}\) is constant across the membrane, since the incident beam diameter is 2.4 times larger than the membrane. Assuming furthermore \(w(u,v,t) \ll \lambda\), the reflected electric field reads \(E_{\text {r}}(u,v,t) \approx r E_{0} e^{i\omega t} \left[ 1 + i k w(u,v,t) \right]\), where r is the absolute value of the reflection coefficient, \(k = 2 \pi / \lambda\) and \(\omega = c k\). A lens (focal length \(f_1 = 75\,{\mathrm {mm}}\)) performs an optical Fourier transform \(\mathcal {F}\) with respect to the coordinates (u, v), yielding
$$\begin{aligned} \mathcal {F}(E_{\text {r}}) = r E_{0} e^{i\omega t} \left[ \mathcal {F}(1) + \mathcal {F}( i k w(u,v,t) )\right] . \end{aligned}$$
The zero-order peak (first term in Eq. (6)) is removed from the beam by an opaque disk in the Fourier plane. This extracts the diffracted light due to the membrane displacement w. A second, subsequent lens (focal length \(f_2 = 50\) mm) performs another Fourier transform on the filtered light. The time-averaged intensity pattern
$$\begin{aligned} I(u',v')&= \left\langle \left| r E_{0} e^{i\omega t}\, \mathcal {F}( \mathcal {F}( i k w(u,v,t) ) ) \right| ^2 \right\rangle \\ &= I_{0} r^2 k^2 \left\langle w(-u,-v,t)^2 \right\rangle , \end{aligned}$$
is then recorded by a camera, where \(I_0 = |E_0 e^{i\omega t} |^2\) is the incident intensity. It directly shows an intensity pattern proportional to the squared displacement of an eigenmode.
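The Fourier-filtering principle of Eqs. (6) and (7) can be illustrated numerically: block the zero order of the reflected field and recover an intensity proportional to the squared mode displacement. The grid size, mode choice, and amplitude below are assumptions, and the second lens is implemented as an inverse transform, which avoids the coordinate inversion of Eq. (7).

```python
# Numerical sketch of the dark-field principle of Eqs. (6)-(7).
import numpy as np

N, l, lam = 256, 1e-3, 1064e-9
k = 2*np.pi/lam
u = np.linspace(0, l, N)
U, V = np.meshgrid(u, u, indexing="ij")

w = 1e-9*np.sin(3*np.pi*U/l)*np.sin(5*np.pi*V/l)   # (3,5) mode, 1 nm amplitude
E_r = 1.0 + 1j*k*w                                 # reflected field for w << lambda

F = np.fft.fft2(E_r)                               # lens 1: field in the Fourier plane
F[0, 0] = 0.0                                      # opaque disk removes the zero order
I = np.abs(np.fft.ifft2(F))**2                     # lens 2: image-plane intensity

print(np.corrcoef(I.ravel(), (k*w.ravel())**2)[0, 1])   # close to 1: I ~ k^2 w^2
```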
Setup for dark-field imaging of mechanical motion. a Optical configuration with small opaque disk in the Fourier plane A, creating a dark-field image of the sample in the image plane B. A lens (\(f_3\)) projects a magnified image onto a CCD camera. A ray diagram illustrates how the image is formed (purple lines). b Image of the Fourier plane, where an opaque disk blanks out undiffracted zero-order light, when a membrane mode is excited. c Sample is mounted on a piezoelectric actuator (PZT1) in a high vacuum chamber
In our setup, a third lens with focal length \(f_3 = 35\,{\mathrm {mm}}\) is placed in front of the camera to magnify the image. It also allows imaging the Fourier plane by adjusting the distance between camera and lens to \(f_3\). Figure 5b shows a Fourier image of the membrane with diffraction patterns extending in two orthogonal directions due to the sharp edges of the membrane. Two bright spots close to the center originate from diffraction due to a driven eigenmode, a hybridization between the modes (1,2) and (2,1) at a frequency of 645 kHz. The opaque disk made of aluminum deposited on a thin piece of glass is seen as a white disk in the center. With a diameter \(d = 100\,{\mathrm { \mu m}}\), it blocks diffraction angles \(\alpha \lesssim d/ 2 f_1\) generated by mechanical modes with a distance between nodes of \(\gtrsim \lambda /2 \alpha \approx 800\,{\mathrm {\mu m}}\).
Nanomechanical modes of a stoichiometric SiN membrane measured with dark-field imaging. a–e Single-shot (acquisition time \(\sim 10\,{\mathrm {ms}}\)) measurements of mechanical modes at various frequencies while driving the membrane with PZT1. Pixels colored in dark red indicate high intensities and correspond to large values of squared mechanical displacement. f–j Calculated squared mechanical mode patterns with mode numbers (n, m). Modes (3, 5) and (5, 3) show nearly complete hybridization with \(|\beta | \sim 1\)
A piezoelectric actuator (PZT1) successively excites the eigenmodes of the SiN membrane inside a vacuum chamber, by slowly sweeping a strong drive tone across the frequency window of interest (here \(0{.}4\ldots 2\,{\mathrm {MHz}}\)). Figure 6 shows images of several modes recorded with an incident optical power of \(\sim 100\,\mathrm {\mu W}\) and a typical integration time of \(10\,{\mathrm {ms}}\). Comparison with mode patterns calculated from Eq. (5) allows inferring the mode numbers (n, m), and the degree of hybridization, as seen, for example, on the 1.683-MHz mode.
While it enables much shorter measurement times than the scanning laser interferometer, the dark-field imaging setup has a relatively low displacement sensitivity. For this reason, PZT1 has to be driven with a stroke of \(\gtrsim 300\) pm, significantly increasing the membrane oscillation amplitude, up to a regime where mechanical nonlinearities (e.g., Duffing-type frequency shifts) can play a role. In principle, the sensitivity can be enhanced by increasing the laser intensity \(I_0\), yet in practice it is often limited by background noise due to scattered light from optical components increasing equally with \(I_0\). Another important limitation is that diffraction from the sample's geometry cannot be discriminated from modal displacements. In this simple implementation, the approach is thus unsuitable for devices with fine structures in their geometry, such as the patterned membranes.
In summary, we have described several laser-based techniques to measure and image nanomechanical motion. As we show, exquisite displacement sensitivity can be reached, well into the regime in which quantum backaction and the ensuing light-motion quantum correlations dominate over thermomechanical noise. This sensitivity is rivaled only by techniques based on superconducting microwave electromechanical systems, which operate at ultra-low (\(T\ll 1\,{\mathrm {K}}\)) cryogenic temperatures [39, 40]. Interest in this quantum domain has originally been motivated by observatories such as LIGO and can now, for the first time, be explored with optical and microwave experiments [3, 9, 15, 41, 42]. In addition, laser-based techniques can provide spatial imaging of mechanical displacement patterns. They constitute not only highly useful tools to develop and characterize novel micro- and nanomechanical devices [4, 6, 35–37]. Similar techniques could also be used to address individual elements in multimode devices [20] or (opto-)mechanical arrays [4, 43]—if need be, also in combination with cavity-enhanced readout [33, 44].
B.P. Abbott, R. Abbott, T.D. Abbott, M.R. Abernathy, F. Acernese, K. Ackley, C. Adams, T. Adams, P. Addesso, R.X. Adhikari et al., Phys. Rev. Lett. 116, 061102 (2016)
D. Rugar, R. Budakian, H.J. Mamin, B.W. Chui, Nature 430, 329 (2004)
M. Aspelmeyer, T.J. Kippenberg, F. Marquardt, Rev. Mod. Phys. 86, 1391 (2014)
Y. Tsaturyan, A. Barg, E.S. Polzik, A. Schliesser, arXiv:1608.00937 (2016)
J.W. Wagner, J.B. Spicer, J. Opt. Soc. Am. B 4, 1316 (1987)
Y. Tsaturyan, A. Barg, A. Simonsen, L.G. Villanueva, S. Schmid, A. Schliesser, E.S. Polzik, Opt. Express 6, 6810 (2013)
A. Schliesser, G. Anetsberger, R. Rivière, O. Arcizet, T.J. Kippenberg, New J. Phys. 10, 095015 (2008)
T.W. Hänsch, B. Couillaud, Opt. Commun. 35, 441 (1980)
A.A. Clerk, M.H. Devoret, S.M. Girvin, F. Marquardt, R.J. Schoelkopf, Rev. Mod. Phys. 82, 1155 (2010)
A. Schliesser, T. J. Kippenberg, Cavity optomechanics with whispering-gallery mode optical micro-resonators. in Advances in Atomic, Molecular and Optical Physics, vol. 58, Chap. 5, ed. by P. Berman, E. Arimondo, C. Lin (Elsevier Academic Press, 2010), pp. 207–323
S. Stapfner, L. Ost, D. Hunger, J. Reichel, I. Favero, E.M. Weig, Appl. Phys. Lett. 102, 151910 (2013)
G. Anetsberger, O. Arcizet, Q.P. Unterreithmeier, R. Rivière, A. Schliesser, E.M. Weig, J.P. Kotthaus, T.J. Kippenberg, Nat. Phys. 5, 909 (2009)
G. Anetsberger, E. Gavartin, O. Arcizet, Q.P. Unterreithmeier, E.M. Weig, M.L. Gorodetsky, J.P. Kotthaus, T.J. Kippenberg, Phys. Rev. A 82, 061804 (2010)
D.J. Wilson, V. Sudhir, N. Piro, R. Schilling, A. Ghadimi, T.J. Kippenberg, Nature 524, 325 (2015)
C.M. Caves, Phys. Rev. Lett. 45, 75 (1980)
C. Fabre, M. Pinard, S. Bourzeix, A. Heidmann, E. Giacobino, S. Reynaud, Phys. Rev. A 49, 1337 (1994)
S. Mancini, P. Tombesi, Phys. Rev. A 49, 4055 (1994)
T.P. Purdy, P.-L. Yu, R.W. Peterson, N.S. Kampel, C.A. Regal, Phys. Rev. X. 3, 031012 (2013)
A.H. Safavi-Naeini, S. Gröblacher, J.T. Hill, J. Chan, M. Aspelmeyer, O. Painter, Nature 500, 185 (2013)
W.H.P. Nielsen, Y. Tsaturyan, C.B. Møller, E.S. Polzik, A. Schliesser, arXiv:1605.06541 (2016)
O. Arcizet, P.-F. Cohadon, T. Briant, M. Pinard, A. Heidmann, Nature 444, 71 (2006)
S. Gigan, H.R. Böhm, M. Paternosto, F. Blaser, G. Langer, J.B. Hertzberg, K.C. Schwab, D. Bäuerle, M. Aspelmeyer, A. Zeilinger, Nature 444, 67 (2006)
A. Schliesser, P. Del'Haye, N. Nooshi, K. Vahala, T. Kippenberg, Phys. Rev. Lett. 97, 243905 (2006)
J.D. Thompson, B.M. Zwickl, A.M. Jayich, F. Marquardt, S.M. Girvin, J.G.E. Harris, Nature 452, 72 (2008)
T.W. Hänsch, A.L. Schawlow, Opt. Commun. 13, 68 (1975)
O. Arcizet, T. Briant, A. Heidmann, M. Pinard, Phys. Rev. A 73, 033819 (2006)
L.F. Buchmann, S. Schreppler, J. Kohler, N. Spethmann, D.M. Stamper-Kurn, Phys. Rev. Lett. 117, 030801 (2016)
N.S. Kampel, R.W. Peterson, R. Fischer, P.-L. Yu, K. Cicak, R.W. Simmonds, K.W. Lehnert, C.A. Regal, arXiv:1607.06831 (2016)
A. Schliesser, Cavity optomechanics and optical frequency comb generation with silica whispering-gallery-mode microresonators, Ph.D. thesis, Ludwig-Maximilians-Universität München, 2009
G.S. Agarwal, S. Huang, Phys. Rev. A 81, 041803 (2010)
S. Weis, R. Rivière, S. Deléglise, E. Gavartin, O. Arcizet, A. Schliesser, T.J. Kippenberg, Science 330, 1520 (2010)
A.H. Safavi-Naeini, T.P. Mayer, I. Alegre, J. Chan, M. Eichenfield, M. Winger, J.Q. Lin, J.T. Hill, D.E. Chang, O. Painter, Nature 472, 69 (2011)
T. Briant, P.-F. Cohadon, A. Heidmann, M. Pinard, Phys. Rev. A 68, 033823 (2003)
O. Arcizet, P.-F. Cohadon, T. Briant, M. Pinard, A. Heidmann, J.-M. Mackowski, C. Michel, L. Pinard, O. Francais, L. Rousseau, Phys. Rev. Lett. 97, 133601 (2006)
S. Chakram, Y.S. Patil, L. Chang, M. Vengalattore, Phys. Rev. Lett. 112, 127201 (2014)
Z. Wang, J. Lee, P.X.L. Feng, Nat. Commun. 5, 5158 (2014)
D. Davidovikj, J.J. Slim, S.J. Cartamil-Bueno, H.S.J. van der Zant, P.G. Steeneken, W.J. Venstra, Nano Lett. 16, 2768 (2016)
See Supplementary Material
W. Lauterborn, T. Kurz, M. Wiesenfeldt, Coherent Optics (Springer, Berlin, 1995)
J.D. Teufel, R. Donner, M.A. Castellanos-Beltran, J.W. Harlow, K.W. Lehnert, Nat. Nanotech. 4, 820 (2009)
J.D. Teufel, F. Lecocq, R.W. Simmonds, Phys. Rev. Lett. 116, 013602 (2016)
V.B. Braginsky, F.Y. Khalili, Quantum Measurement (Cambridge University Press, Cambridge, 1992)
I. Tittonen, G. Breitenbach, T. Kalkbrenner, T. Müller, R. Conradt, S. Schiller, E. Steinsland, N. Blanc, N.F. de Rooij, Phys. Rev. A 59, 1038–1044 (1999)
G. Heinrich, M. Ludwig, J. Qian, B. Kubala, F. Marquardt, Phys. Rev. Lett. 107, 043603 (2011)
M. Mader, J. Reichel, T.W. Hänsch, D. Hunger, Nat. Commun. 6, 7249 (2015)
We would like to acknowledge our (former and present) colleagues Georg Anetsberger, Olivier Arcizet, Tobias Kippenberg, Jörg H. Müller, Eugene S. Polzik, Andreas Næsby Rasmussen, Remi Rivière, Anders Simonsen, Koji Usami, Stefan Weis, and Dalziel J. Wilson for their contributions to the work discussed here. Financial support came from the ERC starting grant Q-CEOM, a starting grant from the Danish Council for Independent Research, the EU FP7 grant iQUOEMS, and the Carlsberg Foundation.
Niels Bohr Institute, Blegdamsvej 17, 2100, Copenhagen, Denmark
Andreas Barg, Yeghishe Tsaturyan, Erik Belhage, William H. P. Nielsen, Christoffer B. Møller & Albert Schliesser
Correspondence to Albert Schliesser.
This article is dedicated to Theodor W. Hänsch on the occasion of his 75th birthday. Fortunate enough to have several chances to work with him, I could learn about his unique approach to experimental science. The first bit came right during my interview for a PhD position: When somebody claimed that all simple interesting things had already been done, he insisted that great experiments do not have to be complicated—if they are clever. It felt wise already then, now I know (better) how true it is. And I'm looking forward to seeing more clever experiments emerge from the Munich laboratories. Happy Birthday!
This article is part of the topical collection "Enlightening the World with the Laser" - Honoring T. W. Hänsch guest edited by Tilman Esslinger, Nathalie Picqué, and Thomas Udem.
An erratum to this article is available at http://dx.doi.org/10.1007/s00340-017-6722-y.
Below is the link to the electronic supplementary material.
Supplementary material 1 (mp4 16371 KB)
Barg, A., Tsaturyan, Y., Belhage, E. et al. Measuring and imaging nanomechanical motion with laser light. Appl. Phys. B 123, 8 (2017). https://doi.org/10.1007/s00340-016-6585-7
Volume 22 Supplement 5
Proceedings of the International Conference on Biomedical Engineering Innovation (ICBEI) 2019-2020
GCRNN: graph convolutional recurrent neural network for compound–protein interaction prediction
Ermal Elbasani1,
Soualihou Ngnamsie Njimbouom1,
Tae-Jin Oh3,4,5,
Eung-Hee Kim2,
Hyun Lee1 &
Jeong-Dong Kim ORCID: orcid.org/0000-0002-5113-221X1,3
BMC Bioinformatics volume 22, Article number: 616 (2021) Cite this article
Compound–protein interaction prediction is necessary to investigate health regulatory functions and promotes drug discovery. Machine learning is becoming increasingly important in bioinformatics for applications such as analyzing protein-related data to achieve successful solutions. Modeling the properties and functions of proteins is important but challenging, especially when predictions must be made from sequence data alone.
We propose a method to model compounds and proteins for compound–protein interaction prediction. A graph neural network is used to represent the compounds, and a convolutional layer extended with a bidirectional recurrent neural network framework, using Long Short-Term Memory and Gated Recurrent Units, is used for protein sequence vectorization. The convolutional layer captures regulatory protein functions, while the recurrent layer captures long-term dependencies between protein functions, thus improving the accuracy of interaction prediction with compounds. A database of 7000 annotated compound–protein interaction sets, containing proteins of 1000 residues in length, is considered for the implementation. The results indicate that the proposed model performs effectively and can yield satisfactory accuracy regarding compound–protein interaction prediction.
The performance of GCRNN is evaluated on the binary classification of interactions between proteins and compounds. The architectural design of the GCRNN model integrates a bidirectional recurrent layer on top of the CNN to learn dependencies between motifs in protein sequences and improve the accuracy of the predictions.
Compound–protein interaction (CPI) is important in the design of new compounds for the pharmaceutical industry. Proteins consist of large numbers of small units called amino acids, which form long chains that regulate specific functions of the human body. In humans, 20 types of amino acids are combined to form proteins. An amino acid sequence is structured into a three-dimensional complex, and its surface has a pocket that interacts with a compound through a specific combination of amino acids. In the framework of modern pharmaceutical research, the relationship between a compound and a protein can be depicted as a network, in which each node represents a compound or a protein, and an edge indicates a CPI.
Based on this paradigm, many methods based on in silico networks have been introduced to predict CPIs [1,2,3]. Nevertheless, these methods present limitations, such as simulating the CPI as a bipartite network while ignoring the similarities between compounds and the interactions between proteins. Moreover, CPIs underlie a variety of health states. Compounds may be small chemical species composed of molecules, single elements, or combinations of elements, and they interact with a variety of proteins whose specific functions are determined by their structure. The protein function varies according to the interaction sites that enable binding with compounds. Thus, the interaction between molecular compounds and proteins is being actively studied for the discovery and development of safe and effective drugs. Drugs are generally low-molecular-weight compounds that regulate the biological functions of targets [4], which mostly correspond to disease-related proteins. When drugs interact with such targets, they can be used to treat the related diseases [5, 6].
The discovery of new drugs is time-consuming and costly, usually taking over 10 years of development followed by clinical trials to study the candidates in depth and ensure compliance with safety standards. The recording and sharing of drug information has greatly accelerated discovery and production, further facilitating the search for new interactions of drugs that can bind to more than one protein. Wet laboratory experiments are available to determine interactions of known drugs, but they require considerable effort and time to set up and implement. The need for faster results has triggered the development of accurate and powerful analytical tools. Although such tools have been experimentally implemented in previous decades, current technologies and data availability have enabled the analytic process of drug development to be driven by machine learning and artificial intelligence.
Bioinformatics and data science have been combined to develop solutions based on various methods and algorithms, especially for CPI prediction [7, 8]. Conventional methods in this field use similarity-based approaches, which consider the similarity of known compounds with each other and with protein data. Bleakley et al. [9] proposed a model for CPI prediction based on a variety of similarity information, achieving reasonable prediction performance but often demanding high computational cost, additional expertise, or three-dimensional structures of proteins.
Machine learning has been used to construct strong and sustainable drug delivery pipelines in a shorter time compared with the use of conventional methods [10]. Such pipelines allow to rapidly synthesize and analyze a small number of compounds that would help refine developed models and new designs. For drug discovery, machine learning and other technologies have enabled faster, cheaper, and more effective solutions [11].
Artificial intelligence involves various machine learning methods, the most prominent being deep neural networks (DNNs), which provide state-of-the-art solutions in many applications, such as speech recognition and visual object recognition [12]. DNNs have also achieved excellent performance in the investigation of compounds and proteins [7, 8]. However, most available methods do not include end-to-end representation learning and rely on molecular encodings and protein phylogenetic data banks as input features that remain fixed during training. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are variants of DNNs used to classify time series and sequential data [13]. Given the long sequential nature of protein data, RNNs with long short-term memory (LSTM) layers have proven successful. These machine learning methods can help develop high-value and cost-effective target drugs with faster delivery and less harm to patients. Moreover, customized drugs can be developed to achieve the desired results faster than conventional drugs, substantially reducing the costs and time of treatments. By analyzing data from genomics, proteomics, metabolomics, and clinical trials, we may fully understand the structure of a disease. This knowledge may then be applied in machine learning toward the development of drugs with faster and more accurate targeting.
Unlike conventional methods, a feature-vector approach allows features to be extracted automatically from data without requiring expert knowledge or the three-dimensional structure of proteins. Jacob et al. [14] applied tensor-product-based features to represent compound and protein families as mathematical vectors and then applied a support vector machine to predict CPIs. Jones et al. [15] used a CNN and combination graphs to find CPIs with a pairwise model, and Tsubaki et al. [16] used a similar structure for CPI prediction. In such pairwise models, a CNN is used to analyze the protein sequence and a graph neural network (GNN) is used for the molecular structure; the vectors obtained from these two branches are then concatenated for the final CPI prediction.
This study contributes to the development of an end-to-end learning framework based on chemical information, using a graph representation of a compound and the sequence of a protein, and combining neural networks to identify the existence of CPIs. We represent compound and protein complexes as feature vectors and apply a learning algorithm to train a classifier for CPI prediction. This method, called graph convolutional recurrent neural network (GCRNN), analyzes proteins with a CNN whose max-pooling layer is followed by a bidirectional LSTM layer. The integration of recurrent layers into a CNN for protein modeling improves the representation of the protein functions that dictate interactions with a compound and promotes accurate results in real laboratory experiments. We integrate a recurrent layer after the max-pooling layer because protein functions follow patterns that represent specific biological arrangements, and the integration increases the detection probability and provides memory for capturing long-term dependencies [17, 18].
The remainder of this paper is organized as follows. Section 2 presents the results and discussion, Sect. 3 draws conclusions, and Sect. 4 details the proposed method.
A hybrid architecture improves the performance of prediction
After selecting the features for analyzing the data, and regardless of factors such as data size or complexity, performance is essential for choosing the appropriate machine learning model. In addition, the selected model should account for factors such as linearity, the numbers of parameters and features in the data bank, training time, and accuracy. This work mainly measures performance based on classification accuracy. Note that the study conducted in this paper is compared with the accuracy of our replication of the work of Tsubaki et al. [16], a model named GCNN (graph convolutional neural network), with the intention of improving parts of that approach to convey our idea practically. The data for CPI prediction are labeled with binary classes, where the class is determined by an output threshold; here, the binary class represents the existence or absence of a CPI. To accurately evaluate the model and prevent overfitting, the data were split into disjoint training (65% of the samples), validation (20% of the samples), and test (15% of the samples) sets. The performance is evaluated using measures based on the numbers of true positives (TP), which indicate the correct classification of positive samples (i.e., CPIs); true negatives (TN), which indicate the correct classification of negative samples (i.e., no CPIs); false positives (FP), which indicate the incorrect classification of negative samples as positive; and false negatives (FN), which indicate the incorrect classification of positive samples as negative. A study of performance measures for classification tasks based on TP, TN, FP, and FN, widely used in learning techniques, is presented in [19].
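As a reference for how these counts enter the reported numbers, a minimal sketch of the standard measures derived from TP, TN, FP, and FN is given below; the example counts are placeholders, not results of this study.

```python
# Standard binary-classification measures computed from TP, TN, FP, FN counts.
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    precision = tp/(tp + fp) if tp + fp else 0.0
    recall    = tp/(tp + fn) if tp + fn else 0.0
    return {
        "accuracy":  (tp + tn)/(tp + tn + fp + fn),
        "precision": precision,
        "recall":    recall,
        "f1": 2*precision*recall/(precision + recall) if precision + recall else 0.0,
    }

# Example with placeholder counts (not results from this study):
print(classification_metrics(tp=480, tn=470, fp=30, fn=20))
```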
The open-source genomic and protein data were retrieved from the respective repositories: chemical structures of compounds from the PubChem database and protein sequences from the Protein Data Bank [20]. The protein and chemical data were processed to obtain compound–protein interaction training data, as detailed by Liu et al. [21]. This work uses these data for CPI analysis; 7000 sets of annotated data containing proteins of 1000 residues in length were obtained. The classes are balanced, which is important for the training process to achieve a similar detection rate for each class and raises the probability of generating a model with high accuracy. Thus, models can achieve higher classification performance than with imbalanced data. We conducted experiments using various machine learning modules to evaluate different architectures.
The GNN uses a chemical input given by the simplified molecular-input line-entry system (SMILES), which encodes molecules as sequential strings. The system uses RDKit [22], an open-source package that includes a library of cheminformatics operations for compound and molecular structures, to obtain the graph representations.
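A minimal sketch (assuming RDKit is installed) of how a SMILES string can be turned into the node and edge lists that a GNN consumes is shown below; raw atom symbols and bond types stand in for the learned r-radius fingerprints used in the actual model, and the aspirin SMILES is only a toy example.

```python
# Sketch: SMILES -> (nodes, edges) using RDKit; features here are placeholders.
from rdkit import Chem

def smiles_to_graph(smiles: str):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"could not parse SMILES: {smiles}")
    nodes = [atom.GetSymbol() for atom in mol.GetAtoms()]
    edges = [(b.GetBeginAtomIdx(), b.GetEndAtomIdx(), str(b.GetBondType()))
             for b in mol.GetBonds()]
    return nodes, edges

print(smiles_to_graph("CC(=O)Oc1ccccc1C(=O)O"))   # aspirin as a toy example
```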
This study uses a three-layer GNN with an r-radius of 2 to represent molecules as vectors. For proteins, the CNN takes the original amino acid sequence and passes it through a three-layer structure with 320 convolutional kernels and a window size of 30, with random initialization based on a similar model [23]. The pooling layer has a window size of 15 and a step size of 15, followed by two layers of bidirectional LSTM with 320 forward and 320 backward neurons. The same architecture is used for the bidirectional GRU, and computations are performed over several sets of iterations. The best hyperparameters for tuning the GCRNN with LSTM and GRU are 100 training epochs, the Adam optimizer, a learning rate of 0.001, and a learning-rate decay of 0.4. The model showed high performance after tuning the hyperparameters.
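A minimal PyTorch sketch of the protein branch described above is given below: embedded sequence tokens pass through three convolution layers with 320 kernels and window size 30, max pooling with window and stride 15, and two bidirectional LSTM layers with 320 units per direction. The vocabulary size, embedding dimension, padding, and mean readout are assumptions for illustration, not the exact implementation.

```python
# Sketch of the CNN + bidirectional LSTM protein encoder (hyperparameters as in the text,
# tokenization/embedding details assumed).
import torch
import torch.nn as nn

class ProteinEncoder(nn.Module):
    def __init__(self, vocab_size=8000, embed_dim=64, channels=320):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList([
            nn.Conv1d(embed_dim if i == 0 else channels, channels,
                      kernel_size=30, padding=15) for i in range(3)])
        self.pool = nn.MaxPool1d(kernel_size=15, stride=15)
        self.rnn = nn.LSTM(channels, 320, num_layers=2,
                           batch_first=True, bidirectional=True)

    def forward(self, seq_ids):                    # (batch, length) integer ids
        x = self.embed(seq_ids).transpose(1, 2)    # (batch, embed_dim, length)
        for conv in self.convs:
            x = torch.relu(conv(x))
        x = self.pool(x).transpose(1, 2)           # (batch, length/15, channels)
        out, _ = self.rnn(x)
        return out.mean(dim=1)                     # protein feature vector

vec = ProteinEncoder()(torch.randint(0, 8000, (2, 1000)))
print(vec.shape)                                   # torch.Size([2, 640])
```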
This study implemented the experiments in PyTorch [24] on a computer running the Ubuntu 18.04 operating system and equipped with an Intel i9-10940X processor with 256 GB of memory and four NVIDIA RTX 2080 Ti graphics processors with 44 GB of total memory.
Compound protein interaction prediction accuracy
CPI analysis ultimately requires wet laboratory experiments, but we only considered the data bank in this study, assuming that the protein and interaction information was verified before the data were recorded. In addition, the bidirectional LSTM or bidirectional GRU after the max-pooling layer affects the CPI prediction performance. Thus, we obtained a high accuracy on the data, as shown in Fig. 1a, b for the corresponding models.
Accuracy of training and testing prediction over 100 iterations for proposed GCRNN with a bidirectional LSTM and b bidirectional GRU
The results in Fig. 1 show that the proposed GCRNN can predict the CPIs at 98% accuracy. Further experiments examined the error rate when a separate new test set was introduced; the results are given for the bidirectional GRU, which performed better than the bidirectional LSTM, and are visualized in Fig. 2, showing that the training and test accuracies and the error curves do not indicate overfitting over 100 iterations.
Bidirectional GRU error according to iteration
Compared with the GCNN, our GCRNN shows a small improvement in the overall performance, as listed in Table 1. This result suggests that the proposed GCRNN provides a more reliable prediction because protein function extraction is important for CPI. Data about proteins are available in data banks [25] and are obtained over years of research. The similarity between proteins in humans reduces the burden of data recording, and thus various calculations are facilitated by only selecting a type of protein and a type of interaction. During drug discovery, analytic results and health information are linked to recognize patterns of compounds with different proteins.
Table 1 Classification performance of evaluated methods for CPI prediction
Visualization tools provide insights on the medical outcomes expected for patients with a high accuracy to predict effects while reducing the time and setup workload of wet laboratory experiments for producing a specific drug. With the advancement of research, data banks will become larger, increasing our ability to understand CPI patterns for healthcare, and patients will be treated with specific drugs related to their health condition.
This research is limited to the analysis process of the framework. Even though several implementations were performed, a confident discussion of compound–protein interactions requires associated wet laboratory experiments; this work focuses only on the database, assuming that the protein and interaction information was verified before the data were recorded.
This work proposes the GCRNN to identify CPIs using high-end machine learning methods. It emphasizes end-to-end representation learning with a GNN and a CNN with bidirectional LSTM/GRU to predict CPIs. Experimental results demonstrate that a relatively low-dimensional end-to-end neural network can outperform various existing methods on both balanced and imbalanced data.
This study provides new insights on CPI prediction to construct general machine learning methods in bioinformatics rather than using feature engineering. Unlike existing structure-based computational approaches, the proposed GCRNN shows high performance using only protein primary structure information instead of three-dimensional structure information. Nevertheless, a deep learning model is usually considered a black box. Consequently, it is difficult to interpret the features that the model learns for CPI prediction. Improving the prediction performance on the validation and test sets would provide a starting point for subsequent research. In future work, this study will evaluate the model learning and performance considering comparisons with the results from wet laboratory experiments.
GCRNN for CPI prediction
Machine learning and computational methods are enhancing data analysis on a large scale and providing faster solutions, impacting research on biology and pharmaceutics. Biological data have been collected in data banks with plenty of information about genome and proteins being available for researchers to obtain reliable results in areas such as healthcare.
This work addresses CPI prediction, an important aspect of drug discovery and development. Figure 3 illustrates the development of new drugs for improving health conditions based on the information of proteins and the chemical structure of natural or artificial compounds.
Deep learning-based drug discovery approach
A normal or abnormal condition carries information in the genome sequence, which can be translated into a protein sequence that interacts with a compound. The interactions can be stored continuously by using machine learning to determine the effective compound and protein for a specific disease. Then, laboratory experiments provide accurate results for clinical trials, and the resulting compound extends the dataset for new cases and developments.
Deep learning techniques provide state-of-the-art performance and high accuracy for handling protein sequences and modeling molecules. Among the available models and architectures, we combine three powerful methods for CPI prediction, namely, GNN, CNN, and bidirectional RNN, as shown in Fig. 4. These methods constitute the proposed GCRNN and are detailed below.
Fig. 4 Architecture of GCRNN for CPI prediction
The GNN provides a low-dimensional real-valued vector representation of a molecular graph. We use the GNN to compute a molecular embedding that maps a graph into a vector through transformation and output functions. In the GNN, the transformation function updates the node values based on the neighboring nodes and edges, and the output function describes the nodes as vectors. In the graph structure, G = (N, E), where N is the set of nodes, and E is the set of edges that connect neighboring nodes. We consider an undirected graph G, in which a node ni ∈ N represents atom i of a molecule, and eij ∈ E represents the bond between atoms ni and nj.
Considering molecules as graphs simplifies the representation by defining a few types of nodes and bonds and few parameters to learn. We also adopt r-radius subgraphs [26], which improve representation learning compared with considering only immediate neighboring nodes. In an r-radius subgraph, for graph G = (N, E), the set of all nodes within a radius r of node i is represented as S(i,r), and the subgraph of r-radius nodes ni is defined as
$$n_{i}^{\left( r \right)} = \left( {N_{i}^{\left( r \right)} ,E_{i}^{\left( r \right)} } \right),$$
where \(N_{i}^{\left( r \right)} = \left\{ {n_{j} \left| {j \in S\left( {i,r} \right)} \right.} \right\}\) and \(E_{i}^{\left( r \right)} = \left\{ {e_{mn} {|}\left( {m,n} \right) \in S\left( {i,r} \right) \times S\left( {i,r - 1} \right)} \right\}\). The subgraph for the r-radius edges is defined as
$$e_{ij}^{\left( r \right)} = \left( {N_{i}^{{\left( {r - 1} \right)}} \cup N_{j}^{{\left( {r - 1} \right)}} ,E_{i}^{{\left( {r - 1} \right)}} \cup E_{j}^{{\left( {r - 1} \right)}} } \right).$$
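As a purely illustrative aid, the index sets S(i, r) that define these subgraphs can be obtained from the molecular graph's adjacency structure by breadth-first search. The following plain-Python sketch (the helper name and the toy adjacency are ours, not part of the original implementation) shows one way to compute them.

```python
from collections import deque

def nodes_within_radius(adjacency, i, r):
    """Return S(i, r): all node indices within graph distance r of node i.

    adjacency: dict mapping a node index to the list of its neighboring node indices.
    """
    distance = {i: 0}
    queue = deque([i])
    while queue:
        node = queue.popleft()
        if distance[node] == r:
            continue  # do not expand beyond radius r
        for nb in adjacency[node]:
            if nb not in distance:
                distance[nb] = distance[node] + 1
                queue.append(nb)
    return set(distance)

# Toy example: a three-atom chain 0-1-2 with r = 1
adjacency = {0: [1], 1: [0, 2], 2: [1]}
print(nodes_within_radius(adjacency, 1, 1))  # {0, 1, 2}
```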
An embedded vector is assigned for the r-radius node and r-radius edge, which are randomly initialized, and backpropagation is used for training. To update the node information with respect to its neighborhood, the transition functions in Eqs. (3) and (4) are used. At time step t of a given graph with random embeddings of nodes and edges, n(t) represents a node in Eq. (3) and e(t) represents an edge in Eq. (4). The updated vectors are defined as
$$n_{i}^{{\left( {t + 1} \right)}} = \sigma \left( {n_{i}^{\left( t \right)} + \mathop \sum \limits_{j \in S\left( i \right)} p_{ij}^{\left( t \right)} } \right),$$
where σ is the sigmoid function [27] and \(p_{ij}^{\left( t \right)} = f\left( {W_{{{\text{neighbor}}}} \left[ {\begin{array}{*{20}c} {n_{j}^{\left( t \right)} } \\ {e_{ij}^{\left( t \right)} } \\ \end{array} } \right] + b_{{{\text{neighbor}}}} } \right)\) is a neural network with f being a ReLU (rectified linear unit) activation function [27] and Wneighbor and bneighbor being a weight matrix and bias vector, respectively, at time step t. In the same iteration, the edges are updated as follows:
$$e_{ij}^{{\left( {t + 1} \right)}} = \sigma \left( {e_{ij}^{\left( t \right)} + q_{ij}^{\left( t \right)} } \right),$$
where function q is the neural network model for the edges and \(q_{ij}^{\left( t \right)} = f\left( {W_{{{\text{side}}}} \left( {n_{i}^{\left( t \right)} + n_{j}^{\left( t \right)} } \right) + b_{{{\text{side}}}} } \right)\). The transition functions generate an updated set of nodes \({\varvec{N}} = \left\{ {{\varvec{n}}_{1}^{{\left( {\varvec{t}} \right)}} ,{\varvec{n}}_{2}^{{\left( {\varvec{t}} \right)}} ,..,{\varvec{n}}_{{\left| {\varvec{N}} \right|}}^{{\left( {\varvec{t}} \right)}} } \right\}\), where |N| is the number of nodes in the molecular graph. Finally, the molecular representation vector is given by
$$y_{{{\text{molecule}}}} = \frac{1}{\left| N \right|}\mathop \sum \limits_{i = 1}^{\left| N \right|} n_{i}^{\left( t \right)} .$$
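A minimal NumPy sketch of the node transition (the update of Eq. (3), with p_ij as defined above) and the mean readout that yields y_molecule is given below. The embedding dimension, the weight matrix W_neighbor, and the bias b_neighbor are illustrative placeholders rather than the configuration used here, and the edge embeddings are kept fixed for brevity instead of being updated via Eq. (4).

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gnn_step(nodes, edges, adjacency, W_neighbor, b_neighbor):
    """One transition step: update each node from its neighbors and incident edges.

    nodes: (num_nodes, d) array of node embeddings.
    edges: dict mapping a sorted atom pair (i, j) to a (d,) edge embedding.
    """
    updated = np.empty_like(nodes)
    for i, neighbors in adjacency.items():
        # p_ij = ReLU(W_neighbor [n_j ; e_ij] + b_neighbor), summed over neighbors j
        p_sum = sum(
            relu(W_neighbor @ np.concatenate([nodes[j], edges[(min(i, j), max(i, j))]])
                 + b_neighbor)
            for j in neighbors
        )
        updated[i] = sigmoid(nodes[i] + p_sum)
    return updated

def readout(nodes):
    """y_molecule: the mean of the final node embeddings."""
    return nodes.mean(axis=0)

# Toy molecule with 3 atoms and embedding dimension 4
d = 4
rng = np.random.default_rng(0)
nodes = rng.normal(size=(3, d))
edges = {(0, 1): rng.normal(size=d), (1, 2): rng.normal(size=d)}
adjacency = {0: [1], 1: [0, 2], 2: [1]}
W, b = rng.normal(size=(d, 2 * d)), np.zeros(d)
for _ in range(2):  # two transition steps
    nodes = gnn_step(nodes, edges, adjacency, W, b)
print(readout(nodes).shape)  # (4,)
```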
CNNs are DNNs that are also effective for analyzing protein sequences. Because a CNN uses a weight-sharing strategy to capture local patterns in data, it is suitable for studying sequences such as DNA (deoxyribonucleic acid), where convolution filters can detect short repeating patterns (motifs) that may have a biological function. The proposed deep CNN consists of stacked convolutional and pooling layers that extract features from the sequence at various scales, followed by a fully connected layer that aggregates whole-sequence information to extract protein features. Each CNN layer applies a linear transformation to the previous output, multiplying it by a weight matrix, and then a nonlinear transformation. The weight matrices are learned during training to minimize prediction errors. The base layer of the CNN model is a convolutional layer that computes a one-dimensional operation with a specific number of kernels (weight matrices whose outputs are transformed by ReLU activation). The CNN for proteins maps sequence P into vector y using multiple filters. The first CNN layer is applied to proteins, where the n-gram (n = 3) technique represents amino acids as words: each group of three overlapping amino acids forms a word of input sequence P. The convolution for input protein sequence P is defined as
$$convolution\left( P \right)_{ik}^{\left( t \right)} = ReLU\left( {\mathop \sum \limits_{m = 0}^{M - 1} \mathop \sum \limits_{n = 0}^{N - 1} W_{mn}^{k} P^{\left( t \right)}_{i + m,n} } \right),$$
where i is the index of the output position, k is the index of the kernels, and Wk is an M × N weight matrix with M windows and N input channels. Then, a max-pooling layer reduces the size of the input or hidden layers by choosing the maximally activated neuron from a convolutional layer. Accordingly, for the CNN to be independent of the length of the protein sequence, max-pooling is applied when the maximally activated neuron is selected from the convolutional layer. Consequently, the number of hidden neurons generated by the convolution filter is the same as that of filters and not affected by the length of the input. For input Q, pooling is defined as
$$pooling\left( Q \right)_{ik}^{\left( t \right)} = {\text{max}}\left( {Q_{iM,k} ,Q_{{\left( {iM + 1,k} \right)}} , \ldots ,Q_{{\left( {iM + M - 1,k} \right)}} } \right).$$
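The protein branch described above can be sketched in PyTorch as follows; the vocabulary size, embedding width, number of filters, and window size are assumed values chosen for illustration, not the hyperparameters of the model evaluated here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def ngrams(sequence, n=3):
    """Split a protein sequence into overlapping n-grams ('words')."""
    return [sequence[i:i + n] for i in range(len(sequence) - n + 1)]

class ProteinCNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=32, num_filters=64, window=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, num_filters, kernel_size=window, padding=window // 2)

    def forward(self, token_ids):          # token_ids: (batch, seq_len)
        x = self.embed(token_ids)          # (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)              # Conv1d expects (batch, channels, seq_len)
        x = F.relu(self.conv(x))           # (batch, num_filters, seq_len)
        # Max-pooling over the sequence axis: the output size is independent of seq_len.
        return x.max(dim=2).values         # (batch, num_filters)

# Toy usage with a hypothetical 3-gram vocabulary of size 8000
model = ProteinCNN(vocab_size=8000)
tokens = torch.randint(0, 8000, (2, 120))  # two sequences of 120 trigram tokens
print(model(tokens).shape)                 # torch.Size([2, 64])
```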
Bidirectional RNN
An RNN is another type of DNN. Unlike a CNN, the connections between the RNN units form a directed cycle that creates an internal state of the network, allowing it to exhibit dynamic temporal or spatial behavior. A bidirectional LSTM is a variant of the RNN that combines the outputs of two RNNs to process a sequence both from left to right and from right to left. Instead of regular hidden units, the two proposed RNNs contain LSTM layers, which are network units that can remember a value over an arbitrary time interval. A bidirectional LSTM can capture long-term dependencies and has been effective for various machine learning applications. Bidirectional gated recurrent units (GRUs) are an alternative to bidirectional LSTMs that represent sequential input without using separate memory units [28]. We use both LSTM and GRU to show that adding a recurrent structure after the CNN increases performance.
The V features provided by the pooling layer form a sequence \({\varvec{x}} = \left\{ {{\varvec{x}}_{1} ,{\varvec{x}}_{2} , \ldots ,{\varvec{x}}_{{\varvec{V}}} } \right\}\), which serves as input to a two-layer bidirectional neural network. The bidirectional LSTM layers are updated at step v through forward and backward processing as follows:
$$\begin{aligned} \vec{i}_{t} & = \sigma \left( {\vec{W}_{i} \left[ {x_{t} ,\vec{h}_{t - 1} } \right] + \vec{b}_{i} } \right), \\ \vec{f}_{t} & = \sigma \left( {\vec{W}_{f} \left[ {x_{t} ,\vec{h}_{t - 1} } \right] + \vec{b}_{f} } \right), \\ \vec{o}_{t} & = \sigma \left( {\vec{W}_{o} \left[ {x_{t} ,\vec{h}_{t - 1} } \right] + \vec{b}_{o} } \right), \\ \vec{g}_{t} & = tanh\left( {\vec{W}_{c} \left[ {x_{t} ,\vec{h}_{t - 1} } \right] + \vec{b}_{c} } \right), \\ \vec{c}_{t} & = \vec{f}_{t} \odot \vec{c}_{t - 1} + \vec{i}_{t} \odot \vec{g}_{t} , \\ \vec{h}_{t} & = \vec{o}_{t} \odot tanh\left( {{ }\vec{c}_{t} } \right). \\ \end{aligned}$$
At time t, → and ← indicate the calculation direction, i is the input gate, f is the forget gate, o is the output gate, g is the cell-input (modulation) gate, c is the cell state, h is the hidden state at time t, Wi, Wf, Wo, and Wc are weight matrices for their corresponding gates, and \(\odot\) denotes elementwise multiplication. The equivalent bidirectional GRU is defined as follows:
$$\begin{aligned} \vec{z}_{t} & = \sigma \left( {\vec{W}_{z} \left[ {x_{t} ,\vec{h}_{t - 1} } \right] + \vec{b}_{z} } \right), \\ \vec{r}_{t} & = \sigma \left( {\vec{W}_{r} \left[ {x_{t} ,\vec{h}_{t - 1} } \right] + \vec{b}_{r} } \right), \\ \overrightarrow {{\tilde{h}}}_{t} & = tanh\left( {\vec{W}_{h} \left[ {x_{t} ,\vec{r}_{t} \odot \vec{h}_{t - 1} } \right]} \right), \\ \vec{h}_{t} & = \left( {1 - \vec{z}_{t} } \right) \odot \vec{h}_{t - 1} + \vec{z}_{t} \odot \overrightarrow {{\tilde{h}}}_{t} , \\ \end{aligned}$$
where r is the reset gate and z is the update gate. The LSTM/GRU encodes \(\vec{\user2{h}}_{{\varvec{t}}} \user2{ }\) along the left direction of the embedded protein at position t. As both the left and right directions are important for the global structure of proteins, we use a bidirectional LSTM (or bidirectional GRU). The bidirectional layers encode each position into leftward and rightward representations. H is the output, which is the sum of the results along both directions:
$$H = \vec{W}_{h} \vec{h}_{t} +\overleftarrow {W} _{h}\overleftarrow {h} _{t},$$
where H is a set of hidden vectors \(H = \left\{ { h_{1}^{{\prime}\;(t)} , h_{2}^{{\prime}\;(t)} , \ldots , h_{\left| V \right|}^{{\prime}\;(t)}} \right\}\) obtained from the bidirectional LSTM/bidirectional GRU output. The protein vector representation is given by
$$y_{{{\text{protein}}}} = \frac{1}{\left| V \right|}\mathop \sum \limits_{i = 1}^{\left| V \right|} h_{i}^{{\prime}\;\left( t \right)} .$$
The vectors are concatenated to obtain output vector \({\text{Out}} = W_{{{\text{out}}}} \left[ {y_{{{\text{molecule}}}} ;y_{{{\text{protein}}}} } \right] + b_{{{\text{out}}}}\), which is the input of a classifier, where \({\varvec{W}}_{{{\mathbf{out}}}}\) is a weight matrix and \({\varvec{b}}_{{{\varvec{out}}}}\) is a bias vector. Finally, softmax activation is applied to the vector Out = [\({\varvec{y}}_{0} ,{\varvec{y}}_{1}\)] to predict a binary label indicating whether a CPI exists.
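The following PyTorch sketch shows how the molecule vector and the protein features might be combined with a two-layer bidirectional LSTM, mean pooling, concatenation, and softmax, as described above. All dimensions, and the use of nn.LSTM as the recurrent block, are placeholder choices for illustration rather than the exact GCRNN configuration.

```python
import torch
import torch.nn as nn

class CPIClassifier(nn.Module):
    """Combine the molecule vector (GNN) with protein features (CNN + BiLSTM)."""

    def __init__(self, mol_dim=64, feat_dim=64, hidden=64, num_classes=2):
        super().__init__()
        self.bilstm = nn.LSTM(feat_dim, hidden, num_layers=2,
                              batch_first=True, bidirectional=True)
        self.out = nn.Linear(mol_dim + 2 * hidden, num_classes)

    def forward(self, y_molecule, protein_features):
        # protein_features: (batch, V, feat_dim) sequence from the CNN/pooling stage
        h, _ = self.bilstm(protein_features)        # (batch, V, 2 * hidden)
        y_protein = h.mean(dim=1)                   # average the hidden vectors
        out = self.out(torch.cat([y_molecule, y_protein], dim=1))
        return torch.softmax(out, dim=1)            # probabilities for interaction / no interaction

# Toy usage
model = CPIClassifier()
y_mol = torch.randn(2, 64)
prot = torch.randn(2, 30, 64)
print(model(y_mol, prot).shape)  # torch.Size([2, 2])
```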
The dataset is available from the corresponding author; it is the same dataset described in https://doi.org/10.1093/bioinformatics/btv256.
GCRNN: Graph convolutional recurrent neural network
CNN: Convolutional neural network
LSTM: Long short-term memory
Bi-LSTM: Bi-directional LSTM
GRU: Gated recurrent units
Bi-GRU: Bi-directional GRU
CPI: Compound–protein interaction
DNN: Deep neural network
RNN: Recurrent neural network
GNN: Graph neural network
DNA: Deoxyribonucleic acid
TP: True positive
TN: True negative
FN: False negative
ReLU: Rectified linear unit
Meng Y, Yi SH, Kim HC. Health and wellness monitoring using intelligent sensing technique. J Inf Process Syst. 2019;15(3):478–91.
Zong N, Kim H, Ngo V, Harismendy O. Deep mining heterogeneous networks of biomedical linked data to predict novel drug–target associations. Bioinformatics. 2017;33(15):2337–44.
Nascimento AC, Prudêncio RB, Costa IG. A multiple kernel learning algorithm for drug-target interaction prediction. BMC Bioinform. 2016;17(1):46.
Li ZC, Huang MH, Zhong WQ, Liu ZQ, Xie Y, Dai Z, Zou XY. Identification of drug-target interaction from interactome network with 'guilt-by-association' principle and topology features. Bioinformatics. 2016;32(7):1057–64.
Shi JY, Yiu SM, Li Y, Leung HC, Chin FY. Predicting drug–target interaction for new drugs using enhanced similarity measures and super-target clustering. Methods. 2015;83:98–104.
Hao M, Wang Y, Bryant SH. Improved prediction of drug-target interactions using regularized least squares integrating with kernel fusion technique. Anal Chim Acta. 2016;909:41–50.
Hamanaka M, Taneishi K, Iwata H, Ye J, Pei J, Hou J, Okuno Y. CGBVS-DNN: prediction of compound-protein interactions based on deep learning. Mol Inf. 2017;36(1–2):1600045.
Wan F, Zeng JM. Deep learning with feature embedding for compound-protein interaction prediction. bioRxiv. 2016;11:086033.
Bleakley K, Yamanishi Y. Supervised prediction of drug–target interactions using bipartite local models. Bioinformatics. 2009;25(18):2397–403.
Mullin R. And now: the drug plant of the future. Chem Eng News. 2017;95(21):22–4.
Fleming N. Computer-calculated compounds. Nature. 2018;557(7707):S55–7.
Alom MZ, Taha TM, Yakopcic C, Westberg S, Sidike P, Nasrin MS, Hasan M, Van Essen BC, Awwal AAS, Asari VK. A state-of-the-art survey on deep learning theory and architectures. Electronics. 2019;8(3):292. https://doi.org/10.3390/electronics8030292.
Om K, Boukoros S, Nugaliyadde A, McGill T, Dixon M, Koutsakis P, Wong KW. Modelling email traffic workloads with RNN and LSTM models. HCIS. 2020;10(1):1–6. https://doi.org/10.1186/s13673-020-00242-w.
Jacob L, Vert JP. Protein-ligand interaction prediction: an improved chemogenomics approach. Bioinformatics. 2008;24(19):2149–56.
Jones D, Kim H, Zhang X, Zemla A, Stevenson G, Bennett WD, Kirshner D, Wong S, Lightstone F, Allen JE. Improved protein-ligand binding affinity prediction with structure-based deep fusion inference. arXiv preprint. 2020. arXiv:2005.07704.
Tsubaki M, Tomii K, Sese J. Compound–protein interaction prediction with end-to-end learning of neural networks for graphs and sequences. Bioinformatics. 2019;35(2):309–18.
Zhou J, Troyanskaya OG. Predicting effects of noncoding variants with deep learning-based sequence model. Nat Methods. 2015;12(10):931–4.
Sundermeyer M, Alkhouli T, Wuebker J, Ney H. Translation modeling with bidirectional recurrent neural networks. In: Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP); 2014. pp. 14–25.
Sokolova M, Lapalme G. A systematic analysis of performance measures for classification tasks. Inf Process Manag. 2009;45:427–37.
Burley SK, Berman HM, Kleywegt GJ, Markley JL, Nakamura H, Velankar S. Protein Data Bank (PDB): the single global macromolecular structure archive. In: Wlodawer A, Dauter Z, Jaskolski M, editors. Protein crystallography. New York: Humana Press; 2017. p. 627–41.
Liu H, Sun J, Guan J, Zheng J, Zhou S. Improving compound–protein interaction prediction by building up highly credible negative samples. Bioinformatics. 2015;31(12):i221–9.
The RDKit book. https://www.rdkit.org/docs/RDKit_Book.html. Accessed 02 June 2020.
Quang D, Xie X. DanQ: a hybrid convolutional and recurrent deep neural network for quantifying the function of DNA sequences. Nucleic Acids Res. 2016;44(11):e107.
Ketkar N. Introduction to pytorch. In: Ketkar N, editor. Deep learning with python. Berkeley: Apress; 2017. p. 195–208.
Protein Data Bank (PDB). https://www.rcsb.org/. Accessed 1 June 2020.
Costa F, De Grave K. Fast neighborhood subgraph pairwise distance kernel. In: ICML; 2010.
Zhang C, Woodland PC. Parameterised sigmoid and ReLU hidden activation functions for DNN acoustic modelling. In: Sixteenth annual conference of the International Speech Communication Association; 2015.
Dhingra B, Liu H, Yang Z, Cohen WW, Salakhutdinov R. Gated-attention readers for text comprehension. arXiv preprint. 2016. arXiv:1606.01549.
About this supplement
This article has been published as part of BMC Bioinformatics Volume 22 Supplement 5 2021: Proceedings of the International Conference on Biomedical Engineering Innovation (ICBEI) 2019-2020. The full contents of the supplement are available at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-22-supplement-5.
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2019R1F1A1058394). Publication costs are funded by a project administered by Jeong-Dong Kim (JD.K). In this work, JD.K contributed to conceptualization, methodology, validation, review and editing, and supervision.
Department of Computer Science and Engineering, Sun Moon University, Asan, 31460, South Korea
Ermal Elbasani, Soualihou Ngnamsie Njimbouom, Hyun Lee & Jeong-Dong Kim
Department of Artificial Intelligence and Software Technology, Sun Moon University, Asan, 31460, South Korea
Eung-Hee Kim
Genome-Based BioIT Convergence Institute, Sun Moon University, Asan, 31460, South Korea
Tae-Jin Oh & Jeong-Dong Kim
Department of Pharmaceutical Engineering and Biotechnology, Sun Moon University, Asan, 31460, South Korea
Tae-Jin Oh
Department of BT-Convergent Pharmaceutical Engineering, Sun Moon University, Asan, 31460, South Korea
Ermal Elbasani
Soualihou Ngnamsie Njimbouom
Hyun Lee
Jeong-Dong Kim
Author contributions are as follows: conceptualization: EE and JDK; methodology: EE, JDK, SNN; software: TJO, HL, SNN; validation: JDK, EHK, HL, and TJO; formal analysis: EHK; investigation: HL, EHK; resources: EE, EHK; data curation: JDK; writing—original draft preparation: EE; writing—review and editing: EE, JDK; visualization: EE, SNN, EHK; supervision: HL, TJO, JDK. All authors have read and approved the final manuscript.
Correspondence to Jeong-Dong Kim.
The authors declare that they have no competing interests.
Elbasani, E., Njimbouom, S.N., Oh, TJ. et al. GCRNN: graph convolutional recurrent neural network for compound–protein interaction prediction. BMC Bioinformatics 22, 616 (2021). https://doi.org/10.1186/s12859-022-04560-x
Protein compound interaction
Bi-LSTM
Bi-GRU
Is Newton's 3rd law of motion not applicable to gravitational force?
Newton's law of gravitation states:
Every particle attracts every other particle in the universe with a force that is directly proportional to the product of their masses and inversely proportional to the square of the distance between the centers.
and it can be mathematically expressed as $$F=G\ \frac{m_1\ m_2}{r^2}$$
Newton's 3rd law of motion states:
When two bodies interact, they apply forces to one another that are equal in magnitude and opposite in direction.
Consider a scenario where there's an object of mass 1 kg near Earth's surface.
Let's assume that Newton's 3rd law of motion isn't acting for now.
As per the law of gravitation, the Earth pulls the object with a force of approx. 9.8 N, and the object also pulls the Earth with a force of 9.8 N. It isn't that only the Earth pulls the object; both the object and the Earth pull each other with a force of the same magnitude, calculated from the above-stated formula.
So, the object and Earth each are experiencing a force of 9.8 N from each other.
Now, let's think that Newton's 3rd law of motion starts acting.
As the object was pulling Earth with a force, Earth now in turn pulls the object with the same magnitude of force. Thus, the object now experiences 9.8 N (gravitational force) + 9.8 N (reaction force from the object pulling Earth) = 19.6 N (net force experienced) and similarly, Earth also experiences 19.6 N of net force from the object.
So, when the 3rd law of motion is in action, the object and Earth each should experience 19.6 N of force from each other.
In reality, this is not what we observe.
We see that an object of 1 kg accelerates at only 9.8 m/s^2 near Earth's surface and not 19.6 m/s^2 as it should if the 3rd law of motion was acting. That means that the object experiences only 9.8 N of force from Earth and this matches the situation before the 3rd law of motion was acting in our scenario.
Does that mean that Newton's 3rd law of motion doesn't apply to gravitational force?
Am I thinking something wrong?
newtonian-mechanics
newtonian-gravity
edited Nov 26, 2021 at 6:46
Silica19
So, the total force experienced by the Earth and the object by each other is approx. 19.6 N.
I guess you are free to consider that a sum of the magnitudes of the two different forces, but it is unclear to me how that would help you. The forces still act on different bodies. The earth has a force of 9.8N on it, and the object has a force of 9.8N.
We observe that an object of 1 kg accelerates at only 9.8 m/s^2 near Earth's surface and not at 19.6 m/s^2; this means that the force experienced by a 1 kg object is 9.8 N near Earth's surface.
Yes. In fact that's exactly how you began the scenario. We can compute the force on one object and find that to be the magnitude.
You don't get to add up the magnitudes of two forces on two different objects in two different directions and think that magnitude applies to one object in one direction.
That's a tough ask. We're used to forces arising from a coupled interaction. Both sides of the couple experience this interaction as a force.
That's exactly what we'd expect normally. How does this create a situation where Newton's law "isn't acting"?
Newton's 3rd law doesn't describe some additional force that pops into existence. It just says that if something creates a force (like gravity between two objects), that force is created on both (in opposite directions). That it's not possible to create a "one-way" force.
The gravitational force of 9.8N on both objects is consistent with the 3rd law.
BowlOfRed
I would like to put a huge thumbs-up 👍 on your last paragraph: "You don't get to add up the magnitudes of two forces on two different objects in two different directions and think that magnitude applies to one object in one direction."
– md2perpe
I guess you've misunderstood my question. I've made my question clearer. Please check.
– Silica19
Nov 26, 2021 at 6:47
I've added a bit.
– BowlOfRed
$\begingroup$ "reaction" forces aren't different kinds of forces. They're just regular forces that Newton's 3rd says will always exist. The force is due to gravitation. It also happens to be paired with another gravitational force. You seem to be saying that this is different for gravitation, but it is identical to other situations. If I had a spring pulling two objects, the forces would be equal and opposite but both caused by the spring. One is not a magical "reaction" force. $\endgroup$
$\begingroup$ That sounds perfect! $\endgroup$
I think your confusion comes from 'double counting' the effect of the third law. Newton's law of gravitation states that all objects attract each other with the force $$F = G \frac{m_1 m_2}{r^2} \, .$$ Because this force is symmetric in $m_1$ and $m_2$, it means that both objects 1 and 2 attract each other with the same force. The gravitational law is therefore consistent with Newton's third law of motion.
The statement of the third law of motion is not to miraculously copy forces from one body to the other, it simply constrains the form of physically possible force laws. For example, the third law tells you that a hypothetical gravitational law, such as "the heavier object attracts the lighter one, but the lighter object does not attract the heavier one" is not physical (even though it appears to be true in day-to-day life).
leapsheep
Newton's 3rd law of motion involves an action-reaction pair of forces. What is the action force and the reaction force in this case?
@silica19 That is an arbitrary distinction. You could say that particle 1 attracts particle 2, and that particle 2 "reacts" by attracting particle 1. Or you could view it the other way around. The third law states that both viewpoints are equivalent.
– leapsheep
I failed to understand that the two forces involved as action-reaction are not 'action' and 'reaction'; they are both forces acting simultaneously on their own. For example, if 2 balls are colliding, it isn't that the 1st ball collides with the 2nd; both balls collide with each other. But I have another question: Imagine 2 persons in deep space in front of each other that push each other simultaneously with 10 N of force using their hands. In this case, what will be the net force experienced by each person? Also, how is this condition different from the gravitational force between objects?
Both of them will feel $10 N$. As soon as they come in contact, both of them will be repelled by the same force, regardless of whether one or both of them intend to push the other.
Yes, I think you got it!
Dec 1, 2021 at 10:16
9.8 N of force causes a 1 kg object to accelerate at 9.8 m/s2.
But the earth has a mass of about $5.97\times10^{24}\ {\rm kg}$. So 9.8 N causes the Earth to accelerate toward the object at only about $1.64\times10^{-24}\ {\rm m/s^2}$.
Thus the relative acceleration you observe standing on the earth and watching the object fall toward it is only 9.80000000000000000000000164 m/s^2 (but of course it's not really exactly that, because the figure of 9.8 m/s^2 was never accurate to so many significant figures).
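A quick numerical check of these figures (using Python's decimal module so the tiny correction is not lost to floating-point rounding; the force and the Earth's mass are the usual rounded values):

```python
from decimal import Decimal, getcontext

getcontext().prec = 30        # enough significant digits to resolve the tiny correction

F = Decimal("9.8")            # N, force on the 1 kg object (and, by the 3rd law, on Earth)
m_object = Decimal("1")       # kg
m_earth = Decimal("5.97e24")  # kg

a_object = F / m_object       # 9.8 m/s^2
a_earth = F / m_earth         # ~1.64e-24 m/s^2
print(a_earth)                # 1.64154...E-24
print(a_object + a_earth)     # 9.80000000000000000000000164154...
```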
The Photon
There's nothing special about gravity.
The force on a mass $m_1$ at $\vec{r}_1$ due to a mass $m_2$ at $\vec{r}_2$ is $\frac{-Gm_1m_2}{|\vec{r}_1-\vec{r}_2|^3}(\vec{r}_1-\vec{r}_2)$. (The power in the denominator is $3$ rather than $2$, because the vector outside the fraction isn't a unit vector.) Similarly, the force on $m_2$ due to $m_1$ is $\frac{-Gm_2m_1}{|\vec{r}_2-\vec{r}_1|^3}(\vec{r}_2-\vec{r}_1)=\frac{Gm_1m_2}{|\vec{r}_1-\vec{r}_2|^3}(\vec{r}_1-\vec{r}_2)$. This is $-1$ times the former force, as per Newton's third law.
To address your combining-accelerations ambition, let's put gravity aside for a moment. Suppose a body of mass $m_1$ experiences a force $\vec{F}$ due to a body of mass $m_2$, for whatever reason. Then $m_1$ has acceleration $\vec{F}/m_1$. By Newton's third law, $m_2$ experiences a force $-\vec{F}$, giving it the acceleration $-\vec{F}/m_2$. The relative acceleration is $\vec{F}/m_1+\vec{F}/m_2=\vec{F}/\mu$ with $\mu:=\frac{m_1m_2}{m_1+m_2}$ the reduced mass of the two bodies.
J.G.
According to the law of gravitation, the object and the Earth will apply an approximate force of 9.8 N to each other.
...according to Newton's 3rd law of motion, as the object pulls the Earth, the Earth also pulls the object with the same magnitude of force (9.8 N) and vice-versa.
This is not a helpful way to think about the situation; it is not even wrong.
But in actual, this is not the case.
It is the case that each feels a 9.8N force due to the other
We observe that an object of 1 kg accelerates at only 9.8 m/s^2 near Earth's surface and not at 19.6 m/s^2;
Yes. A 9.8N force on a 1kg object leads to a 9.8m/s^2 acceleration.
this means that the force experienced by a 1 kg object is 9.8 N near Earth's surface.
Does this mean that Newton's 3rd law of motion is not applicable to gravitational force?
The force on the 1kg object is 9.8N. The acceleration of the 1kg object is 9.8m/s^2.
The force on the 5.972 × 10^24 kg earth is 9.8N. The acceleration of the 5.972 × 10^24 kg earth is about 1.64 x 10^-24 m/s^2 (which is so small you can't notice it).
hft
The question has been re-written a bit, but does not seem to have been made any clearer.
– hft
No body feels both forces. Each body (Earth and object) only feels the force exerted on it and not the force that it itself exerts.
It is a crucial part of Newton's 3rd law to realise that the two forces that make up the force-pair are not exerted on the same body. So there is no issue with the observation you make about the gained acceleration, and Newton's 3rd surely still does apply for gravitational forces as well.
Steeven
$\begingroup$ @silica19 "Thus, the object now experiences 9.8 N (gravitational force) + 9.8 N (reaction force from the object pulling Earth) = 19.6 N (net force experienced" This sentence is incorrect. When the object pulls in earth via gravity and earth also pulls in the object via gravity, then these two forces are the force action/reaction pair. So there are no further reaction forces to include via Newton's 3rd law. They are already all accounted for. $\endgroup$
– Steeven
When the object pulls on Earth via gravity and Earth also pulls on the object via gravity, then these two forces are the action/reaction pair. Why do you call these forces an action-reaction pair when they're the forces generated by those objects themselves? According to the law of gravitation, both objects should pull each other with their own forces.
@silica19 No, according to Newton's law of gravitation a gravitational force is exerted on both objects (it is equal but opposite) due to both objects being present. Not just due to one object. As the formula shows, both masses are involved. Thus, the gravitational force is not an object-specific force but a system-specific force. It only exists between two objects, never just due to one of them. And it acts on both with equal but opposite forces exactly as Newton's 3rd law expects.
I'm having a hard time understanding how you conclude that the forces applied by the object and Earth on each other are related to Newton's 3rd law of motion. I understand how gravitational force is a system-specific force but I don't understand how these forces act. I'm not able to observe the action and reaction pair in these forces because, as you said, it isn't that only Earth pulls the object and the object also pulls the Earth as a reaction force; the forces that they apply on each other are gravitational forces. Please explain how they are related to Newton's 3rd law of motion.
Quantitative Sobolev Extensions and the Neumann Heat Kernel for Integral Ricci Curvature Conditions
Olaf Post1,
Xavier Ramos Olivé2 &
Christian Rose ORCID: orcid.org/0000-0002-1078-07983
The Journal of Geometric Analysis volume 33, Article number: 70 (2023) Cite this article
We prove the existence of Sobolev extension operators for certain uniform classes of domains in a Riemannian manifold with an explicit uniform bound on the norm depending only on the geometry near their boundaries. We use this quantitative estimate to obtain uniform Neumann heat kernel upper bounds and gradient estimates for positive solutions of the Neumann heat equation assuming integral Ricci curvature conditions and geometric conditions on the possibly non-convex boundary. Those estimates also imply quantitative lower bounds on the first Neumann eigenvalue of the considered domains.
The first aim of this article is to prove the existence of Sobolev extension operators for domains with smooth boundary in a Riemannian manifold whose norms depend only on the geometry near the boundary. To the best of our knowledge, we give the first explicit construction, giving a quantitative bound on the norm, that depends only on sectional and principal curvature assumptions. Such an extension operator provides a tool for geometric applications, especially when working on classes of manifolds fulfilling certain geometric bounds. Our second aim is using these extension operators to derive quantitative upper bounds for the Neumann heat kernel, gradient estimates for positive solutions of the Neumann heat equation, and lower bounds on the first Neumann eigenvalue under \(L^p\)-Ricci curvature conditions for relatively compact domains with sufficiently regular possibly non-convex boundary.
Let \(M=(M^n,g)\) be a complete Riemannian manifold of dimension \(n\ge 2\) with possibly non-empty boundary \(\partial M\) and distance function d. Fix an open subset \(\Omega \subset M\) such that \({{\overline{\Omega }}} \ne M\) is a smooth manifold with boundary. A linear and bounded operator \(E_\Omega :H^1(\Omega )\rightarrow H^1(M)\) is called extension operator for \(\Omega \), if it is a bounded right inverse for the restriction operator of \(\Omega \), i.e., if \(E_\Omega \) satisfies
$$\begin{aligned} E_\Omega u{\restriction }_\Omega = u,\quad u\in H^1(\Omega ), \end{aligned}$$
and has bounded operator norm. Such extension operators have a long history, starting with the seminal work by Whitney [34] for extension operators of class \(C^k\). It is well known, cf., e.g., Stein's monograph [31, Thm. 5, Sec. VI.3.1], that for a domain \(\Omega \subset {\mathbb {R}}^n\) with Lipschitz boundary, such a Sobolev extension operator \(E_\Omega \) exists. Its norm depends implicitly on the Lipschitz constants of the charts as well as other properties of the atlas of \(\partial \Omega \). Therefore, this construction does not imply the existence of extension operators for classes of subsets of manifolds whose norms depend only on curvature restrictions. Sobolev extension operators especially for finite sets are constructed in [10], see also the references therein. The name "extension operator" is also used in a slightly different context, namely as a bounded right inverse of a Sobolev trace operator, i.e., of a restriction of functions to proper submanifolds, see, e.g., [12, 14] and the references therein.
However, to the best of our knowledge, apart from Stein's result for subsets in \({\mathbb {R}}^n\) mentioned above, existing constructions of extension operators for \(\Omega \subset M\) in the sense of (1.1) do not focus on quantitative bounds on \(\Vert E_\Omega \Vert \). Our aim is to construct extension operators whose operator norms depend only on geometric quantities such as bounds on the second fundamental form and the sectional curvature in a suitable tubular neighborhood of \(\partial \Omega \). Therefore, we introduce the following class of subsets of a manifold.
Definition 1.1
Let M be a Riemannian manifold of dimension \(n\ge 2\), \(r>0\) and \(H,K\ge 0\). An open subset \(\Omega \subset M\) is called (r, H, K)-regular if
\({{\overline{\Omega }}}\ne M\) is a connected smooth manifold with (smooth) boundary \(\partial \Omega \),
the exterior rolling r-ball condition: for any \(x\in \partial \Omega \), there is a point \(p \in M \setminus \Omega \) such that \(B(p,r) \subset M \setminus \Omega \) and \(\overline{B(p,r)}\cap \partial \Omega =\{x\}\);
the interior rolling r -ball condition: for any \(x\in \partial \Omega \), there is a point \(p \in \Omega \) such that \(B(p,r) \subset \Omega \) and \(\overline{B(p,r)}\cap \partial \Omega =\{x\}\);
the second fundamental form \(\textrm{II}\) w.r.t. the inward pointing normal of \(\partial \Omega \) satisfies \(-H \le \textrm{II} \le H\) (see Remark 1.3 (iv)),
the sectional curvature satisfies \(\textrm{Sec}\le K\) on the tubular neighborhood \(T(\partial \Omega ,r)\) of \(\partial \Omega \).
Our first main result is the following.
Theorem 1.2
Fix \(K,H\ge 0\), and a complete Riemannian manifold M of dimension \(n\ge 2\). There exists an explicitly computable \(r_0=r_0(K,H)>0\) such that for any \(r\in (0,r_0]\), there exists an explicitly computable constant \(C(r,K,H)>0\) and an extension operator \(E_\Omega :H^1(\Omega )\rightarrow H^1(M)\) satisfying
$$\begin{aligned} \Vert E_\Omega \Vert \le C(K,H,r) \end{aligned}$$
for any open (r, H, K)-regular subset \(\Omega \).
Our construction of an extension operator aims at implementing the curvature properties of \(\partial \Omega \) which are naturally encoded in the behavior of geodesics around \(\partial \Omega \). We will present a purely differential geometric approach and use a parametrization of the tubular neighborhood by geodesics perpendicular to \(\partial \Omega \), and control the behavior of the geodesics in terms of the geometric quantities given in \(T(\partial \Omega ,R)\). Then we use the reflection principle from [31] to construct extensions of functions and estimate the norm of the corresponding operator. Stein's approach for Lipschitz atlases, which yields a bound on the extension operator in terms of Lipschitz and covering constants of the atlas, uses a regularized distance depending on a Whitney cover. The regularized distance implements the Lipschitz properties of the chosen atlas of the boundary. Curvature quantities cannot be reflected by such a regularized distance, so this approach does not suffice for our purposes and has to be modified. We want to emphasize that we are not interested in the most general situation but in extension operators with norms depending not on the chosen atlas but on curvature quantities. In fact, it is impossible to recover Lipschitz constant estimates for some atlas only by curvature restrictions, such that Stein's result does not yield the bounds we are aiming for. On the other hand, we are aware of other approaches to extension operators using Whitney's ideas which even work in metric measure spaces, see, e.g., [3]. It would be interesting to see whether this yields an easier proof of our result, giving an extension operator with the desired curvature properties.
Remark 1.3
Although we assume completeness of M in the above theorem, the proof also applies for incomplete M and \(\Omega \subset M\) in any connected component \(M'\) satisfying the conditions of Theorem 1.2 such that \({{\overline{\Omega }}}\ne M'\) and such that \({\mathcal {C}}^1({{\overline{\Omega }}})\) is dense in \(H^1(\Omega )\).
We do not expect that our constant C(K, H, r) is sharp. If \(\Omega =B(0,R_0)\) is an open ball in \({\mathbb {R}}^n\), then we can choose \(r_0=r=R_0/2\) and \(H=1/R_0\). A simple scaling argument shows that \(\Vert E_\Omega \Vert \) is bounded from above by the norm of the extension operator on the unscaled ball B(0, 1) times \(R_0^{-1}=r^{-1}\).
The upper bound on the sectional curvature and the double sided bound on the second fundamental form ensure that the exponential map of \(\partial \Omega \) is well defined on \(T(\partial \Omega ,r)\) for \(r\le r_0\). The distance \(r_0\) is in fact bounded by the minimal focal distance of all points in \(\partial \Omega \).
Note that a lower bound on the second fundamental form w.r.t. a fixed direction is equivalent to an upper bound of the second fundamental form w.r.t. the opposite direction, so the second fundamental form \(\textrm{II}\) w.r.t. the outward pointing normal of \(\partial \Omega \) also satisfies \(-H \le \textrm{II} \le H\).
The upper bound for the sectional curvature on \(T(\partial \Omega ,r)\) and the double sided bounds for the principal curvatures on \(\partial \Omega \) can be replaced by two bounds on \(T(\partial \Omega ,r)\cap \Omega \) and on \(T(\partial \Omega ,r)\setminus \Omega \), separately: an upper bound for sectional curvature, and a lower bound for \(\textrm{II}\) w.r.t. the corresponding inward normal of \(\partial \Omega \) (which changes depending on which side of the tubular neighborhood you consider).
The interior rolling r-ball condition ensures that \(\Omega \) is "sufficiently thick": a ball of controllable size always fits into the interior, and geodesics emanating from different points stay unique up to r.
The exterior rolling r-ball condition, cf. Fig. 2, is indeed necessary: the proof of Theorem 1.2 relies on a particular parametrization, so-called Fermi-coordinates, of the tubular neighborhood. The exterior rolling r-ball condition prevents the tubular neighborhood from self-overlapping, ensuring that the parametrization and the extension operator are well defined. For a problematic case, see Fig. 1.
Fig. 1 A neighborhood not fulfilling the exterior rolling r-ball condition
Fig. 2 The interior/exterior rolling r-ball
Our main motivation to study extension operators relies on our interest in the Neumann heat equation of compact (sub-)manifolds having possibly non-convex boundary. Denote by \(\Delta \ge 0\) the Laplace–Beltrami operator on a Riemannian manifold M. Let u be a positive solution of the heat equation
$$\begin{aligned} \partial _t u =-\Delta u, \end{aligned}$$
where one assumes additionally \(\partial _\nu u=0\) in case \(\partial M\ne \emptyset \), and \(\nu \) the inward pointing normal. If \(D>0\), \(K\ge 0\), then it was shown in [17] that there are \(c_1,c_2,c_3>0\) such that for any compact M with convex smooth boundary, \({{\,\mathrm{\mathop {diam}}\,}}M\le D\) and Ricci curvature bounded from below by \(-K\), the solution u satisfies
$$\begin{aligned} c_1\vert \nabla \ln u\vert ^2-\partial _t \ln u\le c_2K+c_3t^{-1}, \quad t>0. \end{aligned}$$
From such a gradient estimate, Li and Yau deduced a Harnack inequality as well as an upper bound for the Neumann heat kernel h of M having convex boundary of the form
$$\begin{aligned} h_t(x,y)&\le c_1' {{\,\mathrm{\mathop {Vol}}\,}}(B(x,\sqrt{t}))^{-1/2}{{\,\mathrm{\mathop {Vol}}\,}}(B(y,\sqrt{t}))^{-1/2}\exp \left( c_2'Kt-\frac{d(x,y)^2}{c_3' t}\right) ,\nonumber \\&\qquad x,y\in M, t>0. \end{aligned}$$
Inequality (1.3), and in turn (1.4), has been generalized in [32] to compact manifolds M with smooth possibly non-convex boundary satisfying the interior rolling r-ball condition (cf. Definition 1.1 (iii) and also Remark 1.6), second fundamental form bounded from below, and Ricci curvature bounded from below by \(-K\), \(K\ge 0\). For an extensive treatment of Neumann heat kernel estimates on non-compact domains, see, e.g., [15].
During the last decades, there was an increasing interest in relaxing the uniform pointwise Ricci curvature lower bound to integral Ricci curvature bounds. Those provide estimates that are more stable under perturbations of the metric. In the following, we denote
$$\begin{aligned} \rho :M\rightarrow {\mathbb {R}}, \quad x\mapsto \min (\sigma ({{\,\mathrm{\mathop {Ric}}\,}}_x)), \end{aligned}$$
where the Ricci tensor \({{\,\mathrm{\mathop {Ric}}\,}}\) of M is viewed as a pointwise endomorphism on the cotangent bundle, and \(\sigma (A)\) denotes the spectrum of an operator A. For \(x\in {\mathbb {R}}\), the negative part of \(x \in {\mathbb {R}}\) will be denoted by \(x_-=\max (0,-x)\ge 0\). For a subset \(\Omega \subset M\), \(p>n/2\), and \(R>0\), we let
$$\begin{aligned} \kappa _{\Omega }(p,R){:}{=}\sup _{x\in \Omega }\left( \frac{1}{{{\,\mathrm{\mathop {Vol}}\,}}(B(x,R))}\int _{B(x,R)} \rho _-^p\textrm{dvol}\right) ^\frac{1}{p}, \end{aligned}$$
measuring the \(L^p\)-mean of the negative part of Ricci curvature uniformly in balls of radius R with center in \(\Omega \). It is convenient to work with the scale-invariant quantity \(R^2\kappa _\Omega (p,R)\). If M is complete with \(\partial M=\emptyset \), assuming \(\kappa _{M}(p,R)\) is small for \(p>n/2\) led to several analytic and geometric generalizations of results that depend on pointwise lower Ricci curvature bounds, see, e.g., [1, 8, 11, 19,20,21,22,23,24,25, 28, 35, 36].
If M is allowed to have non-empty boundary \(\partial M\ne \emptyset \) and small \(\kappa _M(p,R)\), \(p>n/2\), it is not known that a version of (1.3) can be derived by only adapting the techniques from [17, 32]. However, there is a way to obtain (1.3) for proper compact subsets of a manifold: recently, the second author [22] obtained a generalized version of (1.3) for positive solutions of (1.2) on compact submanifolds \(\Omega \subset M\) with smooth boundary, where the volume measure of M is globally volume doubling and \(\kappa _\Omega (p,{{\,\mathrm{\mathop {diam}}\,}}\Omega )\) is small. Note that the latter does not imply global volume doubling unless M is compact. Additionally, the obtained gradient estimate relies a priori on Gaußian upper bounds of the Neumann heat kernel (1.4), that were obtained in [7]. Note that such gradient estimates depend implicitly on the norm of a Sobolev extension operator of \(\Omega \).
We will prove a uniform, quantitative, and localized version: if \(\kappa _M(p,R)\) is small for some \(p>n/2\), we obtain quantitative Gaußian Neumann heat kernel upper bounds (1.4) in the spirit of [7] depending only on geometric conditions as well as a generalization of (1.3) for all compact \(\Omega \subset M\) with possibly non-convex boundary satisfying certain regularity conditions that does neither depend on global volume doubling nor on extension operators with unknown operator norm.
If M is complete and \(\partial M=\emptyset \), we denote by \(p\in {\mathcal {C}}^\infty ((0,\infty )\times M\times M)\) the heat kernel of M, that is, the minimal fundamental solution of (1.2). If \(\Omega \subset M\) is a non-empty relatively compact domain with smooth boundary \(\partial \Omega \ne \emptyset \), we let \(h^\Omega \in {\mathcal {C}}^\infty ((0,\infty )\times M\times M)\) be the Neumann heat kernel, i.e., the minimal fundamental solution of (1.2) subject to Neumann boundary conditions on \(\partial \Omega \).
Our second main theorem is the following (recall the definition of \(\kappa _M(p,R)\) in (1.5)).
Let M be a complete Riemannian manifold of dimension \(n\ge 2\), \(p>n/2\), \(R>0\), and \(K,H\ge 0\). There exists an explicitly computable \(r_0=r_0(H,K)>0\) sufficiently small (cf. Remark 1.6) such that for any \(r\in (0,r_0]\), there are explicitly computable constants \(C=C(n,p,r,R,H,K)>0\) and \(\varepsilon =\varepsilon (n,p,r,H,K)>0\) such that if
$$\begin{aligned} R^2\kappa _M(p,R)\le \varepsilon , \end{aligned}$$
then for any (r, H, K)-regular domain \(\Omega \subset M\) with \({{\,\mathrm{\mathop {diam}}\,}}\Omega \le R/2\), the Neumann heat kernel \(h^\Omega \) of \(\Omega \) satisfies
$$\begin{aligned} h^\Omega _t(x,x)\le \frac{C}{{{\,\mathrm{\mathop {Vol}}\,}}_\Omega (x,\sqrt{ t})}, \quad x\in \Omega ,\ t>0, \end{aligned}$$
where \({{\,\mathrm{\mathop {Vol}}\,}}_\Omega (B(x,s)){:}{=}{{\,\mathrm{\mathop {Vol}}\,}}(\Omega \cap B(x,s))\).
We prove this theorem via a careful analysis of the results obtained in [2, 7] and Theorem 1.2. We want to emphasize again that Theorem 1.4 applies to domains with non-convex boundary.
Theorem 1.4 provides a tool to prove quantitative gradient estimates of type (1.3) for integral Ricci curvature assumptions. Moreover, such estimates give the opportunity to provide quantitative lower bounds on the first (non-zero) Neumann eigenvalue of (r, H, K)-regular subsets. Recently, the third author and G. Wei obtained in [30] gradient and Neumann eigenvalue estimates assuming only the interior rolling r-ball condition, a lower bound on the second fundamental form, and a Kato-type condition on the negative part of the Ricci curvature defined by the Neumann heat semigroup. It is known that the latter condition is more general than assuming \(\kappa _M(p,R)\) is small. We refer to [4, 5, 26, 27, 29, 30] for more information about Kato-type curvature assumptions. While those results are very general, it is hard to check that the assumptions are indeed satisfied for compact manifolds with boundary and small \(\kappa _M(p,R)\). However, Theorem 1.4 gives such an opportunity for relatively compact subdomains, as the following corollary shows.
Let M be a complete Riemannian manifold of dimension \(n\ge 2\), \(p>n/2\), \(R>0\), and \(K,H\ge 0\). There exists an explicitly computable \(r_0=r_0(K,H)>0\) such that for any \(r\in (0,r_0]\) there are explicitly computable constants \(C_1=C_1(n,p,r,R,H,K)>0\), \(C_2=C_2(n,p,r,R,H,K)>0\), \(\varepsilon =\varepsilon (n,p,r,H,K)>0\), and a function \(J=J_{n,p,r,R,H,K}:(0,\infty )\rightarrow (0,\infty )\) such that if
then for any (r, H, K)-regular domain \(\Omega \subset M\) with \({{\,\mathrm{\mathop {diam}}\,}}\Omega \le R/2\), any positive solution of (1.2) with Neumann boundary conditions satisfies
$$\begin{aligned} J(t)\vert \nabla \ln u\vert ^2-\partial _t \ln u\le C_1+\frac{C_2}{tJ(t)}, \quad t>0. \end{aligned}$$
Moreover, there is an explicitly computable constant \(C_3=C_3(n,p,r,R,H,K)>0\) such that if \(\eta _1^\Omega \) denotes the first (non-zero) Neumann eigenvalue of \(\Omega \), we have
$$\begin{aligned} \eta _1^\Omega \ge C_3 {{\,\mathrm{\mathop {diam}}\,}}(\Omega )^{-2}. \end{aligned}$$
The above corollary generalizes many recent results on Neumann eigenvalues, e.g., [13].
As pointed out in [6, 32], the assumption that the interior rolling r-ball condition holds for \(r>0\) small enough in Corollary 1.5 depends implicitly on upper bounds on the sectional curvature in \(T(\partial \Omega ,r)\cap \Omega \) and the lower bound on the second fundamental form \(\textrm{II}\ge -H\) w.r.t. the inward pointing normal. More precisely, \(r\in (0,1)\) has to be chosen such that
$$\begin{aligned} \sqrt{ K}\tan \left( r\sqrt{ K }\right) \le \frac{1}{2}(1+H)\quad \text {and}\quad \frac{H}{\sqrt{ K}}\tan \left( r\sqrt{K}\right) \le \frac{1}{2}. \end{aligned}$$
Thus, the restriction on the sectional curvature in the tubular neighborhood is a natural assumption in our context.
The structure of this paper is as follows. In Sect. 2, we provide geometric criteria for the tubular neighborhood that imply conditions necessary to prove Theorem 1.2. To prove the main theorem, we establish a new estimate for norms of vector fields along geodesics perpendicular to hypersurfaces. Some proofs of auxiliary comparison estimates for hypersurfaces that we used in the proof can be found in the appendix. In Sect. 3, we construct an extension operator using Jacobi field techniques; this construction depends on geometric assumptions that may or may not be satisfied for a given manifold. In Sect. 4, we carefully adapt the main results of [2, 7] to our setting and prove Theorem 1.4. At the end, we provide proofs of Corollary 1.5 by showing that the Kato condition is indeed satisfied in our situation.
Geometry of Tubular Neighborhoods of Hypersurfaces
To prove Theorem 1.2, we need geometric assumptions to bound the norm of the extension operator, which will depend on the geometry of the tubular neighborhood of the boundary of the considered domain. Let M be a Riemannian manifold of dimension \(n\ge 2\), and \(N\subset M\) a hypersurface in M. In order to provide a quantitative estimate depending on curvature restrictions and the size of the tubular neighborhood, it is necessary to justify the regularity of the coordinate maps we use in our calculations. Our aim is to prove that under certain curvature restrictions in the tubular neighborhood of N, there exists \(r>0\) such that for any \(p\in N\),
$$\begin{aligned} \exp _pt\nu ,\quad t\in (-r,r) \end{aligned}$$
is non-singular. On one hand, \(\exp \) is non-singular only up to the cut-locus. On the other hand, geodesics emanating from different points starting in N must not intersect. If two such geodesics intersect at a point, this point is called a focal point. It is known that focal points appear no later than cut points [33]. Thus, we can restrict our investigation to the absence of focal points along any geodesic emanating from N. Below we provide estimates for the focal distance along a geodesic. Note that these are local considerations which do not focus on geodesics with starting points lying far away from each other inside the boundary.
Let \(\gamma :[0,l]\rightarrow M\) be a distance minimizing geodesic such that \(\gamma (0)\in N\) and \(\gamma '(0)\in (T_{\gamma (0)}N)^\perp \). Denote by \(\textrm{Sec}\) the sectional curvature of M and \(\textrm{II}\) the second fundamental form of N w.r.t. the unit normal \(\nu \) with the same orientation as \(\gamma '(0)\) at \(\gamma (0)\). As explained above, a point \(q\in \gamma \) is called a focal point if the exponential map at \(\gamma (0)\) is singular in q. The focal distance is bounded below in geometric terms by the following theorem.
Lemma 2.1
[33, Corollary 4.2] Let M, N, \(\gamma \) as above and \(H,K\in {\mathbb {R}}\). Suppose \(\textrm{II}\ge H\) in \(\gamma (0)\) and \(\textrm{Sec}\le K\). Then, there are no focal points along \(\gamma \) on \([0,\min (r_0,l))\), where \(r_0>0\) is the smallest positive number r such that one of the three conditions below is satisfied:
$$\begin{aligned} {\left\{ \begin{array}{ll} \cot (\sqrt{K}\,r)=\frac{H}{\sqrt{K}}, &{} \hbox { if}\ K>0,\\ r=\frac{1}{H}, &{} \hbox { if}\ K=0,\\ \coth (\sqrt{-K}\,r)=\frac{H}{\sqrt{-K}}, &{} \hbox { if}\ K<0. \end{array}\right. } \end{aligned}$$
Note that the equations (2.1) come from a comparison result with the first zero of the N-Jacobi field equation in a space of constant curvature for some hypersurface with constant second fundamental form. If no positive solution exists, there are no focal points along any geodesic. Warner's result ensures that a Fermi parametrization of the \(r_0\)-tubular neighborhood of N is well defined if we assume uniform upper bounded sectional curvature in the \(r_0\)-tubular neighborhood and uniform lower bounds for the principal curvatures of N. In our formulation above, we are also taking into account that the curve \(\gamma \) might only be defined up to length l, which may or may not be larger than \(r_0\) as defined by (2.1). This becomes useful in our application of the result below, where we restrict our attention to geodesics of length r, where r is the radius of the rolling r-balls.
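Purely as an illustration (not part of the argument), the smallest positive solution \(r_0\) of (2.1) can be evaluated in closed form; the sketch below returns infinity when no positive solution exists, in line with the remark above.

```python
import math

def focal_radius_bound(K, H):
    """Smallest positive r solving (2.1); math.inf if no positive solution exists.

    K: upper bound on the sectional curvature along the geodesic.
    H: lower bound on the second fundamental form at gamma(0), i.e. II >= H.
    """
    if K > 0:
        # cot(sqrt(K) r) = H / sqrt(K), with arccot(x) = pi/2 - arctan(x) in (0, pi)
        return (math.pi / 2 - math.atan(H / math.sqrt(K))) / math.sqrt(K)
    if K == 0:
        return 1.0 / H if H > 0 else math.inf
    # K < 0: coth(sqrt(-K) r) = H / sqrt(-K) has a positive solution only if H > sqrt(-K)
    s = math.sqrt(-K)
    return math.atanh(s / H) / s if H > s else math.inf

print(focal_radius_bound(1.0, 0.0))   # pi/2, e.g. a totally geodesic hypersurface with Sec <= 1
print(focal_radius_bound(0.0, -1.0))  # inf: no focal points in this comparison situation
```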
An additional ingredient for our proof is the following theorem stating that the norm of a vector along a geodesic can be estimated in terms of a geometric constant and Fourier coefficients w.r.t. a frame consisting of the so-called N-Jacobi fields. The difficulty is the non-orthonormality of the frame along the geodesic. Recall that according to [33], an N-Jacobi field J along \(\gamma \) is a Jacobi field satisfying \(S(J(0),\nu )- J'(0)\perp T_{\gamma (0)}N,\) where S denotes the shape operator of N.
Let \(\gamma \) be a distance minimizing geodesic perpendicular to a hypersurface \(N\subset M\), \(X_1{:}{=}\gamma '\), \(X_1(0),e_2,\ldots , e_n\) an orthonormal basis of \(T_{\gamma (0)}M\), and \(X_i\), \(i=2,\ldots ,n\), N-Jacobi fields with \(X_i(0)=e_i\) and \(t_F\) be the focal distance of N in \(\gamma (0)\). Assume \(\textrm{Sec}\le K\) along \(\gamma \) and \(\textrm{II}\ge -H\) in \(\gamma (0)\). There exist explicitly computable constants \(\beta _{K,H,n}\) and \(t_\beta \in (0,t_F/2]\) such that for any vector field v along \(\gamma \) we have
$$\begin{aligned} \vert v\vert _{\gamma (s)} \le \beta _{K,H,n}\sum _{i=1}^n \left| \langle v,X_i\rangle _{\gamma (s)}\right| ,\quad s\in [0,t_\beta ]. \end{aligned}$$
For any \(s\in [0,t_F)\), \(X_i(\gamma (s))\), \(i=1,\ldots , n\), spans \(T_{\gamma (s)}M\). Let \(E_i(\gamma (s))\) be the parallel orthonormal frame along \(\gamma (s)\) with \(E_i(0) = X_i(0)\). Denote by X(s) the matrix of change of basis from \(\{ E_i^*(\gamma (s)) \}_{i=1}^n\) to \(\{X_i^*(\gamma (s))\}_{i=1}^n\), i.e., the matrix with the ith row given by the coordinates of \(X_i(\gamma (s))\) in the basis \(\{E_i(\gamma (s))\}_{i=1}^n\). In other words, \(X_{ij} = E_j^*(X_i)\). In particular, \(X^{-1}\) will have the coordinates of \(X_i^*\) in the basis \(\{E_i^*\}_{i=1}^n\) in the ith column, \((X^{-1})_{i,j} = X_j^*(E_i)\).
By definition of the dual basis, we have that
$$\begin{aligned}v= \sum _{i=1}^n X_i^*(v)X_i,\end{aligned}$$
thus we have
$$\begin{aligned} |v|^2_{\gamma (s)}= & {} \left\langle v, \sum _{i=1}^n X_i^*(v)X_i \right\rangle _{\gamma (s)} \\= & {} \sum _{i=1}^n X_i^*(v)\langle v,X_i \rangle _{\gamma (s)}\\\le & {} \sum _{i=1}^n \vert X_i^* \vert _{_{op}} \vert v\vert _{\gamma (s)} \left| \langle v,X_i \rangle _{\gamma (s)} \right| ,\end{aligned}$$
where \(|\cdot |_{_{op}}\) denotes the operator norm. Denoting
$$\begin{aligned} |A|_F^2= \displaystyle \sum \nolimits _{i=1}^n \sum \nolimits _{j=1}^m |a_{ij}|^2 \end{aligned}$$
the square of the Frobenius norm of a matrix \(A_{n\times m}\) and using that \(|A|_{_{op}} \le |A|_F \le \sqrt{n}|A|_{_{op}}\), we get
$$\begin{aligned} |v|_{\gamma (s)} \le \sum _{i=1}^n |X_i^*|_{\gamma (s),F} \left| \langle v, X_i \rangle _{\gamma (s)} \right| \le |X^{-1}|_{\gamma (s),F} \sum _{i=1}^n \left| \langle v, X_i \rangle _{\gamma (s)} \right| , \end{aligned}$$
where we used that \(|X_i^*|_{\gamma (s),F}^2 =\displaystyle \sum \nolimits _{j=1}^n \left[ X_i^*(E_j)\right] ^2 \le \sum \nolimits _{i,j=1}^n \left[ X_i^*(E_j)\right] ^2 = |X^{-1}|_{\gamma (s),F}^2\).
To estimate \(\vert X^{-1}\vert _{\gamma (s),F}^2\), we employ the Neumann criterion as follows. Let \(I=E_i\otimes E_i^*\) along \(\gamma \). Then
$$\begin{aligned} \vert I - X\vert _{\gamma (s)}^2\le \vert I - X\vert _{\gamma (s),F}^2= \sum _{i=1}^n \vert E_i-X_i\vert _{\gamma (s)}^2. \end{aligned}$$
Moreover, we have \(E_1-X_1=0\), and for \(i=2,\ldots , n\)
$$\begin{aligned} \frac{d}{ds}\vert E_i-X_i\vert _{\gamma (s)}^2&=-2\frac{d}{ds}\langle E_i, X_i\rangle _{\gamma (s)}+\frac{d}{ds}\vert X_i\vert _{\gamma (s)}^2\\&=-2\langle E_i, \dot{X}_i\rangle _{\gamma (s)}+2\langle \dot{X}_i, X_i\rangle _{\gamma (s)}\\&\le 2\vert \dot{X}_i\vert _{\gamma (s)}(1+\vert X_i\vert _{\gamma (s)}). \end{aligned}$$
Integrating from 0 to s and using \(X_i(0)=E_i(0)\), we arrive at
$$\begin{aligned} \vert E_i-X_i\vert _{\gamma (s)}^2\le \int _0^s 2\vert \dot{X}_i\vert _{\gamma (\tau )}(1+\vert X_i\vert _{\gamma (\tau )}) \textrm{d}\tau . \end{aligned}$$
Recalling that S denotes the shape operator of N, we have \(\dot{X}_i(s)= S(X_i(s),\gamma '(s))\), \(s\in [0,t_F)\). Since \(\textrm{Sec}\le K\) and \(\textrm{II}\ge -H\) there exist functions \(f=f_{K,H,n}\) and \(g=g_{K,H}\) such that
$$\begin{aligned} \vert X_i\vert _{\gamma (s)}\le \vert f(s)\vert , \quad \vert \dot{X}_i\vert _{\gamma (s)}\le \vert g(s)\vert , \end{aligned}$$
cf. [16, pp. 211 ff.] and [33]. Hence
$$\begin{aligned} \vert E_i-X_i\vert _{\gamma (s)}^2\le 2s \max _{[0,t_F]} g (1+f). \end{aligned}$$
Set
$$\begin{aligned} t_\beta {:}{=}\min \left( t_F/2,\left( 4n \max _{[0,t_F]} g (1+f)\right) ^{-1}\right) \end{aligned}$$
and
$$\begin{aligned} \alpha _{K,H,n}{:}{=}2t_\beta \max _{[0,t_F]} g (1+f)\le \frac{1}{2}<1, \end{aligned}$$
so that the Neumann criterion yields \(\vert X^{-1}\vert _{\gamma }\le \sqrt{n}(1-\alpha _{K,H,n})^{-1}\) for \(s\in [0,t_\beta ]\). \(\square \)
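The inversion bound used above is elementary linear algebra and can be sanity-checked numerically. The following minimal Python sketch is an illustration only: the matrix X here is a random perturbation of the identity, not the frame matrix from the proof, and the dimension and perturbation size are chosen arbitrarily. It verifies that \(\vert I-X\vert _F\le \alpha <1\) forces \(\vert X^{-1}\vert _F\le \sqrt{n}(1-\alpha )^{-1}\).

import numpy as np

rng = np.random.default_rng(0)
n, alpha = 6, 0.5                       # dimension and perturbation size, chosen arbitrarily
A = rng.standard_normal((n, n))
A *= alpha / np.linalg.norm(A, 'fro')   # rescale so that |I - X|_F = alpha < 1
X = np.eye(n) - A
lhs = np.linalg.norm(np.linalg.inv(X), 'fro')
rhs = np.sqrt(n) / (1 - alpha)
print(lhs <= rhs, lhs, rhs)             # prints True together with both values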
We also need comparison estimates for the metric tensor along distance hypersurfaces. Suppose that \(\gamma :[0,r_0)\rightarrow M\), \(\gamma (0)=p\in N\) is as above. Although the proof is straightforward and adapted from the proof in [18] for distance spheres, we are not aware of any result in the literature that gives our comparison estimates below, so we decided to include a full proof in the appendix using the Riccati comparison technique. The tubular neighborhood T(N, r) can be parametrized by the distance function \(s{:}{=}{{\,\mathrm{\mathop {dist}}\,}}(\cdot ,N)\) to N: Set
$$\begin{aligned} \psi :(-r,r)\times N \rightarrow M, (s,x)\mapsto \psi (s,x)= \exp _x (\nu s). \end{aligned}$$
If N is the smooth boundary \(\partial \Omega \) of \(\Omega \subset M\), then \(\psi \) satisfies for all \(x\in \partial \Omega \)
$$\begin{aligned} d(\psi (s,x),x)={\left\{ \begin{array}{ll}s,&{} \hbox { if}\ \psi (s,x)\in M \setminus \Omega ,\\ -s, &{} \hbox { if}\ \psi (s,x)\in \Omega .\end{array}\right. } \end{aligned}$$
The parametrization \(\psi \) is defined only for r small enough; more precisely, the size of r depends on the focal set of \(\partial \Omega \), that is, the set of points where the exponential map is non-regular. We can decompose
$$\begin{aligned} g=ds^2+g_s, \end{aligned}$$
where \(g_s\) is the metric on N evolving with respect to s as long as there are no focal points along the corresponding distance minimizing geodesic \(\gamma \) perpendicular to N up to \(r_0\).
Proposition 2.3
Let \(k, K\in {\mathbb {R}}\), and \(\gamma \), \(r_0\) as above. If \(\textrm{Sec}\ge k\) along \(\gamma \) on the interval \([0,r_0)\), then for almost all \(s\in [0,r_0)\),
$$\begin{aligned} g_s\le \mu _{k,H_+}^2(s)g_0, \end{aligned}$$
and if \(\textrm{Sec}\le K\), then for almost all \(s\in [0,r_0)\),
$$\begin{aligned} \mu _{K,H_-}^2(s)g_0\le g_s, \end{aligned}$$
where \(H_+\) and \(H_-\) are the maximum resp. minimum principal curvatures of N in \(\gamma (0)\), and \(\mu _{k,H}(s)\) are functions that arise from the solution of a Riccati-type ODE (cf. Appendix A, Eq. (A.2)).
The volume form \(\textrm{dvol}\) of M decomposes accordingly into
$$\begin{aligned} \textrm{dvol}= \textrm{d}s\wedge \textrm{dvol}_s,\quad s\in (-r,r), \end{aligned}$$
where \(\textrm{dvol}_s\) denotes the volume element of the distance hypersurface \(\psi ^{-1}(s,\cdot )\).
Proposition 2.4
Under the assumptions of Proposition 2.3, we have
$$\begin{aligned} \textrm{dvol}_s&\le \textrm{dvol}^{k,H_+}_s=D(s)\textrm{dvol}_0, \quad s\in [0,r_0), \\ \quad \text {and}\quad \textrm{dvol}_{-s}&\ge \textrm{dvol}^{K,H_-}_{-s}=d(s)\textrm{dvol}_0, \quad s\in [0,r_0). \end{aligned}$$
where
$$\begin{aligned} D(s)=\mu _{k,H_+}^{n-1}(s),\quad \text {and}\quad d(s)=\mu _{K,H_-}^{n-1}(s). \end{aligned}$$
For a self-contained proof, see the Appendix.
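As a consistency check (not part of the original argument), consider the flat model \(M={\mathbb {R}}^n\) with \(N=\partial B(0,R)\) and outward normal \(\nu \), so that \(k=K=0\) and all principal curvatures of N equal 1/R for the sign convention used here. The Riccati solution is then \(\lambda (s)=1/(R+s)\), corresponding (taking \(C=0\)) to \(\mu _{0,1/R}(s)=1+s/R\), and the bounds of Propositions 2.3 and 2.4 hold with equality:
$$\begin{aligned} g_s=\Bigl (1+\frac{s}{R}\Bigr )^2 g_0,\qquad \textrm{dvol}_s=\Bigl (1+\frac{s}{R}\Bigr )^{n-1}\textrm{dvol}_0, \end{aligned}$$
which is just the familiar scaling of the induced metric and of the area element of concentric spheres of radii R and \(R+s\).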
Quantitative Sobolev Extensions
The existence of an extension operator \(E_\Omega :H^1(\Omega )\rightarrow H^1(M)\) with bounded operator norm will follow by first constructing a one-dimensional extension operator, with controlled norm, along each geodesic perpendicular to \(\partial \Omega \). The operator \(E_\Omega \) is then defined on M via Fermi coordinates on \(\partial \Omega \), and its norm is controlled by the behavior of the geodesics in a tubular neighborhood and of the volume element, as explained in Sect. 2.
For \(U\subset M\) open, we denote by \(\Vert \cdot \Vert _{H^1(U)}\) the \(H^1\)-norm in U, i.e., for \(u\in {\mathcal {C}}^1(U)\),
$$\begin{aligned} \Vert u\Vert _{H^1(U)}^2{:}{=}\Vert u\Vert _{L^2(U)}^2+\Vert \nabla u\Vert _{L^2(U)}^2, \end{aligned}$$
and by \({\mathcal {C}}^1({\overline{U}})\) the set of all \(f\in {\mathcal {C}}^1(U)\) with continuous zeroth and first derivatives up to the boundary of U. Let \(\Omega \subset M\) be (r, H, K)-regular.
Theorem 2.2 can be applied to all geodesics perpendicular to \(\partial \Omega \). Let \(R_\beta {:}{=}\min (t_\beta ,r)\) be the resulting minimal width for the tubular neighborhood. We abbreviate for \(x\in T(\partial \Omega ,R_\beta )\) and \(s\in (-R_\beta ,R_\beta )\)
$$\begin{aligned} \vert \cdot \vert _x{:}{=}\vert \cdot \vert _{g(x)}\quad \text {and}\quad \vert \cdot \vert _{s,x}{:}{=}\vert \cdot \vert _{g_s(x)}. \end{aligned}$$
For \(x\in \partial \Omega \), let
$$\begin{aligned} \gamma =\gamma _x:(-R_\beta ,R_\beta )\rightarrow M, \quad s\mapsto \exp _x(s\nu ) \end{aligned}$$
be the unique geodesic perpendicular to \(\partial \Omega \) at x. Let \(\varphi _x\in {\mathcal {C}}_{\textrm{c}}^\infty (M)\) be a cut-off function such that \(0\le \varphi _x\le 1\), \({{\,\mathrm{\mathop {supp}}\,}}\varphi _x\subset B(x,R_\beta )\), \(\varphi _x=1\) on \(B(x,R_\beta /2)\), and
$$\begin{aligned} \vert \nabla \varphi _x\vert \le G/R_\beta \end{aligned}$$
for some dimension constant \(G>0\) which always exists by completeness of M. Moreover, let \(u\in {\mathcal {C}}^1({{\overline{\Omega }}})\). We define the one-dimensional extension of u along \(\gamma \) by
$$\begin{aligned} (E_x u)(s){:}{=} {\left\{ \begin{array}{ll}u(\gamma (s)),&{} :s\in (-R_\beta ,0],\\ \left( -3u(\gamma (-s))+4 u(\gamma (-s/2))\right) \varphi _{x}(\gamma (s)),&{} :s\in (0,R_\beta ).\end{array}\right. } \end{aligned}$$
Proposition 3.1
\(E_x u\) is continuously differentiable along \(\gamma \). Furthermore, there exists an explicitly computable \(\theta _{K,H,n}\ge 1\) such that
$$\begin{aligned} \Vert E_xu\Vert ^2_{H^1(\gamma \cap (M\setminus \Omega ))}&\le \theta _{K,H,n} \left( 164\Vert \nabla u\Vert ^2_{L^2(\gamma \cap \Omega )} +(82+164G^2R_\beta ^{-2})\Vert u\Vert ^2_{L^2(\gamma \cap \Omega )}\right) . \end{aligned}$$
The continuity of \(E_x u\) is obvious. Moreover, we can restrict to the case \(s\ge 0\), since \(E_x u\) coincides with u for \(s<0\). We compute the gradient of \(E_xu\) by calculating its directional derivatives. The regularity of our parametrization allows us to define a variation of \(E_xu\) along \(\gamma \) such that we can compute the partial derivatives in directions perpendicular to \(\gamma \). The difficulty here is that we cannot simply use a parallel frame along \(\gamma \) to compute the gradient, because we do not know that curves with initial tangent vectors given by such a frame in small neighborhoods of the point under consideration do not intersect. Thus, we need to define an appropriate frame consisting of Jacobi fields whose curves lie in the distance hypersurfaces and that span the tangent spaces along \(\gamma \). The directional derivatives in the directions given by the Jacobi fields will obviously be continuous, and the gradient therefore exists.
For \(\gamma =\gamma _x\) as above, we directly obtain
$$\begin{aligned} \partial _s E_x u(s)&= \left( 3(\partial _s u)(\gamma (-s))-2(\partial _s u)(\gamma (-s/2))\right) \varphi _{x}(\gamma (s))\\&\quad +\left( -3u(\gamma (-s))+4 u(\gamma (-s/2))\right) (\partial _s\varphi _{x})(\gamma (s)). \end{aligned}$$
To compute the partial derivatives perpendicular to \(\gamma \), we introduce the following variation of \(\gamma \): Let \(e_i\in T_xM\), \(i=2,\ldots ,n\), be a completion of \(\nu \) to a basis. For \(i \in \{2,\ldots ,n\}\), let \(x_i(t)\) be a curve in \(\partial \Omega \) such that \(x_i(0)=x\), \(x_i'(0)=e_i\). We define the variation
$$\begin{aligned} \gamma _i(t,s)=\exp _{x_i(t)} (s\nu ), \quad t\in (-\varepsilon ,\varepsilon ), \ s\in (-R_\beta ,R_\beta ). \end{aligned}$$
Varying t gives a variation through geodesics emanating perpendicularly from \(\partial \Omega \). In particular, for fixed s, the curves \(\gamma _i(\cdot ,s)\) lie in the distance hypersurface with distance s. The vector field
$$\begin{aligned} X_i(s){:}{=} \frac{\partial }{\partial t} \gamma _i(t,s)|_{t=0} \end{aligned}$$
is a Jacobi field along \(\gamma \) and satisfies \(X_i(0)=e_i\), \(X_i'(0)=S(e_i,\nu )\), where S denotes the shape operator of \(\partial \Omega \). In particular, we have
$$\begin{aligned} \langle X_i,\gamma '\rangle _{\gamma (s)}=0,\quad X_i(0)\in T_x\partial \Omega , \quad S(X_i(0),\nu )-X_i'(0)=0\perp T_x\partial \Omega , \end{aligned}$$
so \(X_i\) is a \(\partial \Omega \)-Jacobi field along \(\gamma \) for any \(i \in \{2,\ldots ,n\}\). According to [33], \(\{\nu (s)\}\cup \{X_i(s)\}_{i=2}^{n}\) spans \(T_{\gamma (s)}M\). For \(i \in \{2,\dots ,n\}\), define the following variation of \(E_x u\):
$$\begin{aligned}&E_x^i u(t,s)\nonumber \\&{:}{=}{\left\{ \begin{array}{ll} u(\gamma _i(t,s)),&{} :t\in (-\varepsilon ,\varepsilon ), s\in (-R_\beta ,0],\\ \left( -3u(\gamma _i(t,-s))+4 u(\gamma _i(t,-s/2))\right) \varphi _{x}(\gamma _i(t,s)), &{} :t\in (-\varepsilon ,\varepsilon ),s\in (0,R_\beta ). \end{array}\right. } \end{aligned}$$
The directional derivative of \(E_x u(s)\) in direction \(X_i(s)\) is given by
$$\begin{aligned} \langle X_i,\nabla E_x u\rangle _{\gamma (s)}&= \frac{\textrm{d}}{\textrm{d}t} E_x^i u(t,s)|_{t=0}\\&= \frac{\textrm{d}}{\textrm{d}t} \left( -3u(\gamma _i(t,-s))+4 u(\gamma _i(t,-s/2))\right) \ \varphi _{x}(\gamma _i(t,s)) \\&\quad + \left( -3u(\gamma _i(t,-s))+4 u(\gamma _i(t,-s/2))\right) \ \frac{\textrm{d}}{\textrm{d}t}\varphi _{x}(\gamma _i(t,s))|_{t=0} \end{aligned}$$
and thus
$$\begin{aligned} \langle X_i,\nabla E_x u\rangle _{\gamma (s)}&= \left( -3\langle \nabla u ,\frac{\partial \gamma _i}{\partial t}\rangle _{\gamma _i(t,-s)}+4 \langle \nabla u,\frac{\partial \gamma _i}{\partial t}\rangle _{\gamma _i(t,-s/2)}\right) \ \varphi _{x}(\gamma _i(t,s))\nonumber \\&\quad + \left( -3u(\gamma _i(t,-s))+4 u(\gamma _i(t,-s/2))\right) \langle \nabla \varphi _{x},\frac{\partial \gamma _i}{\partial t}\rangle _{\gamma _i(t,s)}|_{t=0}\nonumber \\&= \left( -3\langle \nabla u,\frac{\partial \gamma _i}{\partial t}|_{t=0}\rangle _{\gamma (-s)}+4 \langle \nabla u,\frac{\partial \gamma _i}{\partial t}|_{t=0}\rangle _{\gamma (-s/2)}\right) \ \varphi _{x}(\gamma (s))\nonumber \\&\quad + \left( -3u(\gamma (-s))+4 u(\gamma (-s/2))\right) \langle \nabla \varphi _{x},\frac{\partial \gamma _i}{\partial t}|_{t=0}\rangle _{\gamma (s)}\nonumber \\&= \left( -3\langle \nabla u,X_i\rangle _{\gamma (-s)}+4 \langle \nabla u,X_i\rangle _{\gamma (-s/2)}\right) \ \varphi _{x}(\gamma (s))\nonumber \\&\quad + \left( -3u(\gamma (-s))+4 u(\gamma (-s/2))\right) \langle \nabla \varphi _{x},X_i\rangle _{\gamma (s)}. \end{aligned}$$
In particular, we have for the right limits
$$\begin{aligned} \partial _s E_x u(0+)&= 3\ \partial _s u(\gamma (0))-2\ \partial _s u(\gamma (0))\\&=\partial _s u(\gamma (0))=\partial _s u(x),\\ \langle X_i, \nabla E_x u\rangle _{\gamma (0+)}&= -3\partial _i u(\gamma (0))+4\partial _i u(\gamma (0))=\partial _i u(x). \end{aligned}$$
It is easily seen that for the pointwise norm of \(E_xu\) along \(\gamma \), for \(s\ge 0\), we have
$$\begin{aligned} \vert E_x u(s)\vert ^2&\le 18 \vert u(\gamma (-s))\vert ^2+32 \vert u(\gamma (-s/2))\vert ^2. \end{aligned}$$
The pointwise norm of \(\nabla E_x u\) along \(\gamma \) is more involved. Since \(E_x u\) coincides with u in \(\Omega \), we restrict the computations for the norm to \(M\setminus \Omega \).
Denote \(X_1=\nu \). Theorem 2.2 yields the existence of \(\beta _{K,H,n}>0\) such that for any \(s\in [0,R_\beta ]\), we have
$$\begin{aligned} \vert \nabla E_xu\vert _{\gamma (s)}^2&\le \beta _{K,H,n}^2\left( \sum _{i=1}^n \left| \langle \nabla E_xu,X_i\rangle _{\gamma (s)}\right| \right) ^2 \le \beta _{K,H,n}^2 n\sum _{i=1}^n \left| \langle \nabla E_xu,X_i\rangle _{\gamma (s)}\right| ^2 \\&=\beta _{K,H,n}^2 n\left( \vert \partial _sE_xu(s)\vert ^2+\sum _{i=2}^{n}\langle \nabla E_xu,X_i\rangle _{\gamma (s)}^2\right) . \end{aligned}$$
$$\begin{aligned}&\vert \partial _sE_xu(s)\vert ^2+\sum _{i=2}^{n}\langle \nabla E_xu,X_i\rangle _{\gamma (s)}^2\\&\quad \le \big [3\vert \partial _s u(\gamma (-s))\vert +2\vert \partial _s u(\gamma (-s/2))\vert \\&\qquad +\left( 3 u(\gamma (-s))+4 u(\gamma (-s/2))\right) \vert \partial _s\varphi _{x}(\gamma (s))\vert \big ]^2\\&\qquad +\sum _{i=2}^{n}\left[ \left( -3\langle \nabla u,X_i\rangle _{\gamma (-s)}+4 \langle \nabla u,X_i\rangle _{\gamma (-s/2)}\right) \ \varphi _{x}(\gamma (s))\right. \\&\qquad +\left( -3u(\gamma (-s))+4 u(\gamma (-s/2))\right) \langle \nabla \varphi _{x},X_i\rangle _{\gamma (s)}]^2\\&\quad \le 36\vert \partial _s u(\gamma (-s))\vert ^2+16\vert \partial _s u(\gamma (-s/2))\vert ^2\\&\qquad +\left( 36 u(\gamma (-s))^2+64 u(\gamma (-s/2))^2\right) \vert \partial _s\varphi _{x}(\gamma (s))\vert ^2\\&\qquad +\sum _{i=2}^{n}36\langle \nabla u,X_i\rangle _{\gamma (-s)}^2+64 \langle \nabla u,X_i\rangle _{\gamma (-s/2)}^2 \\&\qquad + \sum _{i=2}^{n}(36u(\gamma (-s))^2+64 u(\gamma (-s/2))^2) \langle \nabla \varphi _{x},X_i\rangle _{\gamma (s)}^2. \end{aligned}$$
By assumption on the sectional and principal curvatures, there exists a constant \(\vartheta =\vartheta _{K,H}\ge 1\) that bounds the norms of the Jacobi fields from above, cf. the discussion in the proof of Theorem 2.2. This together with Cauchy–Schwarz yields a \(\theta =\theta _{K,H,n}>0\) such that
$$\begin{aligned} \vert \nabla E_xu\vert _{\gamma (s)}^2&\le 36\theta \vert \nabla u\vert _{\gamma (-s)}^2+64\theta \vert \nabla u\vert _{\gamma (-s/2)}^2\\&\quad +\theta \left( 36u(\gamma (-s))^2+64 u(\gamma (-s/2))^2\right) \vert \nabla \varphi \vert _{\gamma (s)}^2. \end{aligned}$$
Denote by \(\chi _I\) the characteristic function of \(I\subset {\mathbb {R}}\). The assumption on the cut-off function implies
$$\begin{aligned} \vert \nabla E_xu\vert _{\gamma (s)}^2&\le 36\theta \vert \nabla u\vert _{\gamma (-s)}^2+64\theta \vert \nabla u\vert _{\gamma (-s/2)}^2\nonumber \\&\quad +\theta \left( 36u(\gamma (-s))^2+64u(\gamma (-s/2))^2\right) \chi _{[R_\beta /2,R_\beta ]} G^2R_\beta ^{-2}. \end{aligned}$$
Now we can compute the \(H^1\)-norm of \(E_xu\). We already mentioned above that we only care about the norm in the complement of \(\Omega \). By (3.6), for the \(L^2\)-norm of \(E_xu\), we have
$$\begin{aligned} \Vert E_x u\Vert ^2_{L^2(\gamma \cap (M\setminus \Omega ))}&=\int _0^{R_\beta } \vert E_x u(s)\vert ^2\textrm{d}s\nonumber \\&\le \int _0^{R_\beta } 18 \vert u(\gamma (-s))\vert ^2+32 \vert u(\gamma (-s/2))\vert ^2\textrm{d}s\nonumber \\&\le \int _{-R_\beta }^0 18\vert u(\gamma (s))\vert ^2+32\vert u(\gamma (s/2))\vert ^2\textrm{d}s\nonumber \\&= \int _{-R_\beta }^0 18\vert u(\gamma (s))\vert ^2\textrm{d}s+\int _{-R_\beta /2}^0 64\vert u(\gamma (s))\vert ^2\textrm{d}s\nonumber \\&\le 82\int _{-R_\beta }^0 \vert u(\gamma (s))\vert ^2\textrm{d}s\nonumber \\&=82 \Vert u\Vert _{L^2(\gamma \cap \Omega )}^2 \end{aligned}$$
By (3.7), for the \(L^2\)-norm of \(\nabla E_xu\), we have
$$\begin{aligned}&\Vert \nabla E_x u\Vert _{L^2(\gamma \cap (M\setminus \Omega ))}^2\\&\quad = \int _0^{R_\beta } \vert \nabla E_x u(s)\vert _{\gamma (s)}^2\textrm{d}s\\&\quad \le \int _0^{R_\beta }36\theta \vert \nabla u\vert _{\gamma (-s)}^2+64\theta \vert \nabla u\vert _{\gamma (-s/2)}^2\\&\qquad +\theta \left( 36u(\gamma (-s))^2+64u(\gamma (-s/2))^2\right) \chi _{[R_\beta /2,R_\beta ]} G^2R_\beta ^{-2}\textrm{d}s\\&\quad \le \int _0^{R_\beta }36\theta \vert \nabla u\vert _{\gamma (-s)}^2\textrm{d}s+\int _0^{R_\beta /2}128\theta \vert \nabla u\vert _{\gamma (-s)}^2\textrm{d}s\\&\qquad +\theta G^2R_\beta ^{-2}\left( \int _{R_\beta /2}^{R_\beta }36u(\gamma (-s))^2\textrm{d}s+\int _{R_\beta /4}^{R_\beta /2}128u(\gamma (-s))^2\textrm{d}s\right) \\&\quad \le 164\theta \Vert \nabla u\Vert _{L^2(\gamma \cap \Omega )}^2+164\theta G^2R_\beta ^{-2}\Vert u\Vert _{L^2(\gamma \cap \Omega )}^2. \end{aligned}$$
Hence, the claim follows. \(\square \)
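The right-limit computation in the proof above (value and first derivative both matching at \(s=0\)) can be illustrated by a one-dimensional numerical sketch. The sample profile u below is arbitrary, and the cut-off \(\varphi _x\) is ignored since it equals 1 near the boundary point; this is only an illustration of the \(C^1\) gluing, not part of the argument.

import math

def u(s):                                  # arbitrary smooth sample profile for s <= 0
    return math.cos(s) + 0.3 * s**2 + 0.5 * s

def Eu(s):                                 # the extension -3u(-s) + 4u(-s/2) for s > 0
    return u(s) if s <= 0 else -3.0 * u(-s) + 4.0 * u(-s / 2)

h = 1e-6
du0 = 0.5                                  # u'(0) for the sample profile above
print(abs(Eu(h) - u(0.0)))                 # continuity at 0: of order h
print(abs((Eu(h) - Eu(0.0)) / h - du0))    # one-sided derivative matches u'(0): of order h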
Proof of Theorem 1.2
Denote by \(x'\) the point of \(\partial \Omega \) minimizing the distance to \(x\in T(\partial \Omega , R_\beta )\setminus {{\overline{\Omega }}}\). This point always exists and is unique due to our uniquely defined parametrization. For \(u\in {\mathcal {C}}^1({{\overline{\Omega }}})\), the extension \(E_\Omega u\) is given by
$$\begin{aligned} E_\Omega u(x) {:}{=}{\left\{ \begin{array}{ll} u(x),&{} \hbox { if}\ x\in {{\overline{\Omega }}},\\ E_{x'}u(d(x,x')),&{} \hbox { if}\ x\in T(\partial \Omega , R_\beta )\setminus {{\overline{\Omega }}}, \\ 0, &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$
Claim 1
\(E_\Omega \) defined above is linear and continuous from \(H^1(\Omega )\) to \(H^1(M)\) with operator norm
$$\begin{aligned} \Vert E_\Omega \Vert ^2\le 1+164\theta _{K,H,n}\max _{s,t\in [0,R_\beta ]}\frac{D(t)}{d(s)}(1+G^2R_\beta ^{-2}). \end{aligned}$$
We can restrict our considerations to \(u\in {\mathcal {C}}^1({{\overline{\Omega }}})\) since \({\mathcal {C}}^1({{\overline{\Omega }}})\) is dense in \(H^1(\Omega )\). To compute the operator norm of \(E_\Omega \), note that
$$\begin{aligned} \Vert E_\Omega u\Vert ^2_{H^1(M)}= \Vert u\Vert ^2_{H^1(\Omega )}+\Vert E_\Omega u\Vert _{H^1(T(\partial \Omega ,R_\beta )\setminus \Omega )}^2. \end{aligned}$$
Proposition 2.4 on the volume element of the hypersurfaces on the interval \([0,R_\beta )\) implies
$$\begin{aligned} \Vert E_\Omega u\Vert _{H^1(T(\partial \Omega ,R_\beta )\setminus \Omega )}^2&=\Vert E_\Omega u\Vert _{L^2(T(\partial \Omega ,R_\beta )\setminus \Omega )}^2+\Vert \nabla E_\Omega u\Vert _{L^2(T(\partial \Omega ,R_\beta )\setminus \Omega )}^2 \\&=\int _0^{R_\beta } \int _{\partial \Omega }\left( \vert E_\theta u (s)\vert ^2+\vert \nabla E_\theta u(s)\vert ^2_{\gamma (s)}\right) \textrm{dvol}_{s}\theta \textrm{d}s \\&\le \int _0^{R_\beta } \int _{\partial \Omega }\left( \vert E_\theta u (s)\vert ^2+\vert \nabla E_\theta u(s)\vert ^2_{\gamma (s)}\right) D(s)\textrm{dvol}_0\theta \textrm{d}s \\&=\int _{\partial \Omega }\int _0^{R_\beta } \left( \vert E_\theta u (s)\vert ^2+\vert \nabla E_\theta u(s)\vert ^2_{\gamma (s)}\right) D(s)\textrm{d}s\textrm{dvol}_0\theta \\&\le \max _{s\in [0,R_\beta ]}D(s)\int _{\partial \Omega }\Vert E_\theta u\Vert ^2_{H^1(\gamma _\theta \cap M\setminus \Omega )}\textrm{dvol}_0\theta \end{aligned}$$
Using Proposition 3.1, the last integral can be interpreted as an integral over \(T(\partial \Omega ,R_\beta )\cap \Omega \), and we get with Proposition 2.4
$$\begin{aligned}&\Vert E_\Omega u\Vert _{H^1(T(\partial \Omega ,R_\beta )\setminus \Omega )}^2\\&\quad \le 164 \theta _{K,H,n} \max _{s\in [0,R_\beta ]}D(s)\int _{\partial \Omega }\Vert \nabla u\Vert ^2_{L^2(\gamma _\theta \cap \Omega )}\textrm{dvol}_0\theta \\&\qquad +\theta _{K,H,n}(82+164G^2R_\beta ^{-2})\max _{s\in [0,R_\beta ]}D(s)\int _{\partial \Omega }\Vert u\Vert ^2_{L^2(\gamma _\theta \cap \Omega )}\textrm{dvol}_0\theta \\&\quad \le 164 \theta _{K,H,n} \max _{s\in [0,R_\beta ]}D(s)\int _{\partial \Omega }\Vert \nabla u\Vert ^2_{L^2(\gamma _\theta \cap \Omega )}\frac{1}{d(s)}\textrm{dvol}_{-s}\theta \\&\qquad +\theta _{K,H,n}(82+164G^2R_\beta ^{-2})\max _{s\in [0,R_\beta ]}D(s)\int _{\partial \Omega }\Vert u\Vert ^2_{L^2(\gamma _\theta \cap \Omega )}\frac{1}{d(s)}\textrm{dvol}_{-s}\theta \\&\quad \le 164\theta _{K,H,n}\max _{s,t\in [0,r]}\frac{D(t)}{d(s)}\int _{\partial \Omega }\Vert \nabla u\Vert ^2_{L^2(\gamma _\theta \cap \Omega )}\textrm{dvol}_{-s}\theta \\&\qquad +\theta _{K,H,n}(82+164G^2R_\beta ^{-2})\max _{s,t\in [0,R_\beta ]}\frac{D(t)}{d(s)}\int _{\partial \Omega }\Vert u\Vert ^2_{L^2(\gamma _\theta \cap \Omega )}\textrm{dvol}_{-s}\theta \\&\quad =164\theta _{K,H,n}\max _{s,t\in [0,R_\beta ]}\frac{D(t)}{d(s)}\Vert \nabla u\Vert _{L^2(\Omega \cap T(\partial \Omega ,R_\beta ))}^2\\&\qquad +\theta _{K,H,n}(82+164G^2R_\beta ^{-2})\max _{s,t\in [0,R_\beta ]}\frac{D(t)}{d(s)}\Vert u\Vert _{L^2(\Omega \cap T(\partial \Omega ,R_\beta ))}^2. \end{aligned}$$
Hence, (3.10) becomes
$$\begin{aligned} \Vert E_\Omega u\Vert ^2_{H^1(M)}&\le \left( 1+ 164\theta _{K,H,n}\max _{s,t\in [0,R_\beta ]}\frac{D(t)}{d(s)}\right) \Vert \nabla u\Vert _{L^2(\Omega )}^2\\&\quad +\left( 1+\theta _{K,H,n}(82+164G^2R_\beta ^{-2})\max _{s,t\in [0,R_\beta ]}\frac{D(t)}{d(s)}\right) \Vert u\Vert _{L^2(\Omega )}^2. \end{aligned}$$
\(\square \)
The extension operator defined above can be used in [22] to obtain a Li-Yau-type estimate on the Neumann heat kernel for (r, H, K)-regular domains under integral curvature conditions with appropriately chosen \(r>0\), where the estimate would only depend on geometric parameters. However, one would still need to require the ambient space to be globally doubling, since in general only local volume doubling holds under integral Ricci curvature assumptions. To remove the necessity of this condition, we develop in the next section local estimates for the Neumann heat kernel that will only require local volume doubling.
Localizing Estimates for the Neumann Heat Kernel
In [7], the authors observed that the main result of [2] can be used to prove full Gaußian upper bounds for the Neumann heat kernel of relatively compact domains with Lipschitz boundary satisfying the volume doubling property provided the ambient space is globally volume doubling and its heat kernel has full Gaußian upper bounds. In the proof they use the existence of a bounded extension operator. Although they show as an example that their result holds for relatively compact domains with Lipschitz boundary in \({\mathbb {R}}^n\) and hyperbolic spaces, they do not discuss conditions for the extension operator to be uniformly bounded in geometric parameters. In Sect. 2, we have shown the existence of extension operators with this property, such that those extension operators can be used to give a quantitative version of the results in [7].
Furthermore, the paper does not discuss any localized results, i.e., whether local volume doubling and a small-time upper bound for the heat kernel on M imply a small-time Gaußian upper bound for the Neumann heat kernel on a domain \(\Omega \subset M\) with sufficiently regular boundary. This is of independent interest for other quantitative applications. In fact, their main theorem is based on [2, Theorem 1.1], which does not apply to small-time Gaußian upper bounds. While it is indicated on p. 308 of the latter article that their results hold in the localized situation as well, [2, Theorem 1.1], as it is formulated, does not hold assuming only local instead of global volume doubling.
We will show here Neumann heat kernel upper bounds for (r, H, K)-regular bounded subsets of M, provided \(R^2\kappa _M(p,R)\) is small for \(p>n/2\), i.e., Theorem 1.4. As explained above, this follows directly neither from [7] nor from [2].
The results presented below are adaptations of [2, 7]. Although the proofs are almost the same, we give a complete outline, because the differences are quite subtle. The main improvements are that we only use local volume doubling and localized heat kernel estimates, together with our extension operators from Theorem 1.2. First, we will show that local volume doubling and a local upper heat kernel estimate imply a family of localized Gagliardo–Nirenberg inequalities. Although we work directly on Riemannian manifolds, the proof is the same for metric measure spaces. Second, we show that Theorem 1.2 implies local Gagliardo–Nirenberg inequalities on (r, H, K)-regular bounded domains with appropriate r, with a proof slightly different from [7]. The desired upper Neumann heat kernel bound then follows directly from [2, Theorem 1.1], since the Neumann Laplace operator satisfies the finite speed propagation property.
In the following, we use notation adapted from [2]. Let M be a Riemannian manifold of dimension \(n\ge 2\). For \(x\in M\), \(r\ge 0\), denote \(v_r(x){:}{=}{{\,\mathrm{\mathop {Vol}}\,}}(B(x,r))\). Let \(x_0\in M\), \(r_0>0\) and \(B_0{:}{=}B(x_0,r_0)\). We say that \(B_0\) satisfies the local volume doubling condition if there are \(C_D>0\) and \(R>0\) such that
$$\begin{aligned} v_r(x)\le C_D\left( \frac{r}{s}\right) ^n v_s(x),\quad x\in B_0, \ 0<s\le r\le R. \end{aligned}$$
We say that the Dirichlet heat kernel \(p^{B_0}\) of \(B_0\) satisfies local upper bounds if there are \(C,R>0\) such that
$$\begin{aligned} p_t^{B_0}(x,y)\le \frac{C}{v_{\sqrt{ t}}(x)^\frac{1}{2}v_{\sqrt{ t}}(y)^\frac{1}{2}}, \quad x,y\in B_0, \ t\in (0,R^2/4]. \end{aligned}$$
Furthermore, we define the following family of Gagliardo–Nirenberg inequalities on \(B_0\): for \(q\in (2,\infty ]\) with \(\frac{q-2}{q}n<2\),
$$\begin{aligned} \Bigl \Vert v_r^{\frac{1}{2}-\frac{1}{q}} f\Bigr \Vert _q\le C \left( \Vert f\Vert _2+r\Vert \Delta ^{1/2}f\Vert _2\right) , \quad f\in {\mathcal {C}}_{\textrm{c}}^\infty (B(x_0,r)),\ r\le r_0. \end{aligned}$$
One could also choose another function v instead of the volume measure satisfying similar properties as in (4.1) and (4.2), while the underlying volume measure fulfills (4.1) separately, but we decided not to do so for the sake of presentation.
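For orientation (this is not needed in the sequel), note what (4.3) says in the model case \(M={\mathbb {R}}^n\), where \(v_r(x)=\omega _n r^n\) and \(\Delta ^{1/2}\) has the usual meaning. Writing \(\alpha =\frac{1}{2}-\frac{1}{q}\), inequality (4.3) becomes
$$\begin{aligned} \Vert f\Vert _q\le C\,\omega _n^{-\alpha }\, r^{-n\alpha }\left( \Vert f\Vert _2+r\Vert \Delta ^{1/2}f\Vert _2\right) ,\qquad f\in {\mathcal {C}}_{\textrm{c}}^\infty (B(x_0,r)),\ r\le r_0,
\end{aligned}$$
and choosing \(r=\Vert f\Vert _2/\Vert \Delta ^{1/2}f\Vert _2\) (when this value is admissible) recovers, up to constants, the classical Gagliardo–Nirenberg interpolation inequality \(\Vert f\Vert _q\lesssim \Vert f\Vert _2^{1-n\alpha }\Vert \Delta ^{1/2}f\Vert _2^{n\alpha }\); the assumption \(\frac{q-2}{q}n<2\) is exactly \(n\alpha <1\).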
We will often write f to denote the multiplication operator by the function f. Given \(1\le p,q\le +\infty \) and \(\gamma , \delta \ge 0\) such that \(\gamma + \delta = \frac{1}{p}-\frac{1}{q}\), we will say that the condition \((vEv_{p,q,\gamma })\) holds if there exists \(t_0>0\) such that
$$\begin{aligned} \sup _{0<t\le t_0}\Vert v_{\sqrt{ t}}^{\gamma }P_t^{B_0}v_{\sqrt{ t}}^{\delta }\Vert _{p,q}<\infty , \end{aligned}$$
where \(\Vert \cdot \Vert _{p,q}\) denotes the operator norm from \(L^{p}(B_0)\) to \(L^q(B_0)\), \(P_t^{B_0}=\textrm{e}^{-t\Delta ^{B_0}}\), \(t\ge 0\), and \(\Delta ^{B_0}\) is the Dirichlet Laplacian on \(B_0\). As with the assumption (4.2), the difference between our assumption \((vEv_{p,q,\gamma })\) and the one in [2] is that our assumption is local, for short time, as opposed to a global assumption for all values of \(t>0\). Note that if \(p'\) and \(q'\) are the conjugate exponents of p and q, respectively, then by duality \((vEv_{p,q,\gamma })\) is equivalent to \((vEv_{p',q',\delta })\).
Proposition 4.1 (cf. [2, Proposition 2.1.1 and Corollary 2.1.2])
Assume that (4.1) is satisfied. Then, the following conditions are equivalent:
(i) (4.2) holds up to \(t_0=R^2/4\);
(ii) \((vEv_{\infty ,\infty ,\frac{1}{2}})\) is satisfied up to time \(t_0\);
(iii) \((vEv_{1,\infty ,\frac{1}{2}})\) is satisfied up to time \(t_0\);
(iv) \((vEv_{1,2,0})\) is satisfied up to time \(t_0/2\);
(v) \((vEv_{2,\infty ,\frac{1}{2}})\) is satisfied up to time \(t_0/2\).
The proof is analogous to the one of Proposition 2.1.1 and Corollary 2.1.2 in [2]. The only difference is that the last two conditions will hold for different time ranges than the first two. The equivalence between (4.2) and \((vEv_{1,\infty ,\frac{1}{2}})\) follows directly from the Dunford–Pettis theorem: for all \(t\in (0,t_0]\), we have
$$\begin{aligned} \Vert v_{\sqrt{ t}}^{1/2}P_t^{B_0}v_{\sqrt{ t}}^{1/2}\Vert _{1,\infty }=\sup _{x,y\in B_0} v_{\sqrt{ t}}^{1/2}(x)p_t^{B_0}(x,y)v_{\sqrt{ t}}^{1/2}(y)\le C<\infty . \end{aligned}$$
Since the proof in [2] is for a fixed value of t, the same proof gives us the equivalence in our case. The equivalence between \((vEv_{1,2,0})\) and \((vEv_{2,\infty ,\frac{1}{2}})\) follows by duality. It suffices to show that \((vEv_{1,\infty , \frac{1}{2}})\) holds up to time \(t_0\) if and only if \((vEv_{1,2,0})\) holds up to time \(t_0/2\). Note that if \(T:L^1 \rightarrow L^2\) is bounded, then
$$\begin{aligned}\Vert T^*T\Vert _{1,\infty } = \Vert T^*\Vert _{2,\infty }^2 = \Vert T\Vert _{1,2}^2,\end{aligned}$$
so by taking \(T=P_{t/2}^{B_0}v^{1/2}_{\sqrt{t}}\), we get that
$$\begin{aligned}\Vert v_{\sqrt{t}}^{1/2}P_t^{B_0}v^{1/2}_{\sqrt{t}}\Vert _{1,\infty } = \Vert v^{1/2}_{\sqrt{t}}P_{t/2}^{B_0}\Vert _{2,\infty }^2 = \Vert P_{t/2}^{B_0}v^{1/2}_{\sqrt{t}}\Vert _{1,2}^2.\end{aligned}$$
Hence, \((vEv_{1,\infty ,\frac{1}{2}})\) is equivalent to
$$\begin{aligned}\sup _{0<t\le t_0}\Vert v^{1/2}_{\sqrt{t}}P_{t/2}^{B_0}\Vert _{2,\infty }^2<\infty \quad \text {and}\quad \sup _{0<t\le t_0}\Vert P_{t/2}^{B_0}v^{1/2}_{\sqrt{t}}\Vert _{1,2}^2 <\infty .\end{aligned}$$
If \((vEv_{1,\infty ,\frac{1}{2}})\) holds up to time \(t_0\), then defining \({\tilde{t}} = 2t\) for any \(0<t\le t_0/2\), we have that
$$\begin{aligned}\sup _{0<t\le t_0/2}\Vert P_t^{B_0}v^{1/2}_{\sqrt{t}}\Vert _{1,2}= \sup _{0<{\tilde{t}}\le t_0}\Vert P_{{{\tilde{t}}}/2}^{B_0}v^{1/2}_{\sqrt{{\tilde{t}}/2}}\Vert _{1,2} \le \sup _{0<{\tilde{t}}\le t_0}\Vert P_{{{\tilde{t}}}/2}^{B_0}v^{1/2}_{\sqrt{{\tilde{t}}}}\Vert _{1,2} < +\infty \end{aligned}$$
where we used that \(v_r(x)\) is non-decreasing in r. Thus, \((vEv_{1,2,0})\) holds up to time \(t_0/2\). Conversely, if \((vEv_{1,2,0})\) holds up to time \(t_0/2\), then we have
$$\begin{aligned}\sup _{0<{\tilde{t}}\le t_0}\Vert P_{{{\tilde{t}}}/2}^{B_0}v^{1/2}_{\sqrt{{\tilde{t}}}}\Vert _{1,2} = \sup _{0<t\le t_0/2}\Vert P_{t}^{B_0}v^{1/2}_{\sqrt{2t}}\Vert _{1,2} \le \sup _{0<t\le t_0/2} C\Vert P_{t}^{B_0}v^{1/2}_{\sqrt{t}}\Vert _{1,2} <+\infty ,\end{aligned}$$
where we used that \(v_{\sqrt{2t}}(x) \le v_{2\sqrt{t}}(x) \le Cv_{\sqrt{t}}(x)\) by the non-decreasing and the v-doubling (4.1) properties of \(v_r(x)\). Hence, \((vEv_{1,\infty , \frac{1}{2}})\) holds up to time \(t_0\), and this completes the proof. \(\square \)
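The identity \(\Vert T^*T\Vert _{1,\infty }=\Vert T\Vert _{1,2}^2\) invoked in the proof can be sanity-checked in a finite-dimensional toy model (counting measure on a finite set, T a random matrix). The following Python snippet is only an illustration; the norms are computed from their elementary descriptions, namely the maximal absolute entry for \(\Vert \cdot \Vert _{1,\infty }\) and the maximal column norm for \(\Vert \cdot \Vert _{1,2}\).

import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((5, 7))                   # a map from l^1 on 7 points to l^2 on 5 points

norm_1_inf = np.max(np.abs(T.T @ T))              # operator norm of T*T from l^1 to l^infty
norm_1_2 = np.max(np.linalg.norm(T, axis=0))      # operator norm of T from l^1 to l^2
print(np.isclose(norm_1_inf, norm_1_2**2))        # prints True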
Proposition 4.2
Assume that \(B_0\) satisfies (4.1) and (4.2) for \(R=2r_0\), and let \(q\in (2,\infty ]\) with \(\frac{q-2}{q}n<2\). Then, (4.3) holds on \(B_0\).
The proof is adapted from the beginning of Section 2 and Proposition 2.3.2 of [2]. According to Proposition 4.1, (4.1) and (4.2) for \(R>0\), together with an interpolation argument for bounded operators between \(L^p\)-spaces (see, e.g., [2, Corollary 2.1.6]), imply the existence of a \(C>0\) such that
$$\begin{aligned} H{:}{=}\sup _{\sqrt{ t}\in (0,R/2]} \Vert v_{\sqrt{ t}}^{\frac{1}{2}-\frac{1}{q}}P_{t}^{B_0}\Vert _{2,q}\le C. \end{aligned}$$
The fundamental theorem of calculus gives for any \(f\in {\mathcal {C}}_{\textrm{c}}^\infty (B_0)\)
$$\begin{aligned} f=P_{t}^{B_0} f+\int _0^t \Delta ^{B_0}P_{s}^{B_0} f\textrm{d}s. \end{aligned}$$
Thus, putting \(\alpha =\frac{1}{2}-\frac{1}{q}\),
$$\begin{aligned} \Vert v_{\sqrt{ t}}^\alpha f\Vert _q&\le \Vert v_{\sqrt{ t}}^\alpha P_{t}^{B_0}\Vert _{2,q}\Vert f\Vert _2+\int _0^t \Vert v_{\sqrt{ t}}^\alpha P_{s/2}^{B_0}\Vert _{2,q}\Vert \Delta ^{B_0}P_{s/2}^{B_0}f\Vert _2 \textrm{d}s. \end{aligned}$$
Using (4.1), we get
$$\begin{aligned} \Vert v_{\sqrt{ t}}^\alpha f\Vert _q&\le \Vert v_{\sqrt{ t}}^\alpha P_{t}^{B_0}\Vert _{2,q}\Vert f\Vert _2+\int _0^t \left\| \frac{v_{\sqrt{ t}}}{v_{\sqrt{s/2}}} \right\| _\infty ^\alpha \Vert v_{\sqrt{s/2}}^\alpha P_{s/2}^{B_0}\Vert _{2,q}\Vert \Delta ^{B_0}P_{s/2}^{B_0}f\Vert _2 \textrm{d}s\\&\le H\Vert f\Vert _2 +C_D2^{n/2}\int _0^t \left( \frac{t}{s}\right) ^{n\alpha /2}\Vert \Delta ^{1/2}P_{s/2}^{B_0}\Delta ^{1/2}f\Vert _2\textrm{d}s\\&\le H\Vert f\Vert _2 +GH t^{n\alpha /2}\int _0^t s^{-n\alpha /2-1/2}\Vert \Delta ^{1/2}f\Vert _2\textrm{d}s, \end{aligned}$$
and the last integral is finite by assumption on q. Hence, for all \(\sqrt{ t}\in (0,R/2]\),
$$\begin{aligned} \Vert v_{\sqrt{ t}}^\alpha f\Vert _q\le C(\Vert f\Vert _2 +\sqrt{ t}\Vert \Delta ^{1/2}f\Vert _2) \end{aligned}$$
for some C depending only on q, \(C_D\), n, and the upper bound on H. Putting \(r=\sqrt{t}\) yields the result. \(\square \)
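For the reader's convenience we spell out the role of the assumption on q in the last step (this is implicit in the computation above). With \(\alpha =\frac{1}{2}-\frac{1}{q}\),
$$\begin{aligned} \int _0^t s^{-n\alpha /2-1/2}\,\textrm{d}s<\infty \iff \frac{n\alpha }{2}+\frac{1}{2}<1\iff n\Bigl (\frac{1}{2}-\frac{1}{q}\Bigr )<1\iff \frac{q-2}{q}\,n<2, \end{aligned}$$
and in that case \(t^{n\alpha /2}\int _0^t s^{-n\alpha /2-1/2}\,\textrm{d}s=\frac{2}{1-n\alpha }\sqrt{ t}\), which is the origin of the factor \(\sqrt{ t}\) in front of \(\Vert \Delta ^{1/2}f\Vert _2\).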
Corollary 4.3
Suppose \(R>0\) and \(2p>n\ge 2\). There is an \(\varepsilon >0\) such that if a manifold M of dimension n satisfies
$$\begin{aligned} \kappa _M(p,R)\le \varepsilon , \end{aligned}$$
then (4.3) holds on \(B_0\) with \(r_0=R/2\).
According to [19, 21], (4.1) holds up to radius R for some \(\varepsilon >0\). Since Dirichlet heat kernels are always bounded from above by the heat kernel of the manifold, (4.2) follows from [8, 25] by choosing \(\varepsilon \) possibly smaller. The claim follows from Proposition 4.2. \(\square \)
Proof of Theorem 1.4
The proof is adapted from [7]. If \(\Omega \subset M\) is (r, H, K)-regular, then it satisfies in particular the rolling r-ball condition for some \(r\le r_0\), where \(r_0\) depends on K and H only. According to [22], there exists a \(C>0\) depending on R, p, n such that
$$\begin{aligned} {{\,\mathrm{\mathop {Vol}}\,}}_\Omega (B(x,t))\le C\left( \frac{t}{s}\right) ^n{{\,\mathrm{\mathop {Vol}}\,}}_\Omega (B(x,s)), \quad x\in \Omega , 0<s\le t\le {{\,\mathrm{\mathop {diam}}\,}}\Omega \le R. \end{aligned}$$
Moreover, according to Theorem 1.2, (r, H, K)-regularity implies that there is an extension operator \(E_\Omega \) with norm bounded in terms of H, K, r. Note that by construction, \(\Vert E_\Omega \Vert _{L^2(\Omega ),L^2(M)}\) is bounded as well and does not depend on any curvature restrictions but on the rolling r-ball condition. Moreover, we have
$$\begin{aligned} {{\,\mathrm{\mathop {Vol}}\,}}_\Omega (B(x,t))\le {{\,\mathrm{\mathop {Vol}}\,}}(B(x,t)),\quad t>0. \end{aligned}$$
Abbreviate \(A=\Vert E_\Omega \Vert _{L^2(\Omega ),L^2(M)}\) and \(B=\Vert E_\Omega \Vert _{H^1(\Omega ),H^1(M)}\). By the local Gagliardo–Nirenberg inequality, Corollary 4.3, and the existence of an extension operator \(E_\Omega \) from Theorem 1.2, for any \(s\in (0,R/2]\), \(q\in [2,\infty ]\), \(\frac{q-2}{q}n<2\), \(f\in {\mathcal {C}}^1({{\overline{\Omega }}})\), we have
$$\begin{aligned}&\Vert {{\,\mathrm{\mathop {Vol}}\,}}_\Omega (B(x,s))^{1/2-1/q}f\Vert _{L^q(\Omega )}\\&\quad \le \Vert v_s^{1/2-1/q} E_\Omega f\Vert _{L^q(\Omega )}\\&\quad \le C(\Vert E_\Omega f\Vert _{L^2(M)}+s \Vert \nabla E_\Omega f\Vert _{L^2(M)})\\&\quad \le C(A\Vert f\Vert _{L^2(\Omega )}+ B s (\Vert f\Vert _{L^2(\Omega )}+\Vert \nabla f\Vert _{L^2(\Omega )}))\\&\quad \le C\max (A,B)((1+s)\Vert f\Vert _{L^2(\Omega )}+s\Vert \nabla f\Vert _{L^2(\Omega )})\\&\quad \le C\max (A,B)(1+{{\,\mathrm{\mathop {diam}}\,}}(\Omega ))(\Vert f\Vert _{L^2(\Omega )}+s\Vert \nabla f\Vert _{L^2(\Omega )})\\&\quad \le C(n,p,r,K,H,R)(\Vert f\Vert _{L^2(\Omega )}+s\Vert \nabla f\Vert _{L^2(\Omega )}), \end{aligned}$$
i.e., the Gagliardo–Nirenberg inequality on \(\Omega \) for all \(s\in (0,R/2]\). In the second inequality, we use that, since \({{\,\mathrm{\mathop {diam}}\,}}\Omega \le R/2\), we can choose the extension operator in such a way that the extended function has support in B(x, R/2). For the case \(s>R/2\), note that \({{\,\mathrm{\mathop {Vol}}\,}}_\Omega (B(x,s))={{\,\mathrm{\mathop {Vol}}\,}}(\Omega )\) for \(s\ge {{\,\mathrm{\mathop {diam}}\,}}\Omega \), hence the case \(s>{{\,\mathrm{\mathop {diam}}\,}}(\Omega )\) reduces to the case \(s={{\,\mathrm{\mathop {diam}}\,}}(\Omega )\). Thus, the Gagliardo–Nirenberg inequality holds for all \(s>0\) on \(\Omega \). To derive the desired Neumann heat kernel upper bound, we want to apply [2, Theorem 1.1] directly. More precisely, the latter theorem shows that global volume doubling and global Gagliardo–Nirenberg inequalities on \(\Omega \) yield an all-time upper bound for the Neumann heat kernel. It only remains to check that, inside \(\Omega \), volumes of different balls of the same radius are comparable, i.e., condition \((D_v')\) in the notation of the latter paper for \(v={{\,\mathrm{\mathop {Vol}}\,}}_\Omega \). If \(s>0\) and \(x,y\in \Omega \), \(d(x,y)\le s\), then \(B(y,s)\subset B(x,2s)\), such that (4.5) implies
$$\begin{aligned} {{\,\mathrm{\mathop {Vol}}\,}}_\Omega (B(y,s))\le {{\,\mathrm{\mathop {Vol}}\,}}_\Omega (B(x,2s))\le 2^n C {{\,\mathrm{\mathop {Vol}}\,}}_\Omega (B(x,s)). \end{aligned}$$
Hence, the theorem follows, and the constants appearing depend only on the dimension, the doubling constant and radius, and the heat kernel upper bound. \(\square \)
Proof of Corollary 1.5
According to [30], there exists an explicit constant \(\varepsilon =\varepsilon (n,r,H,K)>0\) such that if
$$\begin{aligned} \kappa _T(\rho _-)\le \int _0^T\Vert H_t^\Omega \rho _-\Vert _\infty \textrm{d}t\le \varepsilon , \end{aligned}$$
then all the conclusions of the corollary hold. Here \((H_t^\Omega )_{t\ge 0}\) is the Neumann heat semigroup of \(\Omega \). First, note that by the Dunford–Pettis theorem, Theorem 1.4, and volume doubling on \(\Omega \), we have for all \(t\le R^2\)
$$\begin{aligned} \Vert H^\Omega _t\Vert _{1,\infty }= & {} \sup _{x,y\in \Omega } h_t^\Omega (x,y)\\\le & {} \sup _{x,y\in \Omega }\frac{{\bar{C}}}{{{\,\mathrm{\mathop {Vol}}\,}}_\Omega (B(x,\sqrt{ t}))^{1/2}{{\,\mathrm{\mathop {Vol}}\,}}_\Omega (B(y,\sqrt{ t}))^{1/2}}\le \frac{{{\tilde{C}}}}{{{\,\mathrm{\mathop {Vol}}\,}}(\Omega )}\left( \frac{R}{\sqrt{ t}}\right) ^n. \end{aligned}$$
Thus, since \(\Vert H_t^\Omega \Vert _{\infty ,\infty }\le 1\) and by duality, the Riesz–Thorin interpolation theorem implies
$$\begin{aligned} \Vert H^\Omega _t\Vert _{p,\infty }\le C_R{{\,\mathrm{\mathop {Vol}}\,}}(\Omega )^{-1/p} t^{-n /2p}. \end{aligned}$$
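For concreteness, the interpolation step can be spelled out as follows (with, say, \(C_R{:}{=}{{\tilde{C}}}^{1/p}R^{n/p}\); this elaboration is not part of the original argument). Interpolating between the pairs \((1,\infty )\) and \((\infty ,\infty )\) with parameter \(\theta =1-\frac{1}{p}\) gives
$$\begin{aligned} \Vert H^\Omega _t\Vert _{p,\infty }\le \Vert H^\Omega _t\Vert _{1,\infty }^{1/p}\,\Vert H^\Omega _t\Vert _{\infty ,\infty }^{1-1/p}\le \left( \frac{{{\tilde{C}}}}{{{\,\mathrm{\mathop {Vol}}\,}}(\Omega )}\Bigl (\frac{R}{\sqrt{ t}}\Bigr )^n\right) ^{1/p}=C_R\,{{\,\mathrm{\mathop {Vol}}\,}}(\Omega )^{-1/p}\, t^{-n/2p}. \end{aligned}$$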
$$\begin{aligned} \kappa _T(\rho _-)\le & {} \int _0^T \Vert H_t^\Omega \rho _-\Vert _\infty \textrm{d}t\\\le & {} \int _0^T \Vert H_t^\Omega \Vert _{p,\infty }\Vert \rho _-\Vert _{p,\Omega }\textrm{d}t \\ {}\le & {} C_R{{\,\mathrm{\mathop {Vol}}\,}}(\Omega )^{-1/p}\Vert \rho _-\Vert _{p,\Omega } \int _0^T t^{-n /2p}\textrm{d}t, \end{aligned}$$
and the latter function is integrable provided \(n /2p<1\), i.e., \(p>n/2\). Thus, the result follows by taking \(T=R^2\) and forcing the right-hand side to be smaller than \(\varepsilon \). \(\square \)
References
Aubry, E.: Finiteness of \(\pi _1\) and geometric inequalities in almost positive Ricci curvature. Ann. Sci. École Norm. Sup. (4) 40(4), 675–695 (2007)
Boutayeb, S., Coulhon, T., Sikora, A.: A new approach to pointwise heat kernel upper bounds on doubling metric measure spaces. Adv. Math. 270, 302–374 (2015)
Brudnyi, A.: Methods of Geometric Analysis in Extension and Trace Problems: Monographs in Mathematics, vol. 2. Springer, Basel (2011)
Carron, G.: Geometric inequalities for manifolds with Ricci curvature in the Kato class. Ann. Inst. Fourier (Grenoble) 69(7), 3095–3167 (2019)
Carron, G., Rose, C.: Geometric and spectral estimates based on spectral Ricci curvature assumptions. J. Reine Angew. Math. (2020). https://doi.org/10.1515/crelle-2020-0026
Chen, R.: Neumann eigenvalue estimate on a compact Riemannian manifold. Proc. Am. Math. Soc. 108(4), 961–970 (1990)
Choulli, M., Kayser, L., Ouhabaz, E.M.: Observations on Gaußian upper bounds for Neumann heat kernels. Bull. Aust. Math. Soc. 92(3), 429–439 (2015)
Dai, X., Wei, G., Zhang, Z.: Local Sobolev constant estimate for integral Ricci curvature bounds. Adv. Math. 325, 1–33 (2018)
Eschenburg, J.H.: Comparison theorems and hypersurfaces. Manuscr. Math. 59(3), 295–323 (1987)
Fefferman, C.L., Israel, A., Luli, G.K.: Sobolev extension by linear operators. J. Am. Math. Soc. 27, 69–145 (2014)
Gallot, S.: Isoperimetric inequalities based on integral norms of Ricci curvature. Astérisque 157–158, 191–216 (1988). (Colloque Paul Lévy sur les Processus Stochastiques (Palaiseau, 1987))
Gluck, M., Zhu, M.: An extension operator on bounded domains and applications. Calc. Var. Partial Differ. Equ. 58(2), 79 (2019)
Gol'dshtein, V., Pchelintsev, V., Ukhlov, A.: Sobolev extension operators and Neumann Eigenvalues. J. Spectr. Theory 10, 337–353 (2020)
Große, N., Schneider, C.: Sobolev spaces on Riemannian manifolds with bounded geometry: General coordinates and traces. Math. Nachr. 286(16), 1586–1613 (2013)
Gyrya, P., Saloff-Coste, L.: Neumann and Dirichlet Heat Kernels in Inner Uniform Domains. Asterisque Series. American Mathematical Society, Providence, RI (2011)
Jost, J.: Riemannian geometry and geometric analysis, 5th edn. Universitext, Springer-Verlag, Berlin Heidelberg (2008)
Li, P., Yau, S.-T.: On the parabolic kernel of the Schrödinger operator. Acta Math. 156(3–4), 153–201 (1986)
Petersen, P.: Riemannian Geometry. Graduate Texts in Mathematics. Springer, New York (2006)
Petersen, P., Wei, G.: Relative volume comparison with integral curvature bounds. Geom. Funct. Anal. 7(6), 1031–1045 (1997)
Petersen, P., Sprouse, C.: Integral curvature bounds, distance estimates and applications. J. Differ. Geom. 50(2), 269–298 (1998)
Petersen, P., Wei, G.: Analysis and geometry on manifolds with integral Ricci curvature bounds. II. Trans. Am. Math. Soc. 353(2), 457–478 (2001)
Ramos Olivé, X.: Neumann Li-Yau gradient estimate under integral Ricci curvature bounds. Proc. Am. Math. Soc. 147(1), 411–426 (2019)
Ramos Olivé, X., Seto, S., Wei, G., Zhang, Q.S.: Zhong-Yang type eigenvalue estimate with integral curvature condition. Math. Z. 296, 595–613 (2019)
Rose, C.: Heat kernel estimates based on Ricci curvature integral bounds. PhD thesis, Technische Universität Chemnitz (2017)
Rose, C.: Heat kernel upper bound on Riemannian manifolds with locally uniform Ricci curvature integral bounds. J. Geom. Anal. 27, 1737–1750 (2017)
Rose, C.: Almost positive Ricci curvature in Kato sense - an extension of Myers' theorem. Math. Res. Lett. 28(6), 1841–1849 (2021)
Rose, C.: Li-Yau gradient estimate for compact manifolds with negative part of Ricci curvature in the Kato class. Ann. Glob. Anal. Geom. 55(3), 443–449 (2019)
Rose, C., Stollmann, P.: The Kato class on compact manifolds with integral bounds of Ricci curvature. Proc. Am. Math. Soc. 145, 2199–2210 (2017)
Rose, C., Stollmann, P.: Manifolds with Ricci curvature in the Kato class: heat kernel bounds and applications. In: Keller, M., Lenz, D., Wojciechowski, R.K. (eds.) Analysis and Geometry on Graphs and Manifolds, London Mathematical Society Lecture Note Series, vol. 461. Cambridge University Press, Cambridge (2020)
Rose, C., Wei, G.: Eigenvalue estimates under Kato-type Ricci curvature conditions. arXiv:2003.07075v2 [math.DG] (2020)
Stein, E.M.: Singular Integrals and Differentiability Properties of Functions. Monographs in Harmonic Analysis. Princeton University Press, Princeton (1970)
Wang, J.: Global heat kernel estimates. Pac. J. Math. 178(2), 377–398 (1997)
Warner, F.W.: Extension of the Rauch comparison theorem to submanifolds. Trans. Am. Math. Soc. 122(2), 341–356 (1966)
Whitney, H.: Functions differentiable on the boundaries of regions. Ann. Math. 35(3), 482–485 (1934)
Zhang, Q.S., Zhu, M.: Li-Yau gradient bound for collapsing manifolds under integral curvature condition. Proc. Am. Math. Soc. 145(7), 3117–3126 (2017)
Zhang, Q.S., Zhu, M.: Li-Yau gradient bounds on compact manifolds under nearly optimal curvature conditions. J. Funct. Anal. 275(2), 478–515 (2018)
We want to thank Leonhard Frerick, Rostislav Matveev, and Jürgen Jost for useful remarks. C.R. wants to thank O.P. and the University of Trier for their hospitality and Sebastian Boldt for pointing out a gap in an earlier version of the article.
Open Access funding enabled and organized by Projekt DEAL.
FB 4 - Mathematik, Universität Trier, Universitätsring 15, 54296, Trier, Germany
Olaf Post
Department of Mathematics and Statistics, Smith College, 10 Elm Street, Northampton, MA, 01063, USA
Xavier Ramos Olivé
Institut für Mathematik, Universität Potsdam, Karl-Liebknecht-Straße 24-25, 14476, Potsdam, Germany
Christian Rose
Correspondence to Christian Rose.
The authors did not receive support from any organization for the submitted work and have no relevant financial or non-financial interests to disclose.
Appendix: Proofs of Propositions 2.3 and 2.4
We derive a differential inequality for the metric tensor along a geodesic depending on upper and lower bounds of the sectional and mean curvature that we compare with the solution \(\lambda \) of the differential equation
$$\begin{aligned} \lambda '+\lambda ^2=-k,\quad \lambda (0)=h, \end{aligned}$$
(A.1)
where \(k,h\in {\mathbb {R}}\) are a lower (resp. upper) bound for the sectional curvature along the geodesic and upper (resp. lower) bound on the principal curvatures of N. By substituting \(\lambda =\frac{\mu '}{\mu +C}\) for \(C\in {\mathbb {R}}\), this equation transforms into the solvable differential equation
$$\begin{aligned} \mu ''+k\mu =-kC, \quad \mu '(0)=h(\mu (0)+C). \end{aligned}$$
(A.2)
Choosing \(\mu (0)=1\) and C appropriately guarantees the existence of a \(t_0>0\) and a unique solution \(\mu _{k,H}\) that is positive on an interval \((0,t_0]\).
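The comparison functions can easily be computed numerically. The following Python sketch (an illustration only; the step count and the sample values of k and h are arbitrary) integrates the Riccati equation (A.1) with a basic Runge–Kutta scheme and compares the result with the closed-form solutions \(\lambda (s)=h/(1+hs)\) for \(k=0\) and \(\lambda (s)=\tan (\arctan (h)-s)\) for \(k=1\).

import math

def rk4(k, h, s_max, n_steps):
    """Integrate lambda' = -k - lambda^2, lambda(0) = h, with classical RK4."""
    f = lambda lam: -k - lam * lam
    lam, ds = h, s_max / n_steps
    for _ in range(n_steps):
        k1 = f(lam)
        k2 = f(lam + 0.5 * ds * k1)
        k3 = f(lam + 0.5 * ds * k2)
        k4 = f(lam + ds * k3)
        lam += ds * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return lam

h0, s = 0.5, 1.0
print(abs(rk4(0.0, h0, s, 1000) - h0 / (1 + h0 * s)))            # ~0 for k = 0
print(abs(rk4(1.0, h0, s, 1000) - math.tan(math.atan(h0) - s)))  # ~0 for k = 1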
Proof of Proposition 2.3
We only show Equation (2.4) by following the proof of [18, Theorem 27]. The lower bound for the metric can be proven similarly. Observe that the initial conditions for the Hessian of s are given by
$$\begin{aligned} {{\,\textrm{Hess}\,}}s(0)=\textrm{II}_{c(0)}. \end{aligned}$$
Fix \(\theta \in N\) and define
$$\begin{aligned} \lambda (s){:}{=}\lambda (s,\theta ){:}{=}\max _{v\perp \partial _s}\frac{{{\,\textrm{Hess}\,}}s(v,v)}{g(v,v)}, \end{aligned}$$
which is Lipschitz and hence absolutely continuous. Let v be a vector such that
$$\begin{aligned} {{\,\textrm{Hess}\,}}s(v,v)=\lambda (s_0)g_s(v,v), \end{aligned}$$
where \(s_0\) is a point where \(\lambda \) is differentiable, and extend v to a parallel field V. The function \(\varphi (s){:}{=}{{\,\textrm{Hess}\,}}s(V,V)\) satisfies \(\varphi (s)\le \lambda (s)\) and \(\varphi (s_0)=\lambda (s_0)\). This yields, since V is parallel,
$$\begin{aligned} \lambda '(s_0)+\lambda ^2(s_0)&=\partial _s{{\,\textrm{Hess}\,}}s(v,v)+{{\,\textrm{Hess}\,}}s^2(v,v)=\nabla _{\partial _s}{{\,\textrm{Hess}\,}}s(v,v)+{{\,\textrm{Hess}\,}}s^2(v,v)\\&=-g(R(v,\partial _s)\partial _s,v)\le -k. \end{aligned}$$
Moreover, \(\lambda (0)=H_+\). Thus, by Riccati comparison, the Hessian satisfies
$$\begin{aligned} {{\,\textrm{Hess}\,}}s\le \frac{\mu _{k,H_+}'(s)}{\mu _{k,H_+}(s)}g_s. \end{aligned}$$
In general, we have
$$\begin{aligned} \partial _s g_s=2{{\,\textrm{Hess}\,}}s, \end{aligned}$$
yielding
$$\begin{aligned} \partial _sg_s\le 2 \frac{\mu _{k,H_+}'(s)}{\mu _{k,H_+}(s)}g_s. \end{aligned}$$
(A.3)
To get the desired estimate for the metric, we compare this differential inequality with the conformal variation
$$\begin{aligned} h_s=\mu _{k,H_+}^2(s)g_0, \quad \mu _{k,H_+}(0)=1, s\in (0,r_0). \end{aligned}$$
This variation satisfies
$$\begin{aligned} \partial _s h_s=2\frac{\mu _{k,H_+}'(s)}{\mu _{k,H_+}(s)}h_s, \quad h_0=g_0. \end{aligned}$$
Comparing this equality with Equation (A.3) yields the claim. \(\square \)
To get Proposition 2.4, we adapt the comparison argument from [18] to our situation. In general, for the decomposition (2.3) of our metric, we have
$$\begin{aligned} \partial _s \textrm{dvol}= \Delta s \textrm{dvol}\end{aligned}$$
where \(\Delta s={{\,\mathrm{\mathop {Tr}}\,}}\textrm{II}\) denotes the mean curvature of the distance hypersurface. If we decompose the volume element into
$$\begin{aligned} \textrm{dvol}= \lambda (s,\theta ) \textrm{d}s \textrm{dvol}_s, \end{aligned}$$
we see that the equation above for fixed \(\theta \in N\) reduces to
$$\begin{aligned} \lambda '(s)=\Delta s \ \lambda (s), \end{aligned}$$
where \(\Delta s(0)=H{:}{=}{{\,\mathrm{\mathop {Tr}}\,}}\textrm{II}(0)\). We want to compare this metric with the conformal variation
$$\begin{aligned} h_s=\mu _{k,H}^2(s)g_0, \end{aligned}$$
on N. Note that we have
$$\begin{aligned} \textrm{dvol}_s^{k,H}=\mu _{k,H}^{n-1}(s)\textrm{dvol}_0, \end{aligned}$$
$$\begin{aligned} \partial _s(\textrm{dvol}^{k,H}_s)=\partial _s \mu _{k,H}^{n-1}\textrm{dvol}_0=(n-1)\frac{\mu _{k,H}'(s)}{\mu _{k,H}(s)} \textrm{dvol}_s^{k,H},\quad \frac{\mu _{k,H}'(0)}{\mu _{k,H}(0)}=HC. \end{aligned}$$
The mean curvature can be controlled by the following result by Eschenburg.
Lemma A.1
[9, Theorem 4.1] For \(M,N,c,r_0, H\) as above, we have
$$\begin{aligned} \Delta s\le m_{k,H}(s), \quad s\in [0,r_0),\quad m_{k,H}(0)={{\,\mathrm{\mathop {Tr}}\,}}\textrm{II}(0). \end{aligned}$$
For \(s<0\), the opposite inequality holds.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Post, O., Olivé, X.R. & Rose, C. Quantitative Sobolev Extensions and the Neumann Heat Kernel for Integral Ricci Curvature Conditions. J Geom Anal 33, 70 (2023). https://doi.org/10.1007/s12220-022-01118-4
Keywords
Sobolev extensions
Integral Ricci curvature bounds
Neumann heat equation
Gradient estimate
Mathematics Subject Classification
35P15
Yet another tale of two cities: Buenos Aires and Chicago
Filipe Campante1 &
Edward L. Glaeser1
Latin American Economic Review volume 27, Article number: 2 (2018) Cite this article
Buenos Aires and Chicago grew during the nineteenth century for remarkably similar reasons. Both cities were conduits for moving meat and grain from fertile hinterlands to eastern markets. However, despite their initial similarities, Chicago was vastly more prosperous for most of the twentieth century. Can the differences between the cities after 1930 be explained by differences in the cities before that date? We highlight four major differences between Buenos Aires and Chicago in 1914. Chicago was slightly richer, and significantly better educated. Chicago was more industrially developed, with about 2.25 times more capital per worker. Finally, Chicago's political situation was far more stable and it was not a political capital. Human capital seems to explain the lion's share of the divergent path of the two cities and their countries, both because of its direct effect and because of the connection between education and political instability.
Both Buenos Aires and Chicago grew enormously over the late nineteenth century as nodes of a transportation network that brought the produce of the New World's rich, but relatively unpopulated, hinterlands to the tables of the world. (Figure 1 shows the parallel population growth of both places.) In the early 1900s, the two cities dominated meat-packing in the Americas and were great centers of grain shipments. About one-half of the populations of both cities were immigrants, who had come to take advantage of high wages in these urban areas. Both cities were governed by functioning but imperfect democracies, and both were famous for their corruption.
Fig. 1 Population growth of Chicago and Buenos Aires, 1800–2005. Sources: Historical data for Chicago from One Hundred Years of Land Values in Chicago, by Homer Hoyt, and data for 1940–2005 from the U.S. Census. Historical data for Buenos Aires is from Anuario Estadístico de la Ciudad de Buenos Aires and Anuario Municipal, and data for 1950–2005 from Poblacion de Buenos Aires.
Over the course of the twentieth century, the paths of the two cities have, of course, significantly diverged, just as the paths of Argentina and the U.S. have diverged. Buenos Aires has had faster population growth, but Chicago has become much richer and has also been generally free of the regime-changing political uprisings that have challenged the Argentine capital. In this paper, we ask whether differences between the cities at the start of the twentieth century can help us to make sense of their divergent paths since then.
On a functional level, the cities in the 1900s appear quite similar. In both cases, rail lines brought wheat and beef into the port. From there, the beef was processed and the produce shipped east. The stockyards that carved up cattle and pigs were big employers in both places. Refrigeration significantly aided the exports of both cities. By 1910, the income gap between the two cities had closed to the point where real wages were about 70% higher in Chicago, which is substantially less than the gap was in 1890 or today.
Yet there were significant differences between Chicago and Buenos Aires even in 1910, beyond that income gap. First, the education levels of Chicago residents seem to have been much higher. This difference does not reflect educational enrollments, which seem broadly similar after the 1884 Argentine education reform. Instead, the adults coming into Buenos Aires seem to have been much less educated than those coming into Chicago. The main reason for this difference is that rural–urban migrants in the U.S. were much better educated, reflecting the strength of the American common school movement in the early nineteenth century. Chicago also had more German immigrants, who were relatively well educated, while Buenos Aires disproportionately attracted immigrants from the less well-educated countries of Spain and Italy.
Second, Chicago moved much more quickly towards being an industrial producer as well as a transformer of raw commodities. Capital per worker appears to have been about 2.44 times higher in Chicago than in Buenos Aires in 1914. Value added per worker appears to have been 2.25 times higher in Chicago, which can readily explain the 70% wage gap. Chicago's manufacturing was, to a large extent, oriented towards providing goods for the prosperous Midwestern hinterland. The market for Buenos Aires-made manufactured goods was much smaller, because the Argentine farmers were much poorer. Moreover, Chicago had a long track record of innovation, and in many areas, such as mechanical reapers, it was on the forefront of new technologies. By contrast, Argentina was an importer of technological ideas through much of the twentieth century. Chicago's higher human capital levels may help explain why Chicago was more technologically developed, but in any event, by 1930, Chicago was essentially an industrial town, while Buenos Aires was still focused on raw food production and commerce.
Last but not least, political forces in Buenos Aires and Chicago were different. While Chicago had had universal manhood suffrage since the Civil War, Buenos Aires had a much more limited electoral base until 1914. More importantly, Buenos Aires is Argentina's capital while Chicago is not the capital of the U.S. The combination of commerce and politics in Buenos Aires meant that uprisings in the city had the ability to topple national governments. Comparable uprisings in Chicago, such as the Haymarket riot, were only of local concern. The concentration of population in Buenos Aires seems to have made the country less politically stable.
In the fourth section of this paper, we attempt to assess the relative importance of these different factors by using cross-national evidence. Inevitably, this pulls us away from a city-level focus to a more national perspective. We examine the ability of pre-World War I variables, including income, industrialization, education, urbanization and political instability, to explain cross-section income variation today. All of these variables are strongly correlated with current per capita GDP levels, but measures of schooling in 1900 have the strongest connection to modern income. Using coefficients from cross-national regressions, we estimate that the differences in education between Argentina and the U.S. in 1900 can, in a mechanical decomposition, explain almost all of the differences in current income levels.
But why is the connection between historical education levels and current income so tight? The direct effect of education on earnings can explain only a small portion of the link. Education, however, is also correlated with political outcomes. Stable democracies are much rarer in less well-educated countries (Glaeser et al. 2007). Lower levels of education in Argentina can help us to understand that nation's twentieth century political problems. However, education also seems to have a strong direct impact on national income levels, which can, perhaps, be understood as stemming from the connection between area-level human capital and the state of technological development.
Chicago del Plata; Buenos Aires on Lake Michigan
We begin by stressing the profound similarities between the economic models of Chicago and Buenos Aires in the nineteenth century. As late as 1880, 72% of the U.S. population was rural. The great wealth of the country came from its vast expanses of fertile land. No area was more fertile than the hinterland of Chicago: Illinois and Iowa. The rich black soil of America's Corn Belt yielded an average of 39 bushels per acre in 1880, about 50% more than the older corn producing areas of Kentucky. That higher productivity explains why Chicago passed Cincinnati as America's pig-producing polis.
America's vast hinterland was enormously rich, but at the start of the nineteenth century that land was virtually inaccessible. It cost as much to ship goods 32 miles over land as it did to ship them across the ocean. Over the course of the nineteenth century, Americans built a transportation network that managed to move agricultural produce far more cheaply over space. Cities, like Chicago and Cincinnati, were nodes on that transportation network. Typically, large cities formed in places where goods needed to move from one form of transport to another.
The growth of Chicago depended on two canals. The first canal, the Erie, connected the great lakes to the Hudson River, and through it the city of Chicago was able to ship by water all the way to New York and the outside world. The second canal was the Illinois and Michigan canal, which connected the Chicago River to the Mississippi River system. Chicago's first boom decades, the 1830s, coincided with speculation related to the completion of the canal. Those two canals situated Chicago as the lynchpin of a watery arc that ran from New Orleans to New York.
As it turned out, railroads became even more important in connecting Chicago to the west. Starting in 1848, the Chicago and Galena railroad connected the city westward. While initially intended to move lead, the rail connected to Iowa and became a conduit for agricultural produce, particularly pigs. Corn is an enormously calorie-intensive crop, but it is relatively expensive to ship. Hence corn was typically fed to pigs and those pigs were moved across space. To reap economies of scale, Chicago became a stockyard city specializing in turning live pigs into easy-to-move salted meat.
Typically, mankind has tended to be more interested in salted pig products (bacon, sausage, ham) than in salted beef products. For that reason, in the middle nineteenth century, pigs were slaughtered in Chicago before their movement east, while cows were shipped live. One great transport innovation in nineteenth century Chicago was the four-season refrigerated rail car, used by Gustavus Swift. (His engineer's brilliant insight was to put the ice on top of the meat so it dripped down.) After Swift began using refrigerated cars, Chicago increasingly shipped prepared beef, instead of cattle on the hoof, as well as prepared pigs.
The final element in Chicago's agricultural shipping empire was its increasing role as a center for grain shipments. Wheat has less value per ton than pork or beef, and as a result high shipping costs in the middle nineteenth century meant that wheat typically traveled short distances. Rochester, New York, for example, was America's flour city in its early years, specializing in milling grain on its way to New York City. As transportation costs fell, and as hard spring wheat made the cold areas north of Chicago more productive, wheat increasingly came east from the old northwest. Chicago, as the Midwest's premier transport center, became a conduit for shipping grain as well as shipping beef and pork.
Buenos Aires' evolution in the nineteenth century was broadly similar to that of Chicago. The similarities start with the fact that what turned Buenos Aires into a major commercial hub was its exceptionally fertile hinterland, rather than an exceptionally located port (at least when compared to possible competitors such as Montevideo). The developments in terms of the accessibility of this hinterland to the main networks of international trade were once again key in determining the patterns followed by the city's evolution. In 1850, transportation across the Atlantic was slow and expensive, dependent on sailing ships. Argentina, therefore, specialized in exporting products that were extremely durable, such as hides and tallow. In the 1840s, Buenos Aires was exporting more than 2 million hides per year and 10,000 tons of tallow (Brown 1976). Wool was also a major export. Notably, these were the same products being produced in the region around Los Angeles around the same time and for the same reason. Distant places with abundant land were best used to produce goods that could last for months during a long sea voyage.
Over the course of the nineteenth century, Argentina moved to higher value agricultural products, first meat and then grain. In the middle years of the nineteenth century, Argentina was further away from European markets and had a much higher ratio of land to population than the U.S. For example, in 1880, Argentina was composed of 2.7 million square kilometers and had around 2.5 million people. The U.S. had 8 million square kilometers of land and 50 million people. The vast amounts of space in Argentina made herding relatively more attractive than intensive agriculture. While Argentina actually imported breadstuffs from Chile, in the mid-1870s, it had more than 45 million sheep and more than 5 million cattle. Since cattle and sheep complement open ranges more than pigs, beef became the primary export item for Argentina. Argentine cattle were, of course, and still are, overwhelmingly grass-fed, whereas U.S. cattle primarily eat corn.
Initially, the cattle exports were hides and some salted beef (a bit more than 20 thousand tons per year during the 1850s). The market for salted beef, such as beef jerky, was never particularly robust and this limited the growth of Argentine export trade. Two big transport innovations, however, enabled Argentina to grow dramatically as a meat exporter. First, starting in the 1840s, steam replaced sail on the cross-Atlantic journeys, reducing travel times by as much as two-thirds (from over 70 days to less than 25). Second, in 1875, refrigerated ships, or frigoríficos, made it possible to ship chilled beef and mutton. The impact of refrigeration was even greater on Buenos Aires than it was in Chicago, because the distances between Buenos Aires and London precluded the shipment of live cattle in large numbers before the 1880s.
With the coming of the frigoríficos, Buenos Aires became a large exporter of frozen and chilled beef and mutton. During the early years of chilled transport, mutton was actually a more important export than beef, because "mutton, unlike beef, is not injured materially in quality, flavor and appearance by the freezing and thawing process" (Hanson 1938, p. 84). By 1892, Argentina was exporting more than a million sheep carcasses annually. Faster transportation was also making it easier to export vast amounts of live cattle and sheep to the United Kingdom and other European markets, and by the turn of the century, 500,000 live sheep and 100,000 live cattle were being exported annually from Argentina to England.
The vast increase in the amount of chilled beef exported from Argentina, much of it through Buenos Aires, actually occurred during the early years of the twentieth century. Between 1900 and 1916, Argentina's exports of frozen beef increased from 26,000 tons to 411,000 tons. About a third of those frozen carcasses were coming through the port of Buenos Aires, which was growing as a center for slaughtering and refrigeration, as well as shipping.
The final step in the agricultural development of Argentina also mirrors the changes in Chicago. Just as the decline in shipping costs made it more attractive to ship wheat from the west to New York via Chicago, lower shipping costs made wheat a more attractive export for Argentina. As late as the 1870s, Argentina was exporting essentially no wheat. By 1904, the Argentines were exporting more than two million tons of it per year.
The growth of the wheat trade was accompanied by a vast transformation on the Pampas. Land that had been used as open range became used for intensive wheat cultivation. By 1910, 10 million acres in the province of Buenos Aires were being used to grow wheat. The population of Buenos Aires' hinterland rose dramatically as immigrants came to farm. In 30 years, Argentina moved from having essentially no cereal production to becoming one of the world's three largest grain exporters.
The roots of this transformation also lay in better transportation technologies. Across the Atlantic, faster and faster steam ships made it cheaper to ship grain. Starting in the 1850s, a rail network was created within Argentina, generally supported by the government and mostly connecting Buenos Aires to places in the hinterland. (In yet another interesting parallel, just as a New England-born shipping magnate, John Murray Forbes, built some of the first rails that connected Chicago, a New England-born shipping magnate, William Wheelwright, built some of the first rail tracks in Argentina.) Rail allowed population to disperse through the hinterland, and it also brought goods into Buenos Aires to be processed and shipped out; quite crucially, it made it less expensive to ship grain to the capital. While cattle and sheep could walk on their own to the port, grain always needed to be shipped. As a result, grain particularly benefited from the improvements in rail.
In sum, like Chicago, Buenos Aires' initial attraction was its harbor and waterways—the River Plata was an avenue into the interior—located next to an exceptionally fertile hinterland. The rail network, which centered at the capital, only increased Buenos Aires' place at the hub of Argentina's internal transport network, just as rail only increased Chicago's importance in the Midwest. The comparison did not escape contemporary observers, such as U.S. Trade Commissioner Herman G. Brock, who noted that "like Chicago, [Buenos Aires] has all the resources of the broad pampas at its doors and is the terminus of a dozen railways whose network of transportation covers the Republic from north to south and east to west, all feeding directly or indirectly into the capital" (Brock 1919, p. 13).
By 1910 both Chicago and Buenos Aires were "nature's metropolises." Both cities had grown great as conduits that moved the wealth of American hinterlands to more densely populated markets. In both cases, beef and wheat played a disproportionate role in the commerce of the cities. In both cases, improved shipping technologies, especially refrigeration, enabled the cities to grow.
Yet the twentieth century time paths of these places were quite different. By population, Buenos Aires grew faster, but by most other measures of progress Chicago dramatically passed its southern rival, just as the income gap between the U.S. and Argentina widened. Is it possible to see, in the differences between the two cities a century ago, the roots of their twentieth century divergence?
Four differences between Buenos Aires and Chicago in 1910
In this section, we discuss four major areas in which Buenos Aires and Chicago differed a century ago. In the next section, we connect these differences to the history of the cities and their countries since then.
Income levels are the natural starting point for understanding what was similar and different between the U.S. and Argentina, so we first look at wage data for the two countries (plus Great Britain and Italy) from 1870 to 1970 (data from Williamson 1995b) in Fig. 2a–c. (The wages are normalized so that the British wage in 1905 equals 100.) At the start of the time period, wages in the United States are more than 50% higher than wages either in Great Britain or Argentina. Wages in those places are about the same and about double the wages in Italy.
Fig. 2 a Annual wage data 1870–1913 (100 = UK real wage in 1905). b Annual wage data 1914–1945 (100 = UK real wage in 1927). c Annual wage data 1946–1970 (100 = UK real wage in 1975). Source: Jeffrey G. Williamson, "The Evolution of Global Labor Markets since 1830: Background Evidence and Hypotheses"
Between 1870 and the early 1890s, Argentina experienced a remarkable 66% increase in real wages. Argentina's spectacular real wage increase was accompanied by, and probably created by, the aforementioned improvements in shipping technology that enabled Argentine mutton and beef to efficiently be shipped to European markets. Argentine land was made much more productive by the ability to ship meat quickly and that seems to have greatly increased the marginal productivity of labor.
Argentina was not alone in experiencing real wage increases during the late nineteenth century. American wages increased by about the same proportion, so that in 1892 (a high water mark for Argentine wages), American wages remained 60% above those in Argentina. Wages in Argentina and Britain remained quite similar and about double the wages in Italy, which sent many immigrants to the U.S. and Argentina during this period. Spain, another exporter of people to Argentina, also had wages that were about one-half of those in Argentina.
Of course, these aggregate wage series do not tell us much about similar workers in the two cities. To make the scales somewhat more comparable, Fig. 3 shows monthly wages in Chicago from the U.S. Census in 1939 dollars. In Chicago, these wages rose substantially over the 1880s and then remained remarkably static in real terms from 1890 to 1920. Over this time period, of course, the size of Chicago's labor force was increasing dramatically. The city expanded from 500,000 to 2.7 million. That vast influx of labor surely played a major role in keeping wage growth modest. The slower population growth over the 1920s, when America substantially reduced the flow of foreign immigrants, may explain rising real wages during that decade.
Fig. 3 Real monthly wages in Chicago and Argentina, 1880–1940. Sources: Chicago data from the U.S. Census. Argentina data from Williamson (1995a) and DiTella and Zymelman (1967)
We do not have data on wages in Buenos Aires itself. Instead, we are forced to use national industrial data. However, much of Argentina's industry was in the capital, so this should give us some sense of wage levels for manufacturing workers in Buenos Aires. While there are many ups and downs, over the whole period Argentine industrial workers became steadily better paid, as shown in the Williamson data. Throughout the entire period, however, the workers in Chicago were earning more in real terms than the workers in Buenos Aires. For most of the period, the wage gap was approximately 70%. At the start of the century, before the great divergence, there was already a very substantial income gap between the two cities.
Why were the workers in Chicago, many of whom were doing comparable things, earning much more? Classical economics pushes us to consider wages as the intersection of labor supply and labor demand. Labor demand, in turn, reflects the marginal productivity of labor. The higher wages in Chicago, therefore, imply that labor was more productive in that city. Why?
There are three primary hypotheses. First, the workers in Chicago had more skills than the workers in Buenos Aires. We will treat this hypothesis in the next section, where we document significantly greater education levels in Chicago. This gap surely explains some of the difference. However, evidence on wages and schooling from within the U.S. makes it clear that education differences alone cannot explain the gap.
A second hypothesis is that Buenos Aires and Chicago had different amounts of capital, and greater capital levels in Chicago increased the productivity of workers in that city. We will turn to that hypothesis later, when we address the industrial mixes of the two cities. Chicago appears to have had about 2.44 times more capital per worker, which in a standard Cobb–Douglas production function might suggest that wages would be 30% higher in Chicago. This can explain almost one-half of the gap.
Finally, a third hypothesis is that Chicago firms were more productive, either because of more advanced technologies or because the greater distance between Buenos Aires and European markets meant that Argentine products were worth less at their point of production. The American workers were often much closer to their customers, which decreased one cost of production and thereby increased the marginal productivity of labor.
The labor supply curve also gives us information about the reasons for and the nature of wage disparity between Chicago and Buenos Aires. Both cities attracted very significant amounts of immigration between 1890 and 1910. The 1910 census shows that 36% of Chicago's white residents were foreign born, out of which 16% were from Russia, 23% were from Germany, 17% were from Austria and 6% were Italian. In Buenos Aires, the estimates from the Buenos Aires Statistical Annual (Anuario Estadistico de la Ciudad de Buenos Aires 1925) indicate that the city's population increased by 140% over those two decades, and more than half of that increase was due to immigration. As a result of this massive inflow, 50% of Buenos Aires' residents were foreign born in 1914. Buenos Aires' immigrant population was by then overwhelmingly Spanish and Italian, as can be gleaned from the national data: in 1914, roughly 10% of the Argentine population was born in Spain, and 12% in Italy; natives of the two countries made up roughly three-fourths of the total foreign-born population of the country.
The fact that Italian immigrants were going in large numbers to both Buenos Aires and Chicago is puzzling if the real wage differences are actually of the order of 70%. Why would an Italian immigrant choose Buenos Aires over America knowing that real wages are likely to be so much less? There are three possible explanations for this phenomenon. First, it is possible that Buenos Aires offered amenities, like a better climate and a different culture, that were missing in Chicago. Second, the immigrants going to Chicago and Buenos Aires might have actually been quite different. Third, the real wage differences might have been smaller than they appear.
The first hypothesis surely has some truth to it. The fact that Spaniards were drawn to Buenos Aires, despite lower real wages, would not seem like that much of a puzzle. After all, Argentina is a Spanish-speaking country with a Latin culture. The attraction of Buenos Aires is understandable. Italians were also attracted to Buenos Aires because of the similarity in languages (and culture) between Italy and Spain.
There were also substantial differences in the populations going to the U.S. and Argentina. For example, between 1884 and 1886, two-thirds of the Italian immigrants coming to Argentina were from Northern Italy. During the same years, 85% of Italian immigrants coming to the U.S. were from the south. During later periods, the differences narrowed: in the 1907–1909 period, the number of southern Italian immigrants to Buenos Aires had soared, and 31% of the Italian immigrants came from the north. Still, that number was much higher than in the U.S., where only 9% of Italian immigrants came from Northern Italy.
The somewhat different regional origins suggest that, at least during the earlier periods, the U.S. had greater attraction for the southerners while Argentina had greater attraction for the northerners. The northerners were generally much more skilled: only about 12% of the northerners were illiterate, while 54% of the southerners were illiterate. One interpretation is that the southerners went to America, where industrial wages were higher. The northerners, however, saw greater returns to going to Buenos Aires, which was notably lacking in more skilled workers. (As we will see in the next section, Buenos Aires was, throughout most of the period, a significantly less well-educated city than Chicago.) This suggests that the overall pattern of higher wages in Chicago might mask heterogeneity in the wage differentials for different skill profiles.
Finally, the pull that Buenos Aires had for many immigrants does suggest that real wages might not have been quite as low as they seem relative to the U.S. The economic question is how much of a real wage discount immigrants would have been willing to accept to live in Buenos Aires rather than in the U.S. This remains an open question.
In any event, the weight of evidence suggests that, one century ago, Chicago already had higher income levels than Buenos Aires. The next two subsections will dig deeper into the possible reasons behind that disparity.
While wages were certainly lower in Buenos Aires than in Chicago, wages corrected for education differed less. The Argentines appear to have been significantly less educated for much of this time period. Unfortunately, literacy remains the primary means of measuring education levels, and that, of course, is a quite coarse measure. Nonetheless, Fig. 4 shows literacy rates for Buenos Aires and Chicago during our period.
Fig. 4 Literacy rates in Buenos Aires and Chicago, 1869–1939. Sources: Data for Chicago from the U.S. Census IPUMS (Integrated Public Use Microdata Series) at http://www.ipums.org. Data for Buenos Aires from Primer Censo Nacional (1869), Segundo Censo Nacional (1895), Censo General de la Ciudad de Buenos Aires (1904), Tercer Censo Nacional (1914) Tomo III, and Cuarto Censo General de la Ciudad de Buenos Aires (1939)
In Chicago, overall literacy rates for the population aged ten or older started above 95% in 1870 and stayed at that level for the next 60 years. There was a gap between native and foreign born, but even among foreign-born Chicago residents literacy was never less than 87%. Native literacy was always over 98%, suggesting that pretty much everyone in the city knew how to both read and write.
By contrast, the Buenos Aires data suggest that less than one-half of the population could both read and write in 1869. By 1895, the next available data point, the literacy rate had shot up to 72%, which still meant that a substantial portion of the population was unable to read or write. It was not until 1939 that more than 90% of the population of Buenos Aires was literate. The data are not entirely comparable since they refer to different age groups; still, the differences are quite striking.
Why is there such a difference in literacy rates between the two cities? Table 1 shows school enrollment rates over time for Chicago and Buenos Aires. While enrollment rates are somewhat higher in the U.S., the rates seem much closer than the literacy rates would suggest. The political leaders who came to power after Rosas, such as Mitre and especially Sarmiento in the 1860s and 1870s, were quite committed to public schooling. In 1884, Argentine law made free, secular public schools a right—the Ley 1420 enacted by President Roca, and pushed by Sarmiento in his post-presidency role as head of the National Education Council. There are good reasons to believe that these schooling efforts were particularly successful in the capital, as is apparent from the enrollment data. As such, we cannot explain the literacy gap with different enrollment rates alone.
Table 1 School enrollment in Chicago and Buenos Aires
One explanation for the difference is that immigrants who came to Argentina were significantly less literate than their American counterparts. Just as in the U.S., there is a gap between native and foreign born Argentines. In 1904, for example, 89% of native Argentines were literate, but only 72% of the foreign born in the city could read and write. In 1900 Chicago, by contrast, 93% of the foreign born were literate. Chicago's more Germanic population appears to have been much more skilled than the southern Europeans who came to Argentina. Even though Argentina received a higher share of northern Italians, this did not overcome the basic pattern of attracting much less literate people.
The skill differences between Buenos Aires and Chicago do not just reflect differences in foreign immigration. They also reflect the different levels of schooling in the American hinterland. Chicago was a city of immigrants, but it was also a city full of farm boys and girls who had come to town. Likewise, a large share of Buenos Aires residents was born outside the city, elsewhere in Argentina. While school enrollment rates look broadly similar between Buenos Aires and Chicago, outside of the cities the differences in schooling look rather more substantial.
During the first part of the nineteenth century, American rural areas had embraced the common school movement. Farmers throughout the country had been convinced that educating their children was a worthwhile endeavor that would make them more productive. By contrast, the large ranches that predominated in the Argentine hinterland made no such investments in education. One explanation for the difference is that the returns to skill were much lower in Argentine ranches than in intensive agriculture. Land appears to have been much more widely owned in the U.S., and skills were presumably higher for yeomen farmers than for gauchos.
As a result, the rural areas that fed people to Chicago were reasonably well schooled. The hinterland of Argentina was not, at least prior to 1880. For example, the 1869 census shows that, even after the public education initiatives of the Mitre presidency (although still at the outset of the heavily education-minded Sarmiento presidency), only one in five Argentinean school-age children were enrolled in school. Since that includes data on Buenos Aires, we are led to conclude that the situation in the hinterland was considerably worse than that.
How much of an earnings wedge can be explained by literacy alone? Using data on wages by occupation in 1940 (the first time such data are available), we can estimate a 1940 wage for each occupation in the 1900 U.S. Census. We then estimate the average 1940 wage earned by literate and illiterate Chicagoans.
We find that the average wage earned by an illiterate was 56 log points lower than the average wage earned by someone who could read and write. That premium survives controlling for individual age, and controlling for country of origin reduces the measured premium to 34 log points.
While that premium is extremely significant, it is not enough to explain most, or even much, of the wage gap between Chicago and Buenos Aires at the turn of the last century. The illiterate share of Argentina's population exceeded that of Chicago's population by roughly 16 percentage points. Multiplying those 16 percentage points by even a 56 log-point wage penalty implies that, had illiteracy been the only thing holding Buenos Aires back, the wage gap would have been only about 9 log points. This modest number is dwarfed by the actual 70% wage gap.
Of course, illiteracy is presumably just proxying for a larger educational gap between the two groups. Still, the wage gap seems far too large to be explained by education alone in a simple model where human capital translates directly into productivity. If the returns to schooling were about 7% per annum, then Chicagoans would need to have the equivalent of 10 extra years of schooling to explain the observed wage difference, which is wildly implausible.
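As a rough check on this back-of-the-envelope arithmetic, the following minimal sketch (in Python) reproduces both calculations using the figures quoted above; the 56 log-point penalty, the 16-point illiteracy gap, the 7% return to schooling, and the treatment of the 70% gap as 70 log points are the illustrative values used in the text, not new estimates.

```python
import math

illiteracy_penalty = 0.56   # 56 log-point wage penalty for illiteracy (1940 occupational wages)
extra_illiteracy = 0.16     # Argentina's illiterate share exceeded Chicago's by ~16 points

# Part of the Chicago-Buenos Aires wage gap attributable to literacy alone
literacy_gap = extra_illiteracy * illiteracy_penalty
print(f"Gap explained by literacy: {literacy_gap:.2f} log points (~{math.expm1(literacy_gap):.0%})")

# Schooling needed to explain the full gap, treating the 70% wage gap as 70 log points
wage_gap = 0.70
return_per_year = 0.07      # illustrative 7% annual return to schooling
print(f"Extra years of schooling required: {wage_gap / return_per_year:.0f}")
```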
It is, of course, possible that education impacts earnings both directly and through human capital externalities. An example of such externalities might be that more education leads to more innovation and better technology for everyone. In that case, the impact of greater skills in Chicago would be larger. Still, we suspect that this effect would show up mainly in the occupational and industrial distribution of the two countries, and we turn to that next.
Both Chicago and Buenos Aires owed their growth to their roles as centers for the shipment of natural produce. Both cities also developed other industries which produced goods for people living in the hinterland and the residents of the city itself. Cyrus McCormick is the quintessential example of an industrialist who moved his mechanical reaper operation to Chicago in order to be close to his customers, the farmers of the Midwest. Buenos Aires also had its industrialists, like Ernesto Tornquist, who invested in large factories.
While both cities certainly had industry, Chicago's industry developed earlier and was far more capital-intensive on the eve of World War I. By 1900, 15% of Chicago's population, 262,261 workers, labored in industrial pursuits. Four years later, only 7% of Buenos Aires' population, 68,512 people, were in manufacturing. After that point, however, the share of Chicago's workers in manufacturing stagnated while the share of Buenos Aires workers in manufacturing continued to rise. As a result, their employment in industry converged. By 1914, Chicago had 313,000 industrial workers, or 13% of the city's population. Buenos Aires province had 149,000 industrial workers, which was 9.4% of the city's population.
These similar employment shares were not matched by similar levels of output. In 1914, the U.S. Census reports that the value of Chicago's industrial output was 1.48 billion dollars (or 30 billion in current dollars); the value added by manufacturing was 581 million dollars (or about 12 billion dollars today). Each Chicago worker was associated with 4728 dollars of output (about 100,000 dollars today) or 1856 dollars of value added (about 38,000 dollars today).
In Buenos Aires, total output was 280 million dollars and value added was 122 million dollars. On a per worker basis, each Buenos Aires worker was producing 1880 dollars worth of output (or 38,000 dollars today) and 819 dollars of value added (about 17,000 dollars today). Per worker output was 2.5 times higher in Chicago than in Buenos Aires. Per worker value added appears to have been 2.25 times higher in Chicago than it was in Buenos Aires. This difference in productivity is much larger than the 70% difference in manufacturing incomes that we found during this time period.
Why was manufacturing more productive in Chicago than in Buenos Aires? One hypothesis is that the level of capital per worker was higher in Chicago. In 1914, the total capital in Chicago's manufacturing sector was 1.19 billion dollars, or 3800 dollars per worker (78,000 today). In 1914, Buenos Aires had 231 million dollars worth of capital, or 1550 dollars per worker (32,000 today). The Chicago workers had 2.44 times more capital per worker, which may help to explain the higher levels of productivity.
Using a standard Cobb–Douglas production function, we can estimate whether these capital differences can help explain the labor productivity differences across space. This assumes that output equals \(AK^{\alpha}L^{1-\alpha}\), where A reflects productivity, K reflects capital, L reflects labor and \(\alpha\) reflects capital's share in output (typically taken to be one-third). This equation then implies that per worker productivity equals \(A(K/L)^{\alpha}\), which would equal A times the capital-to-labor ratio to the power 1/3. If the capital/labor ratio was 2.44 times higher in Chicago than in Buenos Aires, this would predict that productivity would be 34% higher in Chicago. Thus, higher capital levels alone can only explain about 27% of the higher productivity levels in Chicago. The remaining 73% of the gap in productivity must be associated with the catchall variable "A", which describes total factor productivity. To explain a 125% greater productivity per worker in Chicago, total factor productivity must be 67% higher in that city.
The productivity gap can come from three sources: human capital, transportation costs and technological development. We have already noted that human capital appears more developed in Chicago. The Cobb–Douglas model, as written above, assumes that labor is measured in equivalent units. Assuming instead that L equals the number of workers times human capital per worker implies that per worker productivity will increase with human capital to the power 2/3. If Chicago's workers had 20% more human capital per worker (which seems high), then this would predict a 13% increase in productivity in Chicago, which can explain another 10% of the observed productivity difference.
This would leave about 60% of the productivity difference to be explained by differences in "A", the productivity parameter, reflecting either more developed technologies or easier access to consumer markets. It is difficult to determine how much of the difference in productivity can be explained by either force. Chicago's industrialists certainly found it easier to sell to a much richer and larger market in the United States. The total GDP of the U.S. was about 18 times larger than the GDP of Argentina in 1913. Argentina's hinterland was filled with large numbers of relatively poor people; the farmers of the Midwest were much wealthier.
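To make the growth-accounting arithmetic of the preceding paragraphs explicit, here is a minimal sketch assuming the figures quoted above (a capital share of one-third, the 2.44 capital ratio, the 2.25 value-added ratio, and the hypothetical 20% human capital advantage); the shares are computed as ratios of percentage increases, as in the text.

```python
# Cobb-Douglas accounting for the Chicago/Buenos Aires productivity gap (1914 figures from the text)
alpha = 1 / 3          # capital share in output
k_ratio = 2.44         # capital per worker, Chicago relative to Buenos Aires
y_ratio = 2.25         # value added per worker, Chicago relative to Buenos Aires
h_ratio = 1.20         # assumed human capital per worker advantage for Chicago (illustrative)

capital_effect = k_ratio ** alpha              # ~1.34: productivity predicted by capital deepening alone
human_capital_effect = h_ratio ** (1 - alpha)  # ~1.13: productivity predicted by labor quality alone

# Shares of the 125% productivity gap: each factor's percentage increase over the total increase
gap = y_ratio - 1
share_capital = (capital_effect - 1) / gap              # ~27%
share_human_capital = (human_capital_effect - 1) / gap  # ~10%
share_tfp = 1 - share_capital - share_human_capital     # ~62% residual, the catchall "A"

print(f"Capital: {share_capital:.0%}, human capital: {share_human_capital:.0%}, TFP residual: {share_tfp:.0%}")
```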
In principle, Argentina could have exported manufactured goods to Europe, but it does not appear to have done so. Almost all of Argentina's exports in 1914 were agricultural, which surely reflects the country's comparative advantage and the large shipping costs for manufactured goods. By contrast, America was an industrial exporter in 1900, and goods from Chicago, like McCormick's reapers, were traveling the globe. Still, it seems likely that these sales tell us more about technology than about transportation costs. In principle, reapers built in Buenos Aires could have been shipped to Russia, just like those in Chicago. It is not obvious that the costs would have been that much higher, if at all. The difference was that Chicago was at the cutting edge of reaper technology, while Buenos Aires was not.
A quick look at Chicago's industrial sectors gives us a sense of the city's level of technology. Table 2 lists the top five industries, by employment, for Chicago in 1910 and Buenos Aires in 1914. A few large industries dominated Chicago manufacturing in the years before World War I. The largest sector was men's clothing production, which employed 38,000 people in 1909. Another 37,000 were in foundry and machine shop products. 27,000 worked in meat-packing. There were also 33,000 in printing and publishing. 12,000 people worked in lumber. 12,000 more workers made cars. 11,000 Chicagoans made furniture and refrigerator units. The meat-packers were directly transforming the products of Chicago's hinterland, but the others were working in more advanced products.
Table 2 Top five industries in Chicago (1909) and Buenos Aires (1914)
Clothing was also Buenos Aires' largest industrial sector in 1914, with 36,000 workers. Moreover, the capital/labor ratios were pretty similar in both cities: both men's clothing in Chicago and "dressing" (vestido y tocador) in Buenos Aires had about 750 dollars per worker in capital, which suggests that both industries were labor-intensive and using relatively similar technologies. In the clothing sector, the level of horsepower per capita was actually higher in Buenos Aires than in Chicago.
The fact that the clothing manufacturers in Chicago were more productive presumably reflects more about the available market than about the state of clothing production technology in the Windy City. Chicago's clothing manufacturers had particularly benefited from the distribution networks in the Midwest put together by Chicago-based retail pioneers, such as Marshall Field, John Shedd (who worked for Field), Montgomery Ward, Richard Sears and Julius Rosenwald (who led Sears, Roebuck after Sears).
However, in other areas, there is much more evidence of Chicago's technological superiority. For example, Chicago had about 12 times more employment in car production in 1910 than Buenos Aires did in 1914. Automobiles in that era were a cutting-edge technology. Argentines would purchase plenty of cars in the teens and 20s, but the bulk of them were imported, often from the United States.
Chicago had 37,000 people in foundry and machine shop products relative to 16,000 people in Buenos Aires in metallurgy. However, in this case, the Americans appear to have been far more industrially advanced using 55,000 units of horsepower (or 1.1 per worker) as opposed to 8000 (or 0.5 per worker) in Buenos Aires. The Chicago workers had 2400 dollars of capital per worker; the blacksmiths in Buenos Aires had less than half that. These different levels of capital suggest that the Argentines were following a much more primitive model of metal machine production than their Chicago counterparts.
Chicago also appears to have been at the forefront of a number of technological breakthroughs, beyond McCormick and his reaper. In the nineteenth and early twentieth centuries, Chicago innovators created the skyscraper, the electric washing machine, the zipper and a host of other significant inventions. It is difficult to find any comparable breakthroughs for Buenos Aires.
Evidence for significant differences in the state of technology also appears in many industrial histories. For example, Torcuato DiTella was a leading Argentine industrialist over the first half of the twentieth century. While DiTella's first success came with a bread-kneading machine that he invented himself, many of his later successes came from importing American technology. For example, in the 1920s, he catered to Argentina's growing population of drivers (many of whom were in American cars), by providing a new gas pump through a licensing arrangement with the American Wayne Gas Pump company. In the 1930s, he began making refrigerators, first licensing from Kelvinator and then Westinghouse.
Why was Chicago more technologically sophisticated than Buenos Aires a century ago? There were surely many reasons, but human capital seems like a particularly important explanatory variable. Education helped spread ideas in the U.S. and gave engineers the background needed for more innovation. The differences in schooling between the two countries help us to understand why America had more developed industries a century ago.
The final major difference between Buenos Aires and Chicago lies in the area of politics. The Argentine Constitution of 1853 has a large number of similarities to the U.S. Constitution, which is not entirely coincidental, as the Argentines looked, in part, to the U.S. model. As in the U.S., there are three branches of government, and a bi-cameral legislature. The legislature included both a directly elected house, the Chamber of Deputies, and an indirectly elected chamber, the Senate. Moreover, between 1862 and 1930, Argentina maintained a reasonable amount of political stability, maintaining at least the appearance of a stable democracy.
Beneath this appearance, however, there were at least four major areas in which Argentina and the United States differed for at least some of that post-Rosas time period. First, until 1912, Argentinean suffrage was far more restricted than that of the United States. For example, after 1850, no U.S. state had property requirements for voting. By 1860, the old tax requirements had also disappeared. Of course, some American states did impose "literacy" qualifications, often in an attempt to exclude African-Americans from voting, but aside from African-Americans in southern states, essentially all American men could vote by the Civil War.
By contrast, universal male suffrage did not appear in Argentina until 1912. For example, as late as 1896, Banerjee et al. (2006) estimate that only 1.6% of Argentina's population voted, in part because of literacy and wealth requirements. Alonso (1993) documents that 1.8% of the city's population, or less than 4% of the male population, voted in the 1896 election. By contrast, more than 40% of Illinois' male population voted in the 1896 U.S. Presidential election, which suggests a far more open democracy in Chicago than in Argentina.
In addition to the limits on suffrage, the Argentinean electoral system did not have a secret ballot. Instead, the voto cantado ("sung ballot")—in which each voter would come to the electoral precinct and loudly declare his preferred candidate, upon which the electoral authority would write it down—guaranteed that a local caudillo could pressure voters into supporting the candidate of his choosing. Ironically, the allegedly liberal arguments often advanced by urban interests against the extension of the franchise—the idea that rural oligarchs would just manipulate their workers' votes—found their match in the allegedly enlightened arguments of the landed oligarchy against the secret ballot, as they argued that it would deprive ignorant workers of the "healthy influence" of their landlords (Sampay 1974).
Argentina's voting rules evolved over the period 1890–1910 (Alonso 1993), and the country moved to universal manhood suffrage and the secret ballot in 1912, with the passage of the Sáenz Peña law. Engerman et al. (2000) document that voter participation increased to 9% (or 18% of the male population) in the 1916 election and 12.8% (or 25% of the male population) in the 1928 election. By 1920, both Chicago and Buenos Aires had mass democracy, but that democracy was much younger in Argentina. As (at least some) political institutions take time to mature, the novelty of that democracy in Argentina may have added to its weakness.
Not only were electoral rules different between the two cities until 1912, electoral practices were as well. It is unclear whether Buenos Aires or Chicago had more electoral corruption, as allegations of voter abuse flew in both places. Textbooks on Argentinean history regularly describe the corruption of nineteenth century politics. The voto cantado system, in particular, gave tremendous power to the electoral judges who were in charge of writing down the vote announced by each voter and invited widespread corruption on their part. For example, Rock (1987) writes that "only a small fraction of the nominally enfranchised population voted in elections, which local bosses regulated by manipulating the electoral rolls or by simple bribery and intimidation."
However, American politics during the Gilded Age was hardly a model of probity. The tale of Charles Yerkes and his acquisition of traction franchises with payments to Chicago politicians, told in fiction by Theodore Dreiser, is among the most famous of all Gilded Age political stories. As late as 1960, rumors alleged that Mayor Daley had manufactured vast numbers of votes for John F. Kennedy in Chicago. Since electoral fraud is hard to measure, and allegations of fraud abound in both places, it would be hard to claim any clear ranking between the two cities in that area.
In any event, it is certainly true that mass violence was far more regular in Argentina than in the U.S., at least after the bloodbath of the Civil War. It is clear that elections in Chicago were not leading to major armed outbreaks. America, of course, did have one election which ended up in open warfare, but after 1865 disagreements over outcomes did not lead to large-scale battles. Not so in Argentina.
Buenos Aires was no stranger to political conflict during the late nineteenth century and early twentieth century. In 1880, 1890, 1893 and 1905, Argentina experienced major uprisings; three of those started in Buenos Aires, and the fourth also reached it. The 1880 uprising was associated with the election of Julio Roca as President of Argentina. Roca was seen as favoring nationalization over decentralization and he defeated Carlos Tejedor, a favorite in Buenos Aires. After the electoral defeat, 10,000 Buenos Aires residents rose up and a bloody battle ensued with 3000 casualties. Roca secured the presidency, and the centralization of Argentina, only by suppressing the revolt.
After that point, the República Conservadora ("Conservative Republic") that lasted between 1880 and 1916, under the oligarchic rule of the so-called Generación del'80 ("Year'80 Generation"), faced constant pressure from the "Radical" opposition. This often spilled into armed conflict, such as in 1890, 1893 and 1905. The 1890 revolution was associated with the somewhat leftist Civic Union group, which was actually led by Mitre himself, and it aimed to topple the President Miguel Celman. In that, the uprising succeeded and led to the presidency of Carlos Pellegrini, who was a general opposing the revolt. In 1893, an uprising led by the Radical Civic Union, an offshoot of the Civic Union, started in the Santa Fe region of Argentina, but also spilled over into the capital city. In 1905, the Radical Civic Union led another revolt in Buenos Aires, which was unsuccessful. In addition, the anarchist- and socialist-influenced labor movement brought about by European immigrants contributed to the political turmoil with massive strikes such as the "tenants' strike" of 1907 and the "Red Week" of 1909.
The coup of 1930, which would oust President Yrigoyen, is often seen as a turning point in Argentine politics, where democracy was replaced with military rule. However, we have seen that this coup was hardly without precedent. Four times between 1880 and 1905, revolts starting in or reaching Buenos Aires shook the country and often achieved a fair amount of success. This suggests a degree of instability in Buenos Aires that was much more extreme than in Chicago.
Chicago did have uprisings, most notably the Haymarket Riot of 1886 and the Chicago Race Riot of 1919. The labor union movement also made its presence felt, of course, as illustrated by the Haymarket episode, the "Teamsters' strike" of 1905 and the "Garment strike" of 1910, all of which ended with many killed and injured in confrontations with police. Broadly speaking, Chicago was hardly a model of social order. Although, in 1890, homicide rates were about two times higher in Buenos Aires than in Chicago, by the 1920s, after Prohibition, the picture was essentially reversed.
While both Chicago and Buenos Aires had uprisings, their consequences were vastly disparate. If the immediate consequences of the Haymarket riot were the controversial execution of seven anarchists and a boost to May Day commemorations around the world, the Buenos Aires events had far more direct consequences for the Argentinean political system. The Revolution of the Park, in 1890, while defeated by government forces, still led to the fall of President Celman. The 1893 Revolution also took over the Casa Rosada before being defeated. In fact, the consensus interpretation of the Sáenz Peña law among historians describes it as largely motivated by the rising tension and the pressure exerted by the Radical opposition, galvanized by the battle cry of the secret ballot and universal suffrage, and by the labor movement. As a result of the electoral reform, the Conservative Republic also met its demise in 1916, when the Radical Yrigoyen won the presidency in the first election under the new rules.
What can explain these different consequences? The relative immaturity of the Argentine democracy certainly played a part, but it is also the case that the location of Buenos Aires at the very heart of the country's politics, as the all-important capital city in which by 1914 more than one in six Argentineans lived, made Porteño turmoil more consequential. In fact, Argentina still is one of the countries with the highest concentration of population around the capital city in the whole world—it has the highest concentration among countries with large territories—using the measure developed in Campante and Do (2010).
The centrality of Buenos Aires, of course, is not simply related to its designation as the capital city. From the very early years of the independent Republic, the city's enormous weight in terms of population and economic activity, which was engendered by its position as the gateway to the hinterland and by the low labor intensity of the dominant cattle-raising activity, posed a constant challenge to the Argentinean federal system. This is illustrated by the perennial tension between the Province of Buenos Aires—which was still fighting the idea of joining the Union, on the battlefield, as late as 1862—and the other provinces, which culminated in the federalization of the city of Buenos Aires in 1880. Chicago, in contrast, was a relative latecomer to the Union, which the state of Illinois joined more than 40 years after independence—and Chicago, of course, is not even the capital of that state.
In any event, the fact is that the 1890 Revolution, for instance, started in the Artillery Park, located a half-mile from the Casa Rosada. The Haymarket riot, in contrast, took place some 700 miles away from the White House. For this reason, it is very likely that the political and social instability that brewed in the similar environments of Chicago and Buenos Aires, both of which were undergoing rapid transformation, had much more detrimental consequences for Argentina in terms of the consolidation of its democracy.
There is a strong connection between urban concentration in and around a primate capital and political instability (Ades and Glaeser 1995; Campante et al. 2016), which reflects causality running in both directions. For at least 2500 years, urban mobs have had the ability to force political change. In 509 B.C., Lucius Junius Brutus led the coup that ousted the last Roman King. In 411 B.C., Athenian democracy was ended by another urban coup. The history of Europe's great medieval cities, like Bruges, is replete with organized opposition to aristocratic rule. France's political instability in the nineteenth century owes much to the power of Parisian mobs to topple governments.
The fundamental ingredient in a successful revolt is scale: isolated activists can do little to challenge a government. Urban density makes it easier to form connections, which can create a sufficiently large uprising. Riots are, after all, a primarily urban phenomenon (DiPasquale and Glaeser 1997). The political importance, however, of urban riots depends on their proximity to power (Campante et al. 2016). That explains why uprisings in Buenos Aires were so much more important than those in Chicago.
The political power of urban mobs can lead to two political responses. The first is to placate the mob with public handouts and services (Campante et al. 2016). Classical Rome's vast bread doles, for example, can be understood as an attempt to cool the mobs organized by the Gracchi and others. The general tendency of developing countries to target public services to the capital is a more modern example of this phenomenon. Of course, placating urban unrest has the effect of then further expanding the size of the capital city. For this reason, the connection between political instability and capital size is two-sided. A large capital appears to create instability, and instability means that services flow to the capital, which attracts migrants and further increases its size.
In some cases, political leaders respond to the threat created by urban unrest by moving their capital far away from the city (Campante et al. 2016). When Peter the Great moved his capital to St. Petersburg he was protecting his regime from the influence of Muscovites. Likewise, America's founders chose to create a new capital on the Potomac, in part to reduce the influence of people in New York and Philadelphia (America's first capitals). America's largest riot, the 1863 New York City draft riot, could have had a much larger influence on history if New York, rather than Washington, had been the capital of the U.S.
In light of these facts, we are led to conclude that the large, primate capital of Argentina might have played a major role in the nation's twentieth century political problems.
Did those differences matter?
We have argued that, despite the enormous similarities between Chicago and Buenos Aires, there were substantial differences in income, education, industrial development and political institutions. The main question that remains is the extent to which each one of those differences might be able to account for the different paths of Buenos Aires and Chicago, and more broadly those of Argentina and the U.S., in the twentieth century. In principle, any one of those differences could have played a role. A "big push" theory of growth (e.g. Rosenstein-Rodan 1943; Murphy et al. 1989) might suggest that higher levels of income could have put the U.S. on a path towards industrialization. Human capital might have influenced growth directly, or indirectly, through industrial development or political change. The fact that Buenos Aires was far less industrial than Chicago, and far more dependent on natural resources, set the stage for the declines of the 1930s, when the price of natural resources plummeted. The political differences of Buenos Aires might have played a role in explaining the political traumas that Argentina experienced over the twentieth century.
A system with two countries and four potential explanatory variables is, of course, overdetermined. The only way to evaluate the relative importance of these four factors is to bring in other countries. We will do this directly, by running a set of cross-national regressions, while drawing on the long literature on the determinants of differences in country-level prosperity, such as Hall and Jones (1999). Although the limitations of cross-country regressions are well known, they can nevertheless provide us with a benchmark quantitative assessment of our candidate explanations.
We start from the premise that there is a link between relevant outcomes such as income today and variables in the early twentieth century, and that we can look at 100-year regressions at the cross-country level to estimate the impact that the latter have on the former. We then multiply these estimated coefficients by the differences in initial conditions between the U.S. and Argentina, to get a sense of the amount of today's differences that can be explained by the different initial conditions in this specific comparison. Essentially, we are assuming a model of the following form:
$$ Y_{\text{Today},j} = \sum_{i} \beta_{i} X_{i,1900,j} + \varepsilon_{j}, $$
where \( Y_{\text{Today},j} \) is country j's outcome today, \( \beta_{i} \) is the coefficient on explanatory variable i, \( X_{i,1900,j} \) is the value of explanatory variable i in country j in 1900 and \( \varepsilon_{j} \) is a country-specific error term. This estimating equation then suggests that the differences in outcomes between Argentina and the U.S. today can be understood as follows:
$$ Y_{\text{Today,US}} - Y_{\text{Today,Argentina}} = \sum_{i} \beta_{i} \left( X_{i,1900,\text{US}} - X_{i,1900,\text{Argentina}} \right) + \varepsilon_{\text{US}} - \varepsilon_{\text{Argentina}}. $$
The ratio \( \frac{\beta_{i} \left( X_{i,1900,\text{US}} - X_{i,1900,\text{Argentina}} \right)}{Y_{\text{Today,US}} - Y_{\text{Today,Argentina}}} \) is the share of the current differences between Argentina and the U.S. that can be explained by variable i. The cross-country regressions will furnish our estimates of the coefficients \( \beta_{i} \).
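To make the accounting concrete, the share defined above can be computed directly once a coefficient and the corresponding 1900 gap are in hand. The snippet below is a purely illustrative sketch in Python, not part of the original analysis; the numerical values are the univariate schooling coefficient (14.4), the 1900 enrollment gap (0.12) and the 1.2 log point GDP gap in 2000, all of which are reported later in the text.

def explained_share(beta, gap_1900, outcome_gap_today):
    # beta * (X_US - X_Argentina in 1900), divided by today's outcome gap
    return beta * gap_1900 / outcome_gap_today

# Schooling example: 14.4 * 0.12 = 1.73 log points, i.e. more than the
# observed 1.2 log point gap in per capita GDP in 2000.
print(explained_share(14.4, 0.12, 1.2))  # about 1.44, i.e. over 100%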
Our primary outcome variable is the logarithm of per capita GDP in 2000, calculated using purchasing power parity and taken from the Maddison (2008) data set. Since GDP is typically measured at the country, not city level, we will be using national GDP measures and national characteristics a century ago. Using this variable, the difference in log of GDP per capita between the U.S. and Argentina is 1.2, which means that American incomes were 230% higher than those in Argentina in 2000. This is, of course, much larger than the 48% difference shown in 1900 GDP data [also from the Maddison (2008) data set].
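To make the log point arithmetic explicit (our calculation, using the figures just cited):

$$ e^{1.2} \approx 3.3 \;\Rightarrow\; \text{a gap of roughly } 230\%, \qquad \ln(1.48) \approx 0.39 \;\Rightarrow\; \text{a 1900 gap of roughly } 0.4 \text{ log points}. $$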
We will also look at a political outcome variable, as well as GDP, because so much of the work on Argentina has emphasized the interaction of political and economic distress (e.g. della Paolera and Gallo 2003). We focus on the democracy score of the country, as measured by the "Polity 2" variable from the Polity IV data set, averaged between 1970 and 2000. This measure subtracts a 0-to-10 "Autocracy" score from a 0-to-10 "Democracy" score (both of which constitute indices of institutional features), resulting in values ranging from − 10 to 10. We use a long-run political average, because democracy measures vary substantially from year to year. Moreover, Argentina's current political environment is far more stable than even its recent past, and looking only at the most recent data would understate the extent of the country's political turbulence. (For the period average, Argentina scores 2.06, while the U.S. scores 10.) We will look at GDP first, then politics, and then ask whether controlling for current politics helps us to understand the differences in GDP.
Our key explanatory variables are per capita GDP in 1900 [from Maddison (2008)], which is available for 37 countries, and measures of school enrollment for the same year (from Banks). Our school variable adds together the enrollment rates for primary, secondary and university education. (The most important variable is primary education, and results are similar if we use that variable alone.) We have 36 countries with this variable. Our third variable is the share of manufacturing in total output in the early twentieth century, which we obtain from multiple sources (Milward and Saul 1977; Bulmer-Thomas 1994; Engerman and Sokoloff 2000; Urquhart 1993). (The actual year varies by country, between 1899 and 1920; most come from around 1913.) This variable captures the degree of industrialization a century ago, but it is only available for 16 countries. Finally, we use the average of the Polity 2 variable between 1870 and 1900 to measure institutional development.
As these variables are often quite collinear, and as they are available for different subsamples of countries, we begin by examining the univariate relationship between these explanatory variables and the logarithm of per capita GDP in 2000. Regression (1) in Table 3 shows the relationship between GDP in 1900 and GDP today. The lagged variable explains 65% of the variation in current GDP across the 37 countries. Essentially, the elasticity is one, meaning that if a country was 10% richer than another in 1900, then it is 10% richer today.
Table 3 Nineteenth century variables and twentieth century economic performance
Figure 5 shows the relationship between income in 1900 and income today. The relationship certainly is tight, but Argentina is an outlier, falling substantially below the regression line. If we were to accept the coefficient of 1.01 on log GDP per capita in 1900, then initial income levels would only predict a 0.4 log point difference today. This translates into a difference of about 49%, which is just about one-fifth of the total difference in incomes between Argentina and the U.S.
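The arithmetic behind that statement is simply (our calculation, with the 230% income gap in 2000 expressed as 2.3):

$$ 1.01 \times 0.39 \approx 0.4 \text{ log points}, \qquad e^{0.4} - 1 \approx 0.49, \qquad 0.49/2.3 \approx 0.21 \approx \tfrac{1}{5}. $$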
Fig. 5 GDP per capita in 1900 and in 2000. Source: GDP per capita from Maddison
In the second regression, we look at the connection between our schooling variable and GDP today. The R-squared rises to 70%, and a five-percentage-point increase in the share of the population attending school is associated with a 0.7 log point increase in GDP today, which amounts to roughly a doubling. This captures the enormously strong connection that schooling in the past appears to have with current income levels (as in Glaeser et al. 2004). Figure 6 shows the connection between schooling enrollments in 1900 and income today. In this case, Argentina lies on the regression line and the U.S. is somewhat beneath it.
Fig. 6 School enrollment in 1900 and GDP per capita in 2000
Can the difference in schooling explain current income levels? We will return to this question later, when we have controlled for other variables, but a simple thought experiment using the univariate coefficient suggests the power of education. The gap in enrollment rates between Argentina and the U.S. in 1900 is 0.12. While Buenos Aires may have had comparable enrollment rates to Chicago, outside the city education levels were far lower than in the U.S. Multiplying 0.12 by the estimated coefficient of 14.4 suggests a current income difference of 1.80 log points, which is actually substantially larger than the realized income difference. While this fact tells us nothing about whether schooling is actually determining the gap or whether it is just proxying for something else, the raw coefficient suggests that the cross-country relationship in income suggested by 1900 schooling levels can account for the current differences between Argentina and the U.S.
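In terms of the underlying arithmetic (our calculation; with the rounded enrollment gap of 0.12 the product is about 1.7 log points, so the 1.80 figure above presumably reflects an unrounded gap):

$$ 0.05 \times 14.4 = 0.72 \;\Rightarrow\; e^{0.72} \approx 2.1, \qquad 0.12 \times 14.4 \approx 1.7 > 1.2. $$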
Our third regression looks at the share of manufacturing in output around 1913. We only have 16 observations, but again, the relationship with current income is positive and significant. As in the case of income, however, even the univariate regression does not suggest that this variable is powerful enough to explain more than a quarter of the current difference between the U.S. and Argentina.
Finally, we look at the correlation between political instability in the late nineteenth century and GDP today. The explanatory power of this variable is much weaker than the other variables. As Fig. 7 shows, there are plenty of once unstable countries that are now quite prosperous. Argentina may have been less stable than the U.S., but it was more stable and democratic than many European countries which are now far more prosperous. Still, the correlation between nineteenth century instability and wealth today might explain something of the current differences between Argentine and U.S. wealth. Using the univariate coefficient, we find that the differences in the historical politics measures would predict a 0.7 log point difference in incomes today, which is more than half of the total income differences.
Fig. 7 Political instability and GDP per capita in 2000. Source: GDP per capita from Maddison; political instability from Marshall and Jaggers
In sum, the univariate relationships suggest that human capital and politics both have a chance at explaining significant amounts of the differences in income between the U.S. and Argentina. The other variables appear less important. To sort out the relative importance of these different variables, we now turn to multivariate regressions. In regression (5), we include both GDP and schooling as control variables. The coefficient on GDP drops by almost 75% and becomes statistically indistinguishable from zero; the coefficient on schooling retains statistical significance but drops by one-half. The bulk of this drop does not come from controlling for income, but rather from restricting the sample size. We do not have GDP figures in 1900 for many poorer countries, especially in Latin America; as a result, the sample becomes wealthier and the coefficient (which is smaller across richer countries) becomes smaller.
In regression (6), we control for manufacturing and schooling. When we control for schooling, the coefficient on manufacturing is very small, and just borderline significant at the 10% level. The coefficient on the schooling variable is 7.4. When we include GDP in the regression (not shown), controlling for manufacturing drives the coefficient on GDP in 1900 essentially to zero; the coefficients on the other two variables remain largely unaffected, but the significance of manufacturing is removed. In regression (7), we control for politics as well as the schooling variable. In this case, politics becomes insignificant, and the coefficient on schooling is essentially the same as in the univariate case.
These results strengthen the case for the central role played by differences in schooling, but we still need to investigate what happens when the full set of variables is simultaneously included. This is what we do in regression (8) (with the exception of manufacturing, which causes our sample to shrink too much). With all three variables, schooling remains significant with a coefficient of 7.6. The other two variables are not. We take away from these regressions the view that no variable, other than schooling in 1900, has a reliable correlation with GDP in 2000. The coefficient on schooling ranges from 7.5 to 14.5.
We have already shown that if the schooling coefficient is 14.5 it can more than explain the current differences between Argentina and the U.S. How much of those differences can schooling in 1900 predict if the coefficient is smaller? For example, if the coefficient is 10, then the differences in schooling levels in 1900 would predict a 1.2 log point difference in current incomes, which is exactly the difference in 2000. If the coefficient is 7.5, then the schooling difference can explain 75% of the current income differences. As such, human capital in 1900 seems to predict the lion's share of the difference in current incomes.
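The two thought experiments amount to (our arithmetic):

$$ 0.12 \times 10 = 1.2, \qquad 0.12 \times 7.5 = 0.9 = 0.75 \times 1.2. $$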
But why would historical human capital levels predict such large income differences? One obvious explanation is that human capital in 1900 predicts human capital today and that current human capital differences explain the gap between the U.S. and Argentina. It is certainly true that schooling in 1900 is strongly correlated with schooling today: the correlation coefficient between our enrollment data and total years of schooling in 2000 taken from Barro-Lee (2000) is 85%.
Moreover, years of schooling today certainly strongly predict income levels. A univariate regression of log of GDP on total years of schooling in 2000 finds a coefficient of 0.369 (R-squared: 0.745). The gap in total years of schooling between Argentina and the U.S. today is 3.22 years (12.05–8.83). Taking the estimated univariate coefficient literally suggests that current schooling differences can explain 98% of the current GDP gap between the U.S. and Argentina.
But what does this univariate coefficient mean? Our cross-country coefficient certainly implies a much higher effect than estimates from individual-level studies, where an extra year of schooling rarely increases wages by more than 10 or at most 15% (e.g. Ashenfelter and Krueger 1994; Card 1999). If that lower range of coefficients represented the link between education and productivity, then higher education levels in the U.S. can explain less than one-third of the difference in incomes between Argentina and the U.S.
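To spell out both calculations (our arithmetic, using the cross-country coefficient of 0.369 and a Mincerian return of roughly 10% per year of schooling):

$$ 0.369 \times 3.22 \approx 1.19 \approx 1.2, \qquad 0.10 \times 3.22 \approx 0.32, \quad 0.32/1.2 \approx 27\% < \tfrac{1}{3}. $$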
How can we reconcile the gap between individual-level estimates of human capital effects and country-level estimates of human capital effects? One view is that the larger coefficients at the national level represent human capital spillovers. Living in a country with more skilled individuals may make everyone more productive, perhaps because skilled workers are responsible for determining the level of technology in a given country. However, cross-metropolitan area studies of human capital spillovers generate an estimate that is positive, but far too small to account for the size of the cross-country coefficient (Rauch 1993; Acemoglu and Angrist 2000).
One explanation for the difference between the cross-city estimates and the cross-country estimates is that—as suggested by Glaeser et al. (2007), building on the famous Lipset (1959) hypothesis—schooling is responsible for political outcomes. In particular, stable democratic institutions tend to be predicated on the level of schooling of the citizenry. According to this view, Argentina's problematic political history during the twentieth century has its roots in the relatively lower human capital levels of the country in 1900 (Footnote 2). To test this hypothesis, in Table 4, we reproduce the exercise from Table 3, but now with political stability between 1970 and 2000 as our dependent variable.
Table 4 Nineteenth century variables and twentieth century democratic institutions
The first four regressions repeat the univariate relationships shown in Table 3. As before, all of these variables predict the outcome variable. Schooling has the strongest correlation with democracy during the late twentieth century, but the other variables also predict democratic stability. In the fifth regression, we include all of the variables—again with the exception of manufacturing, which depletes too much of the sample. In this case, schooling continues to predict democracy, and the coefficient is essentially unchanged. None of the other variables remain statistically significant.
Can the schooling differences between Argentina and the U.S. explain the instability of late twentieth century Argentina, in a quantitative sense? The difference in the two outcome variables is 7.94. The estimated coefficient on schooling is approximately 52. Multiplying 52 by the schooling difference in 1900 yields an estimate of 6.24, which is 79% of the observed instability difference. While the schooling differences cannot explain all of the differences in democracy, they can certainly go most of the way.
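The arithmetic is straightforward (our calculation):

$$ 52 \times 0.12 = 6.24, \qquad 6.24/7.94 \approx 0.79. $$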
Our final exercise is to see whether the connection between education and democracy can explain why schooling in 1900 is so correlated with incomes today. Going back to the specification from Table 3, we now include the twentieth century politics variable in a regression that also includes schooling in 1900. Including this variable causes the coefficient on schooling to decrease by more than a third, relative to the univariate regression, but the coefficient remains 8.87, which is still quite high. If we include both democracy today and GDP in 1900 as controls, then the coefficient on schooling in 1900 falls to 2.7 and is no longer significantly different from zero, as shown in regression (9). We interpret these regressions as suggesting that much of the impact of relatively low levels of schooling in Argentina went through political channels.
Whatever remains of the schooling effect may work either through unmeasured political channels, or direct human capital effects, or through better technology. Hopefully, further work will better help us to understand the strong connection between historical schooling and current GDP in a broader context. In our specific case, however, it does seem to be true that Argentina's collapse, relative to the U.S., had much to do with lower education levels.
There were many similarities between the historical trajectories of Chicago and Buenos Aires. Both cities were conduits for natural wealth coming from the American hinterland to the markets of the east. Both cities dealt in the same products, first animals and then grain. Both cities grew spectacularly and were among the wealthiest places on earth a century ago.
However, even 100 years ago there were substantial differences between the two cities. Chicago was wealthier and better educated. Its industries were more advanced and more capital intensive. Its political system was more stable, and its instability was less consequential. All told, Buenos Aires looks more like a place that became rich because of a boom in natural resources. Chicago used those natural resources and then transitioned into becoming a more modern industrial place, with substantially greater levels of physical and human capital.
The gap in industrial development and human capital then set the stage for the twentieth century. Across countries, schooling in 1900 strongly predicts success today, partially because less schooled places have had far worse political outcomes. America's greater level of human capital in 1900 surely deserves much credit for its track record of twentieth century political stability. In this regard, the effects of the lower levels of human capital in Buenos Aires were in turn magnified by its overwhelming political importance within Argentina. All in all, the divergence between Chicago and Buenos Aires reflects the fact that Buenos Aires in 1900 had wealth levels that were far higher than its actual level of human and physical capital accumulation.
From a slightly broader perspective, particularly within the context of Latin America, this conclusion sounds somewhat dispiriting. After all, by the standards of the region, Argentina did invest early and heavily in human capital accumulation and achieved a stage of near-universal literacy and enrollment way before most of its neighbors—many of which are still considerably off that mark. Still, it seems that the human capital lag it displayed in comparison with the US or Western Europe, even in its heyday, ended up trapping the country with relatively immature political institutions. This fragility was in turn made more acute by the geographical concentration of population and economic activity around Buenos Aires and eventually plunged the country into a cycle of instability from which its economic performance could not escape unscathed. President Sarmiento seemed to have his finger on the right issue when he stated that "all problems are problems of education", but for Argentina we might add that this recognition was not enough.
The events at the 1968 Democratic Convention were as close as Chicago ever came to toppling a government. While many observers link the Chicago riots with Richard Nixon's success in the election, it remains true that Nixon came to power through an electoral process that is quite different from the paths to power of several 20th century Argentine leaders.
The relatively low levels of human and physical capital might have influenced political instability in Argentina through yet another channel. Campante and Chor (2012) show evidence that, in countries that are relatively land-abundant, individual schooling tends to be more strongly associated with political activity, particularly for "conflictual" modes of activity such as demonstrations. This suggests that, for the case of Argentina, its dearth of physical and human capital relative to the U.S. meant that the country's investments in expanding education were partly translated into relatively more political conflict.
Acemoglu D, Angrist J (2000) How large are human-capital externalities? Evidence from compulsory schooling laws. NBER Macroecon Annu 15:9–59
Ades AF, Glaeser EL (1995) Trade and circuses: explaining urban giants. Q J Econ 110:195–258
Alonso P (1993) Politics and elections in Buenos Aires, 1890–1898: the performance of the radical party. J Latin Am Stud 25(3):465–487
Anuario Estadístico de la Ciudad de Buenos Aires, Año VI -1896-, G. Kraft, Buenos Aires, 1897
Anuario Estadístico de la Ciudad de Buenos Aires (Resúmen de los años 1915/23), Briozzo Hnos., Buenos Aires, 1925
Anuario Municipal 1937–1938, sin datos de imprenta (colección de la biblioteca del Ministerio de Economía y Producción)
Ashenfelter O, Krueger A (1994) Estimates of the economic return to schooling from a new sample of twins. Am Econ Rev 84(5):1157–1173
Banerjee AV, Bénabou R, Mookherjee D (eds) (2006) Understanding poverty. Oxford University Press, Oxford
Barro RJ, Lee JW (2000) International data on educational attainment: updates and implications. CID working paper no. 042
Brock HG (1919) Boots and shoes, leather, and supplies in Argentina, Uruguay, and Paraguay. United States Dept. of Commerce, Government Printing Office, Washington
Brown JC (1976) Dynamics and autonomy of a traditional marketing system: Buenos Aires, 1810–1860. Hispanic Am 56(4):605–629
Bulmer-Thomas V (1994) The economic history of Latin America since independence. Cambridge University Press, Cambridge
Campante FR, Chor D (2012) Schooling, political participation, and the economy. Rev Econ Stat 94(4):841–859
Campante FR, Quoc-Anh D (2010) A centered index of spatial concentration: expected influence approach with an application to population and capital cities. HKS Faculty Working Papers RWP09-005
Campante F, Quoc-Anh D, Bernardo G (2016) Capital cities, conflict, and misgovernance. Harvard Kennedy School (unpublished)
Card DE (1999) The causal effect of education on earnings. Handb Labor Econ 3A:1801–1863
Censo Escolar de la Nación. Various years. Argentina
Censo General de la Ciudad de Buenos Aires. Various years. Argentina
Censo Nacional. Various years. Argentina
Della Paolera G, Gallo E (2003) Epilogue: the Argentine Puzzle. In: della Paolera G, Taylor A (eds) A new economic history of Argentina. Cambridge University Press, Cambridge
Di Tella G, Zymelman M (1967) Las Etapas del Desarrollo Económico Argentino. Editorial Universitaria de Buenos Aires, Buenos Aires
DiPasquale D, Glaeser EL (1997) Incentives and social capital: are homeowners better citizens? Harvard Institute of Economic Research working paper 1815. Harvard Institute of Economic Research
Engerman SL, Sokoloff KL (2000) History lessons: institutions, factor endowments, and paths of development in the new world. J Econ Perspect 14(3):217–232
Engerman SL, Haber S, Sokoloff KL (2000) Inequality, institutions, and differential paths of growth among new world economies. In: Menard C (ed) Institutions, contracts and organizations. Edward Elgar, Cheltenham
Glaeser EL, La Porta R, Lopez-de-Silanes F, Shleifer A (2004) Do institutions cause growth? J Econ Growth 9:271–303
Glaeser EL, Ponzetto G, Shleifer A (2007) Why does democracy need education? J Econ Growth 12(2):77–99
Hall RE, Jones CI (1999) Why do some countries produce so much more output per worker than others? Q J Econ 114(1):83–116
Hanson SG (1938) Argentine meat and the British market: chapters in the history of the Argentine meat industry. Stanford University Press, California; H. Milford, Oxford University Press, London
Lipset SM (1959) Some social requisites of democracy: economic development and political legitimacy. Am Polit Sci Rev 53(1):69–105
Maddison A (2008) Historical statistics for the world economy: 1-2006 AD. Last updated October 2008. http://www.ggdc.net/maddison/
Milward AS, Saul SB (1977) Development of the economies of Continental Europe, 1850–1914. Harvard University Press, Cambridge
Murphy KM, Shleifer A, Vishny RW (1989) Industrialization and the big push. J Polit Econ 97(5):1003–1026
Población de Buenos Aires. Various Years. Buenos Aires
Rauch JE (1993) Does history matter only when it matters little? The case of city-industry location. Q J Econ 108(3):843–867
Rock D (1987) Argentina, 1516–1987: from Spanish Colonization to Alfonsín. University of California Press, California
Rosenstein-Rodan PN (1943) Problems of Industrialization of Eastern and South-Eastern Europe. Econ J 53(210/211):202–211
Sampay AE (1974) Constitución y pueblo, 2nd edn. Cuenca Ediciones, Buenos Aires
United States Census. Various years. Washington DC
Urquhart MC (1993) Gross national product, Canada, 1870–1926: the derivation of the estimates. McGill-Queen's University Press, Buffalo
Williamson JG (1995a) Reform, recovery, and growth: Latin America and the Middle East. University of Chicago Press, Chicago
Williamson JG (1995b) The evolution of global labor markets since 1830: background evidence and hypotheses. Explor Econ Hist 32(2):141–196
Harvard University and NBER, Cambridge, USA
Filipe Campante & Edward L. Glaeser
Correspondence to Edward L. Glaeser.
Both authors thank the John S. and Cynthia Reed foundation for financial support. Conversations with John Reed helped start this project. We also thank the Taubman Center for State and Local Government for financial assistance. We are grateful to Kristina Tobio for her usual superb research assistance and to Esteban Aranda for his outstanding assistance with the Argentine data.
Campante, F., Glaeser, E.L. Yet another tale of two cities: Buenos Aires and Chicago. Lat Am Econ Rev 27, 2 (2018) doi:10.1007/s40503-017-0052-7
Argentine exceptionalism
Comparative development
Nutritional and health status of children 15 months after integrated school garden, nutrition, and water, sanitation and hygiene interventions: a cluster-randomised controlled trial in Nepal
Akina Shrestha1,2,3,
Christian Schindler1,2,
Peter Odermatt1,2,
Jana Gerold1,2,
Séverine Erismann1,2,
Subodh Sharma4,
Rajendra Koju3,
Jürg Utzinger1,2 &
Guéladio Cissé1,2
It has been suggested that specific interventions delivered through the education sector in low- and middle-income countries might improve children's health and wellbeing. This cluster-randomised controlled trial aimed to evaluate the effects of a school garden programme and complementary nutrition, and water, sanitation and hygiene (WASH) interventions on children's health and nutritional status in two districts of Nepal.
The trial included 682 children aged 8–17 years from 12 schools. The schools were randomly allocated to one of three interventions: (a) school garden programme (SG; 4 schools, n = 172 children); (b) school garden programme with complementary WASH, health and nutrition interventions (SG+; 4 schools, n = 197 children); and (c) no specific intervention (control; 4 schools, n = 313 children). The same field and laboratory procedures were employed at the baseline (March 2015) and end-line (June 2016) surveys. Questionnaires were administered to evaluate WASH conditions at schools and households. Water quality was assessed using a Delagua kit. Dietary intake was determined using food frequency and 24-h recall questionnaire. Haemoglobin levels were measured using HemoCue digital device and used as a proxy for anaemia. Stool samples were subjected to a suite of copro-microscopic diagnostic methods for detection of intestinal protozoa and helminths. The changes in key indicators between the baseline and end-line surveys were analysed by mixed logistic and linear regression models.
Stunting was slightly lowered in SG+ (19.9 to 18.3%; p = 0.92) and in the control (19.7 to 18.9%). Anaemia slightly decreased in SG+ (33.0 to 32.0%; p < 0.01) but markedly increased in the control (22.7 to 41.3%; p < 0.01). Intestinal parasitic infections declined strongly in SG+ (37.1 to 9.4%), whereas only a minor decline was found in the control (43.9 to 42.4%). Handwashing with soap before eating increased strongly in SG+ (from 74.1 to 96.9%; p = 0.01), compared to only a slight increase in the control (from 78.0 to 84.0%). A similar observation was made for handwashing after defecation (increase from 77.2 to 99.0% in SG+ versus 78.0 to 91.9% in control; p = 0.15).
An integrated intervention consisting of school garden, WASH, nutrition and health components (SG+) increased children's fruit and vegetable consumption, decreased intestinal parasitic infections and improved hygiene behaviours.
Trial registration
ISRCTN17968589 (date assigned: 17 July 2015).
Childhood is a critical period for the development of eating patterns that persist into adulthood, particularly with regard to fruit and vegetable consumption [1]. Hence, it is vital that children learn early about the importance of a balanced diet, including fruits and vegetables [2]. Considering the importance of adequate nutrition in childhood to achieve healthy growth and development, giving children opportunities to learn about fruits and vegetables, including their benefits, may help to facilitate the increase in their intake that could prevent malnutrition [1]. School gardens are considered an ideal setting to facilitate dietary behaviour change among children. They offer the potential to increase children's exposure to, and consumption of, fruits and vegetables [3]. Studies indicate positive effects on children's food preferences and dietary habits, including fruit and vegetable consumption, as well as on their knowledge of the benefits of these foods for good health and the prevention of malnutrition [4, 5]. School garden education also provides a context for understanding seasonality, what needs to be eaten and where food comes from [1, 6]. Furthermore, it provides an opportunity to teach life skills to school-aged children, including gardening and working cooperatively on planting and harvesting [1].
Malnutrition, inadequate water, sanitation and hygiene (WASH) conditions and intestinal parasitic infections are intricately linked. Severe malnutrition in school-aged children has been documented in association with inadequate sanitation, poor hygiene and improper child feeding practices [7]. Inadequate WASH conditions are also important risk factors for intestinal parasitic infections that are transmitted through the faecal-oral route [8, 9]. Parasitic infections contribute to stunting by loss of appetite, diarrhoea, mal-absorption and/or an increase in nutrient wastage [10, 11]. Furthermore, infections with intestinal parasites may cause internal bleeding, leading to a loss of iron and anaemia [12], exacerbate the effects of malnutrition, and hence, compromise the development of cognitive abilities [10]. An inadequate dietary intake could lead to weakened immunity, weight loss, impaired growth and increased susceptibility to intestinal parasitic infections [10]. Hence, it is crucial to consider the inter-linkages of malnutrition, intestinal parasitic infections, and WASH for preventive action.
In Nepal, studies related to the inter-linkage of WASH, health and nutrition interventions, focusing on increased knowledge and consumption of an adequate diet, especially fruits and vegetables, are limited. Efforts to control malnutrition have predominantly targeted children under the age of 5 years [13]. Deworming campaigns mainly focus on school-aged children; however, drug therapy alone might be only a short-term measure for reducing parasitic worm burden among the target population [14]. It has been shown that the prevalence of intestinal parasitic infection returns to pre-treatment levels within 6 to 18 months after treatment cessation [15,16,17]. A school garden programme with integrated nutrition education, health and WASH interventions, and increasing knowledge about diet diversity, could address the underlying determinants of nutritional and health problems among school-aged children [18].
A multi-country, multi-sectorial project entitled "Vegetables go to School: improving nutrition through agricultural diversification" (VgtS) was developed and implemented in five countries of Asia and Africa (Bhutan, Burkina Faso, Indonesia, Nepal and the Philippines) to address school-aged children's nutrition and health problems in an interdisciplinary approach [19]. The objective of the current study was to evaluate whether a school garden and education programme and a school garden with complementary WASH, health and nutrition interventions would improve nutritional and health indices among school-aged children in two districts of Nepal.
We undertook a randomised controlled trial in 12 schools. Four schools received a school garden and specific education about fruits and vegetables only (SG). Four schools received a school garden, coupled with nutrition, health and WASH interventions (SG+). The remaining four schools did not receive any specific interventions (control schools). The two main impact pathways assessed were whether: (a) children's knowledge about, and intake of, fruits and vegetables would increase by growing fruits and vegetables in both SG and SG+ which, in turn, would improve their nutritional status; and (b) the prevalence of malnutrition, anaemia and intestinal parasite infections among children in SG+ would be reduced, compared to SG and control schools.
School gardens with education component (SG)
The first intervention component consisted of a school garden for the cultivation of nutrient-dense vegetables. Teachers were trained in theoretical and practical skills on how to establish and manage school gardens (e.g. levelling and raising land beds, construction of drainage, plantation and caring by children). The trainings were offered twice, each lasting 1 week, and were conducted by project teams, including representatives from the National Agricultural Research Council (NARC), the Ministry of Health and the Ministry of Education. Teachers received different varieties of vegetable seeds and gardening tools and equipment [20]. The school gardens were set up in April 2015. The second intervention consisted of the development and implementation of a curriculum to teach children about gardening (duration: 23 weeks, mainly theory). Teachers received specific training from a local project team on the use of the curriculum. The teaching took place once a week during a 90-min class with an emphasis on learning by doing in the school gardens.
Children's caregivers were invited to visit the school at least twice a year to receive a briefing about the school garden project. Children received small packets of seeds to grow vegetables at home and teachers visited some of the children's homes for observation of the garden [20]. Two technical staff members with a background in agriculture were recruited. They monitored the school gardens weekly and provided technical assistance as requested by the students and teachers. A single school garden produced, on average, about 150 kg of vegetables per school year, which were distributed among the children and teachers [21].
School garden and complementary interventions (SG+)
In addition to the school garden programme, complementary WASH, health and nutrition interventions were implemented in four schools. The intervention package included the following components:
Health promotion activities, such as the development of an educational comic booklet that incorporated information about school gardens, nutrition and WASH targeted to school-aged children. Formative research was conducted with children and their caregivers to develop this booklet.
Provision of a nutrition booklet and hand-outs, incorporating information for children related to fruits and vegetables. The booklet was developed in collaboration with the health personnel.
Development of a poster to display information related to nutrition, handwashing and waste management for children.
Demonstration of adequate handwashing with soap. The demonstration was done by health personnel, delivered to children and their caregivers.
Developing songs related to sanitation and hygiene. Teachers, in collaboration with local authorities, drafted the songs in the schools.
Audio-visual aids related to nutrition and WASH for children and their caregivers.
Construction of at least three latrines per school and six to 12 handwashing facilities with the weekly provision of soap (50 bars per week).
Weekly health education programmes related to nutrition and WASH for caregivers and community stakeholders with the distribution of soap once a week over a 5-month period.
Organisation of informative sessions for caregivers to explain the school garden programme, highlighting the importance of school gardening and replicating the learnt gardening skills at home to set up home gardens.
These interventions were implemented in combined classes with health education. They were intended to be implemented over a 12-month period. However, due to a major earthquake and a series of aftershocks that hit Nepal in April and May 2015, the duration was abbreviated.
Study sites, study population and sample size
This study was conducted in the Dolakha and Ramechhap districts. Dolakha is located approximately 180 km and Ramechhap approximately 150 km from Kathmandu, the capital of Nepal. The study population consisted of school-aged children aged 8–17 years at the baseline survey. A Monte Carlo simulation showed that 800 children, with 50 children per school and four schools per intervention arm, would provide at least 75% power for finding simultaneous significant effects of the two implemented types of interventions under the following assumptions (a simplified simulation along these lines is sketched after the list):
the prevalence of intestinal protozoan and helminth infections is about 30% [19] and remains constant in the absence of any intervention;
the probability of new intestinal protozoa and helminth infections at follow-up is 10%;
the same effect odds ratios (ORs) apply to incidence and persistence of intestinal protozoa and helminth infection; and
each of the two interventions reduces the odds of infection by 50%, and their effects are additive on the logit-scale.
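As a rough illustration of this kind of calculation, the sketch below simulates such a trial in Python under the assumptions listed above. It is not the original computation: school-level clustering is ignored (a plain logistic regression on follow-up infection status stands in for the mixed model used in the analysis), 50 children per school and four schools per arm are assumed in every arm, and the two intervention effects are interpreted as an odds ratio of 0.5 for SG and 0.25 for SG+ (the two 50% reductions combined on the logit scale).

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

def apply_or(p, odds_ratio):
    # apply an odds ratio to a baseline transition probability
    odds = p / (1 - p) * odds_ratio
    return odds / (1 + odds)

def simulate_trial(n_schools=4, n_children=50, prev0=0.30, incidence=0.10):
    # persistence chosen so that prevalence stays constant in the control arm
    persistence = (prev0 - (1 - prev0) * incidence) / prev0
    frames = []
    for arm, or_arm in [("control", 1.0), ("sg", 0.5), ("sgplus", 0.25)]:
        for _ in range(n_schools):
            base = rng.random(n_children) < prev0
            p_end = np.where(base, apply_or(persistence, or_arm),
                             apply_or(incidence, or_arm))
            frames.append(pd.DataFrame({
                "infected_base": base.astype(int),
                "infected_end": (rng.random(n_children) < p_end).astype(int),
                "sg": int(arm == "sg"),
                "sgplus": int(arm == "sgplus")}))
    return pd.concat(frames, ignore_index=True)

def simulated_power(n_sim=500, alpha=0.05):
    hits = 0
    for _ in range(n_sim):
        data = simulate_trial()
        fit = smf.logit("infected_end ~ infected_base + sg + sgplus", data).fit(disp=0)
        hits += (fit.pvalues["sg"] < alpha) and (fit.pvalues["sgplus"] < alpha)
    return hits / n_sim

print(f"simulated power: {simulated_power():.2f}")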
The study was registered as a cluster randomised controlled trial with study ID ISRCTN17968589 (date assigned: 17 July 2015). The study intended to measure and compare the impact of SG and SG+ interventions on school-aged children's nutritional and health indices in comparison to control schools. At baseline, a total of 12 schools (10 in Dolakha and two in Ramechhap) were selected randomly among 30 schools that met the following inclusion criteria: (a) schools located within one-hour walking distance from a main tarmac road; and (b) water available at school for vegetable cultivation. Only two schools were included in the Ramechhap district, as the two criteria were difficult to meet. The schools were then randomly allocated to one of the three study arms (Fig. 1). In the first arm, schools received the school garden and education component about gardening only (SG); in the second arm they additionally received WASH, health and nutrition interventions (SG+), while no specific interventions were implemented in the third arm; hence, serving as control. The details of the study protocol have been published elsewhere [19].
Fig. 1 Study compliance of the study population
Outcome indicators
The outcome indicators and expected results are presented in Table 1. The presented outcomes were based on the project's impact pathway that assumes stepwise changes in the children's knowledge of fruits and vegetables and intake via school garden that might lead to a change in children's nutritional and health status.
Table 1 Outcome indicators and expected results among schoolchildren in three intervention arms (SG, SG+ and control) in a randomised controlled trial conducted in two districts of Nepal between March 2015 and June 2016
Data collection procedures
The same instruments were employed in the baseline and end-line surveys (Additional file 1, Additional file 2, Additional file 3 and Additional file 4). The school directors, district and village authorities, parents and children were informed about the purpose and procedures of the study. Enumerators with a background including higher secondary education and health sciences were recruited for a questionnaire survey. The enumerators were not involved in the implementation of the project and were blinded to the intervention status of the school. Written informed consent was obtained from the children, parents or legal guardians of the children. The voluntary nature of participation in the research activities was emphasised. Children aged 8–17 years were enrolled at baseline. At the follow-up survey in June 2016, the same children were re-assessed. Each child was given a unique identification code for the different assessments at the onset of the study.
The sampled children provided fresh mid-morning, post-exercise stool samples, which were processed and analysed the same day using the Kato-Katz technique, a formalin-ether concentration method and a saline wet mount method. The intensity of infection was calculated as the number of eggs per gram of stool (EPG). The selected school-aged children were subjected to anthropometric measurements according to standard operating procedures, as described by the World Health Organization (WHO), using a digital scale and a height measuring board with a precision of 0.1 kg and 0.1 cm, respectively. The haemoglobin (Hb) level was measured and used as a proxy for anaemia, using a HemoCue portable device (HemoCue Hb 201+ System; HemoCue AB, Angelholm, Sweden). Drinking water samples were collected at schools, households and community water sources [22]. The water samples were analysed in situ at the schools and households for turbidity, pH, chlorine residuals and microbial quality with the DelAgua kit (Oxfam-DelAgua; Guildford, UK), following readily available standard operating procedures. Details of the data collection procedure are described in a previously published study protocol [19].
Data were described using percentages, frequencies and means. To characterise household socioeconomic status, we conducted a factor analysis to group households into three socioeconomic strata from a list of 18 household assets and the construction material of the house wall, roof and floor [23]. Three factors reflecting household socioeconomic status were retained and each of them divided into three strata (high, middle and poor) using the k-means procedure. The data were analysed according to the intention-to-treat principle. As children who were symptomatic at baseline often differ systematically from children who were asymptomatic at baseline, we decided not just to study change in prevalence but to distinguish change in children who were asymptomatic (i.e. by studying incidence) from change in children who were symptomatic (i.e. by studying "remission" or "persistence", which equals "1-remission"). Mixed logistic regression models with random intercepts for schools, adjusting for age, sex, socioeconomic status and district, were used to estimate intervention effects on incidence and persistence of binary outcomes, such as intestinal parasite infections, anaemia, stunting and thinness, between baseline and end-line. These models also included the factors district, sex and age group of children, and socioeconomic status. To address change in prevalence, repeated measures analyses with additional random intercepts at the level of children were used. Models of change in prevalence involved group-specific indicator variables for end-line observations along with indicator variables for the two follow-up groups, to obtain group-specific ORs of change in prevalence. The statistical significance of the differences between these ORs in the intervention groups and the respective ORs in the control group was determined by replacing the end-line indicator variable of the control group by the overall end-line indicator variable, which addresses potential period effects, and by including interactions of this variable with the intervention indicator variables to estimate and compare changes in prevalence across the different study arms. The change in prevalence is determined by the persistence (e.g. children who were stunted at baseline and were still stunted at end-line, and whether there was a difference between groups) and the incidence, along with the baseline prevalence, according to the formula:
$$ \text{Prevalence at follow-up} = (\text{prevalence at baseline}) \times \text{persistence} + (1 - \text{prevalence at baseline}) \times \text{incidence} $$
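As an illustration, plugging in the SG+ stunting figures reported in the Results below (baseline prevalence 19.9%, persistence 36.8%, incidence 13.7%) reproduces the observed end-line prevalence:

$$ 0.199 \times 0.368 + (1 - 0.199) \times 0.137 \approx 0.073 + 0.110 = 0.183 = 18.3\%. $$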
All effect estimates regarding dichotomous outcomes are reported as ORs with 95% confidence intervals (CIs).
Mixed linear regression models with random intercepts for schools adjusting for age, sex of the children, socio-economic status of the caregivers and districts were applied to assess intervention effects on longitudinal changes of continuous variables such as dietary diversity scores (DDS), height and weight, and Hb level. These models included the baseline value of the respective outcome as one of the predictor variables along with age, sex, district and socioeconomic status. Differences were considered statistically significant if p-values were < 0.05. All analyses were carried out using STATA, version 14 (STATA Corporation; College Station, TX, USA).
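As a rough sketch of the structure of these linear models (the actual analyses were run in STATA; the column names below are hypothetical and the data synthetic), a random-intercept model for the end-line haemoglobin level could look as follows in Python:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data with one row per child (hypothetical column names).
rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "school": rng.integers(0, 12, n),
    "arm": rng.choice(["control", "SG", "SG+"], n),
    "age": rng.integers(8, 18, n),
    "sex": rng.choice(["girl", "boy"], n),
    "ses": rng.choice(["poor", "middle", "high"], n),
    "district": rng.choice(["Dolakha", "Ramechhap"], n),
    "hb_base": rng.normal(12.5, 1.2, n),
})
df["hb_end"] = df["hb_base"] + rng.normal(0.2, 1.0, n)

# Random intercept for school; fixed effects for the baseline value, age,
# sex, district, socioeconomic status and study arm, as described above.
model = smf.mixedlm(
    "hb_end ~ hb_base + age + C(sex) + C(district) + C(ses) + C(arm)",
    data=df, groups=df["school"])
print(model.fit().summary())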
Study compliance and characteristics of study population
Of the 708 children who were enrolled at the March 2015 baseline survey, 682 children completed the questionnaire survey and 624 children completed all aspects of the health and nutritional examination (anthropometry, stool examination and Hb measurements) at the June 2016 end-line survey. In the four schools allocated to receive the SG intervention, a total of 172 children completed the follow-up; in the four schools allocated to receive the SG+ intervention, 197 children completed the end-line survey; and in the four schools allocated to the control group without any intervention, 313 children completed the end-line survey in both districts. Because the earthquake epicentre was close to the study area, destroying around 75% of schools and households in May 2015, 26 children were lost after baseline and 89 of 562 households were no longer accessible at the end-line survey in both districts. Hence, complete data were available from 433 households. Therefore, the final analysis included 433 households, 682 schoolchildren for socio-demography and knowledge, attitude and practice (KAP) and 624 for clinical examination (anthropometry, stool and Hb) (Fig. 1). We compared the baseline socioeconomic status of the households that participated in the follow-up with those households that were lost to follow-up. The share of households classified with a high socioeconomic status fell from 31.2% at baseline to 8.7% at end-line. The percentage of households with an average socioeconomic status increased from 30.9 to 38.3%, while the share of households with a poor socioeconomic status increased from 37.9 to 53.0% over the 15-month study period.
The characteristics (e.g. sex and age) of children and caregivers who completed the follow-up study are described in Table 2. More than half of the surveyed children were boys (52.7%). There was substantial heterogeneity in the educational status of caregivers across study arms, with 51.4% of caregivers being without formal education in SG+ compared to 26.6% in the control arm, which has also been taken into account in the statistical analysis. The primary occupation of caregivers was farming across all study arms (90.7% in SG+, 79.3% in SG and 78.1% in control; p < 0.01). More than three quarters of the school-aged children in all groups had domestic animals in their households (85.0% SG+, 86.8% SG and 94.0% control; p < 0.01). Most school-aged children's households had agricultural land (82.9% SG+, 92.7% SG and 94.0% control; p < 0.01), and self-food production was slightly lower in the SG+ arm (82.1%, compared to 90.1% in SG and 91.0% in control; p < 0.01).
Table 2 Characteristics of schoolchildren and caregivers in Dolakha and Ramechhap districts, Nepal, at baseline, March-May 2015
Outcomes 1 and 2: change in knowledge about fruits and vegetables, dietary diversity, malnutrition, anaemia and intestinal parasitic infection
The changes in key indicators from the questionnaire related to knowledge about fruits and vegetables, malnutrition, anaemia and intestinal parasitic infections in the surveyed school-aged children's households are presented in Table 3.
Table 3 Change in knowledge about fruits and vegetables, malnutrition, anaemia, intestinal parasitic infections and dietary diversity at baseline and during follow-up across the different study arms in Dolakha and Ramechhap districts, Nepal (March–May 2015 and June 2016)
An increase in knowledge regarding the importance of consuming ≥5 portions of vegetables and fruits per day was found mostly among SG+ school-aged children (7.1 to 24.9% in SG+, 12.2 to 28.5% in SG and 10.9 to 26.5% in control). The improvement in knowledge about the required amount of vegetables in the diet also translated into behavioural change, with an increase in the intake of vegetables in SG+ (33.5 to 74.6%), SG (37.2 to 74.4%) and the control arm (33.9 to 77.0%). The proportion of households preparing vegetables increased in all three arms (from 70.2 to 95.0% in SG+, from 81.1 to 86.5% in SG and from 91.3 to 93.7% in the control arm). The proportion of households giving fruits to children increased in SG+ (from 49.0 to 51.0%) and the control arm (from 54.6 to 76.6%), but decreased in SG (from 50.4 to 14.2%).
Similarly, the percentage of school-aged children who had heard about malnutrition increased in all schools, but most strongly in SG+ (44.2 to 88.3%), followed by SG (25.6 to 70.9%) and the control arm (26.5 to 68.0%). The proportion of children who had heard about anaemia increased in SG+ (12.4 to 22.4%), whereas it decreased in SG (24.3 to 17.7%) and decreased slightly in the control schools (63.4 to 60.1%). In contrast, the proportion of children who had heard about intestinal parasitic infections increased in the control arm (37.6 to 57.3%), while it decreased in SG (31.6 to 19.0%) and SG+ schools (30.8 to 23.6%).
Outcome 3: changes in anthropometric indicators and anaemia among school-aged children
The changes in anthropometric indicators and anaemia among school-aged children are shown in Table 4. Stunting was slightly lowered in SG+ (19.9 to 18.3%) and in the control arm (19.7 to 18.9%) and slightly increased in SG (17.7 to 19.5%), however, without a statistically significant difference. Thinness increased both in SG+ (5.7 to 9.9% compared to control) and SG (9.7 to 10.4% compared to control) and decreased in the control arm (12.3 to 7.1%). There was a slight reduction in anaemia in SG+ (33.0 to 32.0%) but a major increase was observed in SG (20.7 to 43.9%) and the control arm (22.7 to 41.3%).
Table 4 Odds ratios of change in prevalence from baseline to end-line for parasitic infections, anaemia, stunting and thinness, in a cohort of schoolchildren in two districts of Nepal, March-May 2015 and June 2016
The persistence and incidence of anthropometric indicators and anaemia at end-line are shown in Table 5. The persistence of stunting was slightly lower in SG+ (36.8%) than in the control arm (37.7%). The incidence of stunting was slightly higher in SG (16.3%) than in SG+ (13.7%) and the control arm (14.3%). The mean increases in height and weight were largest in SG+ (6.8 cm and 5.8 kg, respectively) and the control schools (5.2 cm and 6.2 kg, respectively), and considerably lower in SG (3.2 cm and 3.5 kg, respectively). The height and weight gains in the SG arm were significantly lower than those in the control arm. Persistence of anaemia was higher in SG (67.6%) than in SG+ (47.6%) and the control arm (52.5%). The mean change in Hb level was higher in SG+ than in the control arm, but the difference was not statistically significant (∆ = 0.58, 95% CI: −0.26 to 1.43; p = 0.18).
Table 5 Changes of nutritional indicators in the study cohort by group (control, intervention (SG) and additive Intervention (SG+)) in two districts of Nepal, June 2016
Outcome 4: change in intestinal parasitic infections in school-aged children
At baseline, the prevalence of intestinal parasitic infections among school-aged children was high in all three arms (37.1% in SG+, 33.5% in SG and 43.9% in the control arm). At the end-line, there was a strong decline to 9.4% in SG+, while the prevalence showed only minor changes in the SG and control arms (Table 4).
The persistence and incidence of intestinal parasitic infections at the end-line are presented for all study arms in Table 6. The persistence of overall intestinal parasitic infections was significantly lower in SG+ than in the control arm (8.4% vs. 45.8%, p < 0.01). The incidence of overall intestinal parasitic infections was highest in the control arm (39.7%), intermediate in SG (25.7%, p = 0.07 compared to the control arm) and lowest in SG+ (10.0%, p < 0.01 compared to the control arm). The persistence of overall intestinal protozoa infection was lowest in SG+ (0.0%), comparable in SG (9.1%) and the control arm (10.3%). Similarly, the incidence of overall intestinal protozoa infection was lowest in SG+ (1.5%, p = 0.03 compared to control), intermediate in SG (5.8%, p = 0.24 compared to control) and highest in the control arm (10.4%). Similar patterns were observed for the persistence (a) and incidence (b) of overall soil-transmitted helminth infections, with values for (a) of 10.3% (SG+), 28.3% (SG) and 47.5% (control arm), and for (b) of 7.3% (SG+), 18.0% (SG) and 28.5% (control arm).
Table 6 Intestinal parasitic infections change during follow-up across the different study arms in Dolakha and Ramechhap districts, Nepal (March-May 2015 versus June 2016)
Outcome 5: changes in drinking water quality in households and KAP on WASH among school-aged children
The thermo-tolerant coliforms (TTC) in the drinking water showed considerably higher percentages in all study groups at the end-line compared to baseline (increase from 0.0 to 13.7% in SG+, increase from 2.4 to 9.5% in SG and increase from 3.9 to 14.8% in the control arm) (Table 7).
Table 7 Water quality parameters at baseline and its change during follow-up across the different study arms in Dolakha and Ramechhap districts, Nepal (March-May 2015 and June 2016)
The change in KAP on WASH among school-aged children is shown in Table 8 and Additional file 5: Table S1. Handwashing with soap (a) before eating and (b) after defecation showed stronger increases from baseline to end-line in SG+ compared to the control arm, with (a) 74.1 to 96.9% vs. 78.3 to 84.0% (p = 0.01), and (b) 77.2 to 99.0% vs. 78.0 to 91.0% (p = 0.15). The proportion of children bringing drinking water from home decreased in SG+ (21.8 to 11.7%), while it increased in SG (11.0 to 27.3%) and the control arm (11.2 to 43.1%). The intervention had no effect on knowledge related to diseases such as diarrhoea and cholera.
Table 8 Change in KAP on water, sanitation and hygiene (WASH) indicators among a study cohort of school-aged children in two districts of Nepal, March–May 2015 and June 2016
The changes in key indicators from the questionnaire related to WASH in the surveyed school-aged children's households are presented in Additional file 5: Table S1. The proportion of households with sufficient water increased significantly in SG compared to the control arm (83.8 to 98.2%; p = 0.003).
Our study assessed the effects of school gardens and complementary nutrition and WASH interventions on children's KAP about fruits and vegetables, their dietary diversity, intestinal parasitic infections and nutritional status in the districts of Dolakha and Ramechhap, Nepal, within the frame of the VgtS project. Only a few studies have investigated the effect of SG and SG+ interventions on children's nutritional practices, anthropometric indices and intestinal parasitic infections. The novelty of our approach was to assess a number of behavioural, health and nutritional outcome indicators in the frame of an integrated school garden programme.
Effects on intestinal parasitic infections, anaemia, anthropometry and KAP on WASH
Our results indicate that the SG+ interventions significantly reduced intestinal parasitic infections in comparison to control schools, which might be partly due to the impact of the applied interventions, such as increased knowledge of handwashing before eating and deworming at 6-month intervals. Consistently, the strongest increase in the school-aged children's handwashing before eating was observed in the SG+ arm. Furthermore, caregivers' knowledge on nutrition indicators, such as preparation of vegetables and giving fruits to children, increased significantly in the SG+ arm. Stunting was slightly decreased in the SG+ and SG arms, but these changes were not significantly different from the slight increase observed in the control arm. No measurable improvements were observed for thinness.
The significant decrease in intestinal parasitic infections and anaemia could be partially explained by deworming at 6-month intervals, which resulted in an increased Hb level among children in SG+. The decrease could also be explained by the number of complementary interventions delivered to the school-aged children and their caregivers, leading to increased knowledge on handwashing before eating and after defecation. Similar programmes that combined WASH and nutrition interventions in Bangladesh and Peru have shown impressive results with respect to health (increased access to safe water, improved sanitation and enhanced handwashing), reduced anaemia and improved nutritional indicators (increased DDS and reduced stunting) [24]. Our study showed no effect of the intervention on stunting and thinness, which might be explained by the relatively short intervention duration; a longer intervention might be needed to show an impact. Of note, height and weight may not be ideal indicators for school-aged children because of unequal growth during adolescence [25]. A study conducted in Bangladesh reported that the odds of being stunted in adolescence could be explained by the combined effect of being stunted in childhood and having mothers whose height was <145 cm [25]. Furthermore, the same study reported that girls were more likely to be stunted in childhood than boys, whereas boys were more likely than girls to be stunted in adolescence, which might be due to the difference in pace of maturation [25]. As a limitation, we did not explore the history of stunting among children in their childhood, which could be considered in future studies.
Effects on fruits and vegetable consumption
Intervention studies conducted among children and youths have suggested that gardening can lead to improvements in fruit and vegetable consumption [5, 26, 27]. Published studies have measured the relationship between school-aged children's fruit and vegetable intake and participation in a school garden programme. The results were, however, inconsistent and difficult to compare with our study, which only revealed a minor effect [1,2,3, 5, 26, 28]. Some studies conducted among school-aged children reported significant beneficial effects on fruit and vegetable intake [5, 28]; one study reported a significant beneficial effect of school gardens on vegetable consumption only [3]; another study reported only minor effects of school gardens on fruit and vegetable intake [2]; one study found a significant beneficial effect on fruit and vegetable consumption in boys only [26]; while yet another study reported no differences between boys and girls in fruit and vegetable intake [1]. Christian et al. (2014) found little evidence to support that school gardens alone could improve students' fruit and vegetable intake. The authors, however, reported that when the school garden programme was integrated with an educational component (curriculum), students' daily fruit and vegetable consumption significantly increased, which is in line with the findings of our study, showing a small effect on the consumption of fruits and vegetables and growth indicators.
Effects on the school curriculum and involvement of school-aged children and teachers for school gardening
The main aim of SG in the VgtS project was to introduce children to basic gardening skills such as land levelling, raising beds for drainage and easy planting, watering, weeding and harvesting. Only 90 min, once every 2 weeks on Fridays, were allocated for school garden education. Previous successful gardening interventions all involved additional elements beyond the gardening activities, such as health promotion programmes [1, 2, 28, 29]. In our study, we found positive impacts on children's fruit and vegetable intake, anaemia status and intestinal parasite infections when schools integrated gardening activities throughout their curriculum and implemented additional complementary interventions (SG+). However, experience and lessons learned suggest that, for sustainability of the programme, schools need continued support for the provision of regular refresher trainings on knowledge related to gardening, health, nutrition and WASH. Of note, the successful interventions in prior trials were implemented by teachers [1,2,3, 28], which was only partly the case in our study.
Effects on water quality
In our survey, some water samples from both SG and SG+ households exceeded the national tolerance limit for TTC contamination (<1 colony forming unit (CFU)/100 ml). The microbiological analyses of water samples revealed the presence of TTC in 25 water samples of SG, with eight of these samples having TTC >100 CFU/100 ml, and in 17 water samples of SG+, with 10 of these samples having TTC >100 CFU/100 ml, levels that call for specific treatment. Of note, despite households reporting that they obtained water from improved sources and treated their water, faecal contamination was still observed in most of the water samples. The increased water contamination with TTC might have been caused by garbage discarded in open spaces in close proximity to drinking water points, open defecation practices, cross-contamination between the water supply and sewage system, leaky pipes contaminating the water via runoff, or behavioural practices during transportation. Similar findings of cross-contamination and leakage points, old pipelines and drainage systems and back siphoning have been reported in studies conducted in Myagdi district and a mountainous region of Nepal [30, 31].
Taken together, our study showed that combining school garden, WASH, regular deworming and nutrition interventions resulted in decreased intestinal parasitic infection and increased children's knowledge of the recommendation to consume more than five portions of fruits and vegetables per day. This might be due to addressing the immediate cause of under-nutrition (i.e. raising awareness of the need to consume nutrient-dense fruits and vegetables via the school garden) as well as addressing underlying contributing factors, including lack of access to clean water and sanitation, recurrent infectious diseases and lack of awareness of health and hygiene.
The main issues encountered were related to difficulties in implementing the SG and SG+ interventions in our study, explained by the relatively short implementation period. It is conceivable that school gardens require a longer-term commitment and a supportive team to protect and maintain the garden during regular school days as well as during school holidays. There are several limitations to our study.
First, although the number of clusters in the intervention and control arms was the same, the numbers of children within the clusters and between the two districts were different. This is mainly explained by the challenge posed by the April 2015 earthquake, which particularly affected the Ramechhap district. Indeed, 26 children and 89 households were lost during follow-up. Approximately one out of six households (15.8%) could not be found in the post-earthquake emergency crisis and a number of villages were severely destroyed during the earthquake. In addition, around 3.7% of the school-aged children were lost to follow-up due to the aftermath of the earthquake, mostly in the intervention schools, which resulted in a loss of statistical power. Second, the numbers of schools selected in Dolakha and Ramechhap districts were not equal, which might be a limiting factor in generalizing the regional differences. Third, only two of the schools had a school meal programme which, however, due to limited resources, targeted only school-aged children up to the fourth grade. Fourth, the integrated agriculture, nutrition and WASH interventions were implemented only for a relatively short period (5 months) due to delayed project implementation, a major earthquake, an economic blockade between India and Nepal and the end of the project in 2016, which might have limited the larger potential benefits for children's health and nutritional status. Fifth, we did not explore the history of stunting among children in their childhood, which should be investigated in future studies. Sixth, we did not collect data in different seasons. Instead, the data were collected over a bit more than a single calendar year, with different fruits and vegetables being abundant in different periods of the year. This suggests that the true relationships between school gardens and nutrition outcomes, including fruit and vegetable consumption, may have been underestimated for some schools if data were collected during the low-production months. At the same time, it is conceivable that schools opting to maintain a vegetable garden may be generally more interested in creating a healthier school environment [32]. Seventh, nutritional and WASH practices of children were self-reported and changes in behaviour were not closely observed, which may have resulted in over- or under-reporting. Similarly, it is conceivable that households tend to under- or over-report their dietary consumption patterns and either over- or underestimate their consumption of healthy foods, such as fruits and vegetables, thus resulting in biases of food intake assessment [33]. Eighth, the results from selected schools, households and communities in the Dolakha and Ramechhap districts may not be representative of other parts of Nepal. Ninth, our diagnostic approach consisted of the collection of a single stool sample per child, which was subjected to duplicate Kato-Katz thick smear examination. The collection of multiple consecutive stool samples (instead of single specimens) and examination of triplicate or quadruplicate Kato-Katz thick smears would have resulted in higher sensitivity of the diagnostic methods [34]. Although our diagnostic approach for helminths consisted of the collection of a single stool sample per child, stool samples were subjected to multiple diagnostic methods (e.g. Kato-Katz, formalin-ether concentration and wet mount methods), which enhanced diagnostic accuracy.
Tenth and finally, a limitation is that anaemia can be caused by multiple and complex factors. Thus, by using a HemoCue device for Hb measurement, the identification of the exact type of anaemia was not possible and we did not collect data on other important risk factors for anaemia, such as vitamin A, riboflavin and folate deficiencies [35].
Despite these limitations, the current research provides some evidence that SG+ interventions improve direct and indirect determinants of children's nutritional and health indices, by reducing intestinal parasitic infections, improving Hb levels and improving certain hygiene practices. Our model of interventions implemented in these pilot schools could be readily replicated and scaled up. The study thus holds promise for public health impact. The methodology used for the study presents a suitable approach for evaluating impacts of school-based programmes in a setting where there is a paucity of information related to school-aged children's health and nutrition. School gardens and complementary nutrition and WASH interventions could sustainably impact children's dietary and hygiene behaviour in the longer term, if they are linked with a greater involvement of their parents/caregivers.
Our study suggests that a holistic approach of school gardens, coupled with complementary education, nutrition, WASH and health interventions, holds promise to increase children's fruit and vegetable consumption and decrease intestinal parasitic infections. We recommend engaging children in high-quality gardening interventions that also incorporate additional components, such as regular deworming and educational activities (e.g. health promotion programmes and teaching children and their caregivers about healthy foods and hygiene practices), as these are essential for improving children's dietary intake and health status.
The data analysed for this study are not publicly available, as they are part of the PhD study of the first author. However, the data are available from the corresponding author upon reasonable request and signature of a mutual agreement. The questionnaires in English are available upon request from the corresponding author.
CFU:
Colony forming unit
EPG:
Eggs per gram of stool
KAP:
Knowledge, attitude and practices
NARC:
Nepal Agricultural Research Council
SG:
School garden
SG+:
School garden with complementary intervention
TTC:
Thermo-tolerant coliforms
VgtS:
Vegetables go to School (project)
Morgan PJ, Warren JM, Lubans DR, Saunders KL, Quick GI, Collins CE. The impact of nutrition education with and without a school garden on knowledge, vegetable intake and preferences and quality of school life among primary-school students. Public Health Nutr. 2010;13:1931–40.
Christian MS, Evans CE, Nykjaer C, Hancock N, Cade JE. Evaluation of the impact of a school gardening intervention on children's fruit and vegetable intake: a randomised controlled trial. Int J Behav Nutr Phys Act. 2014;11:99.
Parmer SM, Salisbury-Glennon J, Shannon D, Struempler B. School gardens: an experiential learning approach for a nutrition education program to increase fruit and vegetable knowledge, preference, and consumption among second-grade students. J Nutr Educ Behav. 2009;41:212–7.
Howerton MW, Bell BS, Dodd KW, Berrigan D, Stolzenberg-Solomon R, Nebeling L. School-based nutrition programs produced a moderate increase in fruit and vegetable consumption: meta and pooling analyses from 7 studies. J Nutr Educ Behav. 2007;39:186–96.
McAleese JD, Rankin LL. Garden-based nutrition education affects fruit and vegetable consumption in sixth-grade adolescents. J Am Diet Assoc. 2007;107:662–5.
Ozer EJ. The effects of school gardens on students and schools: conceptualization and considerations for maximizing healthy development. Health Educ Behav. 2007;34:846–63.
Akhtar S. Malnutrition in South Asia-a critical reappraisal. Crit Rev Food Sci Nutr. 2016;56:2320–30.
Schaible UE, Kaufmann SHE. Malnutrition and infection: complex mechanisms and global impacts. PLoS Med. 2007;4:e115.
Victora CG, Adair L, Fall C, Hallal PC, Martorell R, Richter L, et al. Maternal and child undernutrition: consequences for adult health and human capital. Lancet. 2008;371:340–57.
Alum A, Rubino JR, Ijaz MK. The global war against intestinal parasites--should we use a holistic approach? Int J Infect Dis. 2010;14:e732–8.
Katona P, Katona-Apte J. The interaction between nutrition and infection. Clin Infect Dis. 2008;46:1582–8.
Hall A, Hewitt G, Tuffrey V, de Silva N. A review and meta-analysis of the impact of intestinal worms on child growth and nutrition. Matern Child Nutr. 2008;4(Suppl 1):118–236.
MOHP, New ERA, ICF. Nepal demographic and health survey. Kathmandu: Ministry of Health and Population, New ERA, and ICF International; 2012.
NHRC. An assessment of school deworming program in Surkhet and Kailali District. Nepal Health Research Council; 2010.
Gunawardena K, Kumarendran B, Ebenezer R, Gunasingha MS, Pathmeswaran A, de Silva N. Soil-transmitted helminth infections among plantation sector schoolchildren in Sri Lanka: prevalence after ten years of preventive chemotherapy. PLoS Negl Trop Dis. 2011;5:e1341.
Hotez PJ. Mass drug administration and integrated control for the world's high-prevalence neglected tropical diseases. Clin Pharmacol Ther. 2009;85:659–64.
Jia T-W, Melville S, Utzinger J, King CH, Zhou X-N. Soil-transmitted helminth reinfection after drug treatment: a systematic review and meta-analysis. PLoS Negl Trop Dis. 2012;6:e1621.
Black RE, Victora CG, Walker SP, Bhutta ZA, Christian P, de Onis M, et al. Maternal and child undernutrition and overweight in low-income and middle-income countries. Lancet. 2013;382:427–51.
Erismann S, Shrestha A, Diagbouga S, Knoblauch A, Gerold J, Herz R, et al. Complementary school garden, nutrition, water, sanitation and hygiene interventions to improve children's nutrition and health status in Burkina Faso and Nepal: a study protocol. BMC Public Health. 2016;16:244.
Schreinemachers P, Bhattarai DR, Subedi GD, Acharya TP, Chen H, Yang R, et al. Impact of school gardens in Nepal: a cluster randomised controlled trial. J Dev Eff. 2017;9:329–43.
Bhattarai DR, Subedi G, Acharya TP, Schreinemachers P, Yang RY, Luther G, et al. Effect of school vegetable gardening on knowledge, willingness and consumption of vegetables in Nepal. Int J Horticulture. 2016;5:1–7.
Shrestha A, Sharma S, Gerold J, Erismann S, Sagar S, Koju R, et al. Water quality, sanitation, and hygiene conditions in schools and households in Dolakha and Ramechhap districts, Nepal: results from a cross-sectional survey. Int J Environ Res Public Health. 2017;14:1.
Erismann S, Knoblauch AM, Diagbouga S, Odermatt P, Gerold J, Shrestha A, et al. Prevalence and risk factors of undernutrition among schoolchildren in the plateau central and Centre-Ouest regions of Burkina Faso. Infect Dis Poverty. 2017;6:17.
USAID. WASH and nutrition: water and development strategy. 2015. https://www.usaid.gov/sites/default/files/documents/1865/WASH_Nutrition_Implementation_Brief_Jan_2015.pdf.
Bosch AM, Baqui AH, van Ginneken JK. Early-life determinants of stunted adolescent girls and boys in Matlab, Bangladesh. J Health Popul Nutr. 2008;26:189–99.
Lautenschlager L, Smith C. Understanding gardening and dietary habits among youth garden program participants using the theory of planned behavior. Appetite. 2007;49:122–30.
Ratcliffe MM, Merrigan KA, Rogers BL, Goldberg JP. The effects of school garden experiences on middle school-aged students' knowledge, attitudes, and behaviors associated with vegetable consumption. Health Promot Pract. 2011;12:36–43.
Wang MC, Rauzon S, Studer N, Martin AC, Craig L, Merlo C, et al. Exposure to a comprehensive school intervention increases vegetable consumption. J Adolesc Health. 2010;47:74–82.
Lautenschlager L, Smith C. Beliefs, knowledge, and values held by inner-city youth about gardening, nutrition, and cooking. Agric Hum Values. 2007;24:245.
Aryal J, Gautam B, Sapkota N. Drinking water quality assessment. J Nepal Health Res Counc. 2012;10:192–6.
Rai SK, Ono K, Yanagida JI, Kurokawa M, Rai CK. Status of drinking water contamination in mountain region, Nepal. Nepal Med Coll J. 2009;11:281–3.
Utter J, Denny S, Dyson B. School gardens and adolescent nutrition and BMI: results from a national, multilevel study. Prev Med. 2016;83:1–4.
Shrestha A, Koju RP, Beresford SAA, Gary Chan KC, Karmacharya BM, Fitzpatrick AL. Food patterns measured by principal component analysis and obesity in the Nepalese adult. Heart Asia. 2016;8:46–53.
Sayasone S, Utzinger J, Akkhavong K, Odermatt P. Multiparasitism and intensity of helminth infections in relation to symptoms and nutritional status among children: a cross-sectional study in southern Lao People's Democratic Republic. Acta Trop. 2015;141:322–31.
Righetti AA, Koua A-YG, Adiossan LG, Glinz D, Hurrell RF, N'Goran EK, et al. Etiology of anemia among infants, school-aged children, and young non-pregnant women in different settings of south-central Côte d'Ivoire. Am J Trop Med Hyg. 2012;87:425–34.
We thank all school-aged children, caregivers and school personnel for their commitment, the national and district health authorities for their support, and the Dhulikhel Hospital, Kathmandu University Hospital in Nepal for their technical assistance during the field and laboratory work. We appreciate the institutional involvement of the National Agricultural Research Council, the Ministry of Education and the Ministry of Health in Nepal. We are grateful to our project partners from the "Vegetables go to School" project; namely, the AVRDC-World Vegetable Centre (Shanhua, Taiwan) and the University of Freiburg (Freiburg, Germany) for their valuable support.
This work is part of the "Vegetables go to School: improving nutrition through agricultural diversification" project, supported by the Swiss Agency for Development and Cooperation under grant agreement contract number 81024052 (project 7F-08511.01). The funder had no role in the study design, data collection and analysis, decision to publish or preparation of the manuscript.
Swiss Tropical and Public Health Institute, P.O. Box, CH-4002, Basel, Switzerland
Akina Shrestha, Christian Schindler, Peter Odermatt, Jana Gerold, Séverine Erismann, Jürg Utzinger & Guéladio Cissé
University of Basel, P.O. Box, CH-4003, Basel, Switzerland
School of Medical Sciences, Kathmandu University, Dhulikhel, Nepal
Akina Shrestha & Rajendra Koju
School of Science, Aquatic Ecology Centre, Kathmandu University, Dhulikhel, Nepal
Subodh Sharma
Akina Shrestha
Christian Schindler
Peter Odermatt
Jana Gerold
Séverine Erismann
Rajendra Koju
Jürg Utzinger
Guéladio Cissé
All listed authors contributed to the study design. AS coordinated the field and laboratory work, collected data, supervised research assistants, performed the statistical analysis under the supervision of CS and drafted the manuscript. CS, PO, JG, SE, SS, RK, JU and GC contributed to the interpretation of the data and manuscript writing. All authors read and approved the final version of the manuscript prior to submission.
Correspondence to Guéladio Cissé.
Ethical approval was obtained from the "Ethikkommission Nordwest- und Zentralschweiz" (EKNZ) in Switzerland (reference number UBE-15/02; approval date: January 12, 2015), the institutional review board of Kathmandu University, School of Medical Sciences, Dhulikhel Hospital, Nepal (reference no. 86/14; approval date: August 24, 2014) and the institutional review board, Nepal Health Research Council (reference no 565; approval date: November 11, 2014). The study is registered at International Standard Randomised Controlled Trial Number register (identifier: ISRCTN 30840; date assigned: July 17, 2015). Participants (children and their parents/caregivers) provided written informed consent, with the opportunity to "opt-out" of the study at any time without further obligation.
The authors declare that they have no competing interests. Two of the authors are members of the editorial board of this journal.
School Children Questionnaire.
Household Questionnaire.
Identification Codes.
Sheet for Anthropometrics and Biomedical Specimen.
Changes in key indicators from questionnaire among households in two districts of Nepal, March-May 2015 and June 2016.
Shrestha, A., Schindler, C., Odermatt, P. et al. Nutritional and health status of children 15 months after integrated school garden, nutrition, and water, sanitation and hygiene interventions: a cluster-randomised controlled trial in Nepal. BMC Public Health 20, 158 (2020). https://doi.org/10.1186/s12889-019-8027-z
Intestinal parasitic infections
Collateral affects return risk: evidence from the euro bond market
Stig Helberg1 &
Snorre Lindset1
Financial Markets and Portfolio Management (2020)
Covered bonds and senior bonds are prominent securities in the euro bond market. Senior bonds are unsecured, while covered bonds are secured—backed by collateral. Our results show that the presence of collateral reduces the total risk in individual bonds by more than 70%. Compared to diversified portfolios of senior bonds, diversified portfolios of covered bonds have a significantly lower level of systematic risk. However, the fraction of systematic risk to total risk is higher for covered bonds. By decomposing the variance of bond returns, we find that around 33% of the risk in senior bonds is systematic, versus 53% in covered bonds. Both types of bonds contain instrument-specific risk.
Comprehensive information on covered bonds is found on the website of the European Covered Bond Council at www.ecbc.hypo.org.
For a background on collateral in the current financial markets, see the report from the Committee on the Global Financial System, BIS (2013), and chapter 3 titled "Safe Assets: Financial System Cornerstone?" in IMF's April 2012 Global Financial Stability Report, IMF (2012).
Note that variance, although a standard risk measure, does not necessarily capture all relevant aspects of risk. A broader set of risk metrics reflects higher-order moments or reward-to-risk measures.
The covered bonds in the Markit index fulfill the criteria specified in UCITS 22.4 or similar directives, e.g., CAD III. In addition, other bonds with a structure affording an equivalent risk and credit profile, and considered by the market as covered bonds, are included. The following bond types are included: Austrian Pfandbriefe, Canadian, Hungarian, Italian, Portuguese, Scandinavian, Netherlands, Switzerland, UK, US, and New Zealand covered bonds, French Obligations Foncières, Obligations à l'Habitat, CRH and General Law-Based Covered Bonds, German Pfandbriefe, Irish Asset Covered Securities, Luxembourg Lettres de Gage, Spanish Cedulas Hipotecarias and Cedulas Territoreales.
Company website at www.markit.com.
For more information on the index see the Index Guide at www.markit.com/assets/en/docs/products/data/indices/bond-indices/iboxx-rules/benchmark/Markit%20iBoxx%20EUR_Benchmark%20Guide.pdf.
We use Ox (see Doornik 1999) for numerical calculations and for most of the plots.
Throughout the study, we express empirical variances as \(a\times 10^{-4}\) to make comparisons to standard deviation straightforward. A daily variance (annualized) of \(a\times 10^{-4}\) equals a standard deviation (volatility) of \(\sqrt{a}\%\).
For some of the simulations, the risk in the \((n+1)\)-portfolio is higher than the risk in the \(n\)-portfolio. For these cases, we still consider which portfolio has the larger change in risk. The change in risk is defined as the \(n\)-portfolio risk minus the \((n+1)\)-portfolio risk.
Note that the variance estimates in Fig. 10 are not 6-month moving averages.
Altman, E.I., Brady, B., Resti, A., Sironi, A.: The link between default and recovery rates: theory, empirical evidence, and implications. J. Bus. 78(6), 2203–2227 (2005)
Benmelech, E., Bergman, N.K.: Collateral pricing. J. Financ. Econ. 91(3), 339–360 (2009)
Berger, A.N., Udell, G.F.: Collateral, loan quality, and bank risk. J. Monet. Econ. 25(1), 21–42 (1990)
BIS: Asset encumbrance, financial reform and the demand for collateral assets. Committee on the Global Financial System Papers No 49 (2013)
Campbell, J.Y., Lettau, M., Malkiel, B.G., Xu, Y.: Have individual stocks become more volatile? An empirical exploration of idiosyncratic risk. J. Finance 56(1), 1–43 (2001)
Chen, N.-F., Roll, R., Ross, S.A.: Economic forces and the stock market. J. Bus. 59(3), 383–403 (1986)
Coval, J.D., Jurek, J.W., Stafford, E.: Economic catastrophe bonds. Am. Econ. Rev. 99(3), 628–666 (2009)
Dbouk, W., Kryzanowski, L.: Diversification benefits for bond portfolios. Eur. J. Finance 15(5–6), 533–553 (2009)
Doornik, J.: Object-Oriented Matrix Programming Using Ox. Timberlake Consultants Press and Oxford, London (1999)
Elton, E.J., Gruber, M.J., Agrawal, D., Mann, C.: Explaining the rate spread on corporate bonds. J. Finance 56, 247–277 (2001)
Ericsson, J., Renault, O.: Liquidity and credit risk. J. Finance 61(5), 2219–2250 (2006)
Fama, E.F., French, K.R.: Common risk factors in the returns on stocks and bonds. J. Financ. Econ. 33(1), 3–56 (1993)
Heider, F., Hoerova, M.: Interbank lending, Credit risk premia and collateral. Int. J. Cent. Bank. 5(4), 1–39 (2009)
Helberg, S., Lindset, S.: Risk protection from risky collateral: evidence from the euro bond market. J. Bank. Finance 70, 193–213 (2016)
Houston, J.F., Stiroh, K.J.: Three decades of financial sector risk. Federal Reserve Bank of New York, Staff Report no. 248 (2006)
Hull, J., Predescu, M., White, A.: Bond prices, default probabilities and risk premiums. J. Credit Risk 1(2), 53–60 (2005)
IMF: Global Financial Stability Report, April 2012, International Monetary Fund (2012)
John, K., Lynch, A.W., Puri, M.: Credit ratings, collateral, and loan characteristics: implications for yield. J. Bus. 76(3), 371–409 (2003)
Liu, E.: Portfolio Diversification and International Corporate Bonds. J. Financ. Quant. Anal. 51(3), 959–983 (2016)
Markowitz, H.: Portfolio selection. J. Finance 7(1), 77–91 (1952)
McEnally, R.W., Boardman, C.: Aspects of corporate bond portfolio diversification. J. Financ. Res. 2(1), 27–36 (1979)
Merton, R.C.: On estimating the expected return on the market: an exploratory investigation. J. Financ. Econ. 8(4), 323–361 (1980)
Moody's (2014) Annual Default Study: Corporate Default and Recovery Rates, 1920–2013. Moody's Investor Service, February 28, 2014
Nyborg, K.G.: Collateral Frameworks: The Open Secret of Central Banks. Cambridge University Press, Cambridge (2017)
Prokopczuk, M., Vonhoff, V.: Risk premia in covered bond markets. J. Fixed Income 22(2), 19–29 (2012)
Prokopczuk, M., Siewert, J.B., Vonhoff, V.: Credit risk in covered bonds. J. Empir. Finance 21, 102–120 (2013)
S&P: Recovery Study (U.S.): Recoveries Come Into Focus As The Speculative-Grade Cycle Turns Negative. Standard & Poor's Global Fixed Income Research (2012)
Schwert, G.W., Seguin, P.J.: Heteroskedasticity in stock returns. J. Finance 45, 1115–1153 (1990)
Varotto, S.: Credit risk diversification: evidence from the eurobond market. Bank of England, Working Paper no. 199 (2003)
We are grateful for valuable comments and suggestions from Lars-Erik Borge, Eric Duca, Hans Marius Eikseth, Egil Matsen, Aksel Mjøs, Kjell Nyborg, two anonymous referees, and seminar participants at NTNU, at the University of Central Florida, and at the 2017 Paris Financial Management Conference. We are particularly grateful for the data assistance provided by Ivar Pettersen. The paper was partially written while Lindset was a visiting scholar at the University of Central Florida.
Norwegian University of Science and Technology, Trondheim, Norway
Stig Helberg
& Snorre Lindset
Correspondence to Snorre Lindset.
Decomposition of risk
Let \(r_t=R_t-R_{\mathrm{f}}\) be the period t return in excess of the risk-free rate. By regressing the bond excess return on the portfolio excess return, we get the following relationship:
$$\begin{aligned} r_{it}=\beta _{ip}r_{{\mathrm{p}}t}+{\tilde{\epsilon }}_{it}, \end{aligned}$$
where \({\tilde{\epsilon }}_{it}\) is the error term.
Campbell et al. (2001) present a simplified model that permits a variance decomposition on an appropriate aggregate level, without having to keep track of covariances and without having to estimate betas. The model, generally called a "market-adjusted model," drops the beta coefficient \(\beta _{ip}\) from Eq. (6)
$$\begin{aligned} r_{it}=r_{{\mathrm{p}}t}+\epsilon _{it}. \end{aligned}$$
From Eq. (7), we have that \(\epsilon _{it}\) is the difference between the bond return \(r_{it}\) and the portfolio return \(r_{{\mathrm{p}}t}\). Comparing Eqs. (6) and (7), we have
$$\begin{aligned} \epsilon _{it}={\tilde{\epsilon }}_{it}+(\beta _{ip}-1)r_{{\mathrm{p}}t}. \end{aligned}$$
The residual \(\epsilon _{it}\) equals the residual in Eq. (6) only if the bond beta \(\beta _{ip}=1\) or the portfolio return \(r_{{\mathrm{p}}t}=0\). Calculating the variance of the bond return yields
$$\begin{aligned} \begin{aligned} \text {Var}(r_{it})&=\text {Var}(r_{{\mathrm{p}}t})+\text {Var}(\epsilon _{it})+2\text {Cov}(r_{{\mathrm{p}}t},\epsilon _{it})\\&=\text {Var}(r_{{\mathrm{p}}t})+\text {Var}(\epsilon _{it})+2(\beta _{ip}-1)\text {Var}(r_{{\mathrm{p}}t}), \end{aligned} \end{aligned}$$
where taking account of the covariance term once again introduces the bond beta into the variance decomposition. Although the variance of an individual bond return contains covariance terms, the weighted average of variances across bonds in the portfolio is free of the individual covariances. This result follows from the fact that the weighted sum of the betas equals unity (note that we outline the methodology using general weights)
$$\begin{aligned} \sum \limits _iw_{it}\beta _{ip}=1. \end{aligned}$$
Consequently, we have
$$\begin{aligned} \sum \limits _iw_{it}\text {Var}(r_{it})=\text {Var}(r_{{\mathrm{p}}t})+\sum \limits _iw_{it}\text {Var}(\epsilon _{it})=\underbrace{\sigma _{{\mathrm{p}}t}^{2}}_\text {portfolio}+\underbrace{\sigma _{\epsilon t}^{2}}_\text {idiosyncratic}, \end{aligned}$$
where \(\sigma _{{\mathrm{p}}t}^2\equiv \text {Var}(r_{{\mathrm{p}}t})\) and \(\sigma _{\epsilon t}^2\equiv \sum _iw_{it}\text {Var}(\epsilon _{it})\). The weighting and summing have removed the covariance terms. We use the residual \(\epsilon _{it}\) in Eq. (7) to construct a measure of average bond-level risk that does not require any estimation of betas. We interpret the weighted average individual bond variance, the left-hand side of Eq. (8), as the expected volatility of a randomly drawn bond (with the probability of drawing bond i equal to its weight \(w_{it}\)). This measure of expected volatility reflects two components, a portfolio factor and an idiosyncratic factor.
The variances of excess returns on the two portfolios (covered bond portfolio and senior bond portfolio) include the impact of a market-wide factor and an instrument-specific factor. To identify the market-wide component, we cannot eliminate the betas as in the decomposition described above because covered bonds and senior bonds do not form the total market, that is, their betas do not necessarily add up to 1. We adapt Houston and Stiroh (2006) and decompose portfolio volatility ex post using the one-factor market model for each of the two bond portfolios
$$\begin{aligned} r_{{\mathrm{p}}t}=\beta _{pm}r_{mt}+\eta _{{\mathrm{p}}t}, \end{aligned}$$
where \(r_{mt}\) is the daily excess return for the overall market. By construction, we have
$$\begin{aligned} \text {Var}(r_{{\mathrm{p}}t})=\beta _{pm}^{2}\text {Var}(r_{mt})+\text {Var}(\eta _{{\mathrm{p}}t}). \end{aligned}$$
We substitute Eq. (9) into Eq. (8) and define \(\sigma _{\eta t}^2\equiv \text {Var}(\eta _{{\mathrm{p}}t})\). The final decomposition of bond return variance is now
$$\begin{aligned} \begin{aligned} \sum \limits _iw_{it}\text {Var}(r_{it})&=\text {Var}(r_{{\mathrm{p}}t})+\sum \limits _iw_{it}\text {Var}(\epsilon _{it})\\&=\beta _{pm}^{2}\text {Var}(r_{mt})+\text {Var}(\eta _{{\mathrm{p}}t})+\sum \limits _iw_{it}\text {Var}(\epsilon _{it})\\&=\underbrace{\beta _{pm}^{2}\text {Var}(r_{mt})}_\text {market-wide}+\underbrace{\sigma _{\eta t}^{2}}_\text {instrument}+\underbrace{\sigma _{\epsilon t}^{2}}_\text {idiosyncratic}. \end{aligned} \end{aligned}$$
The left-hand side of Eq. (10) shows total risk (average individual bond risk). The right-hand side shows the three components of risk; (market-wide) systematic risk, instrument-specific risk, and idiosyncratic risk.
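To make the decomposition in Eq. (10) concrete, the following Python sketch estimates the three components from simulated daily excess returns and checks that they add up to the weighted average bond variance. All numerical inputs (sample size, factor loading, volatilities, equal weights) are illustrative assumptions, not values from our data set.

import numpy as np

rng = np.random.default_rng(0)
T, n_bonds = 1000, 25
# Simulated daily excess returns: market factor, portfolio on it (Eq. 9), bonds in it (Eq. 7).
r_m = rng.normal(0.0, 0.003, T)                            # market excess return
r_p = 0.8 * r_m + rng.normal(0.0, 0.002, T)                # portfolio excess return
r_i = r_p[:, None] + rng.normal(0.0, 0.004, (T, n_bonds))  # individual bond excess returns
w = np.full(n_bonds, 1.0 / n_bonds)                        # equal weights (any weights summing to one work)
# Left-hand side of Eq. (10): weighted average individual bond variance (total risk).
total_risk = np.sum(w * r_i.var(axis=0, ddof=1))
# Right-hand side: market-wide, instrument-specific and idiosyncratic components.
beta_pm = np.cov(r_p, r_m, ddof=1)[0, 1] / np.var(r_m, ddof=1)
market_wide = beta_pm ** 2 * np.var(r_m, ddof=1)
instrument = np.var(r_p - beta_pm * r_m, ddof=1)                      # sigma_eta^2
idiosyncratic = np.sum(w * (r_i - r_p[:, None]).var(axis=0, ddof=1))  # sigma_eps^2
# The two sides agree up to the small sample covariance between r_p and the epsilon residuals.
print(total_risk, market_wide + instrument + idiosyncratic)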
Overview of banks in the sample
Table 8 Overview of issuers in the sample
Helberg, S., Lindset, S. Collateral affects return risk: evidence from the euro bond market. Financ Mark Portf Manag (2020) doi:10.1007/s11408-019-00343-2
Senior bonds
Systematic risk
Unsystematic risk
Instrument-specific risk
July 2020, 25(7): 2583-2606. doi: 10.3934/dcdsb.2020023
Point vortices for inviscid generalized surface quasi-geostrophic models
Carina Geldhauser 1, and Marco Romito 2,
School of Mathematics and Statistics, The University of Sheffield, Hounsfield Rd, Sheffield S3 7RH, United Kingdom
Dipartimento di Matematica, Università di Pisa, Largo Bruno Pontecorvo 5, I–56127 Pisa, Italia
Carina Geldhauser, http://www.cgeldhauser.de
Marco Romito, http://people.dm.unipi.it/romito
Received: January 2019; Published: April 2020
Fund Project: The first author was supported by Deutsche Forschungsgemeinschaft in the context of TU Dresden's Institutional Strategy "The Synergetic University". The second author acknowledges the partial support of the University of Pisa, through project PRA 2018_49
We give a rigorous proof of the validity of the point vortex description for a class of inviscid generalized surface quasi-geostrophic models on the whole plane.
Keywords: Inviscid generalized surface quasi-geostrophic, weak solutions, point vortex motion, vortex approximation, localization, stability.
Mathematics Subject Classification: Primary: 76B47, 76M23; Secondary: 76E20, 86A99.
Citation: Carina Geldhauser, Marco Romito. Point vortices for inviscid generalized surface quasi-geostrophic models. Discrete & Continuous Dynamical Systems - B, 2020, 25 (7) : 2583-2606. doi: 10.3934/dcdsb.2020023
Mitigating the heroin crisis in Baltimore, MD, USA: a cost-benefit analysis of a hypothetical supervised injection facility
Amos Irwin1,2,
Ehsan Jozaghi3,4,
Brian W. Weir5,
Sean T. Allen5,
Andrew Lindsay5 &
Susan G. Sherman6
In Baltimore, MD, as in many cities throughout the USA, overdose rates are on the rise due to increases in both prescription opioid abuse and the presence of fentanyl and other synthetic opioids in the drug market. Supervised injection facilities (SIFs) are a widely implemented public health intervention, with 97 facilities operating in 11 countries worldwide. Research has documented the public health, social, and economic benefits of SIFs, yet none exist in the USA. The purpose of this study is to model the health and financial costs and benefits of a hypothetical SIF in Baltimore.
We estimate the benefits by utilizing local health data and data on the impact of existing SIFs in models for six outcomes: prevented human immunodeficiency virus (HIV) transmission, prevented hepatitis C virus (HCV) transmission, prevented skin and soft-tissue infection, prevented overdose mortality, reduced overdose-related medical care, and increased medication-assisted treatment for opioid dependence.
We predict that for an annual cost of $1.8 million, a single SIF would generate $7.8 million in savings, preventing 3.7 HIV infections, 21 Hepatitis C infections, 374 days in the hospital for skin and soft-tissue infection, 5.9 overdose deaths, 108 overdose-related ambulance calls, 78 emergency room visits, and 27 hospitalizations, while bringing 121 additional people into treatment.
We conclude that a SIF would be both extremely cost-effective and a significant public health and economic benefit to Baltimore City.
Baltimore City has one of the highest overdose death rates in the country, and overdoses have been increasing in recent years. From 2014 to 2015, heroin-related overdose deaths in Baltimore increased from 192 to 260 [1]. These increases are in part attributed to the prevalence of fentanyl in the heroin supply, with fentanyl causing 31 and 51% of 2015 and 2016 overdose deaths, respectively. Fentanyl is 50–100 times more potent than heroin or morphine. Illicit fentanyl and derivatives are appealing to illicit drug networks as these chemicals are cheaper than prescription opioids, heroin, and cocaine, and are extremely potent [2,3,4,5].
There are numerous additional medical costs associated with injection drug use, largely related to infectious diseases and soft-tissue infections. Roughly 18% of the people who inject drugs (PWID) in Baltimore are HIV positive, twice the 9% national average for PWID and 50 times the prevalence in the general population [6,7,8]. One in five Baltimore PWID suffers chronic skin and soft-tissue infection, the leading cause of PWID hospitalization [9,10,11].
Supervised injection facilities (SIFs) have been established worldwide to reduce the harms associated with injection drug use. In SIFs, PWID inject previously obtained drugs in the presence of medical staff. A number of public health, social, and economic benefits of SIFs have been evaluated by studies of the Insite SIF in Vancouver, Canada and the Medically Supervised Injecting Centre (MSIC) in Sydney, Australia, both of which were established in 2003 [12,13,14,15].
Among these benefits, studies have demonstrated four in particular that can be quantified. First, SIFs reduce blood-borne disease transmission by providing clean needles and safer injecting education [12, 16, 17]. Second, SIF staff reduce bacterial infection by providing clean injection equipment, cleaning wounds, and identifying serious infections early [18,19,20]. Third, SIF staff intervene in case of overdose, meaning that while PWID may overdose at a SIF, none die and few suffer complications [13]. Fourth, the SIF and its staff become a trusted, stabilizing force in many hard-to-reach PWID's lives, persuading many to enter addiction treatment [12, 14, 21].
As in other US cities, a multisector discussion about the merits and utility of SIFs has begun in Baltimore due to rising overdose deaths as well as the inadequacy of the current criminal justice-focused response [22].
The purpose of this article is to analyze the potential cost-effectiveness of establishing a SIF in Baltimore. We estimate the annual cost of the facility and the savings resulting from six separate health outcomes: prevention of HIV infection, HCV infection, skin and soft-tissue infection (SSTI), overdose death, and nonfatal overdose, as well as increased medication-assisted treatment (MAT) uptake. Each estimate includes the health outcome, financial value, and a sensitivity analysis. First, we present the existing literature on SIF cost-benefit analyses, then our study's method, its results, its implications, and its limitations.
SIF cost-benefit analysis literature review
Prior cost-benefit analyses of Insite in Vancouver and MSIC in Sydney have assessed a more limited range of outcomes than the present study. The Insite studies were limited to the outcomes of HIV prevention, HCV prevention, and overdose death prevention. They have agreed that Insite generates net savings when all three outcomes are considered [23, 24]. The cost-benefit analysis of Sydney's MSIC only included savings from overdose deaths, ambulance calls, and police services averted by the SIF.
A number of other studies have estimated HIV and HCV prevention benefits for hypothetical SIFs in Canadian cities from Montreal to Saskatoon [25,26,27,28,29,30]. Irwin et al. [31] are the only other cost-benefit analysis of a hypothetical SIF in the USA—in San Francisco, California—and the only other study to consider more than three outcomes. We discuss the differences in methodologies between this paper and past analyses for each individual outcome in the "Methods" section.
This study calculates the financial and health costs and benefits of a hypothetical Baltimore SIF modeled on Insite. Insite occupies roughly 1,000 ft², provides 13 booths for clients, and operates 18 h per day. Insite serves about 2100 unique individuals per month, who perform roughly 180,000 injections per year [32, 33].
This study measures the cost of the facility against savings from six outcomes: prevention of HIV, HCV, SSTI, and overdose deaths, reduced overdose-related medical costs, and referrals to MAT. We assess each model's dependence on important variables with a sensitivity analysis. For the sensitivity analysis, we increase and decrease the chosen variable by 50% and report the impact on the outcome.
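A minimal sketch of this one-at-a-time sensitivity check is given below; the helper function and the toy savings model are our own illustrative stand-ins rather than the study's actual outcome models.

def sensitivity(model, baseline, factor=0.5):
    """Re-run `model` with each input scaled by (1 - factor) and (1 + factor), one variable at a time."""
    base = model(**baseline)
    table = {}
    for name, value in baseline.items():
        low = model(**{**baseline, name: value * (1 - factor)})
        high = model(**{**baseline, name: value * (1 + factor)})
        table[name] = (low, base, high)
    return table

# Toy savings model: annual savings scale with the number of SIF clients and the savings per client.
toy_model = lambda clients, savings_per_client: clients * savings_per_client
for var, (low, base, high) in sensitivity(toy_model, {"clients": 2100, "savings_per_client": 300.0}).items():
    print(f"{var}: -50% -> {low:,.0f}, baseline -> {base:,.0f}, +50% -> {high:,.0f}")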
Cost of the facility
Cost calculations are based on a facility equal in size and scope to Insite. The annual cost of establishing and operating a new SIF combines upfront and operating costs. Since we assume the same staffing levels, equipment needs, and other operating cost inputs as Insite, we calculate the operating costs by applying a 4% cost-of-living adjustment between Vancouver and Baltimore to the Insite SIF's $1.5 million operating costs [34, 35]. Since the upfront costs would depend on the exact location and extent of renovations required, we make a conservative estimate of $1.5 million based on actual budgets for similar facilities and standard per-square-foot renovation costs [12, 36]. We convert this upfront cost into a levelized annual payment by assuming that it was financed with a loan lasting the lifetime of the facility. We determine the levelized annual payment according to the standard financial equation:
$$ C=\frac{iP}{1-\left(1+ i\right)^{- N}} $$
where C is the levelized annual upfront cost, i is a standard 10% interest rate, P is the $1.5 million total upfront cost, and N is the estimated 25-year lifetime of the facility.
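A minimal sketch of this annuity calculation, using the inputs stated above ($1.5 million upfront cost, 10% interest rate, 25-year facility lifetime); the function name is ours.

def levelized_annual_cost(P, i, N):
    """Annual payment C that amortizes an upfront cost P over N years at interest rate i."""
    return i * P / (1 - (1 + i) ** (-N))

upfront = 1_500_000   # P: total upfront cost (USD)
rate = 0.10           # i: interest rate
lifetime = 25         # N: facility lifetime (years)
# Roughly $165,000 per year, added to the cost-of-living-adjusted operating costs.
print(round(levelized_annual_cost(upfront, rate, lifetime)))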
HIV and HCV prevention benefits
The HIV infection prevention benefits of Insite, Vancouver's SIF, have been modeled in several cost-benefit analyses [23, 24, 37, 38]. Pinkerton [24] and Andresen and Jozaghi [23] estimate 5–6 and 22 infections averted per year, respectively. These estimates differ primarily because Pinkerton [24] assumes that the SIF only impacts injections occurring within the SIF, while Andresen and Jozaghi [23] incorporate the fact that the SIF reduces needle sharing outside the SIF as well, since Insite staff educate clients on safer injecting practices [38].
To estimate the impact of reduced needle sharing on HIV and HCV infection rates, we use an epidemiological "circulation theory" model developed to calculate how needle exchange programs impact HIV infection among PWID and subsequently used to study SIF HIV and HCV infection [23, 39]. We use the model to estimate the number of new HIV infections (I_HIV):
$$ I_{\mathrm{HIV}}= iNsd\left[1-\left(1- qt\right)^M\right] $$
where i is the percentage of HIV-negative PWIDs, N is the total number of needles in circulation, s is the percentage of injections with a shared needle, d is the percentage of injections with an unbleached needle, q is the percentage of HIV-positive PWIDs, t is the chance of transmitting HIV through a single injection with a shared needle, and M is the average number of people injecting with a single previously used needle. Table 1 shows the values and sources for each variable.
Table 1 Values, notes, and sources for variables used to predict HIV infection reduction savings
We estimate SIF-averted HIV infections by finding the difference between I_HIV at the current rate of needle sharing (s_pre) and I_HIV at the post-SIF rate (s_post). We calculate s_post with the formula:
$$ s_{\mathrm{post}}= s_{\mathrm{pre}}\,\frac{\left( T- N\right)+\left(1- n\right) N}{T} $$
where T is the total number of PWID in Baltimore City, N is the number of SIF users, and n is the 70% reduction in needle sharing by SIF users [40].
We perform the same calculations for HCV, and the values and sources for the HCV variables are contained in Table 2.
Table 2 Values, notes, and sources for variables used to predict HCV infection reduction savings
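The Python sketch below shows how the two equations above combine to give averted infections. It is illustrative only: the sharing rate (2.8%), the 70% reduction, the roughly 2,100 clients, and the roughly 20,000 PWID are quoted elsewhere in this paper, but every other input is a placeholder standing in for the actual Table 1 and Table 2 values.

```python
# Circulation-theory model from the text. All numeric inputs below are illustrative
# placeholders except s_pre, n, N_sif and T, which are quoted elsewhere in the paper.
def new_infections(i, N, s, d, q, t, M):
    """I = i * N * s * d * [1 - (1 - q*t)^M]."""
    return i * N * s * d * (1 - (1 - q * t) ** M)

def post_sif_sharing(s_pre, T, N_sif, n):
    """s_post: SIF clients (N_sif of the T PWID) reduce sharing by a fraction n."""
    return s_pre * ((T - N_sif) + (1 - n) * N_sif) / T

s_pre = 0.028
s_post = post_sif_sharing(s_pre, T=20_000, N_sif=2_100, n=0.70)

# hypothetical i, N (needles), d, q, t, M -- NOT the Table 1 values
baseline = new_infections(i=0.9, N=100_000, s=s_pre,  d=0.8, q=0.1, t=0.003, M=2)
with_sif = new_infections(i=0.9, N=100_000, s=s_post, d=0.8, q=0.1, t=0.003, M=2)
print(f"s_post = {s_post:.4f}; infections averted (illustrative) = {baseline - with_sif:.2f}")
```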
We check the model's validity by comparing its baseline prediction of HIV and HCV incidence in Baltimore (I_HIV and I_HCV at s_pre) with the city's actual incidence data. The model predicts 53 new PWID-related HIV cases in Baltimore each year in the absence of a SIF, only slightly lower than the 55 diagnoses reported by the Maryland Department of Health and Mental Hygiene [41]. Since many new HIV cases go undiagnosed, especially in the hard-to-reach PWID population, this baseline figure suggests that we are underestimating the number of potential HIV infections averted [42].
For HCV, the model predicts 302 cases in the absence of a SIF. The Maryland Department of Health and Mental Hygiene (DHMH) does not report annual injection-related HCV infections for Baltimore City. However, based on Mehta et al.'s [43] finding that 7.8% of a sample of Baltimore's HCV-negative PWID contract HCV every year, we estimate PWID HCV incidence at 398 new cases per year. Since our model predicts a significantly lower incidence, we are most likely underestimating the potential number of HCV infections averted.
Skin and soft-tissue infection benefits
PWID frequently contract skin and soft-tissue infections from unsanitary injection practices and often avoid seeking medical treatment until these infections become life threatening, making SSTI the leading cause of hospital admission among PWID. Insite studies have demonstrated that SIFs significantly reduce SSTI medical costs by providing clean injection materials and referring PWID for medical treatment when necessary [18, 20]. Irwin et al. [31], the only prior cost-benefit analysis to incorporate this outcome, found it to be significant, concluding that a SIF in San Francisco could reduce SSTI-related hospitalizations by 415 days per year, saving $1.7 million.
We estimate annual savings due to SIF SSTI reduction (S_SSTI) according to
$$ {S}_{\mathrm{SSTI}}= NhLrC $$
where N is the total number of SIF clients, h is the percent of PWID hospitalized for SSTI in an average year, L is the average length of SSTI hospitalization, r is the 67% reduction in SSTI hospital stay length that Lloyd-Smith et al. [18] documented for Insite clients, and C is the average daily cost of a hospital stay. See Table 3 for values and sources.
Table 3 Values, notes, and sources for variables used to predict skin and soft-tissue infection reduction savings
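A minimal sketch of this calculation follows. The client count, the 67% reduction, and the roughly $2,500 daily hospital cost quoted later in the text are used as given, while h and L are placeholders standing in for the Table 3 values.

```python
# S_SSTI = N * h * L * r * C: clients x hospitalization rate x stay length
# x reduction in stay x daily hospital cost. h and L below are placeholders.
def ssti_savings(N, h, L, r, C):
    return N * h * L * r * C

days_averted = 2_100 * 0.11 * 2.4 * 0.67          # illustrative h = 11%, L = 2.4 days
savings = ssti_savings(N=2_100, h=0.11, L=2.4, r=0.67, C=2_500)
print(f"{days_averted:.0f} hospital days averted, ${savings:,.0f} saved (illustrative)")
```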
Overdose mortality benefits
While Andresen and Boyd [44] estimate that Insite prevents one overdose death per year, out of roughly 20 total overdose deaths in the neighborhood, they are simply extrapolating that if Insite hosts 5% of the city's injections, it should prevent 5% of the city's overdose deaths. However, Milloy et al. [45] demonstrate that Insite prevents more than 5% of the city's overdose deaths. Milloy et al. attribute this effect to drug use education, which 32% of all Insite clients report receiving. For example, PWID learn to pre-inject a small dose of their drug to "test" the potency, which can prevent accidental overdose in case of an unusually pure or contaminated dose. In Sydney's SIF, known as MSIC, 80% of clients report changing their injection behavior to reduce the risk of overdose as a result of in-SIF education [15].
This finding is supported by Marshall et al. [46], who compare the change in overdose deaths within 500 m of Insite to the change in other Vancouver neighborhoods both before and after the facility's opening. They find that before Insite opened, roughly 20 overdose deaths per year occurred within 500 m of the facility. After Insite opened, overdose mortality within 500 m of the facility fell by 35%, compared to a 9.3% reduction further away, suggesting that Insite reduced neighborhood overdose deaths by at least 26% [46].
Therefore, to predict the impact of a SIF on fatal overdose, we estimate the number of overdose deaths within a 500-m radius of an optimally placed SIF in Baltimore. Given that there were 260 heroin-related fatal overdoses in 2015 and 342 in the first three quarters of 2016, we estimate that there were 463 heroin-related fatal overdoses in all of 2016 [1, 47]. Since data on the geospatial distribution of fatal overdoses in Baltimore City are not available, we approximate this distribution by mapping data from the Baltimore City Fire Department Emergency Medical Services on the locations where medics administered naloxone in response to suspected opioid overdoses [48]. Plotting the locations of all naloxone administrations in the first three quarters of 2016 in ArcGIS, we identify the 500-m-radius circle with the highest concentration of naloxone administrations. The chosen location accounts for 6.2% of all naloxone administrations, suggesting that 28 heroin-related overdose deaths occurred within that circle in 2016. Because the percent of overdose deaths within this area varies over time, we assume that in an average year it would encompass a more conservative 23 heroin-related overdose deaths. This is 5% of the city-wide total and slightly higher than the 20 deaths per year within 500 m of Insite.
We calculate the total value of overdose deaths averted by the SIF (S_OD) according to the equation:
$$ {S}_{\mathrm{OD}}= rnDV $$
where r is the rate of overdose death reduction expected within 500 m, n is the 5% share of naloxone administrations concentrated within a single circle of radius 500 m in Baltimore, D is the total number of overdose deaths in Baltimore, and V is the value of a single life saved.
In order to assign value to the loss of life due to overdose, we follow Andresen and Boyd [44] in considering only the tangible value to society rather than including the suffering and lost quality of life for loved ones. We estimate the tangible value by calculating the present value of the remaining lifetime wages of an average person from the community. Since the average age of PWID in Baltimore is 35, we convert 30 years of future wages to present value using a standard discount rate [44, 49]. So the value of a single prevented overdose death (V) is calculated as
$$ V=\sum_{i=1}^{n}\frac{W}{\left(1+ r\right)^i} $$
where n represents the remaining years of income, W represents the median wage for Baltimore City, and r represents the discount rate. We thus use a value per life saved of $503,869 in the overdose death savings calculation above. The values and sources for each variable in this section are given in Table 4.
Table 4 Values, notes and sources for variables used to predict savings from averted overdose deaths
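Using only figures quoted in the text—the 26% neighborhood reduction, the 5% share of city overdose deaths near the site, the 463 estimated deaths in 2016, and the $503,869 value per life—a quick Python sketch (ours, not the original analysis code) approximately reproduces the headline numbers reported in the results; small differences are due to rounding.

```python
# S_OD = r * n * D * V, with the values quoted in the text.
def overdose_death_savings(r, n, D, V):
    return r * n * D * V

lives_saved = 0.26 * 0.05 * 463                     # ~6 deaths averted per year
savings = overdose_death_savings(r=0.26, n=0.05, D=463, V=503_869)
print(f"~{lives_saved:.1f} lives saved, ~${savings/1e6:.2f}M in savings per year")
```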
This method most likely underestimates the facility's impact, since it only counts averted overdose deaths within 500 m of the SIF, even though the facility would also reduce overdose deaths beyond that radius.
Overdose morbidity benefits
Overdoses require emergency medical assistance, even when they are not life threatening. Evaluations of Sydney's MSIC show that by managing overdose events on-site, the SIF reduces ambulance calls, emergency room visits, and hospital stays for overdose-related morbidity [12]. No previous SIF cost-benefit evaluations have included overdose morbidity in their analyses, but MSIC provides sufficient data to estimate the magnitude of a SIF's impact.
In Baltimore, ambulances are called to the scene of roughly half of all nonfatal overdoses [50]. By contrast, almost all overdoses in MSIC, Sydney's SIF, were handled by on-site medical staff and did not result in ambulance calls [14]. We estimate cost savings of averted ambulance calls for a SIF in Baltimore according to the following model:
$$ {S}_{\mathrm{a}}= Io\left({c}_o-{c}_i\right) A $$
where S_a is the annual savings due to the SIF reducing ambulance calls for overdose, I is the annual number of injections in the SIF, o is the per-injection rate of overdose, c_o and c_i are the rates of overdose ambulance calls outside and inside the SIF, respectively, and A is the average cost of an overdose ambulance call. The values and sources for these variables are given in Table 5.
Table 5 Values, notes, and sources for variables used to predict savings from overdose-related ambulance calls
Emergency response personnel often transport overdose victims to the emergency room for treatment. One Baltimore study found that 33% of PWID reported being taken to the ER for their latest overdose [50]. By contrast, overdoses in SIFs lead to emergency room treatment in less than 1% of cases [14]. With a single Baltimore ER visit averaging over $1,300, SIFs reduce medical costs significantly by keeping PWID out of emergency rooms for overdose. We calculate the savings according to the following:
$$ {S}_{\mathrm{er}}= Io\left({t}_o-{t}_i\right) F $$
where S_er is the annual savings due to the SIF reducing emergency room visits for overdose, I is the annual number of injections in the SIF, o is the rate of nonfatal overdose, t_o and t_i are the rates of ER visits for overdoses occurring outside and inside the SIF, respectively, and F is the average cost of an overdose emergency room visit. The values and sources for these variables are given in Table 6.
Table 6 Values, notes, and sources for variables used to predict savings from overdose-related emergency room visits
Overdose victims are occasionally hospitalized for treatment. In Baltimore, 12% of PWID who overdosed reported being hospitalized, while less than 1% of SIF overdoses lead to hospitalization [14, 50]. With one day in a Baltimore hospital averaging $2,500, SIFs reduce medical costs significantly by keeping PWID out of the hospital for overdose. We calculate the savings according to the following:
$$ {S}_{\mathrm{h}}= Io\left({a}_{\mathrm{o}}-{a}_{\mathrm{i}}\right) E $$
where S_h is the annual savings due to the SIF reducing hospitalization for overdose, I is the annual number of injections in the SIF, o is the rate of nonfatal overdose, a_o and a_i are the rates of hospitalization for overdoses occurring outside and inside the SIF, respectively, and E is the average expense of an overdose hospital stay. The values and sources for these variables are given in Table 7.
Table 7 Values, notes, and sources for variables used to predict savings from overdose-related hospitalizations
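The three morbidity formulas share the same structure, so a single helper covers them in the sketch below. The annual injection count and the outside/inside rates and unit costs for ER visits and hospital stays come from the text, while the per-injection overdose rate o and the ambulance cost are placeholders for the Table 5–7 values.

```python
# Generic form S = I * o * (rate_outside - rate_inside) * unit_cost, applied to
# ambulance calls (S_a), ER visits (S_er) and hospital stays (S_h).
def morbidity_savings(I, o, rate_out, rate_in, unit_cost):
    return I * o * (rate_out - rate_in) * unit_cost

I = 180_000          # annual injections at an Insite-sized facility (from the text)
o = 0.0013           # placeholder per-injection nonfatal overdose rate (see Tables 5-7)
S_a  = morbidity_savings(I, o, 0.50, 0.00, 500)     # ambulance: ~50% vs ~0%; $500 is a placeholder cost
S_er = morbidity_savings(I, o, 0.33, 0.01, 1_300)   # ER: 33% vs <1%; ~$1,300 per visit (from the text)
S_h  = morbidity_savings(I, o, 0.12, 0.01, 2_500)   # hospital: 12% vs <1%; per-stay cost proxied by the ~$2,500/day figure
print(f"illustrative savings: ambulance ${S_a:,.0f}, ER ${S_er:,.0f}, hospital ${S_h:,.0f}")
```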
Medication-assisted treatment benefits
Many PWID who are unable to quit using illicit opioids through traditional abstinence-based treatment programs are successful using methadone or buprenorphine maintenance as part of medication-assisted treatment (MAT) [51]. MAT not only reduces the crime and health care costs of PWID by helping a significant portion quit injecting drugs but also decreases drug use, crime, and health costs among the patients who do relapse [52, 53]. Wood et al. [15, 22] and MSIC [12] show that both Insite and Sydney's MSIC refer many SIF clients to treatment, increasing treatment uptake. Irwin et al. [31] find a single SIF's impact on treatment uptake to be significant, estimating that a SIF in San Francisco would bring 110 patients into MAT every year.
We estimate that by referring clients to MAT, a SIF would produce annual health care and crime savings equal to S_MAT:
$$ {S}_{\mathrm{MAT}}= N r\; f\left( b-1\right) T $$
where N is the number of PWID who use the SIF, r is the percent of SIF clients who have been shown to access treatment as a result of SIF referrals, f is a conservative 50% estimate for retention in MAT, b is the average cost-benefit ratio studies have found for MAT, and T is the annual cost of treatment. Table 8 shows the values and sources for each variable.
Table 8 Sources for variables used to predict savings from medication-assisted treatment referrals
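A sketch of this calculation follows: the client count and referral rate reproduce the roughly 121 annual referrals reported in the results, while the cost-benefit ratio b and annual treatment cost T shown here are placeholders for the Table 8 values.

```python
# S_MAT = N * r * f * (b - 1) * T
def mat_savings(N, r, f, b, T):
    return N * r * f * (b - 1) * T

referrals = 2_100 * 0.0578                              # ~121 PWID entering MAT per year
savings = mat_savings(N=2_100, r=0.0578, f=0.50, b=4.0, T=4_500)   # b and T are placeholders
print(f"~{referrals:.0f} referrals, illustrative savings ${savings:,.0f}")
```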
The SIF's success in referring PWID to MAT depends on the pre-existing local prevalence of MAT uptake, location and availability of MAT slots, and other neighborhood-level factors. As a result, we acknowledge that the 5.8% increase found for Sydney's MSIC may differ significantly from the actual referral rate for a SIF in Baltimore.
Overall cost-benefit ratio
Our analysis finds a total benefit of $7.77 million and a total cost of $1.79 million, yielding a cost-benefit ratio of $4.35 saved for every dollar spent. Net savings are $5.98 million. We present the sensitivity analysis results for each outcome in Table 9, showing both financial and health results for the base, low, and high cases. Table 10 shows the impact of the sensitivity analysis for each key variable on the overall cost-benefit ratio and net savings.
Table 9 Summary of sensitivity analysis impact for individual components
Table 10 Summary of sensitivity analysis impact on overall results
Our estimate of the total annual cost is $1.79 million, which includes $1.62 million in operating costs and $170,000 in annualized upfront costs. In our sensitivity analysis, raising the operating cost by 50% increased the total cost to $2.6 million, lowering the cost-benefit ratio from 4.35 to 2.99 and net annual savings from $5.98 million to $5.17 million. Lowering the operating cost by 50% resulted in a total cost of $980,000, raising the cost-benefit ratio to 7.96 and net savings to $6.79 million.
HIV and HCV benefits
We estimate that a SIF would prevent an average of 3.7 HIV and 21 HCV cases per year, translating to annual savings of $1.50 million and $1.44 million, respectively.
We conducted a sensitivity analysis on the syringe sharing rate. Increasing the rate by 50%, from 2.8 to 4.2%, raises averted infections to 5.5 for HIV and 32 for HCV and savings to $2.25 million for HIV and $2.17 million for HCV. As a result, the overall cost-benefit ratio for the SIF increases from 4.35 to 5.17 and net savings increase from $5.98 million to $6.45 million. Decreasing the sharing rate by 50%, from 2.8 to 1.4%, lowers averted infections to 1.8 for HIV and 11 for HCV, reducing HIV savings to $750,000 and HCV savings to $720,000. In this scenario, the overall cost-benefit ratio declines to 3.52 and net savings fall to $4.51 million.
We estimate that SIF SSTI care will reduce total PWID hospital stays for SSTI by 374 days per year, which translates to annual savings of roughly $930,000.
We conducted a sensitivity analysis on the SSTI hospitalization rate. Increasing the rate by 50% raises averted hospital days to 561 and savings to $1.40 million. As a result, the overall cost-benefit ratio for the SIF increases from 4.35 to 4.61 and net annual savings rise from $5.98 million to $6.45 million. Decreasing the rate by 50% lowers averted hospital days to 187 and reduces savings to $470,000. In this scenario, the overall cost-benefit ratio declines to 4.09 and net savings fall to $5.52 million.
We estimate that SIF overdose prevention will save an average of 5.9 lives per year, which translates to $3.00 million in savings for society.
We conducted a sensitivity analysis of drug overdose deaths in the neighborhood around the facility, since deaths fluctuate from year to year. Increasing the total by 50% raises estimated lives saved to 8.9 and financial savings to $4.50 million. This raises the overall cost-benefit ratio for the SIF from 4.35 to 5.19 and net savings from $5.98 million to $7.48 million. Lowering the neighborhood deaths by 50% would reduce estimated lives saved to 3.0 and financial savings to $1.50 million, for an overall cost-benefit ratio of 3.51 and net savings of $4.48 million.
We estimate that the SIF will also prevent 108 ambulance calls, 78 emergency room visits, and 27 hospitalizations for nonfatal overdose, which translates to $81,000, $110,000, and $67,000 in medical savings, respectively.
We conducted a sensitivity analysis on the nonfatal overdose rate, since it is not well documented for Baltimore. Increasing the rate 50% raises the benefits to 162 ambulance calls, 117 ER visits, and 40 hospitalizations, for savings of $120,000, $160,000, and $100,000, respectively. This higher rate would raise the overall cost-benefit ratio for the SIF from 4.35 to 4.42 and net savings from $5.98 to $6.11 million. Lowering the rate by 50% would reduce the benefits to 54 ambulance calls, 39 ER visits, and 13 hospitalizations, lowering the savings to $40,000, $50,000, and $30,000, respectively. This lower rate would reduce the SIF's overall cost-benefit ratio to 4.28 and net savings to $5.86 million.
We estimate that 121 PWID will enter MAT as a result of the SIF, translating into $640,000 in benefits for society.
We conducted a sensitivity analysis of the referral rate for MAT. Raising the rate by 50%, from 5.78 to 8.67%, would increase new people in treatment from 121 to 182 and financial savings to $960,000. This would increase the overall cost-benefit ratio from 4.35 to 4.53 and net annual savings from $5.98 to $6.30 million. Lowering the rate by 50%, to 2.89%, would reduce new people in treatment to 61 and financial savings to $320,000, for an overall cost-benefit ratio of 4.17 and net savings of $5.66 million.
Our analysis finds a significantly favorable cost-benefit ratio and net benefits in all scenarios for a SIF in Baltimore, MD. Our base case scenario predicts that every dollar spent would return $4.35 in savings. We estimate that a single, 13-booth facility would generate annual net savings of $5.98 million, which is equivalent to 28% of the city health department's entire budget for harm reduction and disease prevention [54]. The study predicts that a SIF would prevent 5.9 overdose deaths per year.
Compared to Irwin et al.'s [31] cost-benefit analysis of a SIF in San Francisco, our study estimates the cost-benefit ratio for a Baltimore SIF to be 87% higher (4.35 versus 2.33) and net savings to be 71% higher ($6.0 million versus $3.5 million). A Baltimore SIF would have lower costs, lower benefits from SSTI prevention, similar benefits related to HIV, HCV, and MAT, and much higher benefits related to overdose deaths. Our study also incorporates additional outcomes, demonstrating that a SIF could generate sizeable benefits by preventing ambulance calls, emergency room visits, and hospital stays related to nonfatal overdose.
The most significant difference between the San Francisco and Baltimore studies relates to the SIF's impact on overdose deaths. We predict 5.9 lives saved per year in Baltimore, compared to 0.24 lives in San Francisco [31]. This difference stems primarily from the much higher overdose death rate in Baltimore. While both cities have roughly 20,000 PWID, Baltimore has more than 20 times more heroin-related overdose deaths. We also use a more advanced methodology—mapping the concentration of overdose deaths—to estimate this outcome.
The SIF's impact on overdose prevention would complement the Baltimore City Health Department's extensive efforts to prevent overdose through trainings and naloxone distribution in community, treatment, and corrections settings. The city has trained over 17,500 Baltimore residents in overdose prevention, including use of the overdose reversal drug naloxone [55]. A SIF would ensure that when PWID overdose, they do so in the presence of staff trained to administer naloxone. In addition, a SIF would prevent overdose deaths outside the facility because SIF staff provide PWID with safer injecting education, stressing the importance of injecting where naloxone is available.
Our results also suggest that a SIF would become a key component of Baltimore's continued efforts to reduce viral infections among PWID. Preventing four HIV and 21 HCV infections every year would reduce total incidence of both HIV and HCV by roughly 5%. The SIF would allow service providers to locate PWID, test them for viral infection, refer them for HIV and HCV treatment, and retain them in treatment. It thus addresses all four aspects of the 2017 HIV prevention strategy of the National Institute on Drug Abuse: "seeking, testing, treating, and retaining" PWID and other populations in need of HIV care [56].
Our estimate that a SIF would save close to a million dollars per year in SSTI hospital costs shows the benefits of removing a small population of "frequent fliers" from emergency rooms and hospitals. Still, since San Francisco has both a more serious SSTI problem due to the prevalence of black tar heroin and higher hospital costs, this area of benefits is smaller for Baltimore.
Our estimate of 121 PWID entering MAT in Baltimore is similar to Irwin et al.'s [31] estimate of 110 PWID in San Francisco. However, in both cities, the actual number will depend on the existing ease of MAT access, as well as the efforts by SIF staff to refer PWID to treatment. Baltimore can maximize these benefits by increasing funding to MAT programs, making treatment referrals a priority for SIF staff, and establishing the SIF near existing treatment providers for easy referral and follow-up.
Our sensitivity analysis illustrates that the SIF's operating cost has a significant impact on the overall cost-benefit ratio, though less of an impact on net savings. While we used a conservatively high cost estimate, strategic staffing, location, and procedural decisions by both SIF executives and local government officials could reduce costs and further increase the net benefits. Cost-effectiveness in Baltimore would be significantly higher than in San Francisco, largely because Baltimore has lower real estate values, salaries, cost of living, and cost of doing business [31].
There are a number of lessons from the initial operations of Insite which could inform the overall costs associated with a SIF in Baltimore. For example, Health Canada's protocols required Insite to call an ambulance for every overdose incident, resulting in unnecessary costs given the ability to reverse overdose at Insite [57]. We recommend that the Baltimore City Health Department work with a local SIF, with extensive peer involvement, to consider the health, social, and economic impact of any such protocols.
The continuum of care provided at the SIF has important implications for its impact. An integrated SIF model would co-locate detoxification, treatment, medical care, mental health care, housing, employment, government benefits, and legal services. Such a model would facilitate service uptake for a population that faces a number of barriers in accessing services.
We should note that it is difficult to ascertain who exactly would ultimately receive the savings documented in this study. Savings from the HIV, HCV, SSTI, and nonfatal overdose outcomes all accrue to the health care system, but the real beneficiaries are difficult to pin down. Holtgrave [58] and Mehta [6] estimate that the public sector bears the greatest share of HIV treatment costs, in particular Medicaid. Whether PWID have private insurance, Medicare/Medicaid, or no insurance, the savings ultimately reach federal, state, and local taxpayers, as well as everyone who pays health care premiums and hospital bills. MAT savings are split between medical care and reduced crime committed to get money to buy drugs. Overdose death savings represent value to the overall local economy from that person's future contributions.
This cost-benefit analysis faces a number of limitations.
First, this study does not tackle the political, legal, and social barriers confronting the efforts to establish a SIF in Baltimore. In spring 2017, a second attempt to authorize safe consumption spaces in Maryland failed in the Maryland State Assembly. This effort faces opposition concerns similar to SIF campaigns in other cities, including fears of "enabling" drug use, "Not In My Back Yard," and potential legal vulnerability to prosecution under federal drug statutes [59,60,61]. It also faces more unique challenges—while the opiate epidemic's recent damage to white, middle-class communities has grabbed media attention, Baltimore's heroin crisis is decades old and fails to generate the same political capital for action because it primarily impacts lower-income African-American communities [62].
To address these issues, advocates have formed a coalition of public health practitioners, current and former drug users, community organizers, and academics. Over the past year, the coalition has been meeting with the local health department, social service providers, drug users, politicians, and community leaders. In addition to garnering local and state political support, a Baltimore SIF campaign will only be successful if it involves the affected communities and elevates their voices.
Our study's estimates of health and economic outcomes also face limitations. Without specific plans for a facility, some variables are difficult to estimate. Since there are no regulations, guidelines, or physical plans for a SIF in Baltimore, we can only make a conservative estimate of facility cost. Once regulations are established and plans for construction and operation have been created, an updated cost analysis should be performed. Similarly, the SIF's success at referring PWID to treatment would depend on staffing decisions, the protocol for treatment referrals, and the convenience and availability of effective treatment options.
In addition, our models are difficult to verify because a number of important health indicators are not well documented for Baltimore's PWID population. For example, researchers have noted that resources have not been devoted to accurately measuring the Baltimore PWID population's HCV prevalence, much less the HCV incidence or the impact of needle sharing [63]. Also, available data conflicts on the prevalence of SSTI and rates of SSTI hospitalization among PWID. Other variables, from the average number of needle-sharing partners to the rate of ambulance calls to nonfatal overdose, are based on a single study and should be corroborated.
The study's accuracy would also benefit from specific cost information. The costs of HIV and HCV care, SSTI hospitalization, medication-assisted treatment, and overdose-related ambulance calls, emergency room visits, and hospital stays have all been approximated using figures for the general population. We consider all of these to be underestimates of the actual costs, since PWID tend to require more services and supervision [64].
There are also some potential interaction effects that are beyond the scope of this study. For example, our HIV and HCV models do not account for PWID becoming infected or transmitting the viruses to others through sexual contact. Our models also do not account for interaction effects between HIV and HCV infection or between viral infection and SSTI. While these effects would likely have a minor impact on our overall findings, if relevant data becomes available, our analysis should be updated accordingly.
Finally, the impact of the SIF will depend on how well the SIF and co-located service providers align with the unique features of Baltimore's population of PWID. Studies have shown that the effectiveness of harm reduction programs depends on their consideration of ethnicity, gender, age, homelessness, inequality, social networks, drug markets, and other demographic and social factors [65,66,67,68,69,70]. We have used the best local health data available to tailor our analysis to Baltimore's unique risk factors and social environment. However, the ultimate impact of a SIF in Baltimore will depend on how well the facility adapts to this environment by studying, consulting, and collaborating with the local PWID population [71,72,73].
Despite the present study's limitations, it demonstrates that a SIF in Baltimore would bring significant cost savings and public health benefits to the city. A single 13-booth SIF in Baltimore City modeled on Insite in Vancouver would generate medical and economic savings of roughly $7.77 million per year. At a total cost of $1.79 million per year, every dollar spent would generate an estimated $4.35 in savings. To put the $5.98 million net annual savings for a single SIF in perspective, they equal 28% of the Baltimore City Health Department's budget for harm reduction and disease prevention.
In terms of health outcomes, we estimate that every year, a SIF would prevent 3.7 HIV infections, 21 HCV infections, 374 days in the hospital for skin and soft-tissue infection, 5.9 overdose deaths, 108 overdose ambulance calls, 78 overdose emergency room visits, and 27 overdose-related hospitalizations, while bringing an additional 121 PWID into treatment.
We recommend that the city avoid excessive regulation of a SIF and maximize the linkages to services for the PWID population. We also recommend that researchers carefully track health indicators and medical costs associated with the PWID population before and after establishing a SIF in order to evaluate the facility's benefits.
SIFs provide other important benefits in addition to those quantified in this study. They decrease public injection, prevent physical and sexual violence against PWID, and reduce syringe littering [38, 74,75,76]. They facilitate research to better understand the PWID population [77]. Lastly, they allow social service providers to harness the power of PWID peer networks and bring important programs to the hard-to-reach PWID population [78,79,80].
Establishing a SIF in Baltimore would bring a number of well-established medical, financial, and societal benefits. We do not believe that health initiatives like SIFs should be judged purely on financial terms. However, we hope that this cost-benefit analysis provides a helpful starting point to assess the potential impact on Baltimore of a supervised injection facility.
DHMH:
Department of Health and Mental Hygiene (Maryland)
HCV:
Hepatitis C virus
HIV:
Human immunodeficiency virus
MSIC:
Medically Supervised Injecting Centre (SIF in Sydney)
PWID:
People who inject drugs
SIF:
Supervised injection facility
SSTI:
Skin and soft-tissue infection
DHMH. Drug- and alcohol-related intoxication deaths in Maryland, 2015. Maryland Department of Health and Mental Hygiene Report, September 2016. Accessed 23 Feb 2017, at http://bha.dhmh.maryland.gov/OVERDOSE_PREVENTION/Documents/2015%20Annual%20Report_revised.pdf.
Amlani A, McKee G, Khamis N, Raghukumar G, Tsang E, Buxton JA. Why the FUSS (Fentanyl Urine Screen Study)? A cross-sectional survey to characterize an emerging threat to people who use drugs in British Columbia, Canada. Harm Reduction J. 2015;12(1):54.
Peterson AB. Increases in fentanyl-related overdose deaths—Florida and Ohio, 2013–2015. MMWR Morb Mortal Wkly Rep. 2016;65(33);844–49.
Sutter ME, Gerona RR, Davis M, Roche BM, Colby DK, Chenoweth JA, Adams AJ, Owen KP, Ford JB, Black HB, Albertson TE. Fatal fentanyl: one pill can kill. Acad Emerg Med. 2016;24(1):106-13.
McIntyre IM, Anderson DT. Postmortem fentanyl concentrations: a review. J Forensic Res. 2012;3(157):2.
Mehta S. Personal correspondence of Dr. Shruti Mehta, Johns Hopkins University Bloomberg School of Public Health Department of Epidemiology, with Susan Sherman, January 15, 2017.
Centers for Disease Control and Prevention (CDC). HIV infection and HIV-associated behaviors among injecting drug users—20 cities, United States, 2009. MMWR Morb Mortal Wkly Rep. 2012;61(8):133.
Centers for Disease Control and Prevention. CDC—HIV/AIDS, viral hepatitis, sexually transmitted infections, and tuberculosis: FY 2015 President's Budget Request. https://www.cdc.gov/budget/documents/fy2015/hivaids-factsheet.pdf. 2014. Accessed 20 May 2015
Smith ME, Robinowitz N, Chaulk P, Johnson KE. High rates of abscesses and chronic wounds in community-recruited injection drug users and associated risk factors. J Addict Med. 2015;9(2):87.
Binswanger IA, Takahashi TA, Bradley K, Dellit TH, Benton KL, Merrill JO. Drug users seeking emergency care for soft tissue infection at high risk for subsequent hospitalization and death. J Stud Alcohol Drugs. 2008;69(6):924–32.
Takahashi TA, Maciejewski ML, Bradley K. US hospitalizations and costs for illicit drug users with soft tissue infections. J Behav Health Serv Res. 2010;37(4):508–18.
MSIC Evaluation Committee. Final report of the evaluation of the Sydney Medically Supervised Injecting Centre. MSIC Evaluation Committee; 2003.
UHRI. Findings from the evaluation of Vancouver's Pilot Medically Supervised Safer Injecting Facility - Insite. Urban Health Research Initiative, British Columbia Centre for Excellence in HIV/AIDS, June 2009. http://uhri.cfenet.ubc.ca/wp-content/uploads/images/Documents/insite_report-eng.pdf. Accessed 21 Feb 2017
KPMG. Further evaluation of the medically supervised injecting centre during its extended trial period (2007–2011): final report.
Wood E, Tyndall MW, Montaner JS, Kerr T. Summary of findings from the evaluation of a pilot medically supervised safer injecting facility. Can Med Assoc J. 2006;175(11):1399–404.
Kerr T, Kimber J, DeBeck K, Wood E. The role of safer injection facilities in the response to HIV/AIDS among injection drug users. Current HIV/AIDS Reports. 2007;4(4):158–64.
Wood E, Tyndall MW, Stoltz JA, Small W, Zhang R, O'Connell J, Montaner JS, Kerr T. Safer injecting education for HIV prevention within a medically supervised safer injecting facility. Int J Drug Policy. 2005;16(4):281–4.
Lloyd-Smith E, Wood E, Zhang R, Tyndall MW, Sheps S, Montaner JS, Kerr T. Determinants of hospitalization for a cutaneous injection-related infection among injection drug users: a cohort study. BMC Public Health. 2010;10(1):327.
Small W, Wood E, Lloyd-Smith E, Tyndall M, Kerr T. Accessing care for injection-related infections through a medically supervised injecting facility: a qualitative study. Drug Alcohol Depend. 2008;98(1):159–62.
Salmon AM, Dwyer R, Jauncey M, van Beek I, Topp L, Maher L. Injecting-related injury and disease among clients of a supervised injecting facility. Drug Alcohol Depend. 2009;101(1):132–6.
Wood E, Tyndall MW, Zhang R, Montaner JS, Kerr T. Rate of detoxification service use and its impact among a cohort of supervised injecting facility users. Addiction. 2007;102(6):916–9.
Sherman SG, Hunter K, Rouhani S. Safe drug consumption spaces: a strategy for Baltimore City. Abell Report. 2017;29(7).
Andresen MA, Jozaghi E. The point of diminishing returns: an examination of expanding Vancouver's Insite. Urban Stud. 2012;49(16):3531–44.
Pinkerton SD. How many HIV infections are prevented by Vancouver Canada's supervised injection facility? Int J Drug Policy. 2011;22(3):179–83.
Jozaghi E, Reid AA, Andresen MA. A cost-benefit/cost-effectiveness analysis of proposed supervised injection facilities in Montreal, Canada. Subst Abuse Treat Prev Policy. 2013;8(1):25.
Bayoumi AM, Strike C, Brandeau M, Degani N, Fischer B, Glazier R. Report of the Toronto and Ottawa supervised consumption assessment study, 2012. CATIE website; 2012. http://www.catie.ca/sites/default/files/TOSCA%20report%202012.pdf. Accessed 12 May 2017.
Enns EA, Zaric GS, Strike CJ, Jairam JA, Kolla G, Bayoumi AM. Potential cost‐effectiveness of supervised injection facilities in Toronto and Ottawa, Canada. Addiction. 2016;111(3):475–89.
Jozaghi E, Jackson A. Examining the potential role of a supervised injection facility in Saskatoon, Saskatchewan, to avert HIV among people who inject drugs. Int J Health Policy Manage. 2015;4(6):373.
Jozaghi E, Reid AA, Andresen MA, Juneau A. A cost-benefit/cost-effectiveness analysis of proposed supervised injection facilities in Ottawa, Canada. Subst Abuse Treat Prev Policy. 2014;9(1):31.
Jozaghi E, Reid AA. The potential role for supervised injection facilities in Canada's largest city, Toronto. Int Crim Justice Rev. 2015;25(3):233–46.
Irwin A, Jozaghi E, Bluthenthal RN, Kral AH. A cost-benefit analysis of a potential supervised injection facility in San Francisco, California, USA. J Drug Issues. 2016;47(2):164-84.
Health Canada. Vancouver's Insite service and other supervised injection sites: what has been learned from the research? Final report. 2008, March 31; Expert Advisory Committee on Supervised Injection Site Research. Ottawa, Ontario.
Maynard R. Personal correspondence of Russell Maynard, Director of Policy and Research, Portland Hotel Society Community Services, Vancouver, with Dr. Ehsan Jozaghi, February 10, 2017.
Jozaghi E, Hodgkinson T, Andresen MA. Is there a role for potential supervised injection facilities in Victoria, British Columbia, Canada? Urban Geography. 2015;36(8):1241–55.
Expatistian. Cost of living comparison between Baltimore, Maryland, United States and Vancouver, Canada. Expatistan Cost of Living Index. 2016, December. Cost of living comparison between Baltimore, Maryland, United States and Vancouver, Canada. Accessed 11 Dec 2016. https://www.expatistan.com/cost-of-living/comparison/vancouver/baltimore.
Primeau M. San Francisco Department of Public Health, 2013; Accessed 9 Nov 2015. https://www.sfdph.org/dph/files/hc/HCAgen/2013/jan%2015/mark's%20narrative.pdf
Bayoumi AM, Zaric GS. The cost-effectiveness of Vancouver's supervised injection facility. Can Med Assoc J. 2008;179(11):1143–51.
Stoltz JA, Wood E, Small W, Li K, Tyndall M, Montaner J, Kerr T. Changes in injecting practices associated with the use of a medically supervised safer injection facility. J Public Health. 2007;29(1):35–9.
Jacobs P, Calder P, Taylor M, Houston S. Cost effectiveness of Streetworks' needle exchange program of Edmonton. Can J Public Health. 1999;90(3):168.
Kerr T, Tyndall M, Li K, Montaner J, Wood E. Safer injection facility use and syringe sharing in injection drug users. Lancet. 2005;366(9482):316–8.
DHMH. Baltimore City HIV/AIDS Epidemiological Profile, Fourth Quarter 2012. Maryland Department of Health and Mental Hygiene Center for HIV Surveillance Report, 2013.
Bradley H, Hall HI, Wolitski RJ, Van Handel MM, Stone AE, LaFlam M, Skarbinski J, Higa DH, Prejean J, Frazier EL, Patel R. Vital signs: HIV diagnosis, care, and treatment among persons living with HIV—United States, 2011. MMWR Morb Mortal Wkly Rep. 2014;63(47):1113–7.
Mehta SH, Astemborski J, Kirk GD, Strathdee SA, Nelson KE, Vlahov D, Thomas DL. Changes in blood-borne infection risk among injection drug users. J Infect Dis. 2011;203(5):587–94.
Andresen MA, Boyd N. A cost-benefit and cost-effectiveness analysis of Vancouver's supervised injection facility. Int J Drug Policy. 2010;21(1):70–6.
Milloy MS, Kerr T, Tyndall M, Montaner J, Wood E. Estimated drug overdose deaths averted by North America's first medically-supervised safer injection facility. PLoS One. 2008;3(10):e3351.
Marshall BD, Milloy MJ, Wood E, Montaner JS, Kerr T. Reduction in overdose mortality after the opening of North America's first medically supervised safer injecting facility: a retrospective population-based study. Lancet. 2011;377(9775):1429–37.
DHMH. Drug- and alcohol-related intoxication deaths in Maryland: data update through 3rd quarter 2016. Maryland Department of Health and Mental Hygiene Report, 2017. Accessed 23 Feb 2017, at http://bha.dhmh.maryland.gov/OVERDOSE_PREVENTION/Documents/Quarterly%20report_2016_Q3_final.pdf.
BCFD. Personal correspondence of Baltimore City Fire Department Emergency Medical Services with Brian Weir, February 23, 2017.
Genberg BL, Gange SJ, Go VF, Celentano DD, Kirk GD, Mehta SH. Trajectories of injection drug use over 20 years (1988–2008) in Baltimore, Maryland. Am J Epidemiol. 2011;173(7):829–36. kwq441.
Pollini RA, McCall L, Mehta SH, Vlahov D, Strathdee SA. Non-fatal overdose and subsequent drug treatment among injection drug users. Drug Alcohol Depend. 2006;83(2):104–10.
Cartwright WS. Cost–benefit analysis of drug treatment services: review of the literature. J Ment Health Policy Econ. 2000;3(1):11–26.
Harris AH, Gospodarevskaya E, Ritter AJ. A randomised trial of the cost effectiveness of buprenorphine as an alternative to methadone maintenance treatment for heroin dependence in a primary care setting. Pharmacoeconomics. 2005;23(1):77–91.
CHPDM. Review of cost-benefit and cost-effectiveness literature for methadone or buprenorphine as a treatment for opiate addiction. Baltimore County: Center for Health Program Development and Management at the University of Maryland; 2007. http://www.hilltopinstitute.org/publications/Cost_benefit_Opiate_Addiction_August_29_2007.pdf. Accessed 7 Jan 2016.
Board of Estimates. Fiscal 2015 Agency Detail. Board of Estimates Recommendations, Volume 1, 2015. Accessed 19 Feb 2017, at http://ca.baltimorecity.gov/flexpaper/docs/Agency_Detail_Vol1_FINAL%20web.pdf.
BCHD. Baltimore City overdose prevention and response information. Baltimore City Health Department website, 2017. http://health.baltimorecity.gov/opioid-overdose/baltimore-city-overdose-prevention-and-response-information. Accessed 21 Feb 2017.
NIDA. Fiscal Year 2017 Funding Priorities. National Institute on Drug Abuse AIDS Research Program Research and Funding Priorities. 2016, October. Accessed 20 Oct 2016 at https://www.drugabuse.gov/sites/default/files/fy17priorities.pdf
Evans S. Personal correspondence of Sarah Evans, former Insite Director, with Amos Irwin, May 8, 2015.
Holtgrave D. Personal correspondence of Dr. David Holtgrave, Johns Hopkins University Bloomberg School of Public Health Chair of the Department of Health, Behavior, and Society, with Susan Sherman, January 15, 2017.
Beletsky L, Davis CS, Anderson E, Burris S. The law (and politics) of safe injection facilities in the United States. Am J Public Health. 2008;98(2):231–7.
Semaan S, Fleming P, Worrell C, Stolp H, Baack B, Miller M. Potential role of safer injection facilities in reducing HIV and hepatitis C infections and overdose mortality in the United States. Drug Alcohol Depend. 2011;118(2):100–10.
Tempalski B, Friedman R, Keem M, Cooper H, Friedman SR. NIMBY localism and national inequitable exclusion alliances: the case of syringe exchange programs in the United States. Geoforum. 2007;38(6):1250–63.
Lopez G. When a drug epidemic's victims are white. Vox, April 4, 2017. Accessed 27 Apr 2017 at http://www.vox.com/identities/2017/4/4/15098746/opioid-heroin-epidemic-race.
Nolan, N. Hepatitis C infection in Baltimore: a need for funding. JHSPH PHASE Internship Program, BCHD Acute Communicable Diseases Department. Accessed 18 Feb 2017, at http://dhmh.maryland.gov/phase/documents/nolan_nichole.pdf.
Ding L, Landon BE, Wilson IB, Wong MD, Shapiro MF, Cleary PD. Predictors and consequences of negative physician attitudes toward HIV-infected injection drug users. Arch Intern Med. 2005;165(6):618–23.
Cooper HL, Linton S, Kelley ME, Ross Z, Wolfe ME, Chen YT, Zlotorzynska M, Hunter-Jones J, Friedman SR, Des Jarlais D, Semaan S. Racialized risk environments in a large sample of people who inject drugs in the United States. Int J Drug Policy. 2016;27:43–55.
Hottes TS, Bruneau J, Daniel M. Gender-specific situational correlates of syringe sharing during a single injection episode. AIDS Behav. 2011;15(1):75–85.
Tassiopoulos K, Bernstein J, Bernstein E. Age and sharing of needle injection equipment in a cohort of Massachusetts injection drug users: an observational study. Addict Sci Clin Pract. 2013;8(1):20.
Zivanovic R, Milloy MJ, Hayashi K, Dong H, Sutherland C, Kerr T, Wood E. Impact of unstable housing on all-cause mortality among persons who inject drugs. BMC Public Health. 2015;15(1):106.
Nikolopoulos GK, Fotiou A, Kanavou E, Richardson C, Detsis M, Pharris A, Suk JE, Semenza JC, Costa-Storti C, Paraskevis D, Sypsa V. National income inequality and declining GDP growth rates are associated with increases in HIV diagnoses among people who inject drugs in Europe: a panel data analysis. PLoS One. 2015;10(4):e0122367.
Gyarmathy VA, Caplinskiene I, Caplinskas S, Latkin CA. Social network structure and HIV infection among injecting drug users in Lithuania: gatekeepers as bridges of infection. AIDS Behav. 2014;18(3):505–10.
McCann E, Temenos C. Mobilizing drug consumption rooms: inter-place networks and harm reduction drug policy. Health & Place. 2015;31:216–23.
Jozaghi E. The role of peer drug users' social networks and harm reduction programs in changing the dynamics of life for people who use drugs in the downtown eastside of Vancouver, Canada (Doctoral dissertation, Arts and Social Sciences).
Jozaghi E. Exploring the role of an unsanctioned, supervised peer driven injection facility in reducing HIV and hepatitis C infections in people that require assistance during injection. Health & Justice. 2015;3(1):16.
DeBeck K, Small W, Wood E, Li K, Montaner J, Kerr T. Public injecting among a cohort of injecting drug users in Vancouver, Canada. J Epidemiol Community Health. 2009;63(1):81–6.
Salmon AM, Thein HH, Kimber J, Kaldor JM, Maher L. Five years on: what are the community perceptions of drug-related public amenity following the establishment of the Sydney Medically Supervised Injecting Centre? Int J Drug Policy. 2007;18(1):46–53.
Wood E, Kerr T, Small W, Li K, Marsh DC, Montaner JS, Tyndall MW. Changes in public order after the opening of a medically supervised safer injecting facility for illicit injection drug users. Can Med Assoc J. 2004;171(7):731–4.
Linden IA, Mar MY, Werker GR, Jang K, Krausz M. Research on a vulnerable neighborhood—the Vancouver Downtown Eastside from 2001 to 2011. J Urban Health. 2013;90(3):559–73.
Small W, Van Borek N, Fairbairn N, Wood E, Kerr T. Access to health and social services for IDU: the impact of a medically supervised injection facility. Drug Alcohol Rev. 2009;28(4):341–6.
Tyndall MW, Kerr T, Zhang R, King E, Montaner JG, Wood E. Attendance, drug use patterns, and referrals made from North America's first supervised injection facility. Drug Alcohol Depend. 2006;83(3):193–8.
Jozaghi E, Lampkin H, Andresen MA. Peer-engagement and its role in reducing the risky behavior among crack and methamphetamine smokers of the Downtown Eastside community of Vancouver, Canada. Harm Reduction J. 2016;13(1):19.
Hunt D, Parker L. Baltimore City Syringe Exchange Program. Health Department: Baltimore, Maryland; 2016. Accessed from: http://www.aacounty.org/boards-and-commissions/HIV-AIDS-commission/presentations/BCHD%20Needle%20Exchange%20Presentation9.7.16.pdf.
German D, Park JN, Powell C, Flynn C. Trends in HIV and injection behaviors among Baltimore injection drug users. Baltimore: Presentation at 10th National Harm Reduction Conference; 2014.
Park JN, Weir BW, Allen ST, and Sherman SG. Prevalence and correlates of experiencing and witnessing drug overdose among syringe service program clients in Baltimore, Maryland. (Manuscript in preparation).
Bluthenthal RN, Wenger L, Chu D, Lorvick J, Quinn B, Thing JP, Kral AH. Factors associated with being asked to initiate someone into injection drug use. Drug Alcohol Depend. 2015;149:252–8.
Kaplan EH, O'Keefe E. Let the needles do the talking! Evaluating the New Haven needle exchange. Interfaces. 1993;23(1):7–26.
Kwon JA, Anderson J, Kerr CC, Thein HH, Zhang L, Iversen J, Dore GJ, Kaldor JM, Law MG, Maher L, Wilson DP. Estimating the cost-effectiveness of needle-syringe programs in Australia. Aids. 2012;26(17):2201–10.
Tempalski B, Cooper HL, Friedman SR, Des Jarlais DC, Brady J, Gostnell K. Correlates of syringe coverage for heroin injection in 35 large metropolitan areas in the US in which heroin is the dominant injected drug. Int J Drug Policy. 2008;19:47–58.
CDC. HIV/AIDS, viral hepatitis, sexually transmitted infections, & tuberculosis. FY 2016 President's Budget Request. 2015; Accessed 12 May 2017. https://www.cdc.gov/budget/documents/fy2016/hivaids-factsheet.pdf.
Falade‐Nwulia O, Mehta SH, Lasola J, Latkin C, Niculescu A, O'connor C, Chaulk P, Ghanem K, Page KR, Sulkowski MS, Thomas DL. Public health clinic‐based hepatitis C testing and linkage to care in Baltimore. J Viral Hepatitis. 2016.
Razavi H, ElKhoury AC, Elbasha E, Estes C, Pasini K, Poynard T, Kumar R. Chronic hepatitis C virus (HCV) disease burden and cost in the United States. Hepatology. 2013;57(6):2164–70.
Hsieh, Y-H. Personal correspondence of Dr. Yu-Hsiang Hsieh, Johns Hopkins Department of Emergency Medicine, with Andrew Lindsay, July 17, 2015.
Kerr T, Wood E, Grafstein E, Ishida T, Shannon K, Lai C, Montaner J, Tyndall MW. High rates of primary care and emergency department use among injection drug users in Vancouver. J Public Health. 2005;27(1):62–6.
Stein MD, Sobota M. Injection drug users: hospital care and charges. Drug Alcohol Depend. 2001;64(1):117–20.
Palepu A, Tyndall MW, Leon H, Muller J, O'shaughnessy MV, Schechter MT, Anis AH. Hospital utilization and costs in a cohort of injection drug users. Can Med Assoc J. 2001;165(4):415–20.
Rosenthal E. As hospital prices soar, a stitch tops $500. New York Times. 2013;12(3).
Harris HW, Young DM. Care of injection drug users with soft tissue infections in San Francisco, California. Arch Surg. 2002;137(11):1217–22.
Census Bureau. Quickfacts for Baltimore City, Maryland. United States Census Bureau website. 2015. Accessed 18 Feb 2017 at http://www.census.gov/quickfacts/table/RHI805210/24510
Kerr T, Tyndall MW, Lai C, Montaner JS, Wood E. Drug-related overdoses within a medically supervised safer injection facility. Int J Drug Policy. 2006;17(5):436–41.
Astemborski J and Mehta S. Personal correspondence of Drs. Shruti Mehta and Jacquie Astemborski, Johns Hopkins University Bloomberg School of Public Health Department of Epidemiology, with Amos Irwin and Andrew Lindsay, July 16, 2015.
Baltimore County. Insurance carriers will begin paying for County EMS Transport. Police and Fire News, Baltimore County Government website, July 20, 2015. http://www.baltimorecountymd.gov/News/PoliceNews/iWatch/keyword/ambulance. Accessed 20 Feb 2017.
Rienzi G. Johns Hopkins pilots study on EMS treatment of substance abusers. Johns Hopkins University Gazette, Sept-Oct 2014. Accessed 26 Feb 2017. http://hub.jhu.edu/gazette/2014/september-october/focus-baltimore-city-ems/
Pfuntner A, Wier LM, Steiner C. Costs for hospital stays in the United States, 2011: Statistical Brief# 168.
CSAM. Methadone treatment issues. California Society of Addiction Medicine website, 2011. http://www.csam-asam.org/methadone-treatment-issues Accessed 20 Feb 2017.
Gerstein DR, Johnson RA. Harwood HJ, Fountain D, Suter N, Malloy K. Evaluating recovery services: the California Drug and Alcohol Treatment Assessment (CALDATA), General Report. National Opinion Research Center (NORC) Report, 1994. Accessed January 7, 2016. https://www.ncjrs.gov/App/publications/abstract.aspx?ID=157812.
Schwartz RP, Alexandre PK, Kelly SM, O'Grady KE, Gryczynski J, Jaffe JH. Interim versus standard methadone treatment: a benefit–cost analysis. J Subst Abus Treat. 2014;46(3):306–14.
The contribution by AI was supported by the Criminal Justice Policy Foundation and the Law Enforcement Action Partnership. The contribution by EJ was supported by the Canadian Institutes of Health Research (CIHR) Postdoctoral Fellowship (201511MFE-358449-223266). The contributions by SGS and BWW were supported by the Johns Hopkins University Center for AIDS Research (P30AI094189). The contribution by STA was supported by a grant from the National Institute on Drug Abuse (T32DA007292, PI: Renee M. Johnson). The contribution by AL was supported by the Criminal Justice Policy Foundation and by Amherst College.
All data used in the current study are furnished in the text and tables. All calculations are available from the corresponding author on a reasonable request.
AI designed most of the models, performed the calculations, and took the lead in writing the manuscript. EJ found data for use in the models, designed the models for HIV and HCV, and assisted in formatting and editing the manuscript. AL found data for use in the models. STA conducted the overdose mapping analysis. BWW supplied data for use in the models and assisted with the overdose mapping analysis. SGS assisted in writing and editing the manuscript. All authors read and approved the final manuscript.
Law Enforcement Action Partnership, Silver Spring, MD, USA
Amos Irwin
Criminal Justice Policy Foundation, Silver Spring, MD, USA
British Columbia Centre for Disease Control, University of British Columbia, Vancouver, Canada
Ehsan Jozaghi
School of Population and Public Health, University of British Columbia, Vancouver, Canada
Department of Health, Behavior, and Society, Johns Hopkins University Bloomberg School of Public Health, Baltimore, MD, USA
Brian W. Weir, Sean T. Allen & Andrew Lindsay
Criminal Justice Policy Foundation, Amherst College, Silver Spring, MD, USA
Susan G. Sherman
Brian W. Weir
Sean T. Allen
Andrew Lindsay
Correspondence to Amos Irwin.
Irwin, A., Jozaghi, E., Weir, B.W. et al. Mitigating the heroin crisis in Baltimore, MD, USA: a cost-benefit analysis of a hypothetical supervised injection facility. Harm Reduct J 14, 29 (2017). https://doi.org/10.1186/s12954-017-0153-2
Supervised consumption rooms
Cost-benefit
Opiate overdose
The state of harm reduction in North America
Not everything we measure is an eigenvalue of a linear operator
TL;DR – Statistical quantities (e.g. averages) and angles (e.g. direction of spin) are measurable quantities but are not associated with linear operators, eigenkets and eigenvalues.
When studying quantum mechanics you learn about observables: how you associate a Hermitian operator to each, how the observable has a definite value only on the eigenstates of that operator and how, in general, you will have a distribution over eigenvalues. Position, momentum, energy and spin are all examples. Since one mostly deals with those, one usually gets the impression that that's all there is. This may not be stated per se in your textbook, yet you may come away with that impression.
But is that true? Is everything that we measure an eigenvalue of some Hermitian operator? Here I'll present two quantities that don't follow the pattern: temperature and direction of spin.
1. Temperature
Suppose we have a box filled with gas in thermodynamic equilibrium. Its temperature $T$ will be proportional to the variance of the velocity of all the elementary constituents of the gas. If we call $|\psi>$ the state of all the particles, $P_i$ the momentum operator for the $i$-th particle and $m_i$ its mass, we'll have something like:
T=\alpha<\psi|\sum_{i=1}^{n} \frac{P_i^2}{m_i}|\psi>
where $\alpha$ is an appropriate constant.
Now, what's important here is not the detail of the expression: the important aspect is that temperature is an average. Any state that represents a snapshot of a system (i.e. a pure state) will always have one and only one value of temperature. And that value needs to match what our thermometer says. That is: we are not going to have a statistical distribution over possible values of temperatures.
You may think: but the quantity $\sum_{i=1}^{n} \frac{P_i^2}{m_i}$ is an operator. And indeed it is. But that's where the connection with temperature ends. Think about the eigenstates of that operator: they correspond to states in which the magnitude of the momentum is perfectly prepared for all particles. Those are not the only states for which we have a well defined value of temperature. And measuring the temperature does not mean measuring the magnitude of the momentum for each particle. So $\sum_{i=1}^{n} \frac{P_i^2}{m_i}$ is an operator but is not the temperature operator: it's an operator whose expectation corresponds to the temperature.
2. Spin direction
Now suppose we have a spin 1/2 system. You may be familiar with $S_x$, $S_y$ and $S_z$ which are the operators for the spin components along the respective directions. As you know, spin represents angular momentum so their conjugate quantity is the angle along the plane perpendicular to their direction. Which leads to the question: where is the spin angle operator? Well, there isn't one.
All spin 1/2 states can be defined by a unique direction in space. We can write:
|\psi>=\cos(\theta/2)|z^+> + \sin(\theta/2)e^{\imath \phi}|z^->
where $\theta$ and $\phi$ are the polar and azimuthal angle respectively. Note that we just need two states, $|z^+>$ and $|z^->$, to form a basis and those correspond to the two possible values of spin measured along the $z$ direction. An angle, instead, takes a continuum of possible values and therefore we would need an infinite number of eigenkets to form an angle operator (as it is for position and momentum). Since the space is two dimensional, all bases must be two dimensional: no angle operator.
But here is the thing: we can nonetheless measure angles. Suppose we have a source of electrons whose spin always comes aligned along the same direction. With a Stern–Gerlach type experiment, we can measure the fraction $0\leq f_z \leq 1$ that comes out with $z^+$. We have $f_z = <\psi|z^+><z^+|\psi> = \cos^2(\theta/2)$. So $\theta = 2 \arccos \sqrt{f_z}$ is definitely something we can measure. Similarly, we can find an expression for $\phi$.
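A minimal numerical sketch of that last step (the preparation angle and the number of runs are arbitrary choices, not taken from any real experiment):

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 1.1                              # assumed preparation angle (radians)
f_up = np.cos(theta_true / 2) ** 2            # probability of the z+ outcome

counts = rng.binomial(n=10_000, p=f_up)       # simulated Stern-Gerlach run
f_z = counts / 10_000                         # measured fraction
theta_est = 2 * np.arccos(np.sqrt(f_z))
print(theta_true, theta_est)                  # the estimate converges as runs grow
```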
Again you may think: all we did was measure the expectation of $|z^+><z^+|$ which is a Hermitian operator. And indeed we did. But that operator has eigenstates only for $\theta=0$ and $\theta=\pi$. Yet each state will always have a well defined value for $\theta$ and we can measure the values between $0$ and $\pi$ as well.
While it is often useful to think of a quantum state as a distribution over the eigenvalues of some observable, this is not the only way we should think about it. Not all measurable quantities work like that. In particular, note that many macroscopic quantities are averages over a large number of particles and therefore one should always be very careful when extrapolating ideas from the quantum world.
A comparison of analytic approaches for individual patient data meta-analyses with binary outcomes
Doneal Thomas1,
Robert Platt1 and
Andrea Benedetti1, 2, 3
BMC Medical Research Methodology 2017, 17:28
Accepted: 2 February 2017
Individual patient data meta-analyses (IPD-MA) are often performed using a one-stage approach, a form of generalized linear mixed model (GLMM) for binary outcomes. We compare (i) one-stage to two-stage approaches, (ii) the performance of two estimation procedures (penalized quasi-likelihood, PQL, and adaptive Gaussian Hermite quadrature, AGHQ) for GLMMs with binary outcomes within the one-stage approach and (iii) the use of stratified study effects or random study effects.
We compare the different approaches via a simulation study in terms of bias, mean-squared error (MSE), coverage and numerical convergence of the pooled treatment effect (\(\beta_1\)) and of the between-study heterogeneity of the treatment effect (\(\tau_1^2\)). We varied the prevalence of the outcome, the sample size, the number of studies, and the variances and correlation of the random effects.
The two-stage and one-stage methods produced approximately unbiased estimates of \(\beta_1\). PQL performed better than AGHQ for estimating \(\tau_1^2\) with respect to MSE, but performed comparably with AGHQ with respect to the bias of \(\beta_1\) and of \(\tau_1^2\). The random study-effects model outperformed the stratified study-effects model in small meta-analyses.
The one-stage approach is recommended over the two-stage method for small meta-analyses. There was no meaningful difference between the PQL and AGHQ procedures. Though the random-intercept and stratified-intercept approaches can suffer from their underlying assumptions, fitting a GLMM with a random intercept is less prone to misfit and has a good convergence rate.
Individual patient data meta-analyses
One- and two-stage models
Generalized linear mixed models
Penalized quasi-likelihood
Adaptive gauss-hermite quadrature
Fixed and random study-effects
Individual Patient Data (IPD) meta-analyses (MA) are regarded as the gold standard in evidence synthesis and are increasingly being used in current practice [1, 2]. However, the implementation of the analysis of IPD-MA requires additional expertise and choices [3], particularly when the outcome is binary. These include (i) should a one- or two-stage model be used [4, 5], (ii) what estimation procedure should be used to estimate the one-stage model [6, 7] and, (iii) should the study effect be fixed or random [8].
Although IPD-MA were conventionally analyzed via a two-stage approach [9], over the last decade, use of the one-stage approach has increased [10]. Recently, some have suggested that the two-stage and one-stage frameworks produce similar results for MA of large randomized controlled trials [5]. The literature suggests the one-stage method is particularly preferable when few studies or few events are available as it uses a more exact statistical approach than relying on a normality approximation [3–5].
When IPD are available and the outcome is binary, the one-stage approach consists of estimating Generalized Linear Mixed Models (GLMMs) with a random slope for the exposure, to allow the exposure effect to vary across studies. Penalized quasi-likelihood (PQL) introduced by Breslow and Clayton is a popular method for estimating the parameters in GLMMs [11]. However, regression parameters can be badly biased for some GLMMs, especially with binary outcomes with few observations per cluster, low outcome rates, or high between cluster variability [12, 13]. Adaptive Gaussian Hermite quadrature (AGHQ) is the current favored competitor to PQL, which approximates the maximum likelihood by numerical integration [14]. Although estimation becomes more precise as the number of quadrature points increases, it often gives rise to computational difficulties for high-dimension random effects and convergence problems where variances are close to zero or cluster sizes are small [14].
The heterogeneity between studies is an important aspect to consider when carrying out IPD-MA. Such heterogeneity may arise due to differences in study design, treatment protocols or patient populations [8]. When such heterogeneity is present, the convention is to include a random slope in the model as it captures the variability of the exposure across studies. However, there are corresponding assumptions in regards to the study effect being modelled as stratified or random [4, 15].
Few comparisons of GLMMs have been reported in the context of IPD-MA with binary outcomes [4, 15], in particular for settings where the number of studies and the number of subjects within each study is small, study sizes are imbalanced, between-study heterogeneity is large, exposure effects are small, and the variance parameter of the random treatment effect is itself of interest. According to previous literature, these factors have all been identified as influencing model performance [6]. While several simulation studies have been published, these have mainly limited their attention to simple models with only random intercepts [13, 16]. Thus, the performance of random-effects models including both a random intercept and a random slope is less well known.
Our objective was to assess and compare via simulation studies, (i) one-stage approaches to conventional two-stage approaches (ii) the performance of different estimation procedures for GLMMs with binary outcomes, and (iii) using stratified study-effect or random study-effects in a randomized trial setting. We use our results to develop guidelines on the choice of methods for analyzing data from IPD-MA with binary outcomes and to understand explicitly the trade-offs between computational and statistical complexity.
The Methods section introduces the models we consider, the design of the simulation study and the assessment criteria. The Results section presents and discusses results for the different methods under varying conditions, and the Discussion section concludes.
We conducted a simulation study to compare various analytic approaches for analyzing data from IPD-MA with binary outcomes. Throughout, our methods assume that between-study heterogeneity exists, as is likely in practice, and so only random treatment-effect IPD meta-analysis models are considered.
Data Generation
The data generation algorithm was developed to generate two-level data sets (e.g. patients grouped into studies). We generated a binary outcome \(Y_{ij}\) and a single binary exposure \(X_{ij}\). Studies are indexed by \(j = 1, 2, \ldots, K\) and individuals within study \(j\) by \(i = 1, 2, \ldots, n_j\); thus \(Y_{ij}\) is the outcome observed for the \(i\)-th individual from the \(j\)-th study.
The dichotomous exposure variable, \(X_{ij}\), was generated from a Bernoulli distribution with probability 0.5 and recoded \(\pm 1/2\) to indicate control/treatment group [15]. To generate the binary outcome variable \(Y_{ij}\), the probability of the outcome was first calculated from the random study- and treatment-effects logistic regression model (Eq. 1), or from the stratified study-effects model (Eq. 2):
$$ logit\left({\pi}_{ij}\right)=\left({\beta}_0+{b}_{0 j}\right)+\left({\beta}_1+{b}_{1 j}\right){x}_{ij} $$
$$ logit\left({\pi}_{ij}\right)={\beta}_j+\left({\beta}_1+{b}_{1 j}\right){x}_{ij} $$
Here \(\pi_{ij}\) is the true probability of the outcome for the \(i\)-th individual from the \(j\)-th study, \(\beta_0\) denotes the mean log-odds of the outcome (study effect) and \(\beta_1\) the pooled treatment effect (log odds ratio). The random effects \((b_{0j}, b_{1j})\) were generated from a bivariate normal distribution with mean zero and variance-covariance matrix \(\Sigma = \left(\begin{smallmatrix} \sigma^2 & \rho\sigma\tau \\ \rho\sigma\tau & \tau^2 \end{smallmatrix}\right)\) for the random study-effect case. In the stratified study-effects case (i.e. Eq. (2)), the \(\beta_j\) were generated from a uniform distribution and \(b_{1j}\) was generated from a normal distribution with zero mean and variance \(\tau^2\).
A Bernoulli distribution with probability \(\pi_{ij}\) from Eq. (1) or (2) was then used to generate the binary outcome \(Y_{ij}\).
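A minimal sketch of this data-generating step for Eq. (1) is shown below; the function and parameter names are ours, not the authors', and the stratified-intercept variant of Eq. (2) is not shown.

```python
import numpy as np

def simulate_ipd_ma(K, n_per_study, beta0, beta1, tau0_sq, tau1_sq, rho, rng):
    """One simulated IPD-MA under the random study- and treatment-effects model (Eq. 1)."""
    cov = np.array([[tau0_sq, rho * np.sqrt(tau0_sq * tau1_sq)],
                    [rho * np.sqrt(tau0_sq * tau1_sq), tau1_sq]])
    b = rng.multivariate_normal([0.0, 0.0], cov, size=K)      # (b_0j, b_1j)
    rows = []
    for j in range(K):
        x = rng.binomial(1, 0.5, size=n_per_study) - 0.5      # exposure coded -1/2, +1/2
        logit = (beta0 + b[j, 0]) + (beta1 + b[j, 1]) * x
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))     # binary outcome
        rows.append(np.column_stack([np.full(n_per_study, j), x, y]))
    return np.vstack(rows)                                    # columns: study, exposure, outcome

data = simulate_ipd_ma(K=15, n_per_study=33, beta0=-0.85, beta1=0.18,
                       tau0_sq=1.0, tau1_sq=1.0, rho=0.5,
                       rng=np.random.default_rng(1))
```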
The number of studies, study size, total sample size, variances and correlation of the random effects, and average conditional probability were all varied, with levels described in Table 1. For each distinct combination of simulation parameters (n = 480), 1000 IPD-MA were generated from each of Eqs. (1) and (2), allowing us to investigate a wide range of scenarios. The heterogeneity was set at \(I^2\) = 0.01, 0.23 and 0.55, as defined by \(\tau^2/(\tau^2 + \pi^2/3)\) for a binary outcome on the odds-ratio scale [17]. These levels correspond to little or no, low and moderate heterogeneity, respectively [18].
Table 1 Summary of simulation parameters a

IPD meta-analyses generated: M = 1000
(Number of studies, number of subjects per study, total average sample size) b: \((K, n_i, N) \in \{(5,100,500), (15,33,500), (15,200,3000), (5,357,500), (15,98,500), (15,588,3000)\}\)
Fixed effect (intercept): \(\beta_0 = -0.85\)
Prevalence of the outcome: \(\pi = 30\%\)
Fixed effect (slope): \(\beta_1 = 0.18\)
Random effects distribution: bivariate normal (see text)
Random effects variances: \(\{\tau_0^2, \tau_1^2\} \in (0.05, 1, 4)\)
Correlation between random effects: \(\rho \in (0, 0.5)\)

a In a sensitivity analysis, we extended the number of studies to 50 with an average sample size of 9000 and reduced the prevalence of the outcome to 5%. The prevalence of the outcome was fixed to 30% by setting the value of the intercept \(\beta_0\) to −0.85
b The number of subjects per study is reported only for the large studies when data sets were generated with imbalanced study sizes (25% large studies with 10 times more subjects)
A sensitivity analysis was also considered to explore the performance of the different methods when just 5% of observations had a positive outcome.
Two-stage IPD methods
In the two-stage approach, each study in the IPD was analyzed separately via logistic regression
$$ {y}_i\sim Bernoulli\left({p}_i\right) $$
$$ logit\left(p_{i}\right)={\gamma}_0+{\gamma}_1{x}_i $$
The first step estimated the study-specific intercept and slope and their associated within-study covariance matrix (consisting of the variances of the intercept and slope, as well as their covariance) for each study. This step reduces the IPD to a relative treatment-effect estimate and its variance for each study; at the second stage these aggregate data (AD) are synthesized (described below).
Model 1- Bivariate meta-analysis
The AD were combined via a bivariate random-effects model that simultaneously synthesized the estimates whilst accounting for their correlation, and the within-study correlation [4]. The model assumes that the true effects follow a bivariate normal distribution and is estimated via restricted maximum likelihood with the following marginal distributions of the estimates [19]:
$$ \begin{bmatrix} \widehat{\gamma_{0j}} \\ \widehat{\gamma_{1j}} \end{bmatrix} \sim N\!\left(\begin{pmatrix} \gamma_0 \\ \gamma_1 \end{pmatrix}, \varSigma + C_j\right), \qquad \varSigma = \begin{pmatrix} \tau_0^2 & \tau_{01}^2 \\ \tau_{01}^2 & \tau_1^2 \end{pmatrix} $$
where \(\varSigma\) is the unknown between-study variance-covariance matrix of the true effects (\(\gamma_0\) and \(\gamma_1\)) and \(C_j\) (\(j = 1, \ldots, K\)) is the within-study variance-covariance matrix containing the variances of the estimates.
Model 2: Conventional DerSimonian and Laird approach
The within-study and between-study covariance terms are often not estimated, since most researchers assume that studies are independent; instead, a univariate meta-analysis of the log odds ratios is performed [20]. The marginal distribution of the pooled estimated treatment effect under this approach is easily obtained as:
$$ \widehat{\gamma_{1j}} \sim N\!\left(\gamma_1, \tau_1^2 + var\!\left(\widehat{\gamma_{1j}}\right)\right) $$
with unknown parameters \(\gamma_1\) and \(\tau_1^2\), estimated via the non-iterative inverse-variance-weighted method (method-of-moments) [21].
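The second-stage pooling of Model 2 can be written in a few lines; the sketch below assumes the per-study log odds ratios and their variances have already been obtained from the first-stage logistic regressions.

```python
import numpy as np

def dersimonian_laird(theta, var):
    """DerSimonian-Laird pooling of per-study log odds ratios.

    theta : array of study-specific log odds ratios (first-stage estimates)
    var   : array of their within-study variances
    """
    w = 1.0 / var
    theta_fixed = np.sum(w * theta) / np.sum(w)
    Q = np.sum(w * (theta - theta_fixed) ** 2)        # Cochran's Q
    df = len(theta) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)                     # method-of-moments tau^2
    w_star = 1.0 / (var + tau2)
    theta_pooled = np.sum(w_star * theta) / np.sum(w_star)
    se_pooled = np.sqrt(1.0 / np.sum(w_star))
    return theta_pooled, se_pooled, tau2
```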
One-stage IPD methods
The one-stage approach analyzes the IPD from all studies simultaneously, while accounting for clustering of subjects within studies [4]. The one-stage model is a form of GLMM. Two different specifications are considered.
Model 3- Random intercept and random slope
We estimated a GLMM with a random study effect \(u_{0j}\) and a random treatment effect \(u_{1j}\) via PQL and AGHQ, and allowed the random effects to be correlated, which implies that the between-study covariance between \(u_{0j}\) and \(u_{1j}\) is fully estimated.
$$ logit\left(p_{ij}\right) = \gamma_0 + u_{0j} + \left(\gamma_1 + u_{1j}\right) x_{ij}, \qquad \begin{bmatrix} u_{0j} \\ u_{1j} \end{bmatrix} \sim N\!\left(\begin{pmatrix} 0 \\ 0 \end{pmatrix}, \varSigma_j\right), \quad \varSigma_j = \begin{pmatrix} \tau_0^2 & \tau_{01}^2 \\ \tau_{01}^2 & \tau_1^2 \end{pmatrix} $$
Model 4-Stratified intercept one-stage
Finally, the stratified one-stage approach estimates a separate intercept for each study rather than constraining the intercepts to follow a normal or other distribution. Therefore, there is no need for the normality assumption for the study membership, hence, the between-study covariance term is no longer estimated. The model is defined as follows:
$$ logit\left({p}_{ij}\right) = {\displaystyle \sum_{k=1}^K}\left({\gamma}_k{I}_{k= j}\right)+\left({\gamma}_1+{u}_{1 j}\right){x}_{ij} $$
where \(I_{k=j}\) indicates that a separate intercept is estimated for each study \(j = 1, \ldots, K\) and \(u_{1j} \sim N(0, \tau_1^2)\). Parameters of both Models 3 and 4 were estimated via PQL and AGHQ.
Estimation Procedures and Approximations
The parameters of the one-stage models were estimated using PQL and AGHQ. For the two-stage approach, a logistic regression was first estimated for each study via maximum likelihood. The parameters of the two-stage model were estimated via method-of-moments (MOM) (Model 2) and restricted maximum likelihood (REML) (Model 1) [21–23] at the second stage.
Both likelihood-based methods (PQL and AGHQ) were implemented in SAS version 9.4 using PROC GLIMMIX with default options [24]. The number of quadrature points in AGHQ was selected automatically [25], the absolute parameter-convergence criterion was \(10^{-8}\) and the maximum number of iterations was 100.
Therefore, for each generated data set the following models were fit.
Two-stage approach (Models 1 and 2)
One-stage approach via GLMMs (Models 3 and 4) estimated with PQL.
One-stage approach via GLMMs (Models 3 and 4) estimated with AGHQ.
The performance of the estimation methods was evaluated using: (a) numerical convergence; (b) absolute bias; (c) root mean square error (RMSE); and (d) coverage probability, each for the pooled treatment effect and its between-study variability.
Numerical convergence
The convergence rate was estimated for all models fit, as the number of simulation repetitions that did converge (without returning a warning message) divided by the total attempted (M = 1000). Models that returned a warning message specifying that the estimated variance-covariance matrix was not positive definite or that the optimality condition was violated were considered not to have converged.
The Monte Carlo bias of the pooled treatment effect and its between-study heterogeneity is defined as the average of the bias in the estimates provided by each method as compared to the truth, across the 1000 IPD-MA in each scenario. The Monte Carlo estimate of the bias is computed as
$$ bias=\frac{1}{1000}\sum_{j=1}^{1000}\left(\widehat{\theta}_j-\theta\right), $$
where \( {\hat{\theta}}_j \) were the parameter estimates and θ was the true parameter of the pooled treatment effect or its between-study variance. We also reported the mean absolute bias (AB).
Mean square error
The mean square error (MSE) is a useful measure of the overall accuracy, because it penalizes an estimate for both bias and inefficiency. The Monte Carlo estimate of the MSE is:
$$ MSE\left(\widehat{\theta}\right)=\frac{1}{1000}\sum_{j=1}^{1000}\left(\widehat{\theta}_j-\theta\right)^2, $$
For each scenario, the RMSE of the pooled treatment effect and its between-study heterogeneity was reported, as this measure is on the same scale as the parameter.
Coverage probability
We estimated coverage for the pooled treatment effect and its between-study heterogeneity for the various methods. Gaussian coverage was estimated, where if \( \left|\hat{\theta}-\theta \right|\le 1.96\times S E\left(\hat{\theta}\right) \) the true value was covered, and if \( \left|\hat{\theta}-\theta \right|>1.96\times S E\left(\hat{\theta}\right) \) it was not.
We reported the median, the 25th and 75th percentiles of the AB and RMSE of the pooled treatment effect and its between-study heterogeneity but reported percentages for the numerical convergence and coverage rate.
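The three Monte Carlo performance measures described above amount to a few array operations; a sketch with illustrative argument names is:

```python
import numpy as np

def performance(theta_hat, se_hat, theta_true):
    """Bias, RMSE and Gaussian coverage across a set of simulation repetitions."""
    bias = np.mean(theta_hat - theta_true)
    rmse = np.sqrt(np.mean((theta_hat - theta_true) ** 2))
    covered = np.abs(theta_hat - theta_true) <= 1.96 * se_hat
    return bias, rmse, np.mean(covered)
```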
Tables 2, 3, 4, 5, 6 and 7 present the median and interquartile range of the AB, RMSE, coverage and convergence of the pooled treatment effect and its between-study variance, respectively, as estimated via two- and one-stage; AGHQ and PQL; random-intercept and stratified-intercept methods. We reported results for data generated with imbalances in study sizes (different sample size in all studies) for both the random-intercept and stratified-intercept data generation (Eqs. 1 and 2) with correlated random effects (ρ = 0.5), as this scenario is likely the closest to real-life.
Table 2 Performance of the one- and two-stage approaches in small data sets a with greater (top panel, \((\tau_0^2, \tau_1^2) = (4, 4)\) e) and lesser (bottom panel, \((\tau_0^2, \tau_1^2) = (1, 1)\)) heterogeneity of random effects b. (Performance measures c reported: AB, RMSE and coverage of \(\beta_1\) and of \(\tau_1^2\) f, for the two-stage d and one-stage d methods under the random study- and treatment-effect generation (Eq. 1) and under the stratified study-effect generation (Eq. 2).)

a Small data sets had 15 studies and on average 500 total subjects
b Bold text represents the "best value" of performance
c Median (25th and 75th percentiles) are reported for AB and RMSE; proportions are reported for coverage and convergence
d Two-stage method via the conventional DerSimonian and Laird approach (Model 2); one-stage via the random-intercept and random treatment-effect model with PQL (Model 3)
e \((\tau_0^2, \tau_1^2)\): (random treatment-effect variance, random study-effect variance)
f The two-stage approach did not return a confidence interval for \(\tau_1^2\), hence no coverage was estimated and the comparison was not applicable (NA) to the one-stage method
Table 3 Performance of the one- and two-stage approaches in large data sets a with greater (top panel) and lesser (bottom panel) heterogeneity of random effects

a Large data sets had 15 studies and on average 3000 total subjects
Table 4 Performance of penalized quasi-likelihood and adaptive Gaussian Hermite quadrature estimation approaches in small data sets with greater (top panel, \((\tau_0^2, \tau_1^2) = (4, 4)\)) and lesser (bottom panel) heterogeneity of random effects. (Performance measures reported: AB, RMSE and coverage of \(\beta_1\) for AGHQ d and PQL d.)

d Results are given for adaptive Gaussian Hermite quadrature (AGHQ) and penalized quasi-likelihood (PQL) for the one-stage random-intercept and random treatment-effect model (Model 3)
Table 5 Performance of penalized quasi-likelihood and adaptive Gaussian Hermite quadrature estimation approaches in large data sets with greater (top panel) and lesser (bottom panel) heterogeneity of random effects
Table 6 Performance of the stratified- and random-intercept a models in small data sets b with greater (top panel) and lesser (bottom panel) heterogeneity of random effects c. (Performance measures d reported for the stratified-intercept and random-intercept models under the random study- and treatment-effect generation (Eq. 1).)

a Results are given for penalized quasi-likelihood (PQL) for the one-stage random-intercept and random treatment-effect model (Model 3) and for the stratified-intercept and random-slope model (Model 4)
b Small data sets had 15 studies and on average 500 total subjects
c Bold text represents the "best value" of performance
d Median (25th and 75th percentiles) are reported for AB and RMSE; proportions are reported for coverage and convergence
Table 7 Performance of the stratified- and random-intercept models in large data sets b with greater (top panel) and lesser (bottom panel) heterogeneity of random effects

b Large data sets had 15 studies and on average 3000 total subjects
We did not exclude results from meta-analyses that returned a warning message (imperfect convergence). These meta-analyses were counted as non-converged, and although such models may have failed to produce proper parameter estimates, their estimates were still included in the calculation of the bias and the MSE.
One- versus Two-stage
In Tables 2 and 3, results for the absolute bias (AB) of the estimates of the pooled treatment effect \(\beta_1\) are given. Recalling that the true parameter value was 0.18, we see that the biases were identical and under 0.05 for the one-stage and the two-stage approaches in both small and large data sets. Results were very comparable when the outcome rate was reduced from 30 to 5% (Additional file 1: Table S1). For both the one- and the two-stage approaches, results depended on the true \(\tau^2\) and on the sample size.
For the larger sample size, root mean square error (RMSE) in β 1 was generally slightly larger when the one-stage method was used than when the two-stage was used. The picture was similar across all heterogeneity levels (Tables 2 and 3) and when the outcome rate was reduced (Additional file 2: Table S3).
Neither one-stage nor two-stage methods yielded coverage of β 1 close to nominal levels (Tables 2 and 3). Increasing sample size had a positive effect on percent coverage, and increasing the true heterogeneity made estimation more difficult, hence decreasing the coverage (Table 3).
Absolute bias of the between-study heterogeneity, \(\tau_1^2\), was usually slightly lower when the one-stage approach was used than when the two-stage approach was (Tables 2 and 3), particularly when the sample size was small (Table 2) and when a greater amount of heterogeneity existed in the random effects (bottom panel of Table 2). Regarding the effects of the simulation parameters, AB decreased when data were generated with equal study sizes and increased when the rate of occurrence was reduced (Additional file 3: Table S2). In these cases, the one-stage approach was most biased.
The RMSE of τ 1 2 for the one-stage estimates was mostly smaller than the RMSE of the two-stage method estimates. For increased sample size or reduction in the level of heterogeneity in the random effects, RMSE of τ 1 2 decreased at least by a factor of three across both methods. While the RMSE of τ 1 2 was inflated when the outcome rate was reduced, the one-stage method continued to outperform that of the two-stage method (Additional file 4: Table S4).
Convergence was not a problem for the two-stage approach while convergence of the one-stage method varied from 90 to 100% (Tables 2 and 3).
AGHQ versus PQL
One-stage models estimated via PQL and AGHQ methods often yielded similar AB in β 1. There was no observed difference in the AB (β 1) between the methods when the outcome rate was reduced (Additional file 1: Table S1).
RMSE of β 1 were generally greater when AGHQ was used than when PQL was used (Tables 4 and 5). Decreasing sample size, increasing the variances of the random effects or reducing the event rate (Additional file 2: Table S3) made precise estimation more difficult, hence RMSE increased.
When the true heterogeneity was large and the total sample was small (top panel of Table 4), AGHQ provided coverage for \(\beta_1\) closer to nominal levels than PQL, while both methods provided comparable coverage when the sample size was increased (Table 5). Note that across both methods, levels of coverage were higher as heterogeneity increased, and similar coverage was observed when the outcome rate was reduced (Additional file 5: Table S5).
AB in τ 1 2 , was very comparable but slightly lower when PQL was used rather than AGHQ (Tables 4 and 5). The AB decreased with increasing sample size, particularly, when PQL was used (Table 5). There was substantial bias in τ 1 2 estimates when the event rate was reduced (Additional file 3: Table S2).
On account of a better overall performance of PQL with regards to AB, RMSE of τ 1 2 was generally lower with PQL than with AGHQ (Tables 4 and 5). RMSE decreased with decreased variability in the random effects, and with increased sample size. In addition, PQL-estimates continued to yield smaller RMSE than AGHQ-estimates when the outcome rate was reduced (Additional file 4: Table S4).
We found important under-coverage of the estimates for τ 1 2 for both estimation methods, particularly when PQL was used (Tables 4 and 5). The percent coverage was usually fair for both estimation methods when sample size increased, but was poor when the outcome rate was reduced (Additional file 6: Table S6).
Convergence occurred more often when AGHQ was used than when PQL was used (Tables 4 and 5). Convergence was problematic for PQL, particularly when true heterogeneity was low and sample size was small (Bottom panel of Table 4). Comparable convergence was seen when the event rate was reduced (Additional file 5: Table S5).
Random- intercept versus stratified-intercept
The results of the simulation studies, modeling the intercept as random or fixed (random slope was always considered) via PQL estimation are summarized in the Tables 6 and 7.
The convergence was markedly low (14-97%) for the fixed intercept & random slope method (Tables 6 and 7). Convergence was only reasonable for the approach when the sample size was large and heterogeneity was small, whereas convergence was always greater than 80% for the random intercept and slope approach.
In general, AB in β 1 was similar for both stratified-intercept (random-slope only) and random intercept & slope methods. Regarding the simulation parameters, sample size and variability of the random effects, were not influential in reducing the AB in β 1.
The RMSE in β 1 was smaller when estimated via the random intercept and slope model than when only a random slope was fit (Tables 6 and 7).
Increased sample size and level of heterogeneity in the random effect was most influential in determining coverage probability.
Absolute bias in τ 1 2 was clearly comparable when fit with a random intercept & slope approach or a random slope only (Tables 6 and 7). For lower outcome rate, there was a trend towards less pronounced bias when a random slope only was fit (Additional file 3: Table S2).
We observed lower RMSE of τ 1 2 when a random intercept was fit, especially when the true heterogeneity was large (Top panels of Tables 6 and 7). Comparable results were seen when both models were fit in large sample and the true heterogeneity was small (Bottom panel of Table 7)- also when outcome rate was reduced (Additional file 4: Table S4).
We found significant under coverage of τ 1 2 when both models were fit, however, this was more severe when a random slope only model was fit (Tables 6 and 7). When the generated values of τ 0 2 or τ 1 2 were low (i.e. low variability in the random effects) and sample size was increased, we had less difficulty to estimate the coverage of τ 1 2 when both models were fit. The coverage probability continued to be an issue when the rate of occurrence was reduced (Additional file 6: Table S6).
Our simulation results indicate that when the number of subjects per study is large, the one- and two-stage methods yield very similar results. Our results also confirm the finding of previous empirical studies [5, 26, 27] that in some cases the one-stage and two-stage IPD-MA results coincide. However, we found discrepancies between these methods, with a slight preference towards the one-stage method when the number of subjects per study is small. In these situations, neither method produced accurate estimates of the between-study heterogeneity associated with the treatment effect; however, the biases were larger for the two-stage approach. Furthermore, one-stage methods produced less biased and more precise estimates of the variance parameter and had slightly higher coverage probabilities, though these differences may be due to using the REML estimate of \(\tau_1^2\) instead of the DerSimonian and Laird estimator used in the two-stage approach.
Estimation of GLMMs with binary outcomes continues to pose challenges, with many methods producing biased regression coefficients and variance components [7]. AGHQ has been shown to overestimate the variance component with few clusters or few subjects [17]. On the contrary, PQL has been found to underestimate the variance component while the standard errors are overestimated [12]. In the context of IPD-MA, we found similar absolute bias of the PQL- and AGHQ-estimated pooled treatment effect, while the PQL-estimates of the between-study variance had greater precision when study sizes were small and random effects were correlated. This somewhat confirms previous results, which found that PQL suffers from large biases but performs better in terms of MSE than AGHQ [6]. Both estimation methods experienced difficulty in attaining nominal coverage of the between-study heterogeneity associated with the treatment effect in two situations: (i) when the number of studies included was small and/or (ii) the true variances of the random effects were small. We also found that convergence was not an important problem for AGHQ when meta-analyses included studies with less than 50 individuals per study. However, convergence was poor when the prevalence of the outcome was reduced to 5% and the true heterogeneity was close to zero.
Stratification of the intercept in one-stage models avoids the need to estimate the random effect for the intercept and the correlation between the random effects. This approach may be preferable in situations not investigated in this work (e.g. when the distribution of the random effects is skewed). However, it suffered from markedly low convergence rates when fit to small data sets (15 studies and on average 500 subjects).
We used simulation studies to compare various analytic strategies to analyze data arising from IPD-MA across a wide range of data generation scenarios but made some simplifications. We only considered binary outcomes, one dichotomous treatment variable, a two-level data structure, and no confounders. Moreover, we estimated GLMMs via PQL and AGHQ, but did not compare Bayesian or other estimation methods, which might be particularly useful in sparse scenarios [28]. We have made the assumption throughout that IPD were available. Certainly, the time and cost associated with collecting IPD are considerable. However, once such data is in hand, we have addressed several open questions relating to the best way to analyze it. We should also note that methods exist for combining IPD and aggregated data [7]. Further study is needed to investigate alternative confidence intervals (or coverage probability) for the between-study heterogeneity that can be used to remedy the under-coverage of Gaussian intervals. The normality-based intervals (coverage rate) we studied greatly underperformed in most scenarios because the constructions of the confidence interval are likely to be invalid. A further simplification that limits the generalizability of this work is that it is restricted to only two-arm trials. The extension to three or more arms would require careful consideration of more complicated correlation structures in treatment effects across arms and within studies [29].
One important comparison we have not addressed is computational speed, where the two-stage method had a distinct advantage over the one-stage approach; PQL was faster than AGHQ, and the stratified-intercept model's run-time was shorter than the random-intercept model's.
As far as we know, this simulation study is the first to simultaneously generate data with normally distributed and stratified random intercepts. This study also compares approaches that include a random intercept for study membership to those that do not. Furthermore, we used simulation to systematically investigate the robustness of the approaches to variation in sample size, study number, outcome rate, and the magnitude of the correlation and variances of the random effects. As a result, our scenarios have allowed us to assess performance without being too exhaustive.
Guidelines for Best Practice
On the basis of these findings, we can make several recommendations. When the IPD-MA included many studies and the outcome rate was not too low, this work supports the conclusion of a previous study [5] that the conventional two-stage method by DerSimonian and Laird [21] is a good choice under the data conditions simulated here. Cornell et al. found that the DL method produced too-narrow confidence bounds and p values that were too small when the number of studies was small or there was high between-study heterogeneity [30]. In such cases, a modification such as the Hartung-Knapp approach may be preferable [31]. Further, while the bivariate two-stage approach is very rarely used in practice, we found that it tended to yield good overall model performance, comparable with that of the one-stage models when study sizes are small. In addition, our results also suggest that the one-stage method can be used in IPD-MA where study sizes are less than 50 subjects per study or few events were recorded in most studies (outcome rate of 5%). In these cases, the one-stage approach is more appropriate as it models the exact binomial distribution of the data and offers more flexibility in model specification over the two-stage approach [32].
If interest lies in estimation of the pooled treatment effect or of the between-study heterogeneity of the treatment effect, estimation using PQL appeared to be a better choice due to its lower bias and mean square error for the settings considered. On the other hand, computational issues such as non-convergence occurred more often with this technique than with AGHQ. It is also important to note that convergence and coverage for \(\tau^2\) were an issue in both small and large total sample sizes, and also when the level of true heterogeneity was large.
For these simulated data, the results of both the random-intercept and stratified-intercept models were not importantly different. However, under both data generations, fitting a GLMM with the random-intercept was overall less sensitive to misspecification in small sample sizes with large between-study heterogeneity than the stratified-intercept GLMM since we have observed high rates of non-convergence via the stratified-intercept model.
There are four important caveats to these recommendations. First, our simulations show greater accuracy of the pooled odds ratio as the number of studies increase. Therefore, an IPD-MA with more studies will provide more accurate estimates. Secondly, our results show that the estimation of the between-study heterogeneity of the treatment effect is highly biased regardless of the sample size and number of studies. Therefore, we should always expect that the variance parameter be estimated with some error. Thirdly, small overall samples mark the trade-off under which a meta-analyst might consistently choose precision over bias and our simulations show that PQL estimation may be preferred in these situations. Finally, large overall sample size can eliminate the lack of statistical power present in small overall samples. In such cases, comparable results are seen for one- and two-stage methods and fitting a two-stage analysis as a first step may be advisable. This could aid as a quick and efficient investigation of heterogeneity and treatment-outcome association.
To summarize, the one- and two-stage methods consistently produced similar results when the number of studies and overall sample are large. Although the PQL and AGHQ estimation procedures produced similar bias of the pooled log odds ratios, PQL-estimates had lower RMSE than the AGHQ-estimates. Both the random-intercept and stratified-intercept models yielded precise and similar estimates for the pooled log odds ratios. However, the random-intercept models gave good coverage probabilities of the between-study heterogeneity in small sample sizes and yielded overall good convergence rate as compared to the random slope only model.
AB: Absolute bias
AGHQ: Adaptive Gaussian Hermite quadrature
GLMM: Generalized linear mixed model
IPD-MA: Individual patient data meta-analysis
MOM: Method-of-moments
MSE: Mean-squared error
PQL: Penalized quasi-likelihood
REML: Restricted maximum likelihood
We have no acknowledgements.
This work was supported by an operating grant from the Canadian Institutes of Health Research. Andrea Benedetti is supported by the FRQ-S.
Data are available upon request.
DT led this project in the study design, performed simulation of data and statistical analyses, and also led the writing of the manuscripts. AB participated in the study design, guided statistical analyses and edited the final draft. RP helped draft and revised the manuscript. All authors read and approved the final manuscript.
Competing interest
Not applicable. This article reports a simulation study and does not involve human participants.
Additional file 1: Median (Interquartile range (IQR)) absolute bias (%) for treatment effect, β1 for different approach, by number of studies, total average sample size, mixture of studies sizes and degree of random effects variances - data generated from random study- and treatment effect: Eq. 1 with 5% outcome rate. (DOC 72 kb)
Additional file 2: Median (Interquartile range (IQR)) (%) root mean square error for treatment effect, β1 for different approach, by number of studies, total average sample size, mixture of studies sizes and degree of random effects variances - data generated from random study- and treatment effect: Eq. 1 with 5% outcome rate. (DOC 70 kb)
Additional file 3: Median (Interquartile range (IQR)) absolute bias (%) for random treatment-effect variance, τ2 1 for different approach, by number of studies, total average sample size, mixture of studies sizes and degree of random effects variances - data generated from random study- and treatment effect: Eq. 1 with 5% outcome rate. (DOC 73 kb)
Additional file 4: Median (Interquartile range (IQR)) (%) root mean square error for random treatment-effect variance, τ2 1 for different approach, by number of studies, total average sample size, mixture of studies sizes and degree of random effects variances - data generated from random study- and treatment effect: Eq. 1 with 5% outcome rate. (DOC 63 kb)
Additional file 5: Percent Coverage (percent convergence rate) for treatment effect, β1 for different approach, by number of studies, total average sample size, mixture of studies sizes and degree of random effects variances - data generated from random study- and treatment effect: Eq. 1 with 5% outcome rate. (DOC 62 kb)
Additional file 6: Percent Coverage for random treatment-effect variance, τ2 1 for different approach, by number of studies, total average sample size, mixture of studies sizes and degree of random effects variances - data generated from random study- and treatment effect: Eq. 1 with 5% outcome rate. (DOC 63 kb)
Department of Epidemiology, Biostatistics & Occupational Health, McGill University, Montreal, Canada
Department of Medicine, McGill University, Montreal, Canada
Respiratory Epidemiology and Clinical Research Unit, McGill University Health Centre, Purvis Hall, 1020 Pine Avenue West, Montreal, QC, H3A 1A2, Canada
1. Riley RD, Simmonds MC, Look MP. Evidence synthesis combining individual patient data and aggregate data: a systematic review identified current practice and possible methods. J Clin Epidemiol. 2007;60(5):431–9. doi:10.1016/j.jclinepi.2006.09.009.
2. Stewart LA, Parmar MK. Meta-analysis of the literature or of individual patient data: is there a difference? Lancet. 1993;341(8842):418–22.
3. Debray T, Moons K, Valkenhoef G, et al. Get real in individual participant data (IPD) meta-analysis: a review of the methodology. Res Synth Methods. 2015;6(4):293–309.
4. Debray TPA, Moons KGM, Abo-Zaid GMA, et al. Individual participant data meta-analysis for a binary outcome: one-stage or two-stage? PLoS ONE. 2013;8(4):e60650. doi:10.1371/journal.pone.0060650.
5. Stewart GB, Altman DG, Askie LM, et al. Statistical analysis of individual participant data meta-analyses: a comparison of methods and recommendations for practice. PLoS ONE. 2012;7(10):e46042. doi:10.1371/journal.pone.0046042.
6. Callens M, Croux C. Performance of likelihood-based estimation methods for multilevel binary regression models. J Stat Comput Simul. 2005;75(12):1003–17. doi:10.1080/00949650412331321070.
7. Capanu M, Gönen M, Begg CB. An assessment of estimation methods for generalized linear mixed models with binary outcomes. Stat Med. 2013;32(26):4550–66. doi:10.1002/sim.5866.
8. Rondeau V, Michiels S, Liquet B, et al. Investigating trial and treatment heterogeneity in an individual patient data meta-analysis of survival data by means of the penalized maximum likelihood approach. Stat Med. 2008;27(11):1894–910. doi:10.1002/sim.3161.
9. Simmonds MC, Higgins JP, Stewart LA, et al. Meta-analysis of individual patient data from randomized trials: a review of methods used in practice. Clinical Trials (London, England). 2005;2(3):209–17.
10. Thomas D, Radji S, Benedetti A. Systematic review of methods for individual patient data meta-analysis with binary outcomes. BMC Med Res Methodol. 2014;14:79.
11. Breslow NE, Clayton DG. Approximate inference in generalized linear mixed models. J Am Stat Assoc. 1993;88(421):9–25. doi:10.2307/2290687.
12. Breslow NE, Lin X. Bias correction in generalised linear mixed models with a single component of dispersion. Biometrika. 1995;82(1):81–91. doi:10.2307/2337629.
13. Jang W, Lim J. A numerical study of PQL estimation biases in generalized linear mixed models under heterogeneity of random effects. Commun Stat Simul Comput. 2009;38(4):692–702. doi:10.1080/03610910802627055.
14. Pinheiro JC, Bates DM. Approximations to the log-likelihood function in the nonlinear mixed-effects model. J Comput Graph Stat. 1995;4(1):12–35. doi:10.2307/1390625.
15. Turner RM, Omar RZ, Yang M, et al. A multilevel model framework for meta-analysis of clinical trials with binary outcomes. Stat Med. 2000;19(24):3417–32.
16. Benedetti A, Platt R, Atherton J. Generalized linear mixed models for binary data: are matching results from penalized quasi-likelihood and numerical integration less biased? PLoS ONE. 2014;9(1):e84601. doi:10.1371/journal.pone.0084601.
17. Moineddin R, Matheson FI, Glazier RH. A simulation study of sample size for multilevel logistic regression models. BMC Med Res Methodol. 2007;7:34. doi:10.1186/1471-2288-7-34.
18. Higgins JP, Thompson SG, Deeks JJ, et al. Measuring inconsistency in meta-analyses. BMJ (Clin Res Ed). 2003;327(7414):557–60. doi:10.1136/bmj.327.7414.557.
19. van Houwelingen HC, Arends LR, Stijnen T. Advanced methods in meta-analysis: multivariate approach and meta-regression. Stat Med. 2002;21(4):589–624. doi:10.1002/sim.1040.
20. Riley RD. Multivariate meta-analysis: the effect of ignoring within-study correlation. J R Stat Soc A Stat Soc. 2009;172(4):789–811. doi:10.1111/j.1467-985X.2008.00593.x.
21. DerSimonian R, Laird N. Meta-analysis in clinical trials. Control Clin Trials. 1986;7(3):177–88.
22. Chen H, Manning AK, Dupuis J. A method of moments estimator for random effect multivariate meta-analysis. Biometrics. 2012;68(4):1278–84. doi:10.1111/j.1541-0420.2012.01761.x.
23. Hardy RJ, Thompson SG. A likelihood approach to meta-analysis with random effects. Stat Med. 1996;15(6):619–29. doi:10.1002/(SICI)1097-0258(19960330)15:6<619::AID-SIM188>3.0.CO;2-A.
24. Littell RC, Milliken GA, Stroup WW, Wolfinger DR. SAS system for mixed models. Cary: SAS Institute, Inc.; 1996.
25. Proc Glimmix: maximum likelihood estimation based on adaptive quadrature. SAS 9.4 Help and Documentation. Cary: SAS Institute Inc.; 2002–2004.
26. Abo-Zaid G, Guo B, Deeks JJ, et al. Individual participant data meta-analyses should not ignore clustering. J Clin Epidemiol. 2013;66(8):865–73.e4. doi:10.1016/j.jclinepi.2012.12.017.
27. Mathew T, Nordström K. Comparison of one-step and two-step meta-analysis models using individual patient data. Biom J. 2010;52(2):271–87. doi:10.1002/bimj.200900143.
28. Gelman A, Hill J. Data analysis using regression and multilevel/hierarchical models. Cambridge: Cambridge University Press; 2007.
29. Lu G, Ades AE. Combination of direct and indirect evidence in mixed treatment comparisons. Stat Med. 2004;23(20):3105–24. doi:10.1002/sim.1875.
30. Cornell JE, Mulrow CD, Localio R, Stack CB, Meibohm AR, Guallar E, et al. Random-effects meta-analysis of inconsistent effects: a time for change. Ann Intern Med. 2014;160(4):267–70.
31. IntHout J, Ioannidis JPA, Borm GF. The Hartung-Knapp-Sidik-Jonkman method for random effects meta-analysis is straightforward and considerably outperforms the standard DerSimonian-Laird method. BMC Med Res Methodol. 2014;14:25.
32. Noh M, Lee Y. REML estimation for binary data in GLMMs. J Multivar Anal. 2007;98(5):896–915. doi:10.1016/j.jmva.2006.11.009.
Low-rank parity-check codes over Galois rings
Julian Renner1,
Alessandro Neri ORCID: orcid.org/0000-0002-2020-10401 &
Sven Puchinger2
Designs, Codes and Cryptography volume 89, pages 351–386 (2021)
Low-rank parity-check (LRPC) codes are rank-metric codes over finite fields, which have been proposed by Gaborit et al. (Proceedings of the workshop on coding and cryptography WCC, vol 2013, 2013) for cryptographic applications. Inspired by a recent adaption of Gabidulin codes to certain finite rings by Kamche et al. (IEEE Trans Inf Theory 65(12):7718–7735, 2019), we define and study LRPC codes over Galois rings—a wide class of finite commutative rings. We give a decoding algorithm similar to Gaborit et al.'s decoder, based on simple linear-algebraic operations. We derive an upper bound on the failure probability of the decoder, which is significantly more involved than in the case of finite fields. The bound depends only on the rank of an error, i.e., is independent of its free rank. Further, we analyze the complexity of the decoder. We obtain that there is a class of LRPC codes over a Galois ring that can decode roughly the same number of errors as a Gabidulin code with the same code parameters, but faster than the currently best decoder for Gabidulin codes. However, the price that one needs to pay is a small failure probability, which we can bound from above.
Rank-metric codes are sets of matrices whose distance is measured by the rank of their difference. Over finite fields, the codes have found various applications in network coding, cryptography, space-time coding, distributed data storage, and digital watermarking. The first rank-metric codes were introduced in [6, 9, 22] and are today called Gabidulin codes. Motivated by cryptographic applications, Gaborit et al. introduced low-rank parity-check (LRPC) codes in [1, 10]. They can be seen as the rank-metric analogs of low-density parity-check codes in the Hamming metric. LRPC codes have since had a stellar career, as they are already the core component of a second-round submission to the currently running NIST standardization process for post-quantum secure public-key cryptosystems [17]. They are suitable in this scenario due to their weak algebraic structure, which prevents efficient structural attacks. Despite this weak structure, the codes have an efficient decoding algorithm, which in some cases can decode up to the same decoding radius as a Gabidulin code with the same parameters, or even beyond [1]. A drawback is that for random errors of a given rank weight, decoding fails with a small probability. However, this failure probability can be upper-bounded [1, 10] and decreases exponentially in the difference between the maximal decoding radius and the error rank. The codes have also found applications in powerline communications [29] and network coding [19].
Codes over finite rings, in particular the ring of integers modulo m, have been studied since the 1970s [3, 4, 24]. They have, for instance, been used to unify the description of good non-linear binary codes in the Hamming metric, using a connection via the Gray mapping from linear codes over \(\mathbb {Z}_4\) with high minimum Lee distance [12]. This Gray mapping was generalized to arbitrary moduli m of \(\mathbb {Z}_m\) in [5]. Recently, there has been an increased interest in rank-metric codes over finite rings due to the following applications. Network coding over certain finite rings was intensively studied in [7, 11], motivated by works on nested-lattice-based network coding [8, 18, 26, 28] which show that network coding over finite rings may result in more efficient physical-layer network coding schemes. Kamche et al. [14] showed how lifted rank-metric codes over finite rings can be used for error correction in network coding. The result uses a similar approach to [23] to transform the channel output into a rank-metric error-erasure decoding problem. Another application of rank-metric codes over finite rings is space-time codes. It was first shown in [15] how to construct space-time codes with optimal rate-diversity tradeoff via a rank-preserving mapping from rank-metric codes over Galois rings. This result was generalized to arbitrary finite principal ideal rings in [14]. The use of finite rings instead of finite fields has advantages since the rank-preserving mapping can be chosen more flexibly. Kamche et al. also defined and extensively studied Gabidulin codes over finite principal ideal rings. In particular, they proposed a Welch–Berlekamp-like decoder for Gabidulin codes and a Gröbner-basis-based decoder for interleaved Gabidulin codes [14].
Motivated by these recent developments on rank-metric codes over rings, in this paper we define and analyze LRPC codes over Galois rings. Essentially, we show that Gaborit et al.'s construction and decoder also work over these rings, with only a few minor technical modifications. The core difficulty of proving this result is the significantly more involved failure probability analysis, which stems from the weaker algebraic structure of rings compared to fields: the algorithm and proof are based on dealing with modules over Galois rings instead of vector spaces over finite fields, which behave fundamentally differently since Galois rings are usually not integral domains. We also provide a thorough complexity analysis. The results can be summarized as follows.
Main results
Let p be a prime and r, s be positive integers. A Galois ring \({R}\) of cardinality \(p^{rs}\) is a finite Galois extension of degree s of the ring \(\mathbb {Z}_{p^r}\) of integers modulo the prime power \(p^r\). As modules over \({R}\) are not always free (i.e., they need not have a basis), matrices over \({R}\) have both a rank and a free rank, where the free rank is always less than or equal to the rank. We will introduce these and other notions formally in Sect. 2.
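To give a feeling for the rank/free-rank distinction, the following sketch computes both quantities for a matrix viewed over \(\mathbb{Z}_{p^r}\) (the case s = 1); it goes through the invariant factors of an integer lift, obtained here from determinantal divisors, and is meant as an illustration rather than an efficient routine.

```python
from itertools import combinations
from math import gcd
from sympy import Matrix

def invariant_factors(A):
    """Invariant factors of an integer matrix A via determinantal divisors."""
    m, n = A.shape
    d = [1]                                   # d_0 = 1
    for k in range(1, min(m, n) + 1):
        d_k = 0
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                d_k = gcd(d_k, abs(int(A.extract(list(rows), list(cols)).det())))
        if d_k == 0:
            break
        d.append(d_k)
    return [d[k] // d[k - 1] for k in range(1, len(d))]

def rank_and_free_rank(A, p, r):
    """Rank and free rank of A viewed over Z_{p^r} (= GR(p^r, 1))."""
    rank = free_rank = 0
    for s_k in invariant_factors(A):
        e = 0
        while s_k % p == 0:                   # p-adic valuation of the invariant factor
            s_k //= p
            e += 1
        if e < r:                             # nonzero modulo p^r: contributes to the rank
            rank += 1
            if e == 0:                        # unit modulo p^r: contributes to the free rank
                free_rank += 1
    return rank, free_rank

# Over Z_4 (p = 2, r = 2): the row [0, 2] is nonzero but not free.
print(rank_and_free_rank(Matrix([[1, 0], [0, 2]]), p=2, r=2))   # -> (2, 1)
```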
In Sect. 3, we construct a family of rank-metric codes and a corresponding family of decoders with the following properties: Let \(m,n,k,\lambda \) be positive integers such that \(\lambda \) is greater than the smallest divisor of m and k fulfills \(k \le \tfrac{\lambda -1}{\lambda } n\). The constructed codes are subsets \(\mathcal {C}\subseteq {R}^{m \times n}\) of cardinality \(|\mathcal {C}| = |{R}|^{mk}\). Seen as a set of vectors over an extension ring of \({R}\), the code is linear w.r.t. this extension ring. We exploit this linearity in the decoding algorithm.
Furthermore, let t be a positive integer with \(t < \min \!\left\{ \tfrac{m}{\lambda (\lambda +1)/2}, \tfrac{n-k+1}{\lambda }\right\} \). Let \(\varvec{C}\in \mathcal {C}\) be a (fixed) codeword and let \(\varvec{E}\in {R}^{m \times n}\) be chosen uniformly at random from all matrices of rank t (and arbitrary free rank). Then, we show in Sect. 5 that the proposed decoder in Sect. 4 recovers the codeword \(\varvec{C}\) with probability at least
$$\begin{aligned} 1-4 p^{s[\lambda t-(n-k+1)]} - 4 t p^{s\left( t \frac{\lambda (\lambda +1)}{2}-m\right) }. \end{aligned}$$
Hence, depending on the relation of \(p^s\) and t, the success probability is positive for
$$\begin{aligned} t \lessapprox t_\mathrm {max} := \left\lceil \min \!\left\{ \tfrac{m}{\lambda (\lambda +1)/2}, \tfrac{n-k+1}{\lambda }\right\} \right\rceil -1. \end{aligned}$$
and converges exponentially fast to 1 in the difference \(t_\mathrm {max}-t\). Note that for \(\lambda =2\) and \(m>\tfrac{3}{2}(n-k+1)\), we have \(t_\mathrm {max} = \lfloor \tfrac{n-k}{2}\rfloor \).
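A quick helper for the maximal radius \(t_\mathrm{max}\) from the expression above (only the simplified bound (1) is reflected here; the exact failure bound derived in Sect. 5 is tighter). With the parameters of the example in the next paragraph it returns 30, matching \(\lfloor (n-k)/2\rfloor\).

```python
from math import ceil

def t_max(m, n, k, lam):
    """Maximal decoding radius of the LRPC decoder per the simplified bound."""
    return ceil(min(m / (lam * (lam + 1) / 2), (n - k + 1) / lam)) - 1

print(t_max(m=101, n=101, k=40, lam=2))   # -> 30
```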
The decoder has complexity \(\tilde{O}(\lambda ^2 n^2 m)\) operations in \({R}\) (see Sect. 6). In Sect. 7, we present simulation results.
Consider the case \(p=2\), \(s=4\), \(r=2\), \(m=n=101\), \(k=40\), and \(\lambda =2\). Then, the decoder in Sect. 4 can correct up to \(t_\mathrm {max} = \lfloor \tfrac{n-k}{2}\rfloor = 30\) errors with success probability at least \(1-2^{-6}\). For \(t=24\) errors, the success probability is already \(\approx 1-2^{-46}\) and for \(t=18\), it is \(\approx 1-2^{-102}\). A Gabidulin code as in [14], over the same ring and the same parameters, can correct any error of rank up to 30 (i.e., the same maximal radius). However, the currently fastest decoder for Gabidulin codes over rings [14] has a larger complexity than the LRPC decoder in Sect. 4.
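As a quick sanity check of the decoding radius in this example, the expression for \(t_\mathrm {max}\) can be evaluated directly; the following small Python snippet (ours, for illustration only) computes it for the parameters above and compares it with the simplified form for \(\lambda = 2\).

```python
# Evaluate t_max for m = n = 101, k = 40, lambda = 2 (illustration only).
from math import ceil, floor

m, n, k, lam = 101, 101, 40, 2
t_max = ceil(min(m / (lam * (lam + 1) / 2), (n - k + 1) / lam)) - 1
print(t_max, floor((n - k) / 2))   # 30 30
```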
The results of this paper were partly presented at the IEEE International Symposium on Information Theory 2020 [21]. Compared to this conference version, we generalize the results in two ways: first, we consider LRPC codes over the more general class of Galois rings instead of the integers modulo a prime power. This is a natural generalization since Galois rings share with finite fields many of the properties needed for dealing with the rank metric. Indeed, they constitute the common generalization of finite fields and the rings of integers modulo a prime power. Second, the conference version only derives a bound on the failure probability for errors whose free rank equals their rank. For some applications, this is no restriction since the error can be designed, but for most communication channels, we cannot influence the error and also need to correct errors of arbitrary rank profile. Hence, we provide a complete analysis of the failure probability for all types of errors.
Let A be any commutative ring. We denote modules over A by calligraphic letters, vectors by bold lowercase letters, and matrices by bold capital letters. We denote the set of \(m\times n\) matrices over the ring A by \(A^{m\times n}\) and the set of row vectors of length n over A by \(A^{n} = A^{1\times n}\). Rows and columns of \(m\times n\) matrices are indexed by \(1,\ldots ,m\) and \(1,\ldots ,n\), respectively, where \(X_{i,j}\) denotes the entry in the i-th row and j-th column of the matrix \(\varvec{X}\). Moreover, for an element a in a ring A, we denote by \({{\,\mathrm{Ann}\,}}(a)\) the ideal \({{\,\mathrm{Ann}\,}}(a) = \{b \in A \mid ab = 0\}\).
Galois rings
A Galois ring \({R}:={{\,\mathrm{GR}\,}}(p^r,s)\) is a finite local commutative ring of characteristic \(p^r\) and cardinality \(p^{rs}\), which is isomorphic to \(\mathbb {Z}[z]/(p^r,h(z))\), where h(z) is a polynomial of degree s that is irreducible modulo p. Let \(\mathfrak {m}\) be the unique maximal ideal of \({R}\). It is also well-known that \({R}\) is a finite chain ring and all its ideals are powers of \(\mathfrak {m}\), where r is the smallest positive integer for which \(\mathfrak {m}^r = \{0\}\). Since Galois rings are principal ideal rings, \(\mathfrak {m}\) is generated by a single ring element. We will call such a generator \(g_\mathfrak {m}\) (which is unique up to invertible multiples). Note that in a Galois ring this element can always be chosen to be p. Moreover, \({R}/\mathfrak {m}\) is isomorphic to the finite field \(\mathbb {F}_{p^s}\).
In this setting, it is well-known that there exists a unique cyclic subgroup of \({R}^*\) of order \(p^s-1\), which is generated by an element \(\eta \). The set \(T_s := \{0\}\cup \langle \eta \rangle \) is known as the Teichmüller set of \({R}\). Hence, every element \(a\in {R}\) has a unique representation as
$$\begin{aligned} a=\sum _{i=0}^{r-1} g_\mathfrak {m}^ia_i, \quad a_i\in T_s. \end{aligned}$$
We will refer to this as the Teichmüller representation of a. For Galois rings, this representation coincides with the p-adic expansion. If, in addition, one chooses the polynomial h(z) to be a Hensel lift of a primitive polynomial in \(\mathbb {F}_p[z]\) of degree s, then the element \(\eta \) can be taken to be one of the roots of h(z). Here, by a Hensel lift of a primitive polynomial \(\bar{h}(z)\in \mathbb {F}_p[z]\) we mean that \(h(z)\in \mathbb {Z}_{p^r}[z]\) is such that the canonical projection of h(z) over \(\mathbb {F}_p[z]\) is \(\bar{h}(z)\) and h(z) divides \(z^{p^s-1}-1\) in \(\mathbb {Z}_{p^r}[z]\). The interested reader is referred to [2, 16] for a deeper understanding of Galois rings.
It is easy to see that the number of units in \({R}\) is given by
$$\begin{aligned} |{R}^*|&= |{R}\setminus \mathfrak {m}| = |{R}| - |\mathfrak {m}| = p^{sr} -p^{s(r-1)} = |{R}|\big (1-p^{-s}\big ). \end{aligned}$$
Example 2
Let \(p=2\), \(s=1\), \(r=3\), and \({R}= \{0,1,\ldots ,7\}\). We have that \(\mathfrak {m}= \{0,2,4,6\}\) and \({R}/\mathfrak {m}= \{0,1\} = \mathbb {F}_2\). Thus, \(g_\mathfrak {m}= 2\). The set \(\{1\}\) is the unique cyclic subgroup of \({R}^*=\{1,3,5,7\}\) of order \(p^s-1 = 1\), which is generated by \(\eta =1\), and \(T_s = \{0,1\}\). Then, the Teichmüller representation of \(a=5\) is given by \( a = 1\cdot g_\mathfrak {m}^0 + 0 \cdot g_\mathfrak {m}^1 + 1 \cdot g_\mathfrak {m}^2\).
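To make this representation concrete, the following small sketch (ours, not part of the paper) computes Teichmüller digits in the special case \(s=1\), i.e., \({R}=\mathbb {Z}_{p^r}\): the digit of a residue class is obtained by raising a representative to the power \(p^{r-1}\), and for \(p=2\) the digits are 0 and 1, so the representation coincides with the binary expansion used in Example 2.

```python
# Minimal sketch for s = 1 (R = Z_{p^r}); assumptions: p prime, r >= 1.

def teichmuller_digit(a, p, r):
    """Teichmüller lift in Z_{p^r} of the residue of a modulo p."""
    a %= p**r
    return 0 if a % p == 0 else pow(a, p**(r - 1), p**r)

def teichmuller_digits(a, p, r):
    """Digits z_0, ..., z_{r-1} in T_1 with a = sum_i z_i * p^i  (mod p^r)."""
    digits, rem, q = [], a % p**r, p**r
    for _ in range(r):
        z = teichmuller_digit(rem, p, r)
        digits.append(z)
        rem = ((rem - z) % q) // p   # peel off the lowest digit
    return digits

print(teichmuller_digits(5, 2, 3))   # [1, 0, 1], i.e. 5 = 1 + 0*2 + 1*4
```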
Example 3
Let \(p=2\), \(s=3\), \(r=3\), and let us construct \({R}= {{\,\mathrm{GR}\,}}(8,3)\). Consider the ring \(\mathbb {Z}_{8}\) and \(h(z):=z^3+6z^2+5z+7\in \mathbb {Z}_{8}[z]\). The canonical projection of the polynomial h(z) over \(\mathbb {F}_2[z]\) is \(z^3+z+1\), which is primitive, and hence irreducible, in \(\mathbb {F}_2[z]\). Thus, we have
$$\begin{aligned} {R}\cong \mathbb {Z}_8[z]/(h(z)). \end{aligned}$$
Clearly, \(\mathfrak {m}=(2){R}\) and we can choose \(g_\mathfrak {m}=2\). Moreover, if \(\eta \) is a root of h(z), then we also have \({R}\cong \mathbb {Z}_8[\eta ]\), and every element can be represented as \(a_0+a_1\eta +a_2\eta ^2\), for \(a_0,a_1,a_2\in \mathbb {Z}_{8}\). On the other hand, the polynomial h(z) divides \(z^7-1\) in \(\mathbb {Z}_8[z]\) and therefore it is a Hensel lift of \(z^3+z+1\). This implies that \(\eta \) has order 7, and the Teichmüller set is \(T_3=\{0,\eta , \eta ^2,\ldots ,\eta ^7=1\}\). If we take the element \(a=5+3\eta ^2\), then it can be verified that its Teichmüller representation is \(a=\eta ^6+\eta ^4g_\mathfrak {m}+\eta ^5g_\mathfrak {m}^2=\eta ^6+\eta ^4\cdot 2+\eta ^5\cdot 4\).
Extensions of Galois rings
Let \(h(z) \in {R}[z]\) be a polynomial of degree m such that the leading coefficient of h(z) is a unit and h(z) is irreducible over the finite field \({R}/\mathfrak {m}\). Then, the Galois ring \({R}[z]/(h(z))\) is denoted by \({S}\). We have that \({S}\) is the Galois ring \({{\,\mathrm{GR}\,}}(p^r,sm)\), with maximal ideal \(\mathfrak {M}= \mathfrak {m}{S}\). Moreover, it is known that subrings of Galois rings are Galois rings and that for every \(\ell \) dividing m there exists a unique subring of \({S}\) which is a Galois extension of degree \(\ell \) of \({R}\). These are all subrings of \({S}\) that contain \({R}\). In particular, there exists a unique copy of \({R}\) in \({S}\), and we can therefore consider (with a very small abuse of notation) \({R}\subseteq {S}\). Moreover, \(g_\mathfrak {m}\) is also a generator of \(\mathfrak {M}\) in \({S}\).
As for \({R}\), \({S}\) also contains a unique cyclic subgroup of order \(p^{sm}-1\), and we define the Teichmüller set \(T_{sm}\) as the union of this subgroup and the zero element. Hence, every \(a\in {S}\) has a unique representation as
$$\begin{aligned} a=\sum _{i=0}^{r-1} g_\mathfrak {m}^ia_i, \quad a_i\in T_{sm}. \end{aligned}$$
The number of units in \({S}\) is given by
$$\begin{aligned} |{S}^*|&= |{S}\setminus \mathfrak {M}| = |{S}| - |\mathfrak {M}| = p^{srm} - |\mathfrak {m}|^m = p^{srm} - \big (p^{s(r-1)}\big )^m \\&= p^{srm}\big (1-p^{-sm}\big ) = |{S}|\big (1-p^{-sm}\big ). \end{aligned}$$
From now on and for the rest of the paper, we will always denote by \({R}\) the Galois ring \({{\,\mathrm{GR}\,}}(p^r,s)\), and by \({S}\) the Galois ring \({{\,\mathrm{GR}\,}}(p^r,sm)\).
Smith normal form
The Smith normal form is well-defined for both \({R}\) and \({S}\), i.e., for \(\varvec{A}\in {R}^{m \times n}\), there are invertible matrices \(\varvec{S}\in {R}^{m \times m}\) and \(\varvec{T}\in {R}^{n \times n}\) such that
$$\begin{aligned} \varvec{D}= \varvec{S}\varvec{A}\varvec{T}\in {R}^{m \times n} \end{aligned}$$
is a diagonal matrix with diagonal entries \(d_1,\ldots ,d_{\min \{n,m\}}\) with
$$\begin{aligned} d_j \in \mathfrak {m}^{i_j} \setminus \mathfrak {m}^{i_j+1}, \end{aligned}$$
where \(0 \le i_1 \le i_2 \le \cdots \le i_{\min \{n,m\}} \le r\). The same holds for matrices over \({S}\), where we replace \(\mathfrak {m}\) by \(\mathfrak {M}\) (note that \(\mathfrak {M}^r=\{0\}\) and \(\mathfrak {M}^{r-1}\ne \{0\}\) for the same r). The rank and the free rank of \(\varvec{A}\) (w.r.t. a ring \(A \in \{{S},{R}\}\)) are defined by \(\mathrm {rk}(\varvec{A}) := |\{ i\in \{1,\ldots ,\min \{m,n\}\}: \varvec{D}_{i,i} \not = 0 \}|\) and \(\mathrm {frk}(\varvec{A}) := |\{ i \in \{1,\ldots ,\min \{m,n\}\} :\varvec{D}_{i,i} \text { is a unit} \}|\), respectively, where \(\varvec{D}\) is the diagonal matrix of the Smith normal form w.r.t. the ring A.
Modules over finite chain rings
The ring \({S}\) is a free module over \({R}\) of rank m. Hence, elements of \({S}\) can be treated as vectors in \({R}^m\) and linear independence, \({R}\)-subspaces of \({S}\) and the \({R}\)-linear span of elements are well-defined. Let \(\varvec{\gamma }=[\gamma _1,\ldots ,\gamma _m]\) be an ordered basis of \({S}\) over \({R}\). By utilizing the module space isomorphism \({S}\cong {R}^m\), we can relate each vector \(\varvec{a}\in {S}^{n}\) to a matrix \(\varvec{A}\in {R}^{m\times n}\) according to \({{\,\mathrm{ext}\,}}_{\gamma } : {S}^{n} \rightarrow {R}^{m\times n}, \varvec{a}\mapsto \varvec{A}\), where \(a_j = \sum _{i=1}^{m} A_{i,j} \gamma _{i}\), \(j \in \{1,\ldots ,n\}\). The (free) rank norm \(({{\,\mathrm{f}\,}})\mathrm {rk}_{{R}}(\varvec{a})\) is the (free) rank of the matrix representation \(\varvec{A}\), i.e., \(\mathrm {rk}_{{R}}(\varvec{a}) := \mathrm {rk}(\varvec{A})\) and \(\mathrm {frk}_{{R}}(\varvec{a}) := \mathrm {frk}(\varvec{A})\), respectively.
Let \(p=2\), \(s=1\), \(r=3\) as in Example 2, \(h(z) = z^3+z+1\) and
$$\begin{aligned} \varvec{a}= \begin{bmatrix} 2z^2 + 2z + 5,&4z^2 + z + 6,&2z^2 + z \end{bmatrix}. \end{aligned}$$
Using the polynomial basis \(\varvec{\gamma }=[1,z,z^2]\), the matrix representation of \(\varvec{a}\) is
$$\begin{aligned} \varvec{A}= \begin{bmatrix} 5 &{} 6 &{} 0\\ 2 &{} 1 &{} 1\\ 2 &{} 4 &{} 2 \end{bmatrix} \end{aligned}$$
and the Smith normal form of \(\varvec{A}\) is given by
$$\begin{aligned} \varvec{D}= \begin{bmatrix} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 2 \\ \end{bmatrix}. \end{aligned}$$
It can be observed that \(d_1, d_2 \in \mathfrak {m}^0 \setminus \mathfrak {m}^1 = \{1,3,5,7\}\) and \(d_3 \in \mathfrak {m}^1 \setminus \mathfrak {m}^2 = \{2,6\}\) and thus \(\mathrm {rk}(\varvec{A}) = \mathrm {rk}(\varvec{D}) = 3\) and \(\mathrm {frk}(\varvec{A})= \mathrm {frk}(\varvec{D})= 2\). It follows that \(\mathrm {rk}_{{R}}(\varvec{a}) = 3\) and \(\mathrm {frk}_{{R}}(\varvec{a}) = 2\).
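The rank and free rank in this example can be reproduced with a short Smith-normal-form routine. The following sketch (ours, for illustration only; restricted to \(s=1\), so that entries are integers modulo \(p^r\)) always pivots on an entry of minimal valuation, which is possible because \(\mathbb {Z}_{p^r}\) is a local principal ideal ring.

```python
# Smith normal form diagonal over Z_{p^r} (assumption: s = 1).

def valuation(a, p, r):
    """p-adic valuation of a in Z_{p^r}; by convention v(0) = r."""
    a %= p**r
    if a == 0:
        return r
    v = 0
    while a % p == 0:
        a //= p
        v += 1
    return v

def smith_diagonal(A, p, r):
    """Return the diagonal entries of a Smith normal form of A over Z_{p^r}."""
    q = p**r
    A = [[a % q for a in row] for row in A]
    m, n = len(A), len(A[0])
    diag = []
    for k in range(min(m, n)):
        # pivot: entry of minimal valuation in the remaining submatrix
        pi, pj, pv = k, k, r + 1
        for i in range(k, m):
            for j in range(k, n):
                v = valuation(A[i][j], p, r)
                if v < pv:
                    pi, pj, pv = i, j, v
        if pv >= r:                         # remaining submatrix is zero
            diag.extend([0] * (min(m, n) - k))
            break
        A[k], A[pi] = A[pi], A[k]           # move pivot to position (k, k)
        for row in A:
            row[k], row[pj] = row[pj], row[k]
        a = A[k][k]
        u_inv = pow(a // p**pv, -1, q)      # inverse of the unit part of the pivot
        for i in range(k + 1, m):           # clear the k-th column
            f = (A[i][k] // p**pv) * u_inv % q
            A[i] = [(A[i][j] - f * A[k][j]) % q for j in range(n)]
        for j in range(k + 1, n):           # clear the k-th row
            f = (A[k][j] // p**pv) * u_inv % q
            for i in range(m):
                A[i][j] = (A[i][j] - f * A[i][k]) % q
        diag.append(a)
    return diag

# the matrix of the example above, over Z_8 (p = 2, r = 3)
A = [[5, 6, 0], [2, 1, 1], [2, 4, 2]]
d = smith_diagonal(A, 2, 3)
rank = sum(1 for x in d if x != 0)
free_rank = sum(1 for x in d if valuation(x, 2, 3) == 0)
print(d, rank, free_rank)   # diagonal with valuations 0, 0, 1 -> rank 3, free rank 2
```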
Let \(a = \sum _{i=1}^{m} a_i \gamma _i \in {S}\), where \(a_i \in {R}\). The following statements are equivalent (cf. [14, Lemma 2.4]):
a is a unit in \({S}\).
At least one \(a_i\) is a unit in \({R}\).
\(\{a\}\) is linearly independent over \({R}\).
The \({R}\)-linear module that is spanned by \(v_1,\ldots ,v_{\ell } \in {S}\) is denoted by \(\langle v_1,\ldots ,v_\ell \rangle _{{R}} := \big \{\sum _{i=1}^{\ell } a_i v_i : a_i \in {R}\big \}\). The \({R}\)-linear module that is spanned by the entries of a vector \(\varvec{a}\in {S}^{n}\) is called the support of \(\varvec{a}\), i.e., \(\mathrm {supp}_\mathrm {R}(\varvec{a}) := \langle a_1,\ldots ,a_n \rangle _{{R}}\). Further, \({\mathcal {A}}\cdot {\mathcal {B}}\) denotes the product module of two submodules \({\mathcal {A}}\) and \({\mathcal {B}}\) of \({S}\), i.e., \({\mathcal {A}}\cdot {\mathcal {B}}:= \langle a \cdot b \, : \, a \in {\mathcal {A}}, \, b \in {\mathcal {B}}\rangle \).
Valuation in Galois rings
We define the valuation of \(a \in {R}\setminus \{0\}\) as the unique integer \(v(a) \in \{0,\ldots ,r-1\}\) such that
$$\begin{aligned} a \in \mathfrak {m}^{v(a)} \setminus \mathfrak {m}^{v(a)+1}, \end{aligned}$$
and set \(v(0) := r\). In the same way, the valuation of \(b \in {S}\setminus \{0\}\) is defined as the unique integer \(v(b) \in \{0,\ldots ,r-1\}\) such that
$$\begin{aligned} b \in \mathfrak {M}^{v(b)} \setminus \mathfrak {M}^{v(b)+1}, \end{aligned}$$
and \(v(0) = r\).
Let \(\{\gamma _1,\ldots , \gamma _m\}\) be a basis of \({S}\) as \({R}\)-module. It is easy to see that for \(a = \sum _{i=1}^{m} a_i \gamma _i \in {S}\setminus \{0\}\), where \(a_i \in {R}\) (not all 0), we have
$$\begin{aligned} v(a) = \min _{i=1,\ldots ,m}\{v(a_i)\}. \end{aligned}$$
Let \(p=2\), \(s=1\), \(r=3\) as in Example 2, \(h(z) = z^3+z+1\) and let \(a=1\), \(b=2\), \(c=4 \in {R}\). Since \(a \in \mathfrak {m}^0 \setminus \mathfrak {m}^1=\{1,3,5,7\}\), \(b \in \mathfrak {m}^1 \setminus \mathfrak {m}^2 = \{2,6\}\), and \(c \in \mathfrak {m}^2 \setminus \mathfrak {m}^3=\{4\}\), one obtains \(v(a) =0\), \(v(b) = 1\) and \(v(c)=2\).
Furthermore, let \(d=2z^2+1\), \(e=4z^2+2z+2\), \(f=4z^2+4\), where \(d\in \mathfrak {M}^0\setminus \mathfrak {M}^1\), \(e\in \mathfrak {M}^1\setminus \mathfrak {M}^2\) and \(f\in \mathfrak {M}^2\setminus \mathfrak {M}^3\). It follows that \(v(d)=0\), \(v(e)=1\) and \(v(f)=2\). Since an element is a unit if and only if its valuation is equal to 0, only the elements a and d are units.
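The valuations in this example are easily checked by computer: over \(\mathbb {Z}_{p^r}\) the valuation is the number of factors p dividing an element, and for elements of \({S}\) one takes the minimum over the coordinates with respect to an \({R}\)-basis, as in the identity above. The following lines (ours, for illustration only, assuming \(s=1\)) reproduce the values of the example.

```python
# Valuations over Z_8 and over S = Z_8[z]/(z^3 + z + 1), via coordinates.

def valuation(a, p, r):
    a %= p**r
    if a == 0:
        return r
    v = 0
    while a % p == 0:
        a //= p
        v += 1
    return v

def valuation_ext(coords, p, r):
    """Valuation of an element of S given by its coordinates over an R-basis."""
    return min(valuation(c, p, r) for c in coords)

print([valuation(x, 2, 3) for x in (1, 2, 4)])                                # [0, 1, 2]
# d, e, f of the example, as coordinates w.r.t. the basis [1, z, z^2]
print([valuation_ext(c, 2, 3) for c in ([1, 0, 2], [2, 2, 4], [4, 0, 4])])    # [0, 1, 2]
```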
Rank profile of a module and minimal generating sets
Let \({\mathcal {M}}\) be an \({R}\)-submodule of \({S}\) and \(d_1,\ldots ,d_n\) be diagonal entries of a Smith normal form of a matrix whose row space is \({\mathcal {M}}\). Define the rank profile of \({\mathcal {M}}\) to be the polynomial
$$\begin{aligned} \phi ^{{\mathcal {M}}}(x) := \sum _{i=0}^{r-1} \phi _i^{{\mathcal {M}}} x^i \in \mathbb {Z}[x]/(x^r), \end{aligned}$$
where
$$\begin{aligned} \phi ^{{\mathcal {M}}}_i := \left| \left\{ j : v(d_j)=i\right\} \right| . \end{aligned}$$
Note that \(\phi ^{{\mathcal {M}}}(x)\) is independent of the chosen matrix and Smith normal form since the diagonal entries \(d_i\) are unique up to multiplication by a unit. We can easily read the free rank and rank from the rank profile
$$\begin{aligned} \mathrm {frk}_{{R}} {\mathcal {M}}&= \phi ^{{\mathcal {M}}}_0 = \phi ^{{\mathcal {M}}}(0), \\ \mathrm {rk}_{{R}} {\mathcal {M}}&= \sum _{i=0}^{r-1} \phi ^{{\mathcal {M}}}_i = \phi ^{{\mathcal {M}}}(1). \end{aligned}$$
Consider the ring \({R}={{\,\mathrm{GR}\,}}(8,3)\) as defined in Example 3, where as generator of \(\mathfrak {m}\) we take \(g_\mathfrak {m}=2\). Take a module \({\mathcal {M}}\) whose diagonal matrix in the Smith normal form is
$$\begin{aligned} \begin{bmatrix} 1 &{} &{} &{} &{}\\ &{} 1 &{} &{} &{} \\ &{} &{} 2 &{} &{} \\ &{} &{} &{} 4 &{}\\ &{} &{} &{} &{} 0 \end{bmatrix}. \end{aligned}$$
Then, the rank profile of \({\mathcal {M}}\) is
$$\begin{aligned} \phi ^{{\mathcal {M}}}(x) = 2+x+x^2. \end{aligned}$$
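The rank profile can be read off directly from the valuations of the diagonal entries of a Smith normal form. The following sketch (ours, for illustration only; the diagonal entries of this example happen to lie in \(\mathbb {Z}_{p^r}\), so the integer p-adic valuation suffices) reproduces the profile \(2+x+x^2\).

```python
# Rank profile from a Smith-normal-form diagonal (entries assumed in Z_{p^r}).

def valuation(a, p, r):
    a %= p**r
    if a == 0:
        return r
    v = 0
    while a % p == 0:
        a //= p
        v += 1
    return v

def rank_profile(diag, p, r):
    """Coefficients [phi_0, ..., phi_{r-1}] of the rank profile polynomial."""
    phi = [0] * r
    for d in diag:
        v = valuation(d, p, r)
        if v < r:            # zero diagonal entries (v = r) do not contribute
            phi[v] += 1
    return phi

print(rank_profile([1, 1, 2, 4, 0], 2, 3))   # [2, 1, 1], i.e. 2 + x + x^2
```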
On \(\mathbb {Z}[x]/(x^r)\), we define the following partial order \(\preceq \).
Let \(a(x),b(x) \in \mathbb {Z}[x]/(x^r)\). We say that \(a(x) \preceq b(x)\) if for every \(i\in \{0,\ldots , r-1\}\) we have
$$\begin{aligned} \sum _{j=0}^i a_j \le \sum _{j=0}^i b_j. \end{aligned}$$
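In computational terms, checking \(a(x) \preceq b(x)\) amounts to comparing truncated coefficient sums; a minimal sketch (ours, for illustration only) is given below.

```python
# Partial order on rank profiles, given as coefficient lists of length r.

def preceq(a, b, r):
    """True iff every truncated coefficient sum of a is at most that of b."""
    return all(sum(a[:i + 1]) <= sum(b[:i + 1]) for i in range(r))

# 2 + x + x^2  is dominated by  2 + 2x + x^2, but not conversely (r = 3)
print(preceq([2, 1, 1], [2, 2, 1], 3))   # True
print(preceq([2, 2, 1], [2, 1, 1], 3))   # False
```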
The partial order \(\preceq \) on rank profiles is compatible with the containment of submodules. That is, if \({\mathcal {M}}_1\subseteq {\mathcal {M}}_2\), then \(\phi ^{{\mathcal {M}}_1} \preceq \phi ^{{\mathcal {M}}_2}\). Clearly, the converse implication is not true in general.
For \(\varvec{D}\) and \(\varvec{T}\) as in the Smith normal form of a matrix over \({R}\), observe that the nonzero rows of the matrix \(\varvec{D}\varvec{T}^{-1}\) produce a set of generators for the \({R}\)-module generated by the rows of \(\varvec{A}\), which is minimal and of the form
$$\begin{aligned} \varGamma =\{g_\mathfrak {m}^ia_{i,\ell _i} \mid 0\le i \le r-1, 1 \le \ell _i \le \phi ^{{\mathcal {M}}}_i \}. \end{aligned}$$
A generating set coming from the Smith normal form as described above will be called a \(\mathfrak {m}\)-shaped basis. Alternatively, a \(\mathfrak {m}\)-shaped basis for a \({R}\)-module \({\mathcal {M}}\) is a generating set \(\{b_{i,\ell _i} \mid 0\le i \le r-1, 1 \le \ell _i \le \phi ^{{\mathcal {M}}}_i \}\) such that \(v(b_{i,\ell _i})=i\). Moreover, every \({R}\)-submodule of \({R}^n\) can be seen as the rowspace of a matrix, and hence it decomposes as
$$\begin{aligned} {\mathcal {M}}=\left\langle \varGamma ^{(0)}\right\rangle _{{R}}+\mathfrak {m}\left\langle \varGamma ^{(1)}\right\rangle _{{R}} + \cdots + \mathfrak {m}^{r-1}\left\langle \varGamma ^{(r-1)}\right\rangle _{{R}}, \end{aligned}$$
where \(\varGamma ^{(i)}:=\{a_{i,\ell _i} \mid 1 \le \ell _i \le \phi ^{{\mathcal {M}}}_i \}\). It is easy to see that \(\langle \varGamma ^{(i)}\rangle _{{R}}\) is a free module. However, this decomposition depends on the chosen \(\mathfrak {m}\)-shaped basis \(\varGamma \).
For a module \(\mathcal {M}\) with \(\mathfrak {m}\)-shaped basis \(\varGamma = \{g_\mathfrak {m}^ia_{i,\ell _i} \mid 0\le i \le r-1, 1 \le \ell _i \le \phi ^{{\mathcal {M}}}_i \}\), we have the following: Let \(e \in \mathcal {M}\) and
$$\begin{aligned} e = \sum _{i=0}^{r-1} \sum _{\ell _i=1}^{\phi _i^\mathcal {M}} e_{i,\ell _i} g_\mathfrak {m}^ia_{i,\ell _i} = \sum _{i=0}^{r-1} \sum _{\ell _i=1}^{\phi _i^\mathcal {M}} e'_{i,\ell _i} g_\mathfrak {m}^i a_{i,\ell _i} \end{aligned}$$
be two different representations of e in the \(\mathfrak {m}\)-shaped basis with coefficients \(e_{i,\ell _i},e'_{i,\ell _i} \in {R}\), respectively. Then, we have
$$\begin{aligned} e_{i,\ell _i} \equiv e'_{i,\ell _i} \mod g_\mathfrak {m}^{r-i} \end{aligned}$$
for all \(0\le i \le r-1\) and \(1 \le \ell _i \le \phi ^{{\mathcal {M}}}_i\). This is due to the fact that, by definition of a \(\mathfrak {m}\)-shaped basis, the set \(\{a_{i,\ell _i} \mid 0\le i \le r-1, 1 \le \ell _i \le \phi ^{{\mathcal {M}}}_i\}\) is linearly independent over \({R}\), and hence \((e_{i,\ell _i} - e'_{i,\ell _i}) g_\mathfrak {m}^{i}=0\) for every \(i,\ell _i\). Therefore, the representation of an element in \({\mathcal {M}}\) with respect to a \(\mathfrak {m}\)-shaped basis has uniquely determined coefficients \(e_{i,\ell _i}\) modulo \({{\,\mathrm{Ann}\,}}(g_\mathfrak {m}^i)=\mathfrak {m}^{r-i}\).
Lemma 1
Let \({\mathcal {M}}\) be an \({R}\)-submodule of \({S}\) with rank-profile \(\phi ^{{\mathcal {M}}}\) and let \(j \in \{1,\ldots ,r-1\}\). Then, the rank-profile of \(\mathfrak {m}^j{\mathcal {M}}\) is given by
$$\begin{aligned} \phi ^{\mathfrak {m}^j{\mathcal {M}}}(x) = x^j \phi ^{{\mathcal {M}}}(x). \end{aligned}$$
In particular, the rank of \(\mathfrak {m}^j{\mathcal {M}}\) is equal to \(\phi ^{\mathfrak {m}^j{\mathcal {M}}}(1)=\sum \limits _{i=0}^{r-1-j}\phi ^{{\mathcal {M}}}_i\).
Let \(g_\mathfrak {m}\) be a generator of \(\mathfrak {m}\). If \(\varGamma =\{g_\mathfrak {m}^ia_{i,\ell _i} \mid 0\le i \le r-1, 1 \le \ell _i \le \phi ^{{\mathcal {M}}}_i \}\) is a \(\mathfrak {m}\)-shaped basis for \({\mathcal {M}}\), then it is easy to see that
$$\begin{aligned} \left\{ g_\mathfrak {m}^{i+j}a_{i,\ell _i} \mid 0\le i \le r-j-1, 1 \le \ell _i \le \phi ^{{\mathcal {M}}}_i \right\} \end{aligned}$$
is a \(\mathfrak {m}\)-shaped basis for \(\mathfrak {m}^j{\mathcal {M}}\). Hence, the first j coefficients of \(\phi ^{\mathfrak {m}^j{\mathcal {M}}}(x)\) are equal to zero, while the remaining ones are the j-th shift of the first \(r-j\) coefficients of \(\phi ^{{\mathcal {M}}}(x)\). \(\square \)
For any pair of \({R}\)-submodules \({\mathcal {M}}_1, {\mathcal {M}}_2\) of \({S}\), we have
$$\begin{aligned} \phi ^{{\mathcal {M}}_1 \cdot {\mathcal {M}}_2}(x) \preceq \phi ^{{\mathcal {M}}_1}(x) \phi ^{{\mathcal {M}}_2}(x). \end{aligned}$$
Let \(g_\mathfrak {m}\) be a generator of \(\mathfrak {m}\). Let \({\mathcal {M}}_1, {\mathcal {M}}_2\) be two \({R}\)-submodules with rank-profile \(\phi ^{{\mathcal {M}}_1}\) and \(\phi ^{{\mathcal {M}}_2}\) respectively. Then, there exist a minimal generating set of \({\mathcal {M}}_1\) given by
$$\begin{aligned} \varGamma _1:=\left\{ g_\mathfrak {m}^ia_{i,j_i} \mid 0\le i \le r-1, 1 \le j_i \le \phi ^{{\mathcal {M}}_1}_i \right\} , \end{aligned}$$
and a minimal generating set of \({\mathcal {M}}_2\) given by
$$\begin{aligned} \varGamma _2:=\left\{ g_\mathfrak {m}^ib_{i,j_i} \mid 0\le i \le r-1, 1 \le j_i \le \phi ^{{\mathcal {M}}_2}_i \right\} . \end{aligned}$$
In particular, the product set \(\varGamma _1\cdot \varGamma _2\) is a generating set of \({\mathcal {M}}_1 \cdot {\mathcal {M}}_2\). Hence
$$\begin{aligned} \sum _{i=0}^{r-1} \phi ^{{\mathcal {M}}_1\cdot {\mathcal {M}}_2}_i&=\mathrm {rk}_{{R}}({\mathcal {M}}_1\cdot {\mathcal {M}}_2) \\&\le |\varGamma _1 \cdot \varGamma _2\setminus \{0\}| \\&= \sum _{i=0}^{r-1}\sum _{j=0}^i \phi ^{{\mathcal {M}}_1}_j\phi ^{{\mathcal {M}}_2}_{i-j}\\&=\sum _{i=0}^{r-1}(\phi ^{{\mathcal {M}}_1} \phi ^{{\mathcal {M}}_2})_i. \end{aligned}$$
The general inequality for the truncated sums then follows by considering the rank of the submodule \(\mathfrak {m}^j({\mathcal {M}}_1 \cdot {\mathcal {M}}_2)\) and Lemma 1. \(\square \)
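The right-hand side of the inequality above is simply the product of the two rank-profile polynomials truncated modulo \(x^r\); a small sketch (ours, for illustration only) of this computation is given below.

```python
# Truncated product of two rank-profile polynomials modulo x^r.

def profile_product(phi1, phi2, r):
    """Coefficient list of phi1(x) * phi2(x) mod x^r."""
    prod = [0] * r
    for i, a in enumerate(phi1):
        for j, b in enumerate(phi2):
            if i + j < r:
                prod[i + j] += a * b
    return prod

# (2 + x + x^2) * 3 mod x^3: an upper bound (w.r.t. the partial order) on the
# rank profile of the product with a free module of rank 3
print(profile_product([2, 1, 1], [3, 0, 0], 3))   # [6, 3, 3]
```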
LRPC codes over Galois rings
Definition 2
Let \(k,n,\lambda \) be positive integers with \(0<k<n\). Furthermore, let \(\mathcal {F}\subseteq {S}\) be a free \({R}\)-submodule of \({S}\) of rank \(\lambda \). A low-rank parity-check (LRPC) code with parameters \(\lambda ,n,k\) is a code with a parity-check matrix \(\varvec{H}\in {S}^{(n-k) \times n}\) such that \({{\,\mathrm{rk}\,}}_{{S}} \varvec{H}= \mathrm {frk}_{{S}} \varvec{H}= n-k\) and \(\mathcal {F}= \langle H_{1,1},\ldots ,H_{(n-k),n} \rangle _{{R}}\).
Note that an LRPC code is a free submodule of \({S}^n\) of rank k. This means that the cardinality of the code is \(|{S}|^k = |{R}|^{mk} = p^{r s m k}\). We define the following three additional properties of the parity-check matrix that we will use throughout the paper to prove the correctness of our decoder and to derive failure probabilities. As for rank-metric codes over finite fields, we can interpret vectors over \({S}\) as matrices over \({R}\) by the \({R}\)-module isomorphism \({S}\simeq {R}^m\). In particular, an LRPC code can be seen as a subset of \({R}^{m \times n}\).
Definition 3
Let \(\lambda \), \(\mathcal {F}\), and \(\varvec{H}\) be defined as in Definition 2. Let \(f_1,\ldots ,f_\lambda \in {S}\) be a free basis of \(\mathcal {F}\). For \(i=1,\ldots ,n-k\), \(j=1,\ldots ,n\), and \(\ell =1,\ldots ,\lambda \), let \(h_{i,j,\ell } \in {R}\) be the unique elements such that \(H_{i,j} = \sum _{\ell = 1}^{\lambda } h_{i,j,\ell } f_{\ell }\). Define
$$\begin{aligned} \varvec{H}_{\mathrm {ext}} := \begin{bmatrix} h_{1,1,1} &{}\quad h_{1,2,1} &{}\quad \ldots &{} \quad h_{1,n,1} \\ h_{1,1,2} &{} \quad h_{1,2,2} &{} \quad \ldots &{} \quad h_{1,n,2} \\ \vdots &{} \quad \vdots &{} \quad \ddots &{}\quad \vdots \\ h_{2,1,1} &{}\quad h_{2,2,1} &{}\quad \ldots &{}\quad h_{2,n,1} \\ h_{2,1,2} &{}\quad h_{2,2,2} &{}\quad \ldots &{} \quad h_{2,n,2} \\ \vdots &{}\quad \vdots &{} \quad \ddots &{}\quad \vdots \\ \end{bmatrix} \in {R}^{(n-k)\lambda \times n}. \end{aligned}$$
Then, \(\varvec{H}\) has the
unique-decoding property if \(\lambda \ge \tfrac{n}{n-k}\) and \(\mathrm {frk}\left( \varvec{H}_{\mathrm {ext}} \right) = \mathrm {rk}\left( \varvec{H}_{\mathrm {ext}} \right) = n\),
maximal-row-span property if every row of the parity-check matrix \(\varvec{H}\) spans the entire space \(\mathcal {F}\),
unity property if every entry \(H_{i,j}\) of \(\varvec{H}\) is chosen from the set \(H_{i,j} \in \tilde{\mathcal {F}} := \left\{ \textstyle \sum _{i=1}^{\lambda } \alpha _i f_i \, : \, \alpha _i \in {R}^* \cup \{0\} \right\} \subseteq \mathcal {F}\).
Furthermore, we say that \(\mathcal {F}\) has the base-ring property if \(1 \in \mathcal {F}\).
In the original papers about LRPC codes over finite fields, [1, 10], some of the properties of Definition 3 are used without explicitly stating them.
We will see in Sect. 4.2 that the unique-decoding property together with a property of the error guarantees that erasure decoding always works (i.e., that the full error vector can be recovered from knowing the support and syndrome of an error). This property is also implicitly used in [10]. It is, however, not very restrictive: if the parity-check matrix entries \(H_{i,j}\) are chosen uniformly at random from \(\mathcal {F}\), this property is fulfilled with the probability that a random \(\lambda (n-k) \times n\) matrix has full (free) rank n. This probability is arbitrarily close to 1 for increasing difference of \(\lambda (n-k)\) and n (cf. [20] for the field and Lemma 7 in Sect. 5.2 for the ring case).
We will use the maximal-row-span property to prove a bound on the failure probability of the decoder in Sect. 5. It is a sufficient condition for our bound (in particular Theorem 3 in Sect. 5) to hold. Although not explicitly stated, [1, Proposition 4.3] must also assume a similar or slightly weaker condition in order to hold. It does not hold for arbitrary parity-check matrices as in [1, Definition 4.1] (see the counterexample in Remark 4 in Sect. 5). This is again not a big limitation in general for two reasons: first, the ideal codes in [1, Definition 4.2] appear to automatically have this property, and second, a random parity-check matrix has this property with high probability.
In the case of finite fields, the unity property is no restriction at all since the units of a finite field are all non-zero elements. That is, we have \(\tilde{\mathcal {F}} = \mathcal {F}\). Over rings, we need this additional property as a sufficient condition for one of our failure probability bounds (Theorem 3 in Sect. 5). It is not a severe restriction in general, since
$$\begin{aligned} \frac{|\tilde{\mathcal {F}}|}{|\mathcal {F}|} = \frac{(|{R}^*|+1)^\lambda }{|{R}|^\lambda } = \big (1-p^{-s}+p^{-sr}\big )^\lambda , \end{aligned}$$
which is relatively close to 1 for large \(p^s\) and comparably small \(\lambda \).
Finally, Gaborit et al. [10] also used the base-ring property of \(\mathcal {F}\). In contrast to the other three properties in Definition 3, this property only depends on \(\mathcal {F}\) and not on \(\varvec{H}\). We will also assume this property to derive a bound on the probability of one possible cause of a decoding failure event in Sect. 5.3.
The main decoder
Fix \(\lambda \) and \(\mathcal {F}\) as in Definition 2. Let \(f_1,\ldots ,f_\lambda \in {S}\) be a free basis of \(\mathcal {F}\). Note that since the \(f_i\) are linearly independent, each singleton set \(\{f_i\}\) is linearly independent, which by the discussion in Sect. 2 implies that all the \(f_i\) are units in \({S}\). Hence, \(f_i^{-1}\) exists for each i. We will discuss erasure decoding (Line 6 of Algorithm 1) in Sect. 4.2.
Algorithm 1 recovers the support \({\mathcal {E}}\) of the error \(\varvec{e}\) if \({\mathcal {E}}' = {\mathcal {E}}\). A necessary (but not sufficient) condition for this to be fulfilled is that we have \({\mathcal {S}}= {\mathcal {E}}\cdot \mathcal {F}\). Furthermore, we will see in Sect. 4.2 that we can uniquely recover the error vector \(\varvec{e}\) from its support \({\mathcal {E}}\) and syndrome \(\varvec{s}\) if the parity-check matrix fulfills the unique-decoding property and we have \(\phi ^{{\mathcal {E}}\cdot \mathcal {F}} = \phi ^{{\mathcal {E}}} \phi ^{\mathcal {F}}\). Hence, decoding works if the following three conditions are fulfilled:
\(\phi ^{{\mathcal {E}}\cdot \mathcal {F}} = \phi ^{{\mathcal {E}}} \phi ^{\mathcal {F}}\) (product condition),
\({\mathcal {S}}= {\mathcal {E}}\cdot \mathcal {F}\) (syndrome condition),
\(\bigcap _{i=1}^{\lambda } {\mathcal {S}}_i = {\mathcal {E}}\) (intersection condition).
We call the case that at least one of the three conditions is not fulfilled a (decoding) failure. We will see in the next section (Sect. 5) that whether an error results in a failure depends solely on the error support \({\mathcal {E}}\). Furthermore, given an error support that is drawn uniformly at random from the modules of a given rank profile \(\phi \), the failure probability can be upper-bounded by a function that depends only on the rank of the module (i.e., \(\phi ^{\mathcal {E}}(1)\)).
In Sect. 6, we will analyze the complexity of Algorithm 1. The proofs in that section also indicate how the algorithm can be implemented in practice.
Note that the success conditions above imply that for an error of rank \(\phi ^{\mathcal {E}}(1) = t\), we have \(\lambda t \le m\) (due to the product condition) as well as \(\lambda \ge \tfrac{n}{n-k}\) (due to the unique-decoding property). Combined, we obtain \(t \le m\tfrac{n-k}{n} = m(1-R)\), where \(R := \tfrac{k}{n}\) is the rate of the LRPC code.
Erasure decoding
As its name suggests, the unique decoding property of the parity-check matrix is related to unique erasure decoding, i.e., the process of obtaining the full error vector \(\varvec{e}\) after having recovered its support. The next lemma establishes this connection.
(Unique Erasure Decoding) Let \(\varvec{H}\) be a parity-check matrix that fulfills the unique-decoding property, and let \({\mathcal {E}}\) be a free support of rank \(t \le \tfrac{m}{\lambda }\). If \(\phi ^{{\mathcal {E}}\cdot \mathcal {F}} = \phi ^{\mathcal {E}}\phi ^\mathcal {F}\), then, for any syndrome \(\varvec{s}\in {S}^{n-k}\), there is at most one error vector \(\varvec{e}\in {S}^n\) with support \({\mathcal {E}}\) that fulfills \(\varvec{H}\varvec{e}^\top = \varvec{s}^\top \).
Let \(f_1,\ldots ,f_\lambda \) be a basis of the free module \(\mathcal {F}\). Furthermore, let \(\varepsilon _1,\ldots ,\varepsilon _t\) be an \(\mathfrak {m}\)-shaped basis of \({\mathcal {E}}\). To avoid too complicated sums in the derivation below, we use a slightly different notation than in the definition of \(\mathfrak {m}\)-shaped basis and write \(\varepsilon _j = g_\mathfrak {m}^{v(\varepsilon _j)} \varepsilon _j^*\) for all \(j=1,\ldots ,t\), where \(\varepsilon ^*_j \in {S}^*\) are units.
Due to \(\phi ^{{\mathcal {E}}\cdot \mathcal {F}} = \phi ^{\mathcal {E}}\phi ^\mathcal {F}\), we have that \(f_i \varepsilon _\kappa \) for \(i=1,\ldots ,\lambda \) and \(\kappa =1,\ldots ,t\) is an \(\mathfrak {m}\)-shaped basis of the product space \({\mathcal {E}}\cdot \mathcal {F}\). Any entry of the parity-check matrix \(\varvec{H}\) has a unique representation \(H_{i,j} = \sum _{\ell = 1}^{\lambda } h_{i,j,\ell } f_{\ell }\) for \(h_{i,j,\ell } \in {R}\). Furthermore, any entry of the error vector \(\varvec{e}= [e_1,\ldots ,e_n]\) can be represented as \(e_j = \sum _{\kappa =1}^{t} e_{j,\kappa } \varepsilon _\kappa \), where the \(e_{j,\kappa } \in {R}\) are unique modulo \(\mathfrak {m}^{r-v(\varepsilon _\kappa )}\).
We want to recover the error vector \(\varvec{e}\) from the syndrome \(\varvec{s}= [s_1,\ldots ,s_{n-k}]^\top \), which are related by definition as follows:
$$\begin{aligned} s_{i}&=\sum _{j=1}^{n}H_{i,j}e_{j} \\&=\sum _{j=1}^{n}\sum _{\ell =1}^{\lambda }h_{i,j,\ell }f_{\ell }\sum _{\kappa =1}^{t}e_{j,\kappa }\varepsilon _{\kappa } \\&=\sum _{j=1}^{n}\sum _{\ell =1}^{\lambda } \underbrace{\sum _{\kappa =1}^{t}h_{i,j,\ell }e_{j,\kappa }}_{=: \, s_{i,\ell ,\kappa }} f_{\ell }\varepsilon _{\kappa } \\&=\sum _{\ell =1}^{\lambda }\sum _{\kappa =1}^{t}s_{i,\ell ,\kappa } f_{\ell }\varepsilon _{\kappa }. \end{aligned}$$
Hence, for any representation \(e_{j,\kappa }\) of the error \(\varvec{e}\), there is a representation \(s_{i,\ell ,\kappa }\) of \(\varvec{s}\). If we know the latter representation, it is easy to obtain the corresponding \(e_{j,\kappa }\) under the assumed conditions: write
$$\begin{aligned} s_{i,\ell ,\kappa } = \sum _{j=1}^{n}h_{i,j,\ell } e_{j,\kappa },\quad \ell =1,\ldots ,\lambda , \, \kappa =1,\ldots ,t, \, i=1,\ldots ,n-k. \end{aligned}$$
We can rewrite this into t independent linear systems of equations of the form
$$\begin{aligned} \underbrace{\begin{bmatrix} s_{1,1,\kappa } \\ s_{1,2,\kappa } \\ \vdots \\ s_{2,1,\kappa } \\ s_{2,2,\kappa } \\ \vdots \end{bmatrix}}_{=: \, \varvec{s}^{(\kappa )}} = \varvec{H}_{\mathrm {ext}} \cdot \underbrace{\begin{bmatrix} e_{1,\kappa } \\ e_{2,\kappa } \\ \vdots \\ e_{n,\kappa } \end{bmatrix}}_{=: \, \varvec{e}^{(\kappa )}} \end{aligned}$$
for each \(\kappa =1,\ldots ,t\), where \(\varvec{H}_{\mathrm {ext}} \in {R}^{(n-k)\lambda \times n}\) is independent of \(\kappa \) and defined as in (3).
By the unique-decoding property, \(\varvec{H}_{\mathrm {ext}}\) has at least as many rows as columns (i.e., \((n-k)\lambda \ge n\)) and full free rank and rank (equal to n). Hence, each system in (4) has a unique solution \(\varvec{e}^{(\kappa )}\).
It is left to show that any representation \(s_{i,\ell ,\kappa }\) of \(\varvec{s}\) in the \(\mathfrak {m}\)-shaped basis \(f_i \varepsilon _\kappa \) of \({\mathcal {E}}\cdot \mathcal {F}\) yields the same error vector \(\varvec{e}\). Recall that \(s_{i,\ell ,\kappa }\) is unique modulo \(\mathfrak {m}^{r-v(\varepsilon _\kappa )}\) (note that \(v(f_i \varepsilon _\kappa ) = v(\varepsilon _\kappa )\)). Assume now that we have a different representation, say
$$\begin{aligned} {\varvec{s}'}^{(\kappa )} = \varvec{s}^{(\kappa )} + g_{\mathfrak {m}}^{r-v(\varepsilon _\kappa )} \varvec{\chi }, \end{aligned}$$
where \(\varvec{\chi } \in {R}^{(n-k)\lambda }\). Then the unique solution \({\varvec{e}'}^{(\kappa )}\) of the linear system \({\varvec{s}'}^{(\kappa )} = \varvec{H}_\mathrm {ext} {\varvec{e}'}^{(\kappa )}\) is of the form
$$\begin{aligned} {\varvec{e}'}^{(\kappa )} = \varvec{e}^{(\kappa )} + g_{\mathfrak {m}}^{r-v(\varepsilon _\kappa )} \varvec{\mu } \end{aligned}$$
for some \(\varvec{\mu } \in {R}^{n}\). Hence, \({\varvec{e}'}^{(\kappa )} \equiv \varvec{e}^{(\kappa )} \mod \mathfrak {m}^{r-v(\varepsilon _\kappa )}\), which means that the two representations \({\varvec{e}'}^{(\kappa )}\) and \(\varvec{e}^{(\kappa )}\) belong to the same error \(\varvec{e}\).
This shows that we can take any representation of the syndrome vector \(\varvec{s}\), solve the system in (4) for \(\varvec{e}^{(\kappa )}\) for \(\kappa =1,\ldots ,t\), and obtain the unique error vector \(\varvec{e}\) corresponding to this syndrome \(\varvec{s}\) and support \({\mathcal {E}}\). \(\square \)
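The computational core of the erasure-decoding step above is solving the overdetermined systems (4) over \({R}\). The following sketch (ours, not taken from the paper, and restricted to \(s=1\), i.e., \({R}=\mathbb {Z}_{p^r}\)) illustrates the elimination used there: under the assumption of full free column rank, a unit pivot can always be found in each column, because the matrix keeps full rank modulo the maximal ideal.

```python
# Solve H x = s over Z_{p^r} for H with full free column rank (assumption: s = 1).

def solve_full_free_rank(H, s, p, r):
    """Return the unique x with H x = s (mod p^r), H of full free column rank."""
    q = p**r
    rows, cols = len(H), len(H[0])
    # work on the augmented matrix [H | s]
    M = [[H[i][j] % q for j in range(cols)] + [s[i] % q] for i in range(rows)]
    for j in range(cols):
        # find an untouched row with a unit (not divisible by p) in column j
        piv = next(i for i in range(j, rows) if M[i][j] % p != 0)
        M[j], M[piv] = M[piv], M[j]
        inv = pow(M[j][j], -1, q)
        M[j] = [(inv * x) % q for x in M[j]]          # normalize pivot to 1
        for i in range(rows):                          # eliminate column j elsewhere
            if i != j and M[i][j] != 0:
                f = M[i][j]
                M[i] = [(M[i][c] - f * M[j][c]) % q for c in range(cols + 1)]
    return [M[j][cols] for j in range(cols)]

# toy check over Z_8: a 3x2 system with full free column rank 2
H = [[1, 2], [3, 1], [2, 2]]
x = [5, 6]
s = [(H[i][0] * x[0] + H[i][1] * x[1]) % 8 for i in range(3)]
print(solve_full_free_rank(H, s, 2, 3))   # recovers [5, 6]
```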
Failure probability
Consider an error vector \(\varvec{e}\) that is chosen uniformly at random from the set of error vectors whose support is a module of a given rank profile \(\phi \in \mathbb Z[x]/(x^r)\) and rank \(\phi (1) = t\). In this section, we derive a bound on the failure probability of the LRPC decoder over Galois rings for this error model. The resulting bound does not depend on the whole rank profile \(\phi \), but only on the rank t.
This section is the most technical and involved part of the paper. Therefore, we derive the bound in three steps, motivated by the discussion on failure conditions in Sect. 4: In Sect. 5.1, we derive an upper bound on the failure probability of the product condition. Sect. 5.2 presents a bound on the syndrome condition failure probability conditioned on the event that the product condition is fulfilled. Finally, in Sect. 5.3, we derive a bound on the intersection failure probability, given that the first two conditions are satisfied.
The proof strategy is similar to the analogous derivation for LRPC codes over fields by Gaborit et al. [10]. However, our proof is much more involved for several reasons:
we need to take care of the weaker structure of Galois rings and modules over them, e.g., zero divisors and the fact that not all modules have bases and thus module elements may not be uniquely represented in a minimal generating set;
we correct a few (rather minor) technical inaccuracies in the original proof; and
some prerequisite results that are well known for finite fields are, to the best of our knowledge, not known over Galois rings.
Before analyzing the three conditions, we show the following result, whose implication is that if \(\varvec{e}\) is chosen randomly as described above, then the random variable \({\mathcal {E}}\), the support of the chosen error, is also uniformly distributed on the set of modules with rank profile \(\phi \). Note that the analogous statement for errors over a finite field follows immediately from linear algebra, but here, we need a bit more work.
Let \(\phi (x) \in \mathbb Z[x]/(x^r)\) with nonnegative coefficients and let \({\mathcal {E}}\) be an \({R}\)-submodule of \({S}\) with rank profile \(\phi (x)\). Then, the number of vectors \(\varvec{e}\in {S}^n\) whose support is equal to \({\mathcal {E}}\) only depends on \(\phi (x)\).
Let us write \(\phi (x)=\sum _{i=0}^{r-1}n_ix^i\) with \(N:=\phi (1)=\sum _{i=0}^{r-1}n_i=\mathrm {rk}_{{R}}({\mathcal {E}})\), and let \(\varGamma \) be a \(\mathfrak {m}\)-shaped basis for \({\mathcal {E}}\). Then, the vector \(\varvec{e}\) whose first N entries are the elements of \(\varGamma \) and whose last \(n-N\) entries are 0 is a vector whose support is equal to \({\mathcal {E}}\). Moreover, all the vectors in \({S}^n\) whose support is equal to \({\mathcal {E}}\) are of the form \((\varvec{A}\varvec{e}^\top )^\top \), for \(\varvec{A}\in {{\,\mathrm{GL}\,}}(n,{R})\). Let us fix a basis of \({S}\) so that we can identify \({S}\) with \({R}^m\). In this representation, \(\varvec{e}^\top \) corresponds to a matrix \(\varvec{D}\varvec{T}\), where
$$\begin{aligned} \varvec{D}=\begin{bmatrix} \varvec{I}_{n_0} &{} &{} &{} &{}\\ &{} g_\mathfrak {m}\varvec{I}_{n_1} &{} &{} &{}\\ &{} &{} \ddots &{} &{}\\ &{} &{} &{} g_{\mathfrak {m}}^{r-1}\varvec{I}_{n_{r-1}}&{}\\ &{} &{} &{} &{} \varvec{0} \end{bmatrix}\in {R}^{n\times n} \end{aligned}$$
and \(\varvec{T}\in {R}^{n\times m}\) has linearly independent rows over \({R}\). Then, the vectors in \({S}^n\) whose support is equal to \({\mathcal {E}}\) correspond to matrices \(\varvec{A}\varvec{D}\varvec{T}\) for \(\varvec{A}\in {{\,\mathrm{GL}\,}}(n,{R})\), and their number is equal to the cardinality of the set
$$\begin{aligned} \mathrm {Vec}({\mathcal {E}},n):=\{\varvec{A}\varvec{D}\varvec{T}\mid \varvec{A}\in {{\,\mathrm{GL}\,}}(n,{R})\}. \end{aligned}$$
The group \({{\,\mathrm{GL}\,}}(n,{R})\) acts on \(\mathrm {Vec}({\mathcal {E}},n)\) from the left and, by definition, its action is transitive. Hence, by the orbit-stabilizer theorem, we have
$$\begin{aligned} |\mathrm {Vec}({\mathcal {E}},n)|=\frac{|{{\,\mathrm{GL}\,}}(n,{R})|}{|\mathrm {Stab}(\varvec{D}\varvec{T})|}, \end{aligned}$$
where \(\mathrm {Stab}(\varvec{D}\varvec{T})=\mathrm {Stab}_{{{\,\mathrm{GL}\,}}(n,{R})}(\varvec{D}\varvec{T})=\{\varvec{A}\in {{\,\mathrm{GL}\,}}(n,{R}) \mid \varvec{A}\varvec{D}\varvec{T}=\varvec{D}\varvec{T}\}\). Hence, we need to count how many matrices \(\varvec{A}\in {{\,\mathrm{GL}\,}}(n,{R})\) satisfy
$$\begin{aligned} (\varvec{A}-\varvec{I}_n)\varvec{D}\varvec{T}=0. \end{aligned}$$
Let us call \(\varvec{S}:=\varvec{A}-\varvec{I}_n\) and divide it into \(r+1\) blocks \(\varvec{S}_i\in {R}^{n\times n_i}\) for \(i\in \{0,\ldots ,r-1\}\) and \(\varvec{S}_r\in {R}^{n\times (n-N)}\). Moreover, do the same with \(\varvec{T}\), dividing it into \(r+1\) blocks \(\varvec{T}_i\in {R}^{n_i\times m}\) for \(i\in \{0,\ldots ,r-1\}\) and \(\varvec{T}_r\in {R}^{(n-N)\times m}\). Therefore, we get
$$\begin{aligned} \begin{bmatrix}\varvec{S}_0&\varvec{S}_1&\cdots&\varvec{S}_{r-1}&\varvec{S}_r\end{bmatrix}\begin{bmatrix}\varvec{T}_0\\ g_{\mathfrak {m}} \varvec{T}_1 \\ \vdots \\ g_{\mathfrak {m}}^{r-1}\varvec{T}_{r-1}\\ \varvec{0}\end{bmatrix}=\varvec{0}. \end{aligned}$$
Since the rows of \(\varvec{T}\) are linearly independent over \({R}\), this is true if and only if \(\varvec{S}_i\in \mathfrak {m}^{r-i}{R}^{n\times n_i}\). This condition clearly only depends on the values \(n_i\), and hence on \(\phi (x)\). \(\square \)
Failure of product condition
The product condition means that the product space of the randomly chosen support \({\mathcal {E}}\) and the fixed free module \(\mathcal {F}\) (in which the parity-check matrix coefficients are contained) has maximal rank profile \(\phi ^{{\mathcal {E}}\cdot \mathcal {F}} = \phi ^{{\mathcal {E}}} \phi ^{\mathcal {F}}\). If \({\mathcal {E}}\) were a free module, the condition would translate to \({\mathcal {E}}\cdot \mathcal {F}\) being a free module of rank \(\lambda t\). In fact, our proof strategy reduces the question whether \(\phi ^{{\mathcal {E}}\cdot \mathcal {F}} = \phi ^{{\mathcal {E}}} \phi ^{\mathcal {F}}\) to the question whether a free module of rank t, which is related to \({\mathcal {E}}\), results in a product space with the free module \(\mathcal {F}\) of maximal rank profile. Hence, we first study this question for products of free modules. This part of the bound derivation is similar to the case of LRPC codes over finite fields (cf. [1]), but the proofs and counting arguments are more involved since we need to take care of non-units in the ring.
Lemma 4
Let \(\alpha ',\beta \) be non-negative integers with \((\alpha '+1)\beta < m\). Further, let \({\mathcal {A}}',{\mathcal {B}}\) be free submodules of \({S}\) of free rank \(\alpha '\) and \(\beta \), respectively, such that also \({\mathcal {A}}'\cdot {\mathcal {B}}\) is a free submodule of \({S}\) of free rank \(\alpha '\beta \). For an element \(a \in {S}^*\), chosen uniformly at random, let \({\mathcal {A}}:= {\mathcal {A}}' + \langle a \rangle \). Then, we have
$$\begin{aligned} \Pr \big ( \mathrm {frk}_{{R}}({\mathcal {A}}\cdot {\mathcal {B}}) < \alpha '\beta +\beta \big ) \le \left( 1-p^{-s\beta }\right) \sum _{j = 0}^{r-1} p^{s(r-j)[(\alpha '+1) \beta -m]}. \end{aligned}$$
First note that since a is a unit in \({S}\), the mapping \(\varphi _a \, : \, {\mathcal {B}}\rightarrow {S}, ~ b \mapsto ab\) is injective. This means that \(a{\mathcal {B}}\) is a free module with \(\mathrm {frk}_{{R}}(a{\mathcal {B}})=\mathrm {frk}_{{R}}({\mathcal {B}})=\beta \). Let \(b_1,\ldots ,b_\beta \) be a basis of \({\mathcal {B}}\). Then, \(a b_1, \ldots , a b_\beta \) is a basis of \(a{\mathcal {B}}\). Therefore, \({\mathcal {A}}\cdot {\mathcal {B}}\) is a free module with \(\mathrm {frk}_{{R}}({\mathcal {A}}\cdot {\mathcal {B}}) = \alpha '\beta +\beta \) if and only if \(a{\mathcal {B}}\cap {\mathcal {A}}'\cdot {\mathcal {B}}= \{0\}\). Hence,
$$\begin{aligned} \Pr \big ( \mathrm {frk}_{{R}}({\mathcal {A}}\cdot {\mathcal {B}}) < \alpha '\beta +\beta \big ) \le \Pr \left( \exists b \in {\mathcal {B}}\setminus \{0\} : ab \in {\mathcal {A}}'\cdot {\mathcal {B}}\right) . \end{aligned}$$
Let c be chosen uniformly at random from \({S}\). Recall that a is chosen uniformly at random from \({S}^*\). Then,
$$\begin{aligned} \Pr \! \left( \exists b \in {\mathcal {B}}\setminus \{0\} : ab \in {\mathcal {A}}' \cdot {\mathcal {B}}\right) \le \Pr \! \left( \exists b \in {\mathcal {B}}\setminus \{0\} : cb \in {\mathcal {A}}'\cdot {\mathcal {B}}\right) . \end{aligned}$$
This holds since if c is chosen to be a non-unit in \({S}\), then the statement "\(\exists \, b \in {\mathcal {B}}\setminus \{0\} \, : \, cb \in {\mathcal {A}}'\cdot {\mathcal {B}}\)" is always true. To see this, write \(c = g_\mathfrak {m}c'\) for some \(c' \in {S}\). Since \(\beta >0\), there is a unit \(b^* \in {\mathcal {B}}\cap {S}^*\). Choose \(b := g_\mathfrak {m}^{r-1}b^* \in {\mathcal {B}}\setminus \{0\}\). Hence, \(c b = g_\mathfrak {m}c' g_\mathfrak {m}^{r-1}b^* = 0\), and b is from \({\mathcal {B}}\) and non-zero.
Now we bound the right-hand side of (6) as follows
$$\begin{aligned} \Pr \left( \exists b \in {\mathcal {B}}\setminus \{0\} : cb \in {\mathcal {A}}'\cdot {\mathcal {B}}\right)&\le \textstyle \sum _{b \in {\mathcal {B}}\setminus \{0\}} \Pr \left( cb \in {\mathcal {A}}'\cdot {\mathcal {B}}\right) \\&= \sum _{j = 0}^{r-1} \sum _{b \in {\mathcal {B}}: v(b) = j} \Pr \left( cb^* g_\mathfrak {m}^{j} \in {\mathcal {A}}'\cdot {\mathcal {B}}\right) . \end{aligned}$$
Since \(b^*\) is a unit in \({S}\), for uniformly drawn c, \(c b^*\) is also uniformly distributed on \({S}\). Hence, \(cb^* g_\mathfrak {m}^{j}\) is uniformly distributed on the ideal \(\mathfrak {M}^{j}\) of \({S}\) (the mapping \({S}\rightarrow \mathfrak {M}^j\), \(\chi \mapsto \chi g_\mathfrak {m}^j\) is surjective and maps equally many elements to the same image) and we have \(\Pr \left( cb^* g_\mathfrak {m}^{j} \in {\mathcal {A}}'\cdot {\mathcal {B}}\right) = \frac{\left| \mathfrak {M}^{j} \cap {\mathcal {A}}'\cdot {\mathcal {B}}\right| }{|\mathfrak {M}^{j}|}\). Let \(v_1,\ldots ,v_{\alpha '\beta }\) be a basis of \({\mathcal {A}}'\cdot {\mathcal {B}}\). Then, by (2), an element \(c \in {\mathcal {A}}'\cdot {\mathcal {B}}\) is in \(\mathfrak {M}^{j}\) if and only if it can be written as \(c = \sum _{i} \mu _i v_i\), where \(\mu _i \in \mathfrak {m}^j\) for all i.
Hence, \(\left| \mathfrak {M}^{j} \cap {\mathcal {A}}'\cdot {\mathcal {B}}\right| = |\mathfrak {m}^{j}|^{\alpha ' \beta }\). Moreover, we have \(|\mathfrak {M}^{j}| = |\mathfrak {m}^{j}|^m\), where \(|\mathfrak {m}^{j}| = p^{s(r-j)}\). Overall, we get
$$\begin{aligned} \Pr \left( \exists \, b \in {\mathcal {B}}\setminus \{0\} \, : \, cb \in {\mathcal {A}}'\cdot {\mathcal {B}}\right)&\le \sum _{j = 0}^{r-1} \sum _{b \in {\mathcal {B}}\, : \, v(b) = j} p^{s(r-j)(\alpha ' \beta -m)} \nonumber \\&= \sum _{j = 0}^{r-1} \big |\{b \in {\mathcal {B}}\, : \, v(b) = j\}\big | p^{s(r-j)(\alpha ' \beta -m)}. \end{aligned}$$
Furthermore, we have (note that \(\mathfrak {M}^{j+1} \subseteq \mathfrak {M}^{j}\))
$$\begin{aligned} \big |\{b \in {\mathcal {B}}\, : \, v(b) = j\}\big |&= \Big |\big (\mathfrak {M}^{j} \setminus \mathfrak {M}^{j+1}\big ) \cap {\mathcal {B}}\Big | = \big |\mathfrak {M}^{j} \cap {\mathcal {B}}\big | - \big |\mathfrak {M}^{j+1} \cap {\mathcal {B}}\big | \nonumber \\&= p^{s(r-j)\beta }-p^{s(r-j-1)\beta }. \end{aligned}$$
Combining and simplifying (5), (6), (7), and (8) we obtain the desired result. \(\square \)
Lemma 5
Let \({\mathcal {B}}\) be a fixed free submodule of \({S}\) with \(\mathrm {frk}_{{R}}({\mathcal {B}})=\beta \). For a positive integer \(\alpha \) with \(\alpha \beta <m\), let \({\mathcal {A}}\) be drawn uniformly at random from the set of free submodules of \({S}\) of free rank \(\alpha \). Then,
$$\begin{aligned} \Pr \left( \mathrm {frk}_{{R}}({\mathcal {A}}\cdot {\mathcal {B}})< \alpha \beta \right) \le \left( 1-p^{-s\beta }\right) \sum _{i=1}^{\alpha } \sum _{j = 0}^{r-1} p^{s(r-j)(i \beta -m)} \le 2 \alpha p^{s(\alpha \beta -m)} \end{aligned}$$
Drawing a free submodule \({\mathcal {A}}\subseteq {S}\) of rank \(\alpha \) uniformly at random is equivalent to drawing iteratively \({\mathcal {A}}_0 := \{0\}, ~ {\mathcal {A}}_i := {\mathcal {A}}_{i-1} + \langle a_i \rangle \) for \(i=1,\ldots ,\alpha \), where for each iteration i, the element \(a_i \in {S}\) is chosen uniformly at random from the set of vectors that are linearly independent of \({\mathcal {A}}_{i-1}\). The equivalence of the two random experiments is clear since the possible choices of the sequence \(a_1,\ldots ,a_\alpha \) give exactly all bases of free \({R}\)-submodules of \({S}\) of rank \(\alpha \). Furthermore, all sequences are equally likely and each resulting submodule has the same number of bases that generate it (which equals the number of invertible \(\alpha \times \alpha \) matrices over \({R}\)). We have the following recursive formula for any \(i=1,\ldots ,\alpha \):
$$\begin{aligned}&\Pr \big ( \mathrm {frk}_{{R}}({\mathcal {A}}_i \cdot {\mathcal {B}})< i \beta \big ) \\&\quad = \Pr \big ( \mathrm {frk}_{{R}}({\mathcal {A}}_i \cdot {\mathcal {B}})< i \beta \wedge \mathrm {frk}_{{R}}({\mathcal {A}}_{i-1}\cdot {\mathcal {B}})=(i-1)\beta \big ) \\&\quad \quad + \underbrace{\Pr \big ( \mathrm {frk}_{{R}}({\mathcal {A}}_i \cdot {\mathcal {B}})< i \beta \wedge \mathrm {frk}_{{R}}({\mathcal {A}}_{i-1}\cdot {\mathcal {B}})<(i-1)\beta \big )}_{\mathrm {frk}_{{R}}({\mathcal {A}}_{i-1}\cdot {\mathcal {B}})<(i-1)\beta \text { implies }\mathrm {frk}_{{R}}({\mathcal {A}}_i \cdot {\mathcal {B}})< i \beta } \\&\quad = \Pr \big ( \mathrm {frk}_{{R}}({\mathcal {A}}_i \cdot {\mathcal {B}})< i \beta \mid \mathrm {frk}_{{R}}({\mathcal {A}}_{i-1}\cdot {\mathcal {B}})=(i-1)\beta \big ) \\&\quad \quad \cdot \underbrace{\Pr (\mathrm {frk}_{{R}}({\mathcal {A}}_{i-1}\cdot {\mathcal {B}})=(i-1)\beta )}_{\le 1} + \Pr \big (\mathrm {frk}_{{R}}({\mathcal {A}}_{i-1}\cdot {\mathcal {B}})<(i-1)\beta \big ) \\&\quad \overset{(*)}{\le } \left( 1-p^{-s\beta }\right) \sum _{j = 0}^{r-1} p^{s(r-j)(i \beta -m)} + \Pr \big (\mathrm {frk}_{{R}}({\mathcal {A}}_{i-1}\cdot {\mathcal {B}})<(i-1)\beta \big ), \end{aligned}$$
where (\(*\)) follows from Lemma 4 by the following additional argument:
$$\begin{aligned}&\Pr \big ( \mathrm {frk}_{{R}}({\mathcal {A}}_i \cdot {\mathcal {B}})< i \beta \mid \mathrm {frk}_{{R}}({\mathcal {A}}_{i-1}\cdot {\mathcal {B}})=(i-1)\beta \, \wedge \, a_i \text { linearly independent and}\\&\quad \quad \text {its span trivially intersects with }{\mathcal {A}}_{i-1}\big ) \\&\quad \le \Pr \big ( \mathrm {frk}_{{R}}({\mathcal {A}}_i \cdot {\mathcal {B}})< i \beta \mid \mathrm {frk}_{{R}}({\mathcal {A}}_{i-1}\cdot {\mathcal {B}})=(i-1)\beta \, \wedge \, a_i \text { uniformly from } {S}^* \big ) \\&\quad \le \left( 1-p^{-s\beta }\right) \sum _{j = 0}^{r-1} p^{s(r-j)(i \beta -m)}, \end{aligned}$$
where the last inequality is exactly the statement of Lemma 4. By \(\Pr \big (\mathrm {frk}_{{R}}({\mathcal {A}}_0 \cdot {\mathcal {B}})<0\big ) = 0\), we get
$$\begin{aligned} \Pr \left( \mathrm {frk}_{{R}}({\mathcal {A}}\cdot {\mathcal {B}})< \alpha \beta \right)&= \Pr \big ( \mathrm {frk}_{{R}}({\mathcal {A}}_\alpha \cdot {\mathcal {B}})< \alpha \beta \big ) \\&\le \left( 1-p^{-s\beta }\right) \sum _{i=1}^{\alpha } \sum _{j = 0}^{r-1} p^{s(r-j)(i \beta -m)} \\&\le \alpha \underbrace{\left( 1-p^{-s\beta }\right) }_{\le 1} p^{-rs(m-\alpha \beta )} \underbrace{\sum _{j = 0}^{r-1} p^{js(m-\alpha \beta )}}_{\le 2 p^{(r-1)s(m-\alpha \beta )}} \\&\le 2 \alpha p^{s(\alpha \beta -m)}. \end{aligned}$$
This proves the claim. \(\square \)
Recall that the error support \({\mathcal {E}}\) is not necessarily a free module. In the following sequence of statements, we will therefore answer the question how the results of Lemmas 4 and 5 can be used to derive a bound on the product condition failure probability. To achieve this, we study the following free modules related to modules of arbitrary rank profile. Note that this part of the proof differs significantly from LRPC codes over finite fields, where all modules are vector spaces, and thus free.
For a module \({\mathcal {M}}\subseteq {S}\) with \(\mathfrak {m}\)-shaped basis \(\varGamma \), define \(\mathcal {F}(\varGamma ) \subseteq {S}\) to be the free module that is obtained from \({\mathcal {M}}\) as follows: Let us write \(\varGamma =\{g_\mathfrak {m}^ia_{i,\ell _i} \mid 0\le i \le r-1, 1 \le \ell _i \le \phi ^{{\mathcal {M}}}_i \}\), where the elements \(a_{i,\ell _i}\) are all reduced modulo \(\mathfrak {M}^{r-i}\), that is, the Teichmüller representation of \(a_{i,\ell _i}\) is of the form
$$\begin{aligned} a_{i,\ell _i}=\sum _{j=0}^{r-i-1} g_\mathfrak {m}^jz_j, \quad z_j\in T_{sm}. \end{aligned}$$
This is clearly possible since if we add to \(a_{i,\ell _i}\) an element \(y\in \mathfrak {M}^{r-i}=(g_\mathfrak {m}^{r-i})\), then \(g_\mathfrak {m}^i(a_{i,\ell _i}+y)=g_\mathfrak {m}^ia_{i,\ell _i}\). At this point, we define \(F(\varGamma ) := \{a_{i,\ell _i} \mid 0\le i \le r-1, 1 \le \ell _i \le \phi ^{{\mathcal {M}}}_i\}\), and \(\mathcal {F}(\varGamma ):=\langle F(\varGamma )\rangle _{{R}}\). The fact that \(\mathcal {F}(\varGamma )\) is free directly follows from considering its Smith Normal Form, which tells us that in the matrix representation it is spanned by (some of) the rows of an invertible matrix in \({{\,\mathrm{GL}\,}}(m,{R})\). In particular, we have \(\mathrm {frk}_{{R}}(\mathcal {F}(\varGamma ))={{\,\mathrm{rk}\,}}_{{R}}({\mathcal {M}})\).
Let \(p=2\), \(s=1\), \(r=3\) as in Example 2, \(h(z) = z^3+z+1\) and \({\mathcal {M}}\) a module with \(\mathfrak {m}\)-shaped basis \(\varGamma = \{1,2z^2+2z,4z^2+2z+2\}\). Then, \({\mathcal {M}}\) has a diagonal matrix in Smith normal form of
$$\begin{aligned} \begin{bmatrix} 1&{}0&{}0\\ 0&{}2&{}0\\ 0&{}0&{}2 \end{bmatrix} \end{aligned}$$
and \(\phi ^{{\mathcal {M}}}(x) = 2x+1\). Using the notation above, we observe \(a_{0,1}=1\), \(a_{1,1}=z^2+z\), \(a_{1,2} = z^3+2z^2\) and \(\mathcal {F}(\varGamma ) = \langle \{1,z^2+z,z^3+2z^2\} \rangle _{{R}}\).
At this point, for two different \(\mathfrak {m}\)-shaped bases \(\varGamma , \Lambda \) of \({\mathcal {M}}\), one could ask whether \(\mathcal {F}(\varGamma )= \mathcal {F}(\Lambda )\). The answer is affirmative, and it can be deduced from the following result.
Proposition 2
Let \(n_0,\ldots ,n_{r-1}\in \mathbb {N}\) be nonnegative integers, let \(N:=n_0+\cdots +n_{r-1}\) and let \(\varvec{D}\in {R}^{N\times N}\) be a diagonal matrix given by
$$\begin{aligned} \varvec{D}:=\begin{bmatrix} \varvec{I}_{n_0} &{} &{} &{} \\ &{} g_\mathfrak {m}\varvec{I}_{n_1} &{} &{} \\ &{} &{} \ddots &{} \\ &{} &{} &{} g_{\mathfrak {m}}^{r-1}\varvec{I}_{n_{r-1}} \end{bmatrix}. \end{aligned}$$
Moreover, let \(\varvec{T}_1,\varvec{T}_2 \in {R}^{N\times m}\) be such that the rows of \(\varvec{T}_i\) are \({R}\)-linearly independent for each \(i\in \{1,2\}\). Then, the rowspaces of \(\varvec{D}\varvec{T}_1\) and \(\varvec{D}\varvec{T}_2\) coincide if and only if for every \(i,j \in \{0,\ldots ,r-1\}\) there exist \(\varvec{Y}_{i,j}\in {R}^{n_i\times n_j}\) with \(\varvec{Y}_{i,i}\in {{\,\mathrm{GL}\,}}(n_i,{R})\) and \(\varvec{Z}_i\in {R}^{n_i\times m}\) such that
$$\begin{aligned} \varvec{T}_2=\varvec{Y}\varvec{T}_1+\varvec{Z}, \end{aligned}$$
$$\begin{aligned} \varvec{Y}= \begin{bmatrix} \varvec{Y}_{0,0} &{} g_\mathfrak {m}\varvec{Y}_{0,1} &{} g_\mathfrak {m}^2 \varvec{Y}_{0,2} &{} \cdots &{} g_\mathfrak {m}^{r-1} \varvec{Y}_{0,r-1} \\ \varvec{Y}_{1,0} &{} \varvec{Y}_{1,1} &{} g_\mathfrak {m}\varvec{Y}_{1,2} &{} \cdots &{} g_\mathfrak {m}^{r-2} \varvec{Y}_{1,r-1} \\ \varvec{Y}_{2,0} &{} \varvec{Y}_{2,1} &{} \varvec{Y}_{2,2} &{} \cdots &{} g_\mathfrak {m}^{r-3} \varvec{Y}_{2,r-1} \\ \vdots &{} \vdots &{} \vdots &{} &{}\vdots \\ \varvec{Y}_{r-1,0} &{} \varvec{Y}_{r-1,1} &{} \varvec{Y}_{r-1,2} &{} \cdots &{} \varvec{Y}_{r-1,r-1} \\ \end{bmatrix}, \quad \varvec{Z}=\begin{bmatrix} \varvec{0} \\ g_\mathfrak {m}^{r-1}\varvec{Z}_1 \\ g_\mathfrak {m}^{r-2}\varvec{Z}_2 \\ \vdots \\ g_\mathfrak {m}\varvec{Z}_{r-1}\end{bmatrix}. \end{aligned}$$
The rowspaces of \(\varvec{D}\varvec{T}_1\) and \(\varvec{D}\varvec{T}_2\) coincide if and only if there exists a matrix \(\varvec{X}\in {{\,\mathrm{GL}\,}}(N,{R})\) such that \(\varvec{X}\varvec{D}\varvec{T}_1=\varvec{D}\varvec{T}_2\). Divide \(\varvec{T}_\ell \) into r blocks \(\varvec{T}_{\ell ,i}\in {R}^{n_i \times m}\) for \(i\in \{0,\ldots , r-1\}\) and divide \(\varvec{X}\) into \(r\times r\) blocks \(\varvec{X}_{i,j}\in {R}^{n_i\times n_j}\) for \(i,j \in \{0,\ldots ,r-1\}\). Hence, from \(\varvec{X}\varvec{D}\varvec{T}_1=\varvec{D}\varvec{T}_2\) we get
$$\begin{aligned} \sum _{j=0}^{r-1} \varvec{X}_{i,j}g_\mathfrak {m}^j\varvec{T}_{1,j}=g_\mathfrak {m}^i\varvec{T}_{2,i}. \end{aligned}$$
Since the rows of \(\varvec{T}_{1}\) are \({R}\)-linearly independent, (9) implies that \(g_\mathfrak {m}^j\varvec{X}_{i,j} \in g_\mathfrak {m}^i{R}^{n_i\times n_j}\). This shows that
$$\begin{aligned} \varvec{X}= \begin{bmatrix} \varvec{Y}_{0,0} &{} \varvec{Y}_{0,1} &{} \varvec{Y}_{0,2} &{} \cdots &{} \varvec{Y}_{0,r-1} \\ g_\mathfrak {m}\varvec{Y}_{1,0} &{} \varvec{Y}_{1,1} &{} \varvec{Y}_{1,2} &{} \cdots &{} \varvec{Y}_{1,r-1} \\ g_\mathfrak {m}^2 \varvec{Y}_{2,0} &{} g_\mathfrak {m}\varvec{Y}_{2,1} &{} \varvec{Y}_{2,2} &{} \cdots &{} \varvec{Y}_{2,r-1} \\ \vdots &{} \vdots &{} \vdots &{} &{}\vdots \\ g_\mathfrak {m}^{r-1} \varvec{Y}_{r-1,0} &{} g_\mathfrak {m}^{r-2} \varvec{Y}_{r-1,1} &{} g_\mathfrak {m}^{r-3} \varvec{Y}_{r-1,2} &{} \cdots &{} \varvec{Y}_{r-1,r-1} \\ \end{bmatrix}, \end{aligned}$$
for some \(\varvec{Y}_{i,j}\in {R}^{n_i\times n_j}\). Observe now that \(\varvec{X}=\varvec{U}+g_\mathfrak {m}\varvec{L}\), where
$$\begin{aligned} \varvec{U}&= \begin{bmatrix} \varvec{Y}_{0,0} &{} \varvec{Y}_{0,1} &{} \varvec{Y}_{0,2} &{} \cdots &{} \varvec{Y}_{0,r-1} \\ \varvec{0} &{} \varvec{Y}_{1,1} &{} \varvec{Y}_{1,2} &{} \cdots &{} \varvec{Y}_{1,r-1} \\ \varvec{0} &{} \varvec{0} &{} \varvec{Y}_{2,2} &{} \cdots &{} \varvec{Y}_{2,r-1} \\ \vdots &{} \vdots &{} \vdots &{} &{}\vdots \\ \varvec{0} &{} \varvec{0} &{} \varvec{0} &{} \cdots &{} \varvec{Y}_{r-1,r-1} \\ \end{bmatrix}, \\ \varvec{L}&=\begin{bmatrix} \varvec{0} &{} \varvec{0} &{} \varvec{0} &{} \cdots &{} \varvec{0} \\ \varvec{Y}_{1,0} &{} \varvec{0} &{} \varvec{0} &{} \cdots &{} \varvec{0} \\ g_\mathfrak {m}\varvec{Y}_{2,0} &{} \varvec{Y}_{2,1} &{} \varvec{0} &{} \cdots &{} \varvec{0} \\ \vdots &{} \vdots &{} \vdots &{} &{}\vdots \\ g_\mathfrak {m}^{r-2} \varvec{Y}_{r-1,0} &{} g_\mathfrak {m}^{r-3} \varvec{Y}_{r-1,1} &{} g_\mathfrak {m}^{r-4} \varvec{Y}_{r-1,2} &{} \cdots &{} \varvec{0} \\ \end{bmatrix}. \end{aligned}$$
Since \(\varvec{X}\) is invertible and \(g_\mathfrak {m}\varvec{L}\) is nilpotent, then \(\varvec{U}\) is also invertible and hence \(\varvec{Y}_{i,i}\in {{\,\mathrm{GL}\,}}(n_i,{R})\), for every \(i\in \{0,\ldots ,r-1\}\). At this point, observe that \( \varvec{X}\varvec{D}= \varvec{D}\varvec{Y}\), from which we deduce
$$\begin{aligned} \varvec{D}(\varvec{T}_2-\varvec{Y}\varvec{T}_1)=\varvec{0}. \end{aligned}$$
This implies that the ith block of \(\varvec{T}_2-\varvec{Y}\varvec{T}_1 \in {{\,\mathrm{Ann}\,}}(g_\mathfrak {m}^i){R}^{n_i\times m}=g_\mathfrak {m}^{r-i}{R}^{n_i\times m}\) and we conclude. \(\square \)
Let \({\mathcal {M}}\) be an \({R}\)-submodule of \({S}\). Proposition 2 implies that if we restrict to take a \(\mathfrak {m}\)-shaped basis \(\varGamma =\left\{ g_\mathfrak {m}^ia_{i,j_i} \mid 0\le i \le r-1, 1 \le j_i \le \phi ^{{\mathcal {M}}}_i\right\} \) such that the elements \(a_{i,j_i}\) have Teichmüller representation
$$\begin{aligned} a_{i,j_i}=\sum _{\ell =0}^{r-i-1}g_\mathfrak {m}^\ell z_\ell , \quad z_\ell \in T_{tm}, \end{aligned}$$
then the module \(\mathcal {F}(\varGamma )\) is well-defined and does not depend on the choice of \(\varGamma \).
Definition 4
We define \(\mathcal {F}({\mathcal {M}})\) to be the space \(\mathcal {F}(\varGamma )\), where \(\varGamma =\left\{ g_\mathfrak {m}^ia_{i,j_i} \mid 0\le i \le r-1, 1 \le j_i \le \phi ^{{\mathcal {M}}}_i\right\} \) is any \(\mathfrak {m}\)-shaped basis such that the elements \(a_{i,j_i}\) have Teichmüller representation as in (10).
The following two corollaries follow from observations in Proposition 2. We will use them to show that for certain uniformly chosen modules \({\mathcal {M}}\), the corresponding free modules \(\mathcal {F}({\mathcal {M}})\) are uniformly chosen from the set of free modules of rank equal to the rank of \({\mathcal {M}}\). The proofs can be found in Appendix A.
Now, for a given \({R}\)-submodule \({\mathcal {M}}\) of \({S}\) we consider all the free modules that come from a \(\mathfrak {m}\)-shaped basis for \({\mathcal {M}}\). More specifically, we set
$$\begin{aligned} \mathrm {Free}({\mathcal {M}}):=\Big \{ {\mathcal {A}}\mid&{\mathcal {A}} \text{ is } \text{ free } \text{ with } \mathrm {frk}_{{R}}({\mathcal {A}})=\mathrm {rk}_{{R}}({\mathcal {M}}) \text{ and } \exists \{ a_{i,\ell _i}\} \text{ basis } \text{ of } {\mathcal {A}}\\&\text{ such } \text{ that } \{ g_\mathfrak {m}^ia_{i,\ell _i}\} \text{ is } \text{ a } \mathfrak {m}\text{-shaped } \text{ basis } \text{ for } \mathcal M \Big \}. \end{aligned}$$
In fact, even though for the \({R}\)-module \({\mathcal {M}}\) there is a unique free module \(\mathcal {F}({\mathcal {M}})\) as explained in Definition 4, there may be more than one free module \({\mathcal {A}}\) belonging to \(\mathrm {Free}({\mathcal {M}})\). The exact number of such free modules is given in the following corollary.
Corollary 1
Let \({\mathcal {M}}\) be an \({R}\)-submodule of \({S}\) with rank profile \(\phi ^{{\mathcal {M}}}(x)\) and rank \(N := {{\,\mathrm{rk}\,}}_{{R}}({\mathcal {M}})\). Then
$$\begin{aligned} |\mathrm {Free}({\mathcal {M}})|=p^{s(m-N)\sum _{i=1}^{r-1}i \phi ^{{\mathcal {M}}}_i }. \end{aligned}$$
In particular, \(|\mathrm {Free}({\mathcal {M}})|\) only depends on \(\phi ^{{\mathcal {M}}}(x)\).
See Appendix A. \(\square \)
Now we estimate an opposite quantity. For a fixed rank profile \(\phi (x)\) with \(\phi (1)\le m\), and given a free \({R}\)-submodule \({\mathcal {N}}\) of \({S}\) with free rank \(\mathrm {frk}_{{R}}({\mathcal {N}})=\phi (1)\), for how many \({R}\)-submodules \({\mathcal {M}}\) of \({S}\) with rank profile \(\phi ^{{\mathcal {M}}}(x)=\phi (x)\) the module \({\mathcal {N}}\) belongs to \(\mathrm {Free}({\mathcal {M}})\)? Formally, we want to estimate the cardinality of the set
$$\begin{aligned} \mathrm {Mod}(\phi ,{\mathcal {N}}):=\left\{ {\mathcal {M}}\subseteq {S}\mid \phi ^{{\mathcal {M}}}(x)=\phi (x) \text{ and } {\mathcal {N}}\in \mathrm {Free}({\mathcal {M}}) \right\} . \end{aligned}$$
Corollary 2
Let \(\phi (x)=\sum _{i=0}^{r-1}n_ix^i\in \mathbb {N}[x]/(x^r)\) such that \(\phi (1)=N\le m\), and let \({\mathcal {N}}\) be a free \({R}\)-submodule of \({S}\) with free rank \(\mathrm {frk}_{{R}}({\mathcal {N}})=N\). Then
$$\begin{aligned} |\mathrm {Mod}(\phi ,{\mathcal {N}})|=\frac{|{{\,\mathrm{GL}\,}}(N,{R})|}{|G_{\phi }^*|}. \end{aligned}$$
In particular, \(|\mathrm {Mod}(\phi ,{\mathcal {N}})|\) only depends on \(\phi (x)\).
We need the following lemma to derive a sufficient condition for the product of two modules to have a maximal rank profile.
Lemma 6
Let \({\mathcal {M}}\) be an \({R}\)-submodule of \({S}\), and let \({\mathcal {A}}, {\mathcal {B}}\in \mathrm {Free}({\mathcal {M}})\). Moreover, let \({\mathcal {N}}\) be a free \({R}\)-submodule of \({S}\). Then, \({\mathcal {N}}\cdot {\mathcal {A}}\) is free with \(\mathrm {frk}_{{R}}({\mathcal {N}}\cdot {\mathcal {A}})=\mathrm {rk}_{{R}}({\mathcal {M}})\mathrm {frk}_{{R}}({\mathcal {N}})\) if and only if \({\mathcal {N}}\cdot {\mathcal {B}}\) is free with \(\mathrm {frk}_{{R}}({\mathcal {N}}\cdot {\mathcal {B}})=\mathrm {rk}_{{R}}({\mathcal {M}})\mathrm {frk}_{{R}}({\mathcal {N}})\).
Let \(A=\{a_{i,j_i} \mid 0\le i \le r-1, 1 \le j_i \le \phi ^{{\mathcal {M}}}_i\}\) be a basis of \({\mathcal {A}}\) and \(B=\{b_{i,j_i} \mid 0\le i \le r-1, 1 \le j_i \le \phi ^{{\mathcal {M}}}_i\}\) be a basis of \({\mathcal {B}}\) such that \(\varGamma := \{g_\mathfrak {m}^ia_{i,j_i} \mid 0\le i \le r-1, 1 \le j_i \le \phi ^{{\mathcal {M}}}_i\}\) and \(\Lambda := \{g_\mathfrak {m}^ib_{i,j_i} \mid 0\le i \le r-1, 1 \le j_i \le \phi ^{{\mathcal {M}}}_i\}\) are two \(\mathfrak {m}\)-shaped bases for \({\mathcal {M}}\), and let \(\varDelta =\{u_1,\ldots ,u_t\}\) be a basis for \({\mathcal {N}}\). Assume that \(\varDelta \cdot A=\{u_{\ell }a_{i,j_i} \}\) has \(\mathrm {rk}_{{R}}({\mathcal {M}})\mathrm {frk}_{{R}}({\mathcal {N}})\) linearly independent elements over \({R}\). By symmetry, it is enough to show that this implies \({\mathcal {N}}\cdot {\mathcal {B}}\) is free. By Proposition 2, we know that there exists \(x_{i,j_i} \in {S}\) such that \({\mathcal {B}}=\langle \{a_{i,j_i}+g_\mathfrak {m}x_{i,j_i} \mid 0\le i \le r-1, 1 \le j_i \le \phi ^{{\mathcal {M}}}_i \} \rangle _{{R}}\). Hence, we need to prove that the elements \(\{u_{\ell }(a_{i,j_i}+g_mx_{i,j_i})\}\) are linearly independent over \({R}\). Suppose that there exists \(\lambda _{\ell ,i,j_i}\in {R}\) such that
$$\begin{aligned} \sum _{\ell ,i,j_i}\lambda _{\ell ,i,j_i}u_{\ell }(a_{i,j_i}+g_mx_{i,j_i})=0, \end{aligned}$$
hence, rearranging the sum, we get
$$\begin{aligned} \sum _{\ell ,i,j_i}\lambda _{\ell ,i,j_i}u_{\ell }a_{i,j_i}=- g_\mathfrak {m}\sum _{\ell ,i,j_i}\lambda _{\ell ,i,j_i}u_{\ell }x_{i,j_i}. \end{aligned}$$
Multiplying both sides by \(g_\mathfrak {m}^{r-1}\) we obtain
$$\begin{aligned} \sum _{\ell ,i,j_i}\lambda _{\ell ,i,j_i}g_\mathfrak {m}^{r-1}u_{\ell }a_{i,j_i}=0, \end{aligned}$$
and since by hypothesis \(\{u_{\ell }a_{i,j_i}\}\) is a basis, this implies \(\lambda _{\ell ,i,j_i}\in {{\,\mathrm{Ann}\,}}(g_\mathfrak {m}^{r-1})=\mathfrak {m}\) and therefore there exist \(\lambda '_{\ell ,i,j_i}\in {R}\) such that \(\lambda _{\ell ,i,j_i}=g_\mathfrak {m}\lambda '_{\ell ,i,j_i}\). Thus, (11) becomes
$$\begin{aligned} g_\mathfrak {m}\sum _{\ell ,i,j_i}\lambda '_{\ell ,i,j_i}u_{\ell }a_{i,j_i}=- g_\mathfrak {m}^2\sum _{\ell ,i,j_i}\lambda '_{\ell ,i,j_i}u_{\ell }x_{i,j_i}. \end{aligned}$$
Now, multiplying both sides by \(g_\mathfrak {m}^{r-2}\) and with the same reasoning as before, we obtain that all the \(\lambda '_{\ell ,i,j_i}\in \mathfrak {m}\) and the right-hand side of (11) belongs to \(\mathfrak {m}^3\). Iterating this process \(r-2\) times, we finally get that the right-hand side of (11) belongs to \(\mathfrak {m}^r=(0)\), and therefore (11) corresponds to
$$\begin{aligned} \sum _{\ell ,i,j_i}\lambda _{\ell ,i,j_i}u_{\ell }a_{i,j_i}=0, \end{aligned}$$
which, by hypothesis implies \(\lambda _{\ell ,i,j_i}=0\) for every \(\ell ,i,j_i\). This concludes the proof, showing that the elements \(\{u_{\ell }(a_{i,j_i}+g_mx_{i,j_i})\}\) are linearly independent over \({R}\). \(\square \)
With the aid of Lemma 6 we can show that the property for the product of two arbitrary \({R}\)-modules \({\mathcal {M}}_1, {\mathcal {M}}_2\) of having maximal rank profile (according to Definition 1) depends on the free modules \(\mathcal {F}({\mathcal {M}}_1)\) and \(\mathcal {F}({\mathcal {M}}_2)\) and on their product.
Proposition 3
Let \({\mathcal {M}}_1\) and \({\mathcal {M}}_2\) be submodules of \({S}\). If the product of free modules \(\mathcal {F}({\mathcal {M}}_1)\) and \(\mathcal {F}({\mathcal {M}}_2)\) has free rank
$$\begin{aligned} \mathrm {frk}_{{R}}\!\left( \mathcal {F}({\mathcal {M}}_1)\mathcal {F}({\mathcal {M}}_2)\right) = {{\,\mathrm{rk}\,}}_{{R}}(\mathcal {F}({\mathcal {M}}_1)) {{\,\mathrm{rk}\,}}_{{R}}(\mathcal {F}({\mathcal {M}}_2)), \end{aligned}$$
then we have
$$\begin{aligned} \phi ^{{\mathcal {M}}_1 \cdot {\mathcal {M}}_2}(x) = \phi ^{{\mathcal {M}}_1}(x) \phi ^{{\mathcal {M}}_2}(x). \end{aligned}$$
Moreover, if we assume that \(\deg (\phi ^{{\mathcal {M}}_1}(x))+\deg (\phi ^{{\mathcal {M}}_2}(x))<r\), then also the converse is true. In particular, the converse is true if one of the two modules is free.
First, observe that by Lemma 6 we can take any pair of \(\mathfrak {m}\)-shaped bases \(\varGamma _1\) and \(\varGamma _2\) of \({\mathcal {M}}_1\) and \({\mathcal {M}}_2\), respectively. Let us fix
$$\begin{aligned} \varGamma _1:=\left\{ g_\mathfrak {m}^ia_{i,j_i} \mid 0\le i \le r-1, 1 \le j_i \le \phi ^{{\mathcal {M}}_1}_i \right\} \end{aligned}$$
\(\mathfrak {m}\)-shaped basis of \({\mathcal {M}}_1\) and
$$\begin{aligned} \varGamma _2:=\left\{ g_\mathfrak {m}^ib_{i,j_i} \mid 0\le i \le r-1, 1 \le j_i \le \phi ^{{\mathcal {M}}_2}_i \right\} \end{aligned}$$
\(\mathfrak {m}\)-shaped basis of \({\mathcal {M}}_2\). By hypothesis, the set \(F(\varGamma _1)\cdot F(\varGamma _2)\) contains \(\mathrm {rk}_{{R}}({\mathcal {M}}_1)\mathrm {rk}_{{R}}({\mathcal {M}}_2)=t\) linearly independent elements over \({R}\). Let \(\varvec{A}\in {R}^{t\times m}\) be the matrix whose rows are the vectorial representations in \({R}^m\) of the elements in \(F(\varGamma _1)\cdot F(\varGamma _2)\). Clearly, a Smith normal form of \(\varvec{A}\) is \(\varvec{A}=\varvec{D}\varvec{T}\) where \(\varvec{D}= ( \varvec{I}_t \mid \varvec{0})\) and \(\varvec{T}\in {{\,\mathrm{GL}\,}}(m,{R})\) is any invertible matrix whose first \(t\times m\) block is equal to \(\varvec{A}\). By definition \(\varGamma _1\cdot \varGamma _2\) is a generating set for \({\mathcal {M}}_1\cdot {\mathcal {M}}_2\) and hence \({\mathcal {M}}_1\cdot {\mathcal {M}}_2\) is equal to the rowspace of the matrix \(\varvec{A}'\) whose rows are the vectorial representations of the elements in \(\varGamma _1\cdot \varGamma _2\). A row of \(\varvec{A}'\) corresponding to the element \(g_\mathfrak {m}^ia_{i,j_i}g_\mathfrak {m}^sb_{s,\ell _s}\in \varGamma _1\cdot \varGamma _2\) is equal to the row of \(\varvec{A}\) corresponding to the element \(a_{i,j_i}b_{s,\ell _s}\) multiplied by \(g_\mathfrak {m}^{i+s}\). Therefore, \(\varvec{A}'=\varvec{D}'\varvec{A}=\varvec{D}'\varvec{D}\varvec{T}=(\varvec{D}'\mid \varvec{0})\varvec{T}\), where \(\varvec{D}'\) is a \(t\times t\) diagonal matrix whose diagonal elements are all of the form \(g_{\mathfrak {m}}^{i+s}\) for suitable i, s. This shows that \(\varvec{A}'=(\varvec{D}'\mid \varvec{0})\varvec{T}\) is a Smith normal form of \(\varvec{A}'\) and the rank profile \(\phi ^{{\mathcal {M}}_1\cdot {\mathcal {M}}_2}(x)\) corresponds to \(\phi ^{{\mathcal {M}}_1}(x)\phi ^{{\mathcal {M}}_2}(x)\).
On the other hand, if \(\phi ^{{\mathcal {M}}_1\cdot {\mathcal {M}}_2}(x)=\phi ^{{\mathcal {M}}_1}(x)\phi ^{{\mathcal {M}}_2}(x)\), then the set \(\varGamma _1\cdot \varGamma _2\) is a \(\mathfrak {m}\)-shaped basis for \({\mathcal {M}}_1\cdot {\mathcal {M}}_2\). Moreover, since \(\deg (\phi ^{{\mathcal {M}}_1}(x))+\deg (\phi ^{{\mathcal {M}}_2}(x))<r\), we have that \(F(\varGamma _1)\cdot F(\varGamma _2)=F(\varGamma _1\cdot \varGamma _2)\), which is a set of \(\mathrm {rk}_{{R}}({\mathcal {M}}_1)\mathrm {rk}_{{R}}({\mathcal {M}}_2)\) nonzero elements. Let \(\varvec{S}\varvec{D}\varvec{T}\) be a Smith normal form for \({\mathcal {M}}_1\cdot {\mathcal {M}}_2\), then the elements of \(F(\varGamma _1\cdot \varGamma _2)\) correspond to the first \(\mathrm {rk}_{{R}}({\mathcal {M}}_1)\mathrm {rk}_{{R}}({\mathcal {M}}_2)\) rows of matrix \(\varvec{T}\), and hence they are \({R}\)-linearly independent. Thus, \(\mathcal {F}({\mathcal {M}}_1)\cdot \mathcal {F}({\mathcal {M}}_2)\) is free with free rank equal to \(\mathrm {rk}_{{R}}({\mathcal {M}}_1)\mathrm {rk}_{{R}}({\mathcal {M}}_2)\). \(\square \)
Observe that the second part of Proposition 3 does not hold anymore if we remove the hypothesis that \(\deg (\phi ^{{\mathcal {M}}_1}(x))+\deg (\phi ^{{\mathcal {M}}_2}(x))<r\).
Let \({\mathcal {A}}'\), \({\mathcal {A}}={\mathcal {A}}'+\langle a\rangle \) and \({\mathcal {B}}\) be three free modules of free rank \(\alpha -1\), \(\alpha \) and \(\beta \) respectively, such that \({\mathcal {A}}'\cdot {\mathcal {B}}\) is free of rank \((\alpha -1)\beta \), but \({\mathcal {A}}\cdot {\mathcal {B}}\) is not free of rank \(\alpha \beta \). Take a basis for \({\mathcal {A}}\) of the form \(\{a_1,\ldots , a_{\alpha -1},a\}\) such that \(\{a_1,\ldots , a_{\alpha -1}\}\) is a basis of \({\mathcal {A}}'\), and fix also a basis \(\{b_1,\ldots ,b_{\beta }\}\) for \({\mathcal {B}}\). Then, define \({\mathcal {M}}_1\) to be the \({R}\)-module whose \(\mathfrak {m}\)-shaped basis is \(\{a_1,\ldots ,a_{\alpha -1},g_\mathfrak {m}^{r-1}a\}\), and define \({\mathcal {M}}_2=\mathfrak {m}{\mathcal {B}}\). Consider the module \({\mathcal {M}}_1\cdot {\mathcal {M}}_2\). It is easy to see that \({\mathcal {M}}_1\cdot {\mathcal {M}}_2=\mathfrak {m}({\mathcal {A}}'\cdot {\mathcal {B}})={\mathcal {A}}'\cdot {\mathcal {M}}_2\). Observe that \({\mathcal {B}}\in \mathrm {Free}({\mathcal {M}}_2)\) and by Proposition 3 and Lemma 6, we have that \(\phi ^{{\mathcal {M}}_1 \cdot {\mathcal {M}}_2}(x) = \phi ^{{\mathcal {M}}_1}(x) \phi ^{{\mathcal {M}}_2}(x)\). However, by construction we have \({\mathcal {A}}\in \mathrm {Free}({\mathcal {M}}_1)\), \({\mathcal {B}}\in \mathrm {Free}({\mathcal {M}}_2)\) and \({\mathcal {A}}\cdot {\mathcal {B}}\) is not free of rank \(\alpha \beta \). Therefore, by Lemma 6 this also holds for \(\mathcal {F}({\mathcal {M}}_1)\cdot \mathcal {F}({\mathcal {M}}_2)\).
We are now ready to put the various statements of this subsection together and prove an upper bound on the failure probability of the product condition—the main statement of this subsection.
Theorem 1
Let \({\mathcal {B}}\) be a fixed \({R}\)-submodule of \({S}\) with rank profile \(\phi ^{{\mathcal {B}}}(x)\) and let \(\lambda :=\phi ^{{\mathcal {B}}}(1)=\mathrm {rk}_{{R}}({\mathcal {B}})\). Let t be a positive integer with \(t \lambda <m\) and \(\phi (x) \in \mathbb {Z}[x]/(x^r)\) with nonnegative coefficients such that \(\phi (1)=t\). Let \({\mathcal {A}}\) be an \({R}\)-submodule of \({S}\) selected uniformly at random among all the modules with \(\phi ^{\mathcal {A}}= \phi \). Then,
$$\begin{aligned} \Pr \left( \phi ^{{\mathcal {A}}\cdot {\mathcal {B}}} \ne \phi ^{{\mathcal {A}}} \phi ^{{\mathcal {B}}} \right) \le \left( 1-p^{-s\lambda }\right) \sum _{i=1}^{t} \sum _{j = 0}^{r-1} p^{s(r-j)(i \lambda -m)} \le 2 t p^{s(t \lambda -m)} \end{aligned}$$
Let us denote by \(\mathrm {Mod}(\phi )\) the set of all \({R}\)-submodules of \({S}\) whose rank profile equals \(\phi \). Choose uniformly at random a module \({\mathcal {A}}\) in \(\mathrm {Mod}(\phi )\), and then select \(\mathcal {X}\) uniformly at random from \(\mathrm {Free}({\mathcal {A}})\). Then, this results in a uniform distribution on the set of all free modules with free rank equal to \(\phi (1)=t\), that is the set \(\mathrm {Mod}(t)\), where t denotes the constant polynomial in \(\mathbb {Z}[x]/(x^r)\) equal to t. Indeed, for an arbitrary free module \({\mathcal {N}}\) with \(\mathrm {frk}_{{R}}({\mathcal {N}})=t\),
$$\begin{aligned} \Pr (\mathcal {X}={\mathcal {N}})&=\Pr (\mathcal {X}={\mathcal {N}}\mid {\mathcal {A}}\in \mathrm {Mod}({\mathcal {N}},\phi ))\Pr ({\mathcal {A}}\in \mathrm {Mod}({\mathcal {N}},\phi ))\\&=\frac{1}{|\mathrm {Free}({\mathcal {A}})|}\frac{|\mathrm {Mod}({\mathcal {N}},\phi )|}{|\mathrm {Mod}(\phi )|},\end{aligned}$$
which by Corollaries 1 and 2 is a constant number that does not depend on \({\mathcal {N}}\).
Now, suppose that \(\phi ^{{\mathcal {A}}\cdot {\mathcal {B}}} \ne \phi ^{{\mathcal {A}}} \phi ^{{\mathcal {B}}}\). By Proposition 3, this implies \({\mathcal {N}}\cdot {\mathcal {N}}'\) is not a free module of rank \(t\lambda \), where \({\mathcal {N}}\) is any free module in \(\mathrm {Free}({\mathcal {A}})\) and \({\mathcal {N}}'\) is any free module in \(\mathrm {Free}({\mathcal {B}})\). Hence,
$$\begin{aligned} \Pr \left( \phi ^{{\mathcal {A}}\cdot {\mathcal {B}}} \ne \phi ^{{\mathcal {A}}} \phi ^{{\mathcal {B}}} \right) \le 1- \Pr \big ({\mathcal {N}}\cdot {\mathcal {N}}'\text { is a free module of free rank }t\lambda \big ), \end{aligned}$$
and we conclude using Lemma 5. \(\square \)
As a consequence, we can finally derive the desired upper bound on the product condition failure probability.
Theorem 2
Let \(\mathcal {F}\) be defined as in Definition 2. Let t be a positive integer with \(t \lambda <m\) and \(\phi (x) \in \mathbb {Z}[x]/(x^r)\) with nonnegative coefficients and such that \(\phi (1)=t\) (recall that this means that an error of rank profile \(\phi \) has rank t). Let \(\varvec{e}\) be an error word, chosen uniformly at random among all error words with support \({\mathcal {E}}\) of rank profile \(\phi ^{\mathcal {E}}= \phi \). Then, the probability that the product condition is not fulfilled satisfies
$$\begin{aligned}&\Pr \left( \phi ^{{\mathcal {E}}\cdot \mathcal {F}} \ne \phi ^{{\mathcal {E}}} \phi ^{\mathcal {F}} \right) \le \left( 1-p^{-s\lambda }\right) \sum _{i=1}^{t} \sum _{j = 0}^{r-1} p^{s(r-j)(i \lambda -m)} \le 2 t p^{s(t \lambda -m)} \end{aligned}$$
Let us denote by \(\mathrm {Mod}(\phi )\) the set of all \({R}\)-submodules of \({S}\) whose rank profile equals \(\phi \). By Lemma 3, choosing uniformly at random \(\varvec{e}\) among all the words whose support \({\mathcal {E}}\) has rank profile \(\phi \) results in a uniform distribution on \(\mathrm {Mod}(\phi )\). At this point, the claim follows from Theorem 1. \(\square \)
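To make the bound concrete, the following minimal Python sketch (not part of the original text) evaluates the exact and the simplified bound of Theorem 2. The function and parameter names simply mirror the notation above; the example parameters are only illustrative and are chosen in the spirit of the simulation section below.

```python
# Minimal sketch (illustrative, not from the paper): evaluate the bound of Theorem 2
# on the probability that the product condition fails, plus its simplified form.

def product_condition_bound(p, r, s, m, lam, t):
    """(1 - p^{-s*lam}) * sum_{i=1}^{t} sum_{j=0}^{r-1} p^{s(r-j)(i*lam - m)} and 2t p^{s(t*lam - m)}."""
    exact = (1 - p ** (-s * lam)) * sum(
        p ** (s * (r - j) * (i * lam - m))
        for i in range(1, t + 1)
        for j in range(r)
    )
    simplified = 2 * t * p ** (s * (t * lam - m))
    return exact, simplified

if __name__ == "__main__":
    # illustrative parameters (lambda = 2, m = 21, p = 2, r = 2, s = 1)
    for t in range(1, 8):
        exact, simple = product_condition_bound(p=2, r=2, s=1, m=21, lam=2, t=t)
        print(f"t = {t}: exact bound = {exact:.3e}, simplified = {simple:.3e}")
```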
Failure of syndrome condition
Here we derive a bound on the probability that the syndrome condition is not fulfilled, given that the product condition is satisfied. As in the case of finite fields, the bound is based on the relative number of matrices of a given dimension that have full (free) rank. For completeness, we give a closed-form expression for this number in the following lemma. However, it can also be derived from the number of submodules of a given rank profile, which was given in [13, Theorem 2.4]. Note that the latter result holds also for finite chain rings.
Lemma 7
Let a, b be positive integers with \(a < b\). Then, the number of \(a \times b\) matrices over \({R}={{\,\mathrm{GR}\,}}(p^r,s)\) of (full) free rank a is \(\mathrm {NM}(a,b;{R}) = p^{a b r s} \prod _{a'=0}^{a-1} \left( 1-p^{(a'-b)s} \right) \).
First note that \(\mathrm {NM}(1,b;{R}) = p^{b r s}-p^{b (r-1) s} = p^{brs}\big (1-p^{-bs}\big )\) since a \(1 \times b\) matrix over \({R}\) is of free rank 1 if and only if at least one entry is a unit. Hence we subtract from the number of all matrices (\(|{R}|^b = p^{b r s}\)) the number of vectors that consist only of non-units \((|{R}|-|{R}^*|)^b = p^{b(r-1)s}\) (cf. (1)).
Now let \(a' \le a\) and let \(\varvec{A}\in {R}^{a' \times b}\) be a matrix of free rank \(a'\). We define \(\mathcal {V}(\varvec{A}) := \big \{ \varvec{v}\in {R}^{1 \times b} \! : \! \mathrm {frk}\big (\begin{bmatrix} \varvec{A}^\top \varvec{v}^\top \end{bmatrix}^\top \big ) = a' \big \}\). We study the cardinality of \(\mathcal {V}(\varvec{A})\). We have \(\mathrm {frk}\big (\begin{bmatrix} \varvec{A}^\top \varvec{v}^\top \end{bmatrix}^\top \big ) = a'\) if and only if the rows of the matrix \(\hat{\varvec{A}} := \begin{bmatrix} \varvec{A}^\top \varvec{v}^\top \end{bmatrix}^\top \) are linearly dependent. Due to \(\mathrm {frk}(\varvec{A}) = a'\) and the existence of a Smith normal form of \(\varvec{A}\), there are invertible matrices \(\varvec{S}\) and \(\varvec{T}\) such that \(\varvec{S}\varvec{A}\varvec{T}= \varvec{D}\), where \(\varvec{D}\) is a diagonal matrix with ones on its diagonal.
Since \(\varvec{S}\) and \(\varvec{T}\) are invertible, we can count the number of vectors \(\varvec{v}'\) for which the rows of the matrix \(\big [ \varvec{D}^\top {\varvec{v}'}^\top \big ]^\top \) are linearly dependent, instead of working with the matrix \(\hat{\varvec{A}}\) directly (note that \(\varvec{v}= \varvec{v}' \varvec{T}^{-1}\) gives a corresponding linearly dependent row in \(\hat{\varvec{A}}\)).
Since \(\varvec{D}\) is in diagonal form with only ones on its diagonal, the linearly dependent vectors are exactly of the form
$$\begin{aligned} \varvec{v}' = [v'_1,\ldots ,v'_{a'},v'_{a'+1},\ldots , v'_b], \end{aligned}$$
where \(v'_i \in {R}\) for \(i=1,\ldots ,a'\) and \(v'_i \in \mathfrak {m}\) for \(i=a'+1,\ldots ,b\). Hence, we have
$$\begin{aligned} |\mathcal {V}(\varvec{A})| = p^{a'rs} p^{(b-a')(r-1)s} = p^{brs} p^{(a'-b)s}. \end{aligned}$$
Note that this value is independent of \(\varvec{A}\).
By the discussion on \(|\mathcal {V}(\varvec{A})|\), we get the following recursive formula:
$$\begin{aligned} \mathrm {NM}(a'\!+ \!1,b;{R}) \! = \! {\left\{ \begin{array}{ll} \mathrm {NM}(a',b;{R}) p^{brs}\! \left( 1- p^{(a'-b)s} \right) , \! \! &{}a'\ge 1, \\ p^{brs}\!\left( 1-p^{-bs}\right) , &{}a'=0, \end{array}\right. } \end{aligned}$$
which resolves into \(\mathrm {NM}(a,b;{R}) = p^{a b r s} \prod _{a'=0}^{a-1} \left( 1-p^{(a'-b)s}\right) \). \(\square \)
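The count in Lemma 7 can be checked by brute force for very small parameters. The following Python sketch (not from the paper) does this for \(s=1\), i.e. \({R}= \mathbb {Z}_{p^r}\); it assumes the standard fact that the free rank of the row space of a matrix over \(\mathbb {Z}_{p^r}\) equals the rank of its reduction modulo p over the residue field \(\mathbb {F}_p\).

```python
# Brute-force check (illustrative, not from the paper) of the count NM(a, b; R) in
# Lemma 7 for R = Z_{p^r} (the case s = 1).  Assumption used below: the free rank of
# the row space of a matrix over Z_{p^r} equals the rank of its reduction mod p over F_p.

from itertools import product

def rank_mod_p(mat, p):
    """Rank of an integer matrix over F_p via Gaussian elimination."""
    m = [[x % p for x in row] for row in mat]
    rows, cols = len(m), len(m[0])
    rank, pivot_row = 0, 0
    for c in range(cols):
        piv = next((i for i in range(pivot_row, rows) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[pivot_row], m[piv] = m[piv], m[pivot_row]
        inv = pow(m[pivot_row][c], -1, p)
        m[pivot_row] = [(x * inv) % p for x in m[pivot_row]]
        for i in range(rows):
            if i != pivot_row and m[i][c] != 0:
                f = m[i][c]
                m[i] = [(x - f * y) % p for x, y in zip(m[i], m[pivot_row])]
        pivot_row += 1
        rank += 1
    return rank

def count_full_free_rank(a, b, p, r):
    """Number of a x b matrices over Z_{p^r} whose row space has free rank a (brute force)."""
    q = p ** r
    return sum(
        1
        for entries in product(range(q), repeat=a * b)
        if rank_mod_p([list(entries[i * b:(i + 1) * b]) for i in range(a)], p) == a
    )

def nm_formula(a, b, p, r, s=1):
    res = p ** (a * b * r * s)
    for ap in range(a):
        res *= 1 - p ** ((ap - b) * s)
    return res

if __name__ == "__main__":
    for (a, b, p, r) in [(1, 2, 2, 2), (2, 3, 2, 2), (1, 3, 3, 2)]:
        brute = count_full_free_rank(a, b, p, r)
        print(f"a={a}, b={b}, p={p}, r={r}: brute force {brute}, formula {nm_formula(a, b, p, r):.1f}")
```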
At this point we can prove the bound on the failure probability of the syndrome condition similar to the one in [10], using Lemma 7. The additional difficulty over rings is to deal with non-unique decompositions of module elements in \(\mathfrak {m}\)-shaped bases and the derivation of a simplified bound on the relative number of non-full-rank matrices. Furthermore, the start of the proof corrects a minor technical impreciseness of Gaborit et al.'s proof.
Theorem 3
Let \(\mathcal {F}\) be defined as in Definition 2, t be a positive integer with \(t \lambda < \min \{m,n-k+1\}\), and \({\mathcal {E}}\) be an error space of rank t. Suppose that the product condition is fulfilled for \({\mathcal {E}}\) and \(\mathcal {F}\). Suppose further that \(\varvec{H}\) has the maximal-row-span and unity properties (cf. Definition 3).
Let \(\varvec{e}\) be an error word, chosen uniformly at random among all error words with support \({\mathcal {E}}\). Then, the probability that the syndrome condition is not fulfilled for \(\varvec{e}\) is
$$\begin{aligned} \Pr \left( {\mathcal {S}}\ne {\mathcal {E}}\cdot \mathcal {F}\mid \phi ^{{\mathcal {E}}\cdot \mathcal {F}} = \phi ^{{\mathcal {E}}} \phi ^{\mathcal {F}} \right) \le 1 - \prod _{i=0}^{\lambda t -1} \left( 1-p^{[i-(n-k)]s}\right) < 4 p^{-s(n-k+1-\lambda t)}. \end{aligned}$$
Let \(\varvec{e}' \in {S}^n\) be chosen such that every entry \(e_i'\) is chosen uniformly at random from the error support \({\mathcal {E}}\) (see Footnote 1). Denote by \({\mathcal {S}}_{\varvec{e}}\) and \({\mathcal {S}}_{\varvec{e}'}\) the syndrome spaces obtained by computing the syndromes of \(\varvec{e}\) and \(\varvec{e}'\), respectively. Then, we have
$$\begin{aligned} \Pr \big ({\mathcal {S}}_{\varvec{e}'} = {\mathcal {E}}\cdot \mathcal {F}\big ) \le \Pr \big ( {\mathcal {S}}_{\varvec{e}'} = {\mathcal {E}}\cdot \mathcal {F}\mid \mathrm {supp}_\mathrm {R}(\varvec{e}') = {\mathcal {E}}\big ) = \Pr \big ( {\mathcal {S}}_{\varvec{e}} = {\mathcal {E}}\cdot \mathcal {F}\big ), \end{aligned}$$
where the latter equality follows from the fact that the random experiments of choosing \(\varvec{e}'\) and conditioning on the property that \(\varvec{e}'\) has support \({\mathcal {E}}\) is the same as directly drawing \(\varvec{e}\) uniformly at random from the set of errors with support \({\mathcal {E}}\). Hence, we obtain a lower bound on \(\Pr \big ( {\mathcal {S}}_{\varvec{e}} = {\mathcal {E}}\cdot \mathcal {F}\big )\) by studying \(\Pr \big ({\mathcal {S}}_{\varvec{e}'} = {\mathcal {E}}\cdot \mathcal {F}\big )\), which we do in the following.
Let \(f_1,\ldots ,f_\lambda \) and \(\varepsilon _1,\ldots ,\varepsilon _t\) be \(\mathfrak {m}\)-shaped bases of \(\mathcal {F}\) and \({\mathcal {E}}\), respectively, such that \(f_j \varepsilon _i\) for \(i=1,\ldots ,t\), \(j=1,\ldots ,\lambda \) form an \(\mathfrak {m}\)-shaped basis of \({\mathcal {E}}\cdot \mathcal {F}\). Note that the existence of such bases is guaranteed by the assumed product condition \(\phi ^{{\mathcal {E}}\cdot \mathcal {F}} = \phi ^{{\mathcal {E}}} \phi ^{\mathcal {F}}\).
Since \(e'_i\) is an element drawn uniformly at random from \({\mathcal {E}}\), we can write it as \(e'_i = \sum _{\mu =1}^{t} e_{i,\mu }' \varepsilon _\mu \), where the \(e_{i,\mu }'\) are uniformly distributed on \({R}\). We can assume uniformity of the \(e_{i,\mu }'\) since for a given \(e'_i\), the coefficient \(e_{i,\mu }'\) is unique modulo \(\mathfrak {m}^{r-v(\varepsilon _\mu )}\). In particular, there are equally many decompositions \([e_{i,1}',\dots ,e_{i,t}']\) for each \(e'_i\), and the sets of decompositions belonging to distinct elements \(e'_i\) are disjoint.
Due to the unity property of the parity-check matrix \(\varvec{H}\), we can write any entry \(H_{i,j}\) of \(\varvec{H}\) as \(H_{i,j} = \sum _{\eta =1}^{\lambda } h_{i,j,\eta } f_\eta \), where the \(h_{i,j,\eta }\) are units in \({R}\) or zero. Furthermore, since each row of \(\varvec{H}\) spans the entire module \(\mathcal {F}\) (maximal-row-span property), for each i and each \(\eta \), there is at least one \(j^*\) with \(h_{i,j^*,\eta } \ne 0\). By the previous assumption, this means that \(h_{i,j^*,\eta } \in {R}^*\).
Then, each syndrome coefficient can be written as
$$\begin{aligned} s_i = \sum \nolimits _{j=1}^{n} e'_j H_{i,j} = \sum \nolimits _{\mu =1}^{t} \sum \nolimits _{\eta =1}^{\lambda } \underbrace{\left( \sum \nolimits _{j=1}^{n} e_{j,\mu }' h_{i,j,\eta }\right) }_{=: s_{\mu ,\eta ,i}} \varepsilon _\mu f_\eta . \end{aligned}$$
By the above discussion, for each i and \(\eta \), there is a \(j^*\) with \(h_{i,j^*,\eta } \in {R}^*\). Hence, \(s_{\mu ,\eta ,i}\) is a sum containing at least one summand of the form \(e_{j^*,\mu }' h_{i,j^*,\eta }\), i.e., a uniformly distributed element of \({R}\) multiplied by a unit, which is again uniformly distributed on \({R}\). Since this summand is independent of the remaining summands, \(s_{\mu ,\eta ,i}\) itself is uniformly distributed on \({R}\).
All together, we can write
$$\begin{aligned} \begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_{n-k} \end{bmatrix} = \underbrace{ \begin{bmatrix} s_{1,1,1} &{} s_{1,2,1} &{} \cdots &{} s_{t,\lambda ,1} \\ s_{1,1,2} &{} s_{1,2,2} &{} \cdots &{} s_{t,\lambda ,2} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ s_{1,1,n-k} &{} s_{1,2,n-k} &{} \cdots &{} s_{t,\lambda ,n-k} \\ \end{bmatrix}}_{=: \, \varvec{S}} \cdot \begin{bmatrix} \varepsilon _1 f_1 \\ \varepsilon _1 f_2 \\ \vdots \\ \varepsilon _t f_\lambda \\ \end{bmatrix}, \end{aligned}$$
where, by assumption, the \(\varepsilon _i f_j\) are a generating set of \({\mathcal {E}}\cdot \mathcal {F}\) and the matrix \(\varvec{S}\) is uniformly distributed on \({R}^{(n-k)\times t \lambda }\). If \(\varvec{S}\) has full free rank \(t \lambda \), then we have \({\mathcal {S}}_{\varvec{e}'} = {\mathcal {E}}\cdot \mathcal {F}\). By Lemma 7 (applied to \(\varvec{S}^\top \) with \(a = t\lambda \) and \(b = n-k\)), the probability of drawing such a matrix of full free rank is
$$\begin{aligned} \frac{\mathrm {NM}(a,b;{R})}{|{R}|^{ab}} = \prod _{a'=0}^{a-1} \left( 1-p^{(a'-b)s} \right) . \end{aligned}$$
This proves the bound
$$\begin{aligned} \Pr \left( {\mathcal {S}}\ne {\mathcal {E}}\cdot \mathcal {F}\mid \phi ^{{\mathcal {E}}\cdot \mathcal {F}} = \phi ^{{\mathcal {E}}} \phi ^{\mathcal {F}} \right) \le 1 - \prod _{i=0}^{\lambda t -1} \left( 1-p^{[i-(n-k)]s}\right) . \end{aligned}$$
We simplify the bound further using the observation that the product is a q-Pochhammer symbol. Hence, we have
$$\begin{aligned} 1 - \prod _{i=0}^{\lambda t -1} \left( 1-p^{[i-(n-k)]s}\right) = \sum _{j=1}^{\lambda t} \underbrace{(-1)^{j+1} p^{-j(n-k)s} \left[ \begin{matrix} \lambda t \\ j \end{matrix} \right] _{p^s} p^{s\left( {\begin{array}{c}j\\ 2\end{array}}\right) }}_{=: \, a_j}, \end{aligned}$$
where \(\left[ \begin{matrix} a \\ b \end{matrix} \right] _{q} := \prod _{j=1}^{b} \tfrac{q^{a+1-j}-1}{q^{j}-1}\) is the Gaussian binomial coefficient. Using \(q^{b(a-b)} \le \left[ \begin{matrix} a \\ b \end{matrix} \right] _{q} < 4 q^{b(a-b)}\), we obtain
$$\begin{aligned} \left| \frac{a_{j+1}}{a_j}\right|&= p^{-(n-k-j)s} \frac{\left[ \begin{matrix} \lambda t \\ j+1 \end{matrix} \right] _{p^s}}{\left[ \begin{matrix} \lambda t \\ j \end{matrix} \right] _{p^s}}< p^{-(n-k-j)s} \frac{4 p^{s(j+1)(\lambda t-j-1)}}{p^{sj(\lambda t-j)}} \\&= 4 p^{s[\lambda t - j - (n-k+1)]} < 1 \end{aligned}$$
for \(\lambda t < n-k+1\), i.e., \(|a_j|\) is strictly monotonically decreasing. Since the summands \(a_j\) have alternating sign, we can thus bound \(\sum _{j=1}^{\lambda t}a_j \le a_1\), which gives
$$\begin{aligned} 1 - \prod _{i=0}^{\lambda t -1} \left( 1-p^{[i-(n-k)]s}\right) \le a_1 < 4 p^{-s(n-k+1-\lambda t)} \end{aligned}$$
\(\square \)
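As a numerical illustration (not part of the original text), the following Python sketch compares the exact bound of Theorem 3 with its simplified form for illustrative parameters chosen in the spirit of the simulation section below.

```python
# Minimal sketch (illustrative, not from the paper): exact vs. simplified bound of
# Theorem 3 on the probability that the syndrome condition fails.

def syndrome_condition_bound(p, s, n, k, lam, t):
    exact = 1.0
    for i in range(lam * t):
        exact *= 1 - p ** ((i - (n - k)) * s)
    exact = 1 - exact
    simplified = 4 * p ** (-s * (n - k + 1 - lam * t))
    return exact, simplified

if __name__ == "__main__":
    # illustrative parameters (lam * t < n - k + 1 is required by the theorem)
    for t in range(1, 7):
        exact, simple = syndrome_condition_bound(p=2, s=1, n=20, k=8, lam=2, t=t)
        print(f"t = {t}: exact bound = {exact:.3e}, simplified = {simple:.3e}")
```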
In contrast to Theorem 3, the maximal-row-span property was not assumed in [1, Proposition 4.3], which is the analogous statement for finite fields. However, also the statement in [1, Proposition 4.3] is only correct if we assume additional structure on the parity-check matrix (e.g., that each row spans the entire space \(\mathcal {F}\), or a weaker condition), due to the following counterexample: Consider a parity-check matrix \(\varvec{H}\) that contains only non-zero entries on its diagonal and in the last row, where the diagonal entries are all \(f_1\) and the last row contains the remaining \(f_2,\ldots ,f_\lambda \).
Such an \(\varvec{H}\) is a valid parity-check matrix according to [1, Definition 4.1] since the entries of \(\varvec{H}\) span the entire space \(\mathcal {F}\). However, due to the structure of the matrix, the first \(n-k-1\) syndromes are all in \(f_1 {\mathcal {E}}\), hence \(\mathrm {rk}_{{R}}({\mathcal {S}}) \le t+1 < t \lambda \) for any error of support \({\mathcal {E}}\).
Failure of intersection condition
We use a similar proof strategy as in [1] to derive an upper bound on the failure probability of the intersection condition. The following lemma is the Galois-ring analog of [1, Lemma 3.4], where the difference is that we need to take care of the fact that the representation of module elements in an \(\mathfrak {m}\)-shaped basis is not necessarily unique in a Galois ring.
Lemma 8
Let \({\mathcal {A}}\subseteq {S}\) be an \({R}\)-module of rank \(\alpha \) and \({\mathcal {B}}\subseteq {S}\) be a free \({R}\)-module of free rank \(\beta \). Assume that \(\phi ^{{\mathcal {A}}\cdot {\mathcal {B}}^2} = \phi ^{{\mathcal {A}}} \phi ^{{\mathcal {B}}^2}\) and that there is an element \(e \in {\mathcal {A}}\cdot {\mathcal {B}}\setminus {\mathcal {A}}\) with \(e {\mathcal {B}}\subseteq {\mathcal {A}}\cdot {\mathcal {B}}\). Then, there is a \(y \in {\mathcal {B}}\setminus {R}\) such that \(y {\mathcal {B}}\subseteq {\mathcal {B}}\).
Let \(a_1,\ldots ,a_\alpha \) be an \(\mathfrak {m}\)-shaped basis of \({\mathcal {A}}\) and \(b_1,\ldots ,b_\beta \) be a basis of \({\mathcal {B}}\). Due to \(e \in {\mathcal {A}}\cdot {\mathcal {B}}\), there are coefficients \(e_{i,j} \in {R}\) such that
$$\begin{aligned} \textstyle e = \sum _{i=1}^{\alpha } \underbrace{\left( \textstyle \sum _{j=1}^{\beta } e_{i,j} b_j \right) }_{=: \, b'_i} a_i. \end{aligned}$$
Due to the fact that \(e \notin {\mathcal {A}}\), there is an \(\eta \in \{1,\ldots ,\alpha \}\) with \(b_\eta ' a_\eta \notin {\mathcal {A}}\). In particular, \(y := g_\mathfrak {m}^{v(a_\eta )} b_\eta ' \in {\mathcal {B}}\setminus {R}\). We show that y fulfills \(y {\mathcal {B}}\subseteq {\mathcal {B}}\).
Let now \(b \in {\mathcal {B}}\). Since by assumption \(eb \in {\mathcal {A}}\cdot {\mathcal {B}}\), there are \(c_{i,j} \in {R}\) with \(e b = \sum _{i=1}^{\alpha } \left( \sum _{j=1}^{\beta } c_{i,j} b_j \right) a_i\). By (12), we can also write \(e b = \sum _{i=1}^{\alpha } \left( \sum _{j=1}^{\beta } e_{i,j} b_j b \right) a_i = \sum _{i=1}^{\alpha } b_i' b a_i\). Due to the maximality of the rank profile of \({\mathcal {A}}\cdot {\mathcal {B}}^2\), i.e., \(\phi ^{{\mathcal {A}}\cdot {\mathcal {B}}^2} = \phi ^{{\mathcal {A}}} \phi ^{{\mathcal {B}}^2}\), we have that the coefficients \(c_{i} \in {\mathcal {B}}^2\) of any representation \(c = \sum _i c_i a_i\) of an element \(c \in {\mathcal {A}}\cdot {\mathcal {B}}^2\) are unique modulo \(\mathfrak {M}^{r-v(a_i)}\). Hence, for every \(i=1,\ldots ,\alpha \), there exists \(\chi _i \in {\mathcal {B}}^2\) such that
$$\begin{aligned} b_i' b = \sum _{j=1}^{\beta } c_{i,j} b_j + g_\mathfrak {m}^{r-v(a_i)} \chi _i. \end{aligned}$$
Thus, with \(\sum _{j=1}^{\beta } c_{\eta ,j} b_j \in {\mathcal {B}}\), \(g_\mathfrak {m}^{v(a_\eta )} \in {R}\), and \(g_\mathfrak {m}^{r}=0\), we get
$$\begin{aligned} y b = g_\mathfrak {m}^{v(a_\eta )} b_\eta ' b = g_\mathfrak {m}^{v(a_\eta )}\sum _{j=1}^{\beta } c_{\eta ,j} b_j + g_\mathfrak {m}^{r} \chi _\eta \in {\mathcal {B}}. \end{aligned}$$
Since this holds for any b, we have \(y {\mathcal {B}}\subseteq {\mathcal {B}}\), which proves the claim. \(\square \)
We get the following bound using Lemma 8, Theorem 1, and a similar argument as in [10].
Theorem 4
Let \(\mathcal {F}\) be defined as in Definition 2 such that it has the base-ring property (i.e., \(1 \in \mathcal {F})\). Suppose that no intermediate ring \(R'\) with \({R}\subsetneq R' \subseteq {S}\) is contained in \(\mathcal {F}\) (this holds, e.g., if \(\lambda \) is smaller than the smallest prime divisor of m, or for special \(\mathcal {F})\).
Let t be a positive integer with \(t \tfrac{\lambda (\lambda +1)}{2} < m\) and \(t \lambda < n-k+1\), and let \(\phi (x) \in \mathbb {Z}[x]/(x^r)\) with nonnegative coefficients such that \(\phi (1)=t\). Choose \(\varvec{e}\in {S}^n\) uniformly at random from the set of vectors whose support has rank profile \(\phi \).
Then, the probability that the intersection condition is not fulfilled, given that syndrome and product conditions are satisfied, is
$$\begin{aligned}&\Pr \left( \textstyle \bigcap _{i=1}^{\lambda } {\mathcal {S}}_i = {\mathcal {E}}\mid {\mathcal {S}}= {\mathcal {E}}\cdot \mathcal {F}\wedge \phi ^{{\mathcal {E}}\cdot \mathcal {F}} = \phi ^{{\mathcal {E}}} \phi ^{\mathcal {F}} \right) \\&\quad \le \left( 1-p^{-s\frac{\lambda (\lambda +1)}{2}}\right) \sum _{i=1}^{t} \sum _{j = 0}^{r-1} p^{s(r-j)\left( i \frac{\lambda (\lambda +1)}{2}-m\right) } \le 2 t p^{s\left( t \frac{\lambda (\lambda +1)}{2}-m\right) } \end{aligned}$$
Suppose that the product (\(\phi ^{{\mathcal {E}}\cdot \mathcal {F}} = \phi ^{{\mathcal {E}}} \phi ^{\mathcal {F}}\)) and syndrome (\({\mathcal {S}}= {\mathcal {E}}\cdot \mathcal {F}\)) conditions are fulfilled, and assume that the intersection condition is not fulfilled. Then we have \(\bigcap _{i=1}^{\lambda } {\mathcal {S}}_i =: {\mathcal {E}}' \supsetneq {\mathcal {E}}\). Choose any \(e \in {\mathcal {E}}' \setminus {\mathcal {E}}\) and set \({\mathcal {A}}:= {\mathcal {E}}\) and \({\mathcal {B}}:= \mathcal {F}\) in order to apply Lemma 8. Since \(\mathcal {F}\) contains 1 by assumption, we have \({\mathcal {E}}' \subseteq {\mathcal {E}}\cdot \mathcal {F}\) and hence \(e \in {\mathcal {A}}\cdot {\mathcal {B}}\). Due to \({\mathcal {A}}= {\mathcal {E}}\), we have \(e \notin {\mathcal {A}}\). Furthermore, we have \({\mathcal {E}}' \cdot {\mathcal {B}}= {\mathcal {E}}\cdot {\mathcal {B}}\), so \(e {\mathcal {B}}\subseteq {\mathcal {A}}\cdot {\mathcal {B}}\) and all conditions on \(e\) of Lemma 8 are fulfilled.
Since \({\mathcal {E}}\) is chosen uniformly at random among all submodules of \({S}\) with rank profile \(\phi \), we can apply Theorem 1 (with the fixed module there taken to be \({\mathcal {B}}^2 = \mathcal {F}^2\)) and obtain
$$\begin{aligned}&\Pr \!\left( \phi ^{{\mathcal {A}}\cdot {\mathcal {B}}^2} \ne \phi ^{{\mathcal {A}}} \phi ^{{\mathcal {B}}^2}\right) \\&\quad \le \left( 1-p^{-s\lambda '}\right) \sum _{i=1}^{t} \sum _{j = 0}^{r-1} p^{s(r-j)(i \lambda '-m)}\\&\quad \le \left( 1-p^{-s\frac{\lambda (\lambda +1)}{2}}\right) \sum _{i=1}^{t} \sum _{j = 0}^{r-1} p^{s(r-j)\left( i \frac{\lambda (\lambda +1)}{2}-m\right) } \\&\quad \le 2 t p^{s\left( t \frac{\lambda (\lambda +1)}{2}-m\right) } \end{aligned}$$
where \(\lambda ' := \mathrm {rk}_{{R}}(\mathcal {F}^2) \le \tfrac{1}{2}\lambda (\lambda +1)\) (this is clear since \(\mathcal {F}^2\) is generated by the products of all unordered element pairs of an \(\mathfrak {m}\)-shaped basis of \(\mathcal {F}\)).
Hence, with probability at least one minus this value, both conditions of Lemma 8 are fulfilled. In that case, there is an element \(y \in \mathcal {F}\setminus {R}\) such that \(y \mathcal {F}\subseteq \mathcal {F}\). Thus, also \(y^i \mathcal {F}\subseteq \mathcal {F}\) for all positive integers i, and the ring \({R}(y)\), obtained by adjoining the element \(y \notin {R}\) to \({R}\), fulfills \({R}(y) \subseteq \mathcal {F}\) (this holds since \(\mathcal {F}\) contains 1). This is a contradiction to the assumption on intermediate rings. \(\square \)
Overall failure probability
The following theorem states the overall bound on the failure probability, exploiting the bounds derived in Theorems 2, 3, and 4.
Theorem 5
Let \(\mathcal {F}\) be defined as in Definition 2 such that it has the base-ring property (i.e., \(1 \in \mathcal {F})\). Suppose that no intermediate ring \(R'\) with \({R}\subsetneq R' \subseteq {S}\) is contained in \(\mathcal {F}\) (this holds, e.g., if \(\lambda \) is smaller than the smallest prime divisor of m, or for special \(\mathcal {F})\). Suppose further that \(\varvec{H}\) has the maximal-row-span and unity properties (cf. Definition 3).
Then, Algorithm 1 with input \(\varvec{c}+\varvec{e}\) returns \(\varvec{c}\) with a failure probability of at most
$$\begin{aligned} \Pr (\text {failure})&\le \left( 1-p^{-s\lambda }\right) \sum _{i=1}^{t} \sum _{j = 0}^{r-1} p^{s(r-j)\left( i \lambda -m\right) } \nonumber \\&\quad + \left[ 1- \prod _{i=0}^{\lambda t -1} \left( 1-p^{[i-(n-k)]s}\right) \right] \nonumber \\&\quad + \left( 1-p^{-s\frac{\lambda (\lambda +1)}{2}}\right) \sum _{i=1}^{t} \sum _{j = 0}^{r-1} p^{s(r-j)\left( i \frac{\lambda (\lambda +1)}{2}-m\right) } \end{aligned}$$
$$\begin{aligned}&\le 4 p^{s[\lambda t-(n-k+1)]} + 4 t p^{s\left( t \frac{\lambda (\lambda +1)}{2}-m\right) } \end{aligned}$$
The statement follows by applying the union bound to the failure probabilities of the three success conditions, derived in Theorems 2, 3, and 4. \(\square \)
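The following Python sketch (not part of the original text) evaluates the overall bound of Theorem 5 and its simplified form (14) for the parameter set used in the simulation section below (\(\lambda =2\), \(k=8\), \(n=20\), \(p=r=2\), \(s=1\), \(m=21\)); it is purely an evaluation of the formulas above.

```python
# Sketch (illustrative, not from the paper): evaluate the overall failure bound of
# Theorem 5 and its simplified form (14) for the simulation parameters used below.
# For large t both bounds may exceed 1 and are then vacuous.

def overall_failure_bound(p, r, s, m, n, k, lam, t):
    # product-condition term (Theorem 2)
    prod_term = (1 - p ** (-s * lam)) * sum(
        p ** (s * (r - j) * (i * lam - m)) for i in range(1, t + 1) for j in range(r)
    )
    # syndrome-condition term (Theorem 3)
    synd_term = 1.0
    for i in range(lam * t):
        synd_term *= 1 - p ** ((i - (n - k)) * s)
    synd_term = 1 - synd_term
    # intersection-condition term (Theorem 4), with lam2 = lam*(lam+1)/2
    lam2 = lam * (lam + 1) // 2
    inter_term = (1 - p ** (-s * lam2)) * sum(
        p ** (s * (r - j) * (i * lam2 - m)) for i in range(1, t + 1) for j in range(r)
    )
    full_bound = prod_term + synd_term + inter_term
    simplified = 4 * p ** (s * (lam * t - (n - k + 1))) + 4 * t * p ** (s * (t * lam2 - m))
    return full_bound, simplified

if __name__ == "__main__":
    for t in range(1, 8):
        full, simple = overall_failure_bound(p=2, r=2, s=1, m=21, n=20, k=8, lam=2, t=t)
        print(f"t = {t}: full bound = {full:.3e}, bound (14) = {simple:.3e}")
```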
The simplified bound (14) in Theorem 5 coincides up to a constant with the bound by Gaborit et al. [10] in the case of a finite field (Galois ring with \(r=1\)). If we compare an LRPC code over a finite field of size \(p^{rs}\) with an LRPC code over a Galois ring with parameters p, r, s (i.e., the same cardinality), then we can observe that the bounds have the same exponent, but the base of the exponent is different: It is \(p^{rs}\) for the field and \(p^s\) for the ring case. Hence, the maximal decoding radii \(t_\mathrm {max}\) (i.e., the maximal rank t for which the bound is \(<1\)) are roughly the same, but the exponential decay in \(t_\mathrm {max}-t\) for smaller error rank t is slower in case of rings due to a smaller base of the exponential expression. This "loss" is expected due to the weaker structure of modules over Galois rings compared to vector spaces over fields.
Decoding complexity
We discuss the decoding complexity of the decoding algorithm described in Sect. 4. Over a field, all operations within the decoding algorithm are well-studied and it is clear that the algorithm runs in roughly \(\tilde{O}(\lambda ^2 n^2 m)\) operations over the small field \(\mathbb {F}_q\). Although we believe that an analogous treatment over the rings studied in this paper must be known in the community, we have not found a comprehensive complexity overview of the corresponding operations in the literature. Hence, we start the complexity analysis with an overview of complexities of ring operations and linear algebra over these rings.
Cost model and basic ring operations
We express complexities in operations in \({R}\). For some complexity expressions, we use the soft-O notation, i.e., \(f(n) \in \tilde{O}(g(n))\) if there is an integer \(c \ge 0\) such that \(f(n) \in O(g(n) \log (g(n))^c)\). We use the following result, which follows straightforwardly from standard computer-algebra methods in the literature.
Lemma 9
(Collection of results in [27]) Addition in \({S}\) costs m additions in \({R}\). Multiplication in \({S}\) can be done in \(O(m \log (m) \log (\log (m)))\) operations in \({R}\).
We represent elements of \({S}\) as residue classes of polynomials in \({R}[z]/(h(z))\) (e.g., each residue class is represented by its unique representative of degree \(<m\)), where \(h \in {R}[z]\) is a monic polynomial of degree m as explained in the preliminaries.
Addition is done independently on the m coefficients of the polynomial representation, so it only requires m additions in \({R}\). Multiplication consists of multiplying two residue classes in \({R}[z]/(h(z))\), which can be done by multiplying the two representatives of degree \(<m\) and then taking them modulo (h(z)) (i.e., take the remainder of the division by the monic polynomial h). Both multiplication and division can be implemented in \(O(m \log (m) \log (\log (m)))\) time using Schönhage and Strassen's polynomial multiplication algorithm (cf. [27, Sect. 8.3]) and a reduction of division to multiplication using a Newton iteration (cf. [27, Sect. 9.1]). Note that both methods work over any commutative ring with 1. \(\square \)
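As a minimal illustration of this representation (not from the paper), the following Python sketch implements addition and schoolbook multiplication in \({S}= {R}[z]/(h(z))\) for \({R}=\mathbb {Z}_8\) and the example modulus \(h(z)=z^3+z+1\), whose reduction modulo 2 is irreducible; both the modulus and the parameters are only illustrative choices, and the multiplication is the naive quadratic one rather than the fast method cited above.

```python
# Minimal sketch (illustrative, not from the paper) of arithmetic in S = R[z]/(h(z))
# for R = Z_{p^r}.  Elements are coefficient lists of length m (lowest degree first).
# The modulus h(z) = z^3 + z + 1 is an example choice; its reduction mod 2 is irreducible.

P, R_EXP, M = 2, 3, 3          # p, r, m  (R = Z_8, and S has rank m = 3 over R)
Q = P ** R_EXP                 # |R| = p^r
H = [1, 1, 0, 1]               # h(z) = 1 + z + z^3 (monic of degree m)

def add(f, g):
    """Coefficient-wise addition in S; costs m additions in R."""
    return [(a + b) % Q for a, b in zip(f, g)]

def mul(f, g):
    """Schoolbook multiplication in R[z] followed by reduction modulo h(z)."""
    prod = [0] * (2 * M - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            prod[i + j] = (prod[i + j] + a * b) % Q
    # reduce: repeatedly subtract c * z^(d-M) * h(z) to kill the leading coefficient
    for d in range(len(prod) - 1, M - 1, -1):
        c = prod[d]
        if c:
            for i in range(M + 1):
                prod[d - M + i] = (prod[d - M + i] - c * H[i]) % Q
    return prod[:M]

if __name__ == "__main__":
    a = [3, 5, 7]              # 3 + 5z + 7z^2
    b = [2, 0, 6]              # 2 + 6z^2
    print("a + b =", add(a, b))
    print("a * b =", mul(a, b))
    assert mul(a, [1, 0, 0]) == a       # multiplying by 1 changes nothing
```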
Linear algebra over Galois rings
We recall how fast we can compute the Smith normal form of a matrix over \({R}\) and show that computing the right kernel of a matrix and solving a linear system can be done in a similar speed. Let \(2 \le \omega \le 3\) be the matrix multiplication exponent (e.g., \(\omega = 2.37\) using the Coppersmith–Winograd algorithm).
Lemma 10
([25, Proposition 7.16]) Let \(\varvec{A}\in {R}^{a \times b}\). Then, the Smith normal form \(\varvec{D}\) of \(\varvec{A}\), as well as the corresponding transformation matrices \(\varvec{S}\) and \(\varvec{T}\), can be computed in
$$\begin{aligned} O(a b \min \{a,b\}^{\omega -2} \log (a+b)) \end{aligned}$$
operations in \({R}\).
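For intuition, the following Python sketch (a naive cubic-time method, not the asymptotically fast algorithm of [25] referenced in Lemma 10) computes a Smith normal form over \({R}=\mathbb {Z}_{p^r}\), i.e. the case \(s=1\), together with the transformation matrices; it exploits the fact that an entry of minimal p-adic valuation divides all entries of the remaining submatrix and can therefore serve as a pivot.

```python
# Illustrative sketch (not the fast algorithm of [25] cited in Lemma 10): Smith normal
# form over R = Z_{p^r} (the case s = 1).  Every nonzero element of Z_{p^r} is a unit
# times a power of p, so an entry of minimal p-adic valuation can serve as a pivot that
# divides all entries of the remaining submatrix.

def matmul(A, B, q):
    return [[sum(x * y for x, y in zip(row, col)) % q for col in zip(*B)] for row in A]

def smith_normal_form(A, p, r):
    """Return (D, S, T) with S*A*T = D over Z_{p^r}; D is diagonal with entries p^v or 0."""
    q = p ** r
    a, b = len(A), len(A[0])
    D = [[x % q for x in row] for row in A]
    S = [[int(i == j) for j in range(a)] for i in range(a)]   # accumulated row operations
    T = [[int(i == j) for j in range(b)] for i in range(b)]   # accumulated column operations

    def val(x):                          # p-adic valuation in Z_{p^r}, with val(0) = r
        v = 0
        while v < r and x % p == 0:
            x //= p
            v += 1
        return v

    for k in range(min(a, b)):
        # entry of minimal valuation in the remaining submatrix becomes the pivot
        i0, j0 = min(((i, j) for i in range(k, a) for j in range(k, b)),
                     key=lambda ij: val(D[ij[0]][ij[1]]))
        v = val(D[i0][j0])
        if v == r:                       # remaining submatrix is zero
            break
        D[k], D[i0], S[k], S[i0] = D[i0], D[k], S[i0], S[k]   # row swap
        for mat in (D, T):                                    # column swap
            for row in mat:
                row[k], row[j0] = row[j0], row[k]
        u_inv = pow(D[k][k] // p ** v, -1, q)                 # normalise the pivot to p^v
        D[k] = [(x * u_inv) % q for x in D[k]]
        S[k] = [(x * u_inv) % q for x in S[k]]
        for i in range(a):                                    # clear column k
            if i != k and D[i][k]:
                f = D[i][k] // p ** v
                D[i] = [(x - f * y) % q for x, y in zip(D[i], D[k])]
                S[i] = [(x - f * y) % q for x, y in zip(S[i], S[k])]
        for j in range(b):                                    # clear row k
            if j != k and D[k][j]:
                f = D[k][j] // p ** v
                for i in range(a):
                    D[i][j] = (D[i][j] - f * D[i][k]) % q
                for i in range(b):
                    T[i][j] = (T[i][j] - f * T[i][k]) % q
    return D, S, T

if __name__ == "__main__":
    p, r = 2, 3                          # R = Z_8
    A = [[2, 4, 6], [1, 3, 5], [4, 0, 4]]
    D, S, T = smith_normal_form(A, p, r)
    assert matmul(matmul(S, A, p ** r), T, p ** r) == D
    # the p-adic valuations of the nonzero diagonal entries encode the rank profile
    print("diagonal of D:", [D[i][i] for i in range(len(A))])
```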
Lemma 11
Let \(\varvec{A}\in {R}^{a \times b}\). An \(\mathfrak {m}\)-shaped basis of the right kernel of \(\varvec{A}\) can be computed in \(O(a b \min \{a,b\}^{\omega -2} \log (a+b))\) operations in \({R}\).
We compute the Smith normal form \(\varvec{D}= \varvec{S}\varvec{A}\varvec{T}\) and the transformation matrices \(\varvec{S}\) and \(\varvec{T}\) of \(\varvec{A}\). To compute the right kernel, we need to solve the homogeneous linear system \(\varvec{A}\varvec{x}= \varvec{0}\) for \(\varvec{x}\). Using the Smith normal form, we can rewrite it into
$$\begin{aligned} \varvec{D}\varvec{T}^{-1} \varvec{x}= \varvec{0}. \end{aligned}$$
Denote \(\varvec{y}:= \varvec{T}^{-1} \varvec{x}\) and first solve \(\varvec{D}\varvec{y}= \varvec{0}\). W.l.o.g., let the diagonal entries of \(\varvec{D}\) be of the form
$$\begin{aligned} \begin{bmatrix} \varvec{I}_{n_0} &{} &{} &{} &{} \\ &{} g_\mathfrak {m}\varvec{I}_{n_1} &{} &{} &{} \\ &{} &{} \ddots &{} &{} \\ &{} &{} &{} g_{\mathfrak {m}}^{r-1}\varvec{I}_{n_{r-1}} &{} \\ &{} &{} &{} &{} \varvec{0} \end{bmatrix} \end{aligned}$$
where the \(n_i\) are the coefficients of the rank profile \(\phi (x)=\sum _{i=0}^{r-1}n_ix^i\in \mathbb {N}[x]/(x^r)\) of \(\varvec{A}\)'s row space. Then, the rows of the following matrix are an \(\mathfrak {m}\)-shaped basis of the right kernel of \(\varvec{D}\) (we denote by \(\eta := n_0\) the free rank of \(\varvec{A}\)'s row space and by \(\mu := \sum _{i=0}^{r-1}n_i\) the rank of \(\varvec{A}\)'s row space):
$$\begin{aligned} \varvec{K}:= \begin{bmatrix} \varvec{0}_{(\mu -\eta ) \times \eta } &{} \varvec{B}&{} \varvec{0}_{(\mu -\eta ) \times (b-\mu )} \\ \varvec{0}_{(b-\mu ) \times \eta } &{} \varvec{0}_{(b-\mu ) \times (\mu -\eta )} &{} \varvec{I}_{(b-\mu ) \times (b-\mu )} \\ \end{bmatrix} \in {R}^{(b-\eta ) \times b}, \end{aligned}$$
$$\begin{aligned} \varvec{B}:= \begin{bmatrix} g_\mathfrak {m}^{r-1} \varvec{I}_{n_1} &{} &{} &{} \\ &{} g_\mathfrak {m}^{r-2} \varvec{I}_{n_2} &{} &{} \\ &{}&{} \ddots &{} \\ &{} &{} &{} g_{\mathfrak {m}}^{1}\varvec{I}_{n_{r-1}} \\ \end{bmatrix}. \end{aligned}$$
Hence, the rows of \(\varvec{K}\varvec{T}^\top \) form an \(\mathfrak {m}\)-shaped basis of the right kernel of \(\varvec{A}\). Note that this matrix multiplication can be implemented with complexity \(O(b^2)\) since \(\varvec{K}\) has at most one nonzero entry per row and column. \(\square \)
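The kernel construction in the proof can be made concrete with a small self-contained Python sketch (not from the paper): for an example diagonal matrix \(\varvec{D}\) over \(\mathbb {Z}_8\) with a prescribed rank profile, it builds the matrix \(\varvec{K}\) described above and checks that every row of \(\varvec{K}\) lies in the right kernel of \(\varvec{D}\). All concrete parameter values are illustrative.

```python
# Self-contained sketch (illustrative, not from the paper) of the kernel construction in
# the proof above, over R = Z_{p^r} with g_m = p.  For a diagonal matrix D with blocks
# I_{n_0}, p*I_{n_1}, ..., p^{r-1}*I_{n_{r-1}} we build the matrix K from the proof and
# check that each of its rows lies in the right kernel of D.

def build_D_and_K(p, r, ns, b):
    """ns = [n_0, ..., n_{r-1}] is the rank profile; b is the number of columns."""
    q = p ** r
    mu = sum(ns)                    # rank of the row space
    eta = ns[0]                     # free rank of the row space
    # D: mu x b with diagonal blocks p^i * I_{n_i}
    D = [[0] * b for _ in range(mu)]
    pos = 0
    for i, n_i in enumerate(ns):
        for _ in range(n_i):
            D[pos][pos] = p ** i % q
            pos += 1
    # K = [[0, B, 0], [0, 0, I]] with B = diag(p^{r-i} * I_{n_i}) for i = 1, ..., r-1
    K = [[0] * b for _ in range(b - eta)]
    row, col = 0, eta
    for i, n_i in enumerate(ns):
        if i == 0:
            continue
        for _ in range(n_i):
            K[row][col] = p ** (r - i) % q
            row += 1
            col += 1
    for j in range(mu, b):          # identity block for the zero columns of D
        K[row][j] = 1
        row += 1
    return D, K

if __name__ == "__main__":
    p, r, b = 2, 3, 6               # R = Z_8, six columns
    ns = [1, 2, 1]                  # rank profile 1 + 2x + x^2
    D, K = build_D_and_K(p, r, ns, b)
    q = p ** r
    for k_row in K:
        assert all(sum(D[i][j] * k_row[j] for j in range(b)) % q == 0 for i in range(len(D)))
    print("D =", D)
    print("K =", K)
```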
Lemma 12
Let \(\varvec{A}\in {R}^{a \times b}\) and \(\varvec{b}\in {R}^{a}\). A solution of the linear system \(\varvec{A}\varvec{x}= \varvec{b}\) (or, in case no solution exists, the information that it does not exist) can be obtained in \(O(a b \min \{a,b\}^{\omega -2} \log (a+b))\) operations in \({R}\).
We follow the same strategy and the notation as in Lemma 11. Solve
$$\begin{aligned} \varvec{D}\underbrace{\varvec{T}^{-1} \varvec{x}}_{=: \, \varvec{y}} = \varvec{S}\varvec{b}=: \varvec{b}' \end{aligned}$$
for one \(\varvec{y}\). The system has a solution if and only if \(b_j' \in \mathfrak {m}^{i_j}\) for \(j=1,\ldots ,r'\), and \(b_j' = 0\) for all \(j>r'\), where \(g_\mathfrak {m}^{i_1},\ldots ,g_\mathfrak {m}^{i_{r'}}\) are the nonzero diagonal entries of \(\varvec{D}\). In case it has a solution, it is easy to obtain a solution \(\varvec{y}\). Then we only need to compute \(\varvec{x}= \varvec{T}\varvec{y}\), which is a solution of \(\varvec{A}\varvec{x}= \varvec{b}\). The heaviest step is to compute the Smith normal form, which proves the complexity statement. \(\square \)
Complexity of the LRPC decoder over Galois rings
Theorem 6
Suppose that the inverse elements \(f_1^{-1},\ldots ,f_\lambda ^{-1}\) are precomputed. Then, Algorithm 1 has complexity \(\tilde{O}(\lambda ^2 n^2 m)\) operations in \({R}\).
The heaviest steps of Algorithm 1 (see Sect. 4) are as follows:
Line 1 computes the syndrome \(\varvec{s}\) from the received word. This is a vector-matrix multiplication in \({S}\), which costs \(O(n(n-k)) \subseteq O(n^2)\) operations in \({S}\), i.e., \(\tilde{O}(n^2m)\) operations in \({R}\).
Line 4 is called \(\lambda \) times and computes for each \(f_i\) the set \(S_i = f_i^{-1} {\mathcal {S}}\) (recall that the inverses \(f_i^{-1}\) are precomputed). We obtain a generating set of \({\mathcal {S}}_i\) by multiplying \(f_i^{-1}\) to all syndrome coefficients \(s_1,\ldots ,s_{n-k}\). This costs \(O(\lambda (n-k))\) operations in \({S}\) in total, i.e., \(\tilde{O}(\lambda n m)\) operations in \({R}\). If we want a minimal generating set, we can compute the Smith normal form for each \({\mathcal {S}}_i\), which costs \(\tilde{O}(\lambda n^{\omega -1}m)\) operations in \({R}\) according to Lemma 10.
Line 5 computes the intersection \({\mathcal {E}}' \leftarrow \bigcap _{i=1}^{\lambda } {\mathcal {S}}_i\) of the modules \({\mathcal {S}}_i\). This can be computed via the kernel computation algorithm as follows: Let \({\mathcal {A}}\) and \({\mathcal {B}}\) be two modules. Then, we have \({\mathcal {A}}\cap {\mathcal {B}}= \mathcal {K} \left( \mathcal {K} ({\mathcal {A}}) \cup \mathcal {K} ({\mathcal {B}}) \right) \). Hence, we can compute the intersection \({\mathcal {A}}\cap {\mathcal {B}}\) by writing generating sets of the modules as the rows of two matrices \(\varvec{A}\) and \(\varvec{B}\), respectively. Then, we compute matrices \(\varvec{A}'\) and \(\varvec{B}'\), whose rows are generating sets of the right kernel of \(\varvec{A}\) and \(\varvec{B}\), respectively. Then, the rows of the matrix \(\varvec{C}:= \begin{bmatrix} \varvec{A}' \\ \varvec{B}' \end{bmatrix}\) are a generating set of \(\mathcal {K} ({\mathcal {A}}) \cup \mathcal {K} ({\mathcal {B}})\), and we obtain \({\mathcal {A}}\cap {\mathcal {B}}\) by computing again the right kernel of \(\varvec{C}\). By applying this algorithm iteratively to the \({\mathcal {S}}_i\) (using the kernel computation algorithm described in Lemma 11), we obtain the intersection \({\mathcal {E}}'\) in \(\tilde{O}(\lambda n^{\omega -1}m)\) operations. A small field-case sketch of this intersection identity is given after this proof.
Line 6 recovers an error vector \(\varvec{e}\) from the support \({\mathcal {E}}'\) and syndrome \(\varvec{s}\). As shown in the proof of Lemma 2, this can be done by solving t linear systems over \({R}\) with n unknowns and \((n-k)\lambda \) equations each, w.r.t. the same matrix \(\varvec{H}_{\mathrm {ext}}\). Hence, we need to compute the Smith normal form of \(\varvec{H}_{\mathrm {ext}}\) only once, which requires \(\tilde{O}(n [(n-k)\lambda ]^{\omega -1})\) operations. The remaining steps for solving the systems (see Lemma 12 to compute one solution, if it exists, and Lemma 11 to compute an affine basis) consist mainly of matrix-vector operations, which require in total \(\tilde{O}(t \lambda ^2(n-k)^2)\) operations in \({R}\), where \(t \le m\) is the rank of \({\mathcal {E}}'\). Note that during the algorithm, it is easy to detect whether the systems have no solution, a unique solution, or more than one solution. \(\square \)
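The identity used in Line 5 can be illustrated in the simplest setting, namely over the finite field \(\mathbb {F}_2\) (the case \(r=s=1\)), where \(\mathcal {K}(\cdot )\) is the null space of a generating matrix. The following self-contained Python sketch (not from the paper) computes null spaces by Gaussian elimination and verifies the identity on a small example by exhaustive enumeration; it is a field-case illustration only, not the ring-case routine of the decoder.

```python
# Sketch (illustrative, not from the paper) of the intersection identity used in Line 5,
# in the simplest setting R = F_2 (r = s = 1), where K(.) is the null space of a
# generating matrix.  The identity "A intersect B = K(K(A) union K(B))" is verified
# on a small example by exhaustive enumeration of the row spaces.

def null_space_gf2(rows, n):
    """Basis of {x in F_2^n : M x = 0} for the matrix M with the given rows."""
    M = [row[:] for row in rows]
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c]:
                M[i] = [x ^ y for x, y in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    basis = []
    for fc in (c for c in range(n) if c not in pivots):
        v = [0] * n
        v[fc] = 1
        for i, pc in enumerate(pivots):
            v[pc] = M[i][fc]        # in RREF, x_pc equals the entry in the free column fc
        basis.append(v)
    return basis

def span_gf2(rows, n):
    """All vectors in the F_2-row space of the given rows (for small examples only)."""
    vecs = {tuple([0] * n)}
    for row in rows:
        vecs |= {tuple(a ^ b for a, b in zip(v, row)) for v in vecs}
    return vecs

if __name__ == "__main__":
    n = 6
    A = [[1, 0, 1, 0, 1, 0], [0, 1, 1, 0, 0, 1]]
    B = [[1, 1, 0, 0, 1, 1], [0, 0, 1, 1, 0, 0]]   # A's row sum equals B's first row
    KA, KB = null_space_gf2(A, n), null_space_gf2(B, n)
    inter = null_space_gf2(KA + KB, n)             # K(K(A) union K(B))
    assert span_gf2(inter, n) == span_gf2(A, n) & span_gf2(B, n)
    print("basis of the intersection:", inter)
```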
The assumption that \(f_1^{-1},\ldots ,f_\lambda ^{-1}\) are precomputed makes sense since in many applications, the code is chosen once and then several received words are decoded for the same \(f_1,\ldots ,f_\lambda \). Precomputation of all \(f_1^{-1},\ldots ,f_\lambda ^{-1}\) costs at most \(\tilde{O}(\lambda m^\omega )\) since for \(a \in {S}\), the relation \(a^{-1} a \equiv 1 \mod h\) (for a and \(a^{-1}\) being the unique representatives in \({R}[z]/(h)\) with degree \(<m\)) gives a linear system of equations of size \(m \times m\) over \({R}\) with a unique solution \(a^{-1}\). This complexity can only exceed the cost bound in Theorem 6 if \(m \gg n\).
In fact, we conjecture, but cannot rigorously prove, that the inverse of a unit in \({S}\) can be computed in \(\tilde{O}(m)\) operations in \({R}\) using a fast implementation of the extended Euclidean algorithm (see, e.g., [27]). If this is true, the precomputation cost is smaller than the cost bound in Theorem 6.
The currently fastest decoder for Gabidulin codes over finite rings, the Welch–Berlekamp-like decoder in [14], has complexity \(O(n^\omega )\) operations over \({S}\) since its main step is to solve a linear system of equations. Over \({R}\), this complexity bound is \(\tilde{O}(n^\omega m)\), i.e., it is larger than the complexity bound for our LRPC decoder for constant \(\lambda \) and the same parameters n and m.
We performed simulations of LRPC codes with \(\lambda =2\), \(k=8\) and \(n=20\) (note that we need \(k \le \tfrac{\lambda -1}{\lambda }n\) by the unique-decoding property) over the ring \({S}\) with \(p=r=2\), \(s=1\) and \(m=21\). In each simulation, we generated one parity-check matrix (fulfilling the maximal-row-span and the unity properties) and conducted a Monte Carlo simulation in which we collected at least 1000 decoding errors and at least 50 failures of every success condition. All simulations gave very similar results and confirmed our analysis. We present one of the simulation results in Fig. 1 for errors of rank weight \(t=1,\ldots ,7\) and three different rank profiles.
We indicate by markers the estimated probabilities of violating the product condition (S: Prod), the syndrome condition (S: Synd), the intersection condition (S: Inter) as well as the decoding failure rate (S: Dec). Black markers denote the result of the simulations with errors of rank profile \(\phi _1(x) = t\), blue markers show the result with errors of rank profile \(\phi _2(x) = tx\) and orange markers indicate the result with rank profile \(\phi _3(x) \in \{1,1+x,2+x,2+2x,3+2x,3+3x,4+3x \}\). Further, we show the derived bounds (see Footnote 2) on the probabilities of not fulfilling the product condition (B: Prod) given in Theorem 2, the syndrome condition (B: Synd) derived in Theorem 3, the intersection condition (B: Inter) provided in Theorem 4 and the union bound (B: Dec) stated in Theorem 5. Since the derived bounds depend only on the rank weight t but not on the rank profile, we show each bound only once.
One can observe that the bound on the probability of not fulfilling the syndrome condition is very close to the true probability, while the bounds on the probabilities of violating the product and intersection conditions are loose. Gaborit et al. have made the same observation in the case of finite fields. In addition, it seems that only the rank weight but not the rank profile has an impact on the probabilities of violating the success conditions.
Fig. 1 Simulation results for \(\lambda =2\), \(k=8\) and \(n=20\) over \({S}\) with \(p=r=2\), \(s=1\) and \(m=21\). The markers indicate the estimated probabilities of not fulfilling the product condition (S: Prod), the syndrome condition (S: Synd), the intersection condition (S: Inter) and the decoding failure rate (S: Dec), where the black, blue and orange markers refer to errors of rank profile \(\phi _1(x) =t\), \(\phi _2(x) =tx\) and \(\phi _3(x)\in \{ 1,1+x,2+x,2+2x,3+2x,3+3x,4+3x \}\), respectively. The derived bounds on these probabilities are shown as lines
We also found that the base-ring property of \(\mathcal {F}\) is—in all tested cases—not necessary for the failure probability bound on the intersection condition (Theorem 4) to hold. It is an interesting question whether we can prove the bound without this assumption, both for finite fields and rings.
We have adapted low-rank parity-check codes from finite fields to Galois rings and showed that Gaborit et al.'s decoding algorithm also works for these codes. We also presented a failure probability bound for the decoder, whose derivation is significantly more involved than the finite-field analog due to the weaker structure of modules over finite rings. The bound shows that the codes have the same maximal decoding radius as their finite-field counterparts, but the exponential decay of the failure bound has \(p^s\) as its base instead of the cardinality of the base ring \(|{R}|=p^{rs}\) (note \({R}\) is a finite field if and only if \(r=1\)). This means that there is a "loss" in failure probability when going from finite fields to finite rings, which can be expected due to the zero divisors in the ring.
The results show that LRPC codes work over finite rings, and thus can be considered, as an alternative to Gabidulin codes over finite rings, for potential applications of rank-metric codes, such as network coding and space-time codes—recall from the introduction that network and space-time coding over rings may have advantages compared to the case of fields. It also opens up the possibility to consider the codes for cryptographic applications, the main motivation for LRPC codes over fields.
Open problems are a generalization of the codes to more general rings (such as principal ideal rings); an analysis of the codes in potential applications; as well as an adaption of the improved decoder for LRPC codes over finite fields in [1] to finite rings. To be useful for network coding (both in case of fields and rings), the decoder must be extended to handle row and column erasures in the rank metric (cf. [14, 23]).
Footnote 1: This means that \(\varvec{e}'\) might have a support that is contained in, but not equal to, \({\mathcal {E}}\). The difference to the actual error \(\varvec{e}\) is that \(\varvec{e}\) is chosen uniformly from all errors of support exactly \({\mathcal {E}}\).
Footnote 2: In Fig. 1, we show for each condition the tightest bound that we derived.
References

[1] Aragon N., Gaborit P., Hauteville A., Ruatta O., Zémor G.: Low rank parity check codes: New decoding algorithms and applications to cryptography. arXiv:1904.00357 (2019).
[2] Bini G., Flamini F.: Finite commutative rings and their applications, vol. 680. Springer Science & Business Media, New York (2012).
[3] Blake I.F.: Codes over certain rings. Inf. Control 20(4), 396–404 (1972).
[4] Blake I.F.: Codes over integer residue rings. Inf. Control 29(4), 295–300 (1975).
[5] Constantinescu I., Heise W.: A metric for codes over residue class rings. Problemy Peredachi Inf. 33(3), 22–28 (1997).
Delsarte P.: Bilinear forms over a finite field, with applications to coding theory. J. Comb. Theory Ser. A 25(3), 226–241 (1978).
Feng C., Nóbrega R.W., Kschischang F.R., Silva D.: Communication over finite-chain-ring matrix channels. IEEE Trans. Inf. Theory 60(10), 5899–5917 (2014).
Feng C., Silva D., Kschischang F.R.: An algebraic approach to physical-layer network coding. IEEE Trans. Inf. Theory 59(11), 7576–7596 (2013).
Gabidulin E.M.: Theory of codes with maximum rank distance. Problemy Peredachi Inf. 21(1), 3–16 (1985).
Gaborit P., Murat G., Ruatta O., Zémor G.: Low rank parity check codes and their application to cryptography. In: Proceedings of the Workshop on Coding and Cryptography WCC. vol. 2013 (2013).
Gorla E., Ravagnani A.: An algebraic framework for end-to-end physical-layer network coding. IEEE Trans. Inf. Theory 64(6), 4480–4495 (2017).
Hammons A.R., Kumar P.V., Calderbank A.R., Sloane N.J., Solé P.: The Z4-linearity of Kerdock, Preparata, Goethals, and related codes. IEEE Trans. Inf. Theory 40(2), 301–319 (1994).
Honold T., Landjev I.: Linear codes over finite chain rings. Electr. J. Comb. 7, R11–R11 (2000).
Kamche H.T., Mouaha C.: Rank-metric codes over finite principal ideal rings and applications. IEEE Trans. Inf. Theory 65(12), 7718–7735 (2019).
Kiran T., Rajan B.S.: Optimal STBCs from codes over Galois rings. In: IEEE International Conference on Personal Wireless Communications (ICPWC), pp. 120–124 (2005).
McDonald B.R.: Finite rings with identity, vol. 28. Marcel Dekker Incorporated, New York (1974).
Melchor C.A., et al.: NIST post-quantum cryptography standardization proposal: rank-Ouroboros, LAKE and LOCKER (ROLLO) (2020).
Nazer B., Gastpar M.: Compute-and-forward: harnessing interference through structured codes. IEEE Trans. Inf. Theory 57(10), 6463–6486 (2011).
Qachchach I.E., Habachi O., Cances J., Meghdadi V.: Efficient multi-source network coding using low rank parity check code. In: IEEE Wireless Communications and Networking Conference (WCNC) (2018).
Renner J., Jerkovits T., Bartz H.: Efficient decoding of interleaved low-rank parity-check codes. In: International Symposium on Problems of Redundancy in Information and Control Systems (REDUNDANCY) (2019).
Renner J., Puchinger S., Wachter-Zeh A., Hollanti C., Freij-Hollanti R.: Low-rank parity-check codes over the ring of integers modulo a prime power. In: IEEE International Symposium on Information Theory (ISIT), conference version of this paper. arXiv:2001.04800 (2020).
Roth R.M.: Maximum-rank array codes and their application to crisscross error correction. IEEE Trans. Inf. Theory 37(2), 328–336 (1991).
Silva D., Kschischang F.R., Koetter R.: A rank-metric approach to error control in random network coding. IEEE Trans. Inf. Theory 54(9), 3951–3967 (2008).
Spiegel E.: Codes over Zm, revisited. Inf. Control 37(1), 100–104 (1978).
Storjohann A.: Algorithms for matrix canonical forms. Ph.D. thesis, ETH Zurich (2000).
Tunali N.E., Huang Y.C., Boutros J.J., Narayanan K.R.: Lattices over Eisenstein integers for compute-and-forward. IEEE Trans. Inf. Theory 61(10), 5306–5321 (2015).
Von Zur Gathen J., Gerhard J.: Modern Computer Algebra. Cambridge University Press, Cambridge (2013).
Wilson M.P., Narayanan K., Pfister H.D., Sprintson A.: Joint physical layer coding and network coding for bidirectional relaying. IEEE Trans. Inf. Theory 56(11), 5641–5654 (2010).
Yazbek A.K., EL Qachchach I., Cances J.P., Meghdadi V.: Low rank parity check codes and their application in power line communications smart grid networks. Int. J. Commun. Syst. 30(12), e3256 (2017).
The work of J. Renner was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement no. 801434). A. Neri was supported by the Swiss National Science Foundation through grant no. 187711. S. Puchinger received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement no. 713683.
Institute for Communications Engineering, Technical University of Munich (TUM), Munich, Germany
Julian Renner & Alessandro Neri
Department of Applied Mathematics and Computer Science, Technical University of Denmark (DTU), Lyngby, Denmark
Sven Puchinger
Julian Renner
Alessandro Neri
Correspondence to Alessandro Neri.
Communicated by I. Landjev.
Appendix A: Proofs of Corollaries 1 and 2
In this section we provide the proofs of Corollaries 1 and 2 in Sect. 5.1.
Inspired by Proposition 2, we study the following notions. For a given potential rank profile \(\phi (x)=\sum _{i=0}^{r-1}n_ix^i\in \mathbb {N}[x]/(x^r)\), with \(\phi (1)=N\le m\), we consider the sets
$$\begin{aligned} G_\phi&:=\left\{ \begin{bmatrix} \varvec{Y}_{0,0} &{} g_\mathfrak {m}\varvec{Y}_{0,1} &{} g_\mathfrak {m}^2 \varvec{Y}_{0,2} &{} \cdots &{} g_\mathfrak {m}^{r-1} \varvec{Y}_{0,r-1} \\ \varvec{Y}_{1,0} &{} \varvec{Y}_{1,1} &{} g_\mathfrak {m}\varvec{Y}_{1,2} &{} \cdots &{} g_\mathfrak {m}^{r-2} \varvec{Y}_{1,r-1} \\ \varvec{Y}_{2,0} &{} \varvec{Y}_{2,1} &{} \varvec{Y}_{2,2} &{} \cdots &{} g_\mathfrak {m}^{r-3} \varvec{Y}_{2,r-1} \\ \vdots &{} \vdots &{} \vdots &{} &{}\vdots \\ \varvec{Y}_{r-1,0} &{} \varvec{Y}_{r-1,1} &{} \varvec{Y}_{r-1,2} &{} \cdots &{} \varvec{Y}_{r-1,r-1} \\ \end{bmatrix} : \varvec{Y}_{i,j}\in {R}^{n_i\times n_j} \right\} ,\\ G_\phi ^*&:=\left\{ \begin{bmatrix} \varvec{Y}_{0,0} &{} g_\mathfrak {m}\varvec{Y}_{0,1} &{} g_\mathfrak {m}^2 \varvec{Y}_{0,2} &{} \cdots &{} g_\mathfrak {m}^{r-1} \varvec{Y}_{0,r-1} \\ \varvec{Y}_{1,0} &{} \varvec{Y}_{1,1} &{} g_\mathfrak {m}\varvec{Y}_{1,2} &{} \cdots &{} g_\mathfrak {m}^{r-2} \varvec{Y}_{1,r-1} \\ \varvec{Y}_{2,0} &{} \varvec{Y}_{2,1} &{} \varvec{Y}_{2,2} &{} \cdots &{} g_\mathfrak {m}^{r-3} \varvec{Y}_{2,r-1} \\ \vdots &{} \vdots &{} \vdots &{} &{}\vdots \\ \varvec{Y}_{r-1,0} &{} \varvec{Y}_{r-1,1} &{} \varvec{Y}_{r-1,2} &{} \cdots &{} \varvec{Y}_{r-1,r-1} \\ \end{bmatrix} : \varvec{Y}_{i,j}\in {R}^{n_i\times n_j}, \varvec{Y}_{i,i}\in {{\,\mathrm{GL}\,}}(n_i,{R}) \right\} \\ H_\phi&:=\left\{ \begin{bmatrix} 0 \\ g_\mathfrak {m}^{r-1}\varvec{Z}_1 \\ g_\mathfrak {m}^{r-2}\varvec{Z}_2 \\ \vdots \\ g_\mathfrak {m}\varvec{Z}_{r-1}\end{bmatrix}: \varvec{Z}_i\in {R}^{n_i\times m}\right\} . \end{aligned}$$
Notice that
(P1) \((G_\phi ,+,\cdot )\) is a subring of \({R}^{N\times N}\);
(P2) \(G_\phi ^*=G_\phi \cap {{\,\mathrm{GL}\,}}(N,{R})\);
(P3) \((G_\phi ^*,\cdot )\) is a subgroup of \({{\,\mathrm{GL}\,}}(N,{R})\);
(P4) \((H_\phi ,+)\) is a subgroup of \({R}^{N\times m}\);
(P5) For every \(\varvec{Y}\in G_\phi \), \(\varvec{Z}\in H_\phi \), we have \(\varvec{Y}\varvec{Z}\in H_\phi \);
(P6) If \(\varvec{Y}\in G_\phi ^*\), then \(\varvec{Z}\longmapsto \varvec{Y}\varvec{Z}\) is a bijection of \(H_\phi \).
With these tools and from Proposition 2 we can deduce the two corollaries.
Proof of Corollary 1
First, denote by \(n_i:=\phi _i^{{\mathcal {M}}}\) and let \(N:=n_0+\cdots +n_{r-1}\), and fix an \({R}\)-basis of \({S}\) so that we identify \({S}\) with \({R}^m\). Fix a free module \({\mathcal {N}}\in \mathrm {Free}({\mathcal {M}})\) and let \(\varvec{T}_{{\mathcal {N}}}\) be such that \({{\,\mathrm{rowspace}\,}}(\varvec{T}_{{\mathcal {N}}})={\mathcal {N}}\). By Proposition 2, we have
$$\begin{aligned} \mathrm {Free}({\mathcal {M}})&=\{{{\,\mathrm{rowspace}\,}}(\varvec{Y}\varvec{T}_{{\mathcal {N}}}+\varvec{Z}) \mid \varvec{Y}\in G_\phi ^*,\varvec{Z}\in H_\phi \}\\&=\{{{\,\mathrm{rowspace}\,}}(\varvec{T}_{{\mathcal {N}}}+\varvec{Y}^{-1}\varvec{Z}) \mid \varvec{Y}\in G_{\phi }^*,\varvec{Z}\in H_\phi \} \\&=\{{{\,\mathrm{rowspace}\,}}(\varvec{T}_{{\mathcal {N}}}+\varvec{Z}) \mid \varvec{Z}\in H_\phi \}, \end{aligned}$$
where the last equality follows from (P6). It is immediate to see that \({{\,\mathrm{rowspace}\,}}(\varvec{T}_{{\mathcal {N}}}+\varvec{Z})={\mathcal {N}}={{\,\mathrm{rowspace}\,}}(\varvec{T}_{{\mathcal {N}}})\) if and only if all the rows of \(\varvec{Z}\) belong to \({\mathcal {N}}\). For the ith block of \(n_i\) rows of \(\varvec{Z}\), we can freely choose among all the elements in \(g_{\mathfrak {m}}^{r-i}{\mathcal {N}}\), of which there are \(s^{iN}\). Hence we get
$$\begin{aligned} |\{ \varvec{Z}\in H_{\phi } \mid {{\,\mathrm{rowspace}\,}}(\varvec{T}_{{\mathcal {N}}}+\varvec{Z})={\mathcal {N}}\}|&=|\{\varvec{Z}\in H_{\phi } \mid {{\,\mathrm{rowspace}\,}}(\varvec{Z}) \subseteq {\mathcal {N}}\}| =\prod _{i=1}^{r-1}s^{in_iN}. \end{aligned}$$
This means that every module is counted \(\prod _{i=1}^{r-1}s^{in_iN}\) many times and we finally obtain
$$\begin{aligned} |\mathrm {Free}({\mathcal {M}})|=\frac{|H_\phi |}{\prod _{i=1}^{r-1}s^{in_iN}}=\prod _{i=1}^{r-1}\frac{s^{in_im}}{s^{in_iN}}=s^{(m-N)\sum _{i=1}^{r-1}in_i }. \end{aligned}$$
Proof of Corollary 2
Let \({\mathcal {M}}\) be an \({R}\)-submodule of \({S}\) with rank profile \(\phi ^{{\mathcal {M}}}\) and observe that \({\mathcal {M}}\in \mathrm {Mod}(\phi ,{\mathcal {N}})\) if and only if \({\mathcal {N}}\in \mathrm {Free}({\mathcal {M}})\). Identify \({S}\) with \({R}^m\), and define
With this notation, we have
$$\begin{aligned} \mathrm {Mod}(\phi ,{\mathcal {N}})=\{{{\,\mathrm{rowspace}\,}}(\varvec{D}\varvec{T}) \mid \varvec{T}\in {R}^{N\times m}, {{\,\mathrm{rowspace}\,}}(\varvec{T})={\mathcal {N}}\}. \end{aligned}$$
Moreover, there are exactly \(|{{\,\mathrm{GL}\,}}(N,{R})|\) many matrices \(\varvec{T}\in {R}^{N\times m}\) such that \({{\,\mathrm{rowspace}\,}}(\varvec{T})={\mathcal {N}}\), and they are obtained by fixing any matrix \(\bar{\varvec{T}}\) and considering \(\{\varvec{A}\bar{\varvec{T}} \mid \varvec{A}\in {{\,\mathrm{GL}\,}}(N,{R})\}\). Let us fix \(\bar{{\mathcal {M}}}:={{\,\mathrm{rowspace}\,}}(\varvec{D}\bar{\varvec{T}})\in \mathrm {Mod}(\phi ,{\mathcal {N}})\). We count for how many \(A\in {{\,\mathrm{GL}\,}}(N,{R})\) we have \({{\,\mathrm{rowspace}\,}}(\varvec{D}\varvec{A}\bar{\varvec{T}})=\bar{{\mathcal {M}}}\). By Proposition 2, this happens if and only if there exist \(\varvec{Y}\in G_\phi ^*, \varvec{Z}\in H_\phi \) such that \(\varvec{A}\bar{\varvec{T}}=\varvec{Y}\bar{\varvec{T}}+\varvec{Z}\), which in turn is equivalent to the condition that there exists \(\varvec{Y}\in \varvec{G}_\phi ^*\) such that \((\varvec{A}-\varvec{Y})\bar{\varvec{T}}\in H_\phi \). Let us call \(\varvec{S}:=\varvec{A}-\varvec{Y}\) and divide \(\varvec{S}\) in \(r\times r\) blocks \(\varvec{S}_{i,j}\in {R}^{n_i\times n_j}\), for \(i,j \in \{0,\ldots , r-1\}\). Divide also \(\varvec{T}\) in r blocks \(\varvec{T}_i\in {R}^{n_i\times m}\) for \(i \in \{0,\ldots ,r-1\}\). Hence, we have, for every \(i \in \{0,\ldots ,r-1\}\)
$$\begin{aligned} \sum _{j=0}^{r-1} \varvec{S}_{i,j}\varvec{T}_j \in \mathfrak {m}^{r-i} {R}^{n_i\times m}. \end{aligned}$$
Since the rows of \(\varvec{T}\) are linearly independent over \({R}\), this implies that \(\varvec{S}_{i,j}\in \mathfrak {m}^{r-i}\), that is \(\varvec{S}\) is of the form
$$\begin{aligned} \varvec{S}=\varvec{A}-\varvec{Y}=\begin{bmatrix} 0 \\ g_\mathfrak {m}^{r-1}\varvec{Z}_1 \\ g_\mathfrak {m}^{r-2}\varvec{Z}_2 \\ \vdots \\ g_\mathfrak {m}\varvec{Z}_{r-1}\end{bmatrix}. \end{aligned}$$
Therefore, we have \({{\,\mathrm{rowspace}\,}}(\varvec{D}\varvec{A}\bar{\varvec{T}})=\bar{{\mathcal {M}}}\) if and only if \(\varvec{A}=\varvec{Y}+\varvec{S}\). It is easy to see that this holds if and only if \(\varvec{A}\in G_{\phi }^*\). Hence, the \({R}\)-submodule \(\bar{{\mathcal {M}}}\) is counted \(|G_{\phi }^*|\) many times. Since the choice of \(\bar{{\mathcal {M}}}\) was arbitrary, we conclude
Renner, J., Neri, A. & Puchinger, S. Low-rank parity-check codes over Galois rings. Des. Codes Cryptogr. 89, 351–386 (2021). https://doi.org/10.1007/s10623-020-00825-9
Revised: 12 November 2020
Issue Date: February 2021
Low-rank parity-check codes
Rank-metric codes
Mathematics Subject Classification
11T71
Consider the conditional entropy and mutual information for the binary symmetric channel. The input source has alphabet $X=\{0,1\}$ and associated probabilities $\{\frac{1}{2}, \frac{1}{2}\}$. The channel matrix is $\begin{pmatrix} 1-p & p \\ p & 1-p \end{pmatrix}$ where $p$ is the transition probability. Then the conditional entropy is given by:
-plog(p)-(1-p)log(1-p)
1+plog(p)+(1-p)log(1-p)
digital-image-processing
asked Aug 11, 2016 in Digital Image Processing by jothee Veteran (105k points)
recategorized Nov 13, 2017 by Arjun | 1k views
http://nptel.ac.in/courses/106106097/pdf/Lecture33-34_MutualInformation.pdf
answered Apr 7, 2017 by Allwin (75 points)
Ans is B: -plog(p)-(1-p)log(1-p)
commented May 9, 2019 by Adnan Ashraf Junior (505 points)
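For completeness, a short derivation sketch (assuming the intended quantity is the conditional entropy $H(Y|X)$, and writing $H_b$ for the binary entropy function, a symbol introduced here only for convenience): with the uniform input, each row of the channel matrix contributes the same binary entropy, so

$$H(Y|X) = \sum_{x \in \{0,1\}} P(x)\, H(Y|X=x) = \tfrac{1}{2}H_b(p) + \tfrac{1}{2}H_b(p) = -p\log(p)-(1-p)\log(1-p),$$

which is the expression in the accepted answer above. By the symmetry of the channel and the uniform input, $H(X|Y)$ evaluates to the same quantity.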
Given a simple image of size $10 \times 10$ whose histogram models the symbol probabilities, with symbols $P_{1}, P_{2}, P_{3}, P_{4}$ occurring with probabilities $a, b, c, d$ respectively. The first order estimate of image entropy is maximum when: $a = 0, b = 0, c = 0, d = 1$; $a=\frac{1}{2}, b=\frac{1}{2}, c=0, d=0$; ...; $a=\frac{1}{4}, b=\frac{1}{4}, c=\frac{1}{4}, d=\frac{1}{4}$
asked Aug 2, 2016 in Digital Image Processing by makhdoom ghaya Boss (30.7k points) | 1k views
image-processing
UGCNET-Nov2017-iii-18
The three aspects of Quantization that programmers are generally concerned with are: A. Coding error, Sampling rate and Amplification B. Sampling rate, Coding error and Conditioning C. Sampling rate, Aperture time and Coding error D. Aperture time, Coding error and Strobing
asked Nov 5, 2017 in Computer Graphics by Arjun Veteran (431k points) | 327 views
ugcnetnov2017-iii
computer-graphics
A Butterworth lowpass filter of order $n$, with a cutoff frequency at distance $D_{0}$ from the origin, has the transfer function $H(u, v)$ given by $\frac{1}{1+\left[\frac{D(u, v)}{D_{0}}\right]^{2n}}$ $\frac{1}{1+\left[\frac{D(u, v)}{D_{0}}\right]^{n}}$ $\frac{1}{1+\left[\frac{D_{0}}{D(u, v)}\right]^{2n}}$ $\frac{1}{1+\left[\frac{D_{0}}{D(u, v)}\right]^{n}}$
asked Aug 2, 2016 in Digital Signal Processing by makhdoom ghaya Boss (30.7k points) | 504 views
butterworth-lowpass-filter
If f(x, y) is a digital image, then x, y and amplitude values of f are Finite Infinite Neither finite nor infinite None of the above
category-theory higher-algebra
nLab > Latest Changes: monoidal category
Todd, when you see this here and have a minute, would you mind having a look at monoidal category to see if you can remove the query-box discussion there and maybe replace it by some crisp statement? Thanks!
I'll get to it when I can, probably sometime later today.
I have removed the first query box and inserted a proof of one of Max Kelly's lemmas. I'll get to the other in a bit, the one that says $\lambda_1 = \rho_1$.
That's great, thank you, Todd!
I'll have a look as soon as the Lab wakes up again…
Yes, it's slow, isn't it? But I managed to stick in the other lemma as well. I'll finish up by describing what Joyal and Street do (will have to be later today).
Thanks, Todd.
Looking at what you have now, I wonder if the section Definition – Other coherence conditions should not be moved to the Properties-section, where already a stub section "Properties - Coherence" is waiting with a link to coherence theorem for monoidal categories, which in turn links to Mac Lane's proof of the coherence theorem for monoidal categories.
Somehow all this would deserve to be put coherently in one place. What do you think? Do you have any plans with this material?
all this would deserve to be put coherently in one place
Heh. Good one.
Anyway, yes, I agree with you. I have to be doing other things now, but if you would like to rearrange the material, please go right ahead. I was mainly trying to take care of Adam's queries (that have now been removed).
Looking at the two nLab articles you linked to – they could use some more work. "Mac Lane's proof" is really long and might look scarier to the reader than it actually is. Hopefully I'll get some time soon to give them a crack.
Okay, if I may, might play with rearranging the material in some way a little later. Thanks.
Added to the References-section at monoidal category right at the beginning a pointer to the pretty comprehensive set of lecture notes:
Pavel Etingof, Shlomo Gelaki, Dmitri Nikshych, Victor Ostrik, Topics in Lie theory and Tensor categories, Lecture notes (spring 2009) (web)
Author: RodMcGuire (Aug 31st 2017)
some super cryptic Anonymous added the following reference which I have rolled back
1337777.OOO , Maclane pentagon is some recursive square (word-normalization functor) , https://github.com/1337777/maclane/blob/master/maclaneSolution.svg , https://github.com/1337777/dosen/blob/master/coherence2.v
Sep 1st 2017
Anonymous put back again this reference which I have again reverted. The two edits come from Bell Canada in Montreal but they are different IP which means we can't use IP blocking.
IP address 70.29.194.190
IP address 70.29.198.51
Should we put a note in monoidal category#references telling him to desist and directing him to this thread in the nForum?
Thanks for dealing with this Rod. That might be a reasonable strategy; put it in a query box probably.
(edited Sep 1st 2017)
He did it again so I put in a query box.
Edit. I checked wikipedia monoidal category and he did the same thing there which I also removed.
Author: Noam_Zeilberger (Sep 2nd 2017)
Looks like he also did the same on August 31 at coherence theorem for monoidal categories and Mac Lane's proof of the coherence theorem for monoidal categories. I've removed the links.
Author: Yaron
There is something strange now in Mac Lane's proof of the coherence theorem for monoidal categories. There are many new general sections (apparently not belonging in this entry, but rather in monoidal category) even before the Contents, and the first actual section ("Introduction and statement") is labeled section 18.
(edited Sep 2nd 2017)
There was a '>' right at the start that was mucking things up! I removed it and it looks much better!
Great, thanks!
Sep 3rd 2017 (edited Sep 3rd 2017)
He's back, reinserting his reference into everything it has been removed from. I haven't reverted them yet.
Do we need to contact Bell Canada and see if they can get him to stop or just block all the Bell Canada Montreal IP addresses?
Maybe we should just precede his references with a query box that says "1337777.OOO is a deranged crackpot that insists on including this reference and every time we remove it he adds it back "
Bah. Can we tell whether anyone legitimate is using those IP addresses?
Author: AlexisHazell
Do you mean, any legitimate nLab user?
At any rate, trying to block this user by IP address is unlikely to be fruitful; I note that their latest re-addition is from yet another IP address, 70.29.194.190. Blocking Bell Canada's entire IP block (or at least their entire Montreal block) seems quite the sledgehammer.
Is the Instiki config/spam_patterns.txt file used by the nLab wiki? If so, adding things like:
1337777\.OOO
Maclane pentagon is some recursive square
github\.com/1337777
to that list might be another way forward. To get around that, the user would need to change the name they're using, change the name of their article, and change the name of their GitHub account (or create a new one).
(Also, this user appears to have started 'contributing' to the Coq-club list: https://sympa.inria.fr/sympa/arc/coq-club/2017-08/msg00048.html.)
Gah, just realised the user also has a GitLab account, so
gitlab\.com/1337777
would be another thing to add to that list.
Author: adeelkh
20: Good idea, I've just done that .
Ok I've removed 1337777's references from monoidal category, coherence theorem for monoidal categories, and Mac Lane's proof of the coherence theorem for monoidal categories.
and also the query box
+-- {: .query}
__1337777.OOO__. Stop trying to insert your reference until you have explained and discussed it in
the [nForum: monoidal-category](https://nforum.ncatlab.org/discussion/4226/monoidal-category). It is annoying to keep having to remove it.
=--
Let's see if the blocking works and see if 1337777 is determined enough to work around it.
EDIT: I've also removed his reference from Wikipedia Monoidal_category, Coherence_theorem, and Coherence_condition.
Thanks everyone!
He's back on all three pages. Did the spam list changes not propagate to the running code?
He bypassed the spam filter by subtly modifying the offending keywords (underscores, extra slashes, etc.). I think maybe I should just block the keyword 1337777 for a while.
Author: David_Corfield
Could the 'Block or report user' at https://github.com/1337777 be used? There's an option "Contact Support about this user's behavior".
It seems unlikely that there will ever be a legitimate use of 1337777. Reporting him to github seems like a good plan too.
Hm, looking at the user's change to the 'monoidal category' page, the spam filter should have still blocked the edit via the patterns for "Maclane pentagon is some recursive square" and "1337777.OOO". Did the restart of Instiki, so that the new patterns get included, maybe not complete properly?
(edited Sep 5th 2017)
I just removed a new insertion of the link. The second part of that link makes more sense than the first part, which is just two diagrams, but is also very difficult to read and does not fit in that part of the reference list.
29: if you look at the source,
1337777\.OOO , _Maclane pentagon is some recursive_ _square ...
The backslash makes it a different string than 1337777.OOO, and similarly the _ _ "escapes" the second string. Anyway, let's see how he gets around this…
Ah, good point, I'd not looked at the page source ….
Yes, will indeed be interested to see if this user is able to work around your latest change. :-)
He's certainly persistent!
Indeed, rudely so.
The latest readdition gets around the spam filter by using HTML character entities:
1337777.OOO
I've taken a look at the Instiki source, and the patterns used in spam_patterns.txt are actually Ruby regexes. So maybe an entry like this could be used:
(?:1|&#\d{2,4};|&#x\d{2,4};)(?:3|&#\d{2,4};|&#x\d{2,4};)(?:3|&#\d{2,4};|&#x\d{2,4};)(?:7|&#\d{2,4};|&#x\d{2,4};)(?:7|&#\d{2,4};|&#x\d{2,4};)(?:7|&#\d{2,4};|&#x\d{2,4};)(?:7|&#\d{2,4};|&#x\d{2,4};)
Also note that the user seems to be following this thread.
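As a side note, here is a small illustrative sketch of why the per-character alternation above catches entity-obfuscated variants. This is hypothetical demonstration code (written in Python rather than the Ruby used by Instiki, and not the actual filter); the helper char_or_entity and the sample strings are made up for illustration:

```python
import re

# Hypothetical illustration only -- not the actual Instiki spam filter code.
# Each character position accepts either the literal character or an HTML
# character entity (&#NN; / &#xNN;), so "&#49;337777" (entity for "1") still matches.
def char_or_entity(ch):
    return rf"(?:{re.escape(ch)}|&#\d{{2,4}};|&#x\d{{2,4}};)"

pattern = re.compile("".join(char_or_entity(c) for c in "1337777"))

samples = [
    "1337777",           # plain keyword
    "&#49;337777",       # leading character written as a decimal entity
    "1337&#55;7&#55;7",  # entities sprinkled through the middle
    "1337778",           # harmless string, should not match
]
for s in samples:
    print(s, "->", bool(pattern.search(s)))
```

The same idea applies to the Ruby regexes in spam_patterns.txt, since the pattern syntax used here is shared by both engines.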
How does wikipedia deal with people like this?
Author: Dmitri Pavlov
Why not simply report this user to Bell Canada and allow them to deal with it?
Spam is certainly covered by their policies.
Their whois information states:
Comment: For abuse cases please use [email protected]
@Mike:
The wiki software used by Wikipedia, MediaWiki, has functionality to deal with this sort of situation. Instiki doesn't seem to have functionality to e.g. allow the site admin to lock a page against edits until further notice.
I'm surprised if simply locking a few pages temporarily is usually sufficient. But it would certainly be a nice option to have.
Reporting the user to his ISP seems like a reasonable decision to me.
Instiki doesn't seem to have functionality to e.g. allow the site admin to lock a page against edits until further notice.
It is pretty easy to manually hardcode this though, as I've just done.
Oh, you're right, of course - temporarily locking pages is not necessarily going to be sufficient. It's a game of Chicken; who's going to give up first? Still, MediaWiki has finer-grained functionality available:
MediaWiki offers flexibility in creating and defining user groups. For instance, it would be possible to create an arbitrary "ninja" group that can block users and delete pages, and whose edits are hidden by default in the recent changes log. It is also possible to set up a group of "autoconfirmed" users that one becomes a member of after making a certain number of edits and waiting a certain number of days. Some groups that are enabled by default are bureaucrats and sysops. Bureaucrats have power to change other users' rights. Sysops have power over page protection and deletion and the blocking of users from editing. MediaWiki's available controls on editing rights have been deemed sufficient for publishing and maintaining important documents such as a manual of standard operating procedures in a hospital.
Whereas Instiki seems to only provides access control at the 'web' level, not the 'page' level.
From experience, I don't actually have much confidence in large ISPs actively addressing this sort of situation adequately, but I guess it's worth a go at this point (as might be contacting GitHub and GitLab about this user as well).
@Adeel: Nice!
Mar 9th 2018
1337777.OOO is back, adding links to at least
inductive-recursive type#4
inductive-inductive type#9
Author: Richard Williamson
On it.
(edited Mar 9th 2018)
I have tightened the spam filter now. I will not say exactly how, as, as mentioned above, the user may be watching this thread. I have not committed the change anywhere, so there is nowhere to find it (unless one has access to the server).
I will now see if I can clear up in the database (update: actually will have to postpone until later; I will also attempt to strengthen the spam filter further, what I've done now will probably only stop the posts for a while).
Have now cleared up everything with 1337777.OOO with the HTML character in the middle, including some older pages. There exist some historical ones (only in revision history I think) without the HTML character in the middle; I'll try to remove those tomorrow.
I've also tightened the spam filter a little further.
Thanks very much for the alert, Rod!
Dec 23rd 2018 (edited Dec 23rd 2018)
The page Mac+Lane's+proof+of+the+coherence+theorem+for+monoidal+categories still contains links of similar type at the bottom.
Thanks for raising this, deleted these revisions from September now, and blocked the IP address as well in nginx, because it has been used consistently.
I actually don't know how the author managed to get those edits through; my attempts to reproduce the spam result in the spam filter blocking the edits correctly. It is possible that I misread the date to be from an earlier year than this year (i.e. it was just something that I forgot to clear up).
Number the last natural isomorphism in this definition.
relrod
diff, v114, current
Apr 25th 2019
It seems to me that the facets of the 4-simplex oriental are the edges, not the vertices, of the pentagon identity.
DavidMJC
Author: ziggurism
Mention earlier in the definition section that a monoidal category is just a category. Also add paragraph about how the definition of the monoidal structure of a category relies on the monoidal structure of the parent 2-category, in accordance with the microcosm principle. See discussion at https://nforum.ncatlab.org/discussion/11003/if-defining-monoidal-category-as-monoid-is-circular-then-sos-our-definition-of-monoidal-category/
latex fix
This looks good to me, thanks.
Author: Guest
Is there a typo in the expression just before the text "is the cartesian associator"?
It seems to me like the parens are misplaced on one of the terms.
Yes, I think you're right. Why don't you fix it?
Author: luidnel.maignan
In enriched category, section "2 -Definition", it is written that "one may think of a monoidal category as a bicategory with a single object".
Is it possible to add a sentence about this in this page ?
I added a few words; does that help?
(edited Apr 25th 2020)
Your revision did not make it apparently.
The last revision was March 30, 2020 according to the history page.
Not according to mine.
Sorry, I was expecting a change in the monoidal category page.
Added to the Idea section the fact that monoidal categories can be considered as one-object bicategories (and added relevant material to bicategory, under the Examples section).
I checked both edits, they are great! Thank you :)
Can someone please help me understand why this holds: "Since all the arrows are isomorphisms, it suffices to show that the diagram formed by the perimeter commutes". As a newcomer to category theory this is not at all obvious to me.
Given a commuting diagram with an isomorphism, whiskering it with the inverse of that isomorphism gives a commuting diagram of the same shape as before but with that one arrow now pointing in the opposite direction.
Now once the perimeter of that big diagram commutes apply this reversal to the top right morphism. The resulting top right triangle is then seen to commute, and hence so does the original triangle in question.
[ duplicate removed ]
What do you mean by whiskering?
I mean whiskering. But just convince yourself that given a commuting triangle of isomorphisms, there is a corresponding commuting triangle with the direction of any one of the arrows reversed and labeled by the inverse of the original morphism. It's immediate, by the definition of inverse morphisms.
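To spell out that step symbolically (a minimal sketch, with generic arrow names $f$, $g$, $h$ that are not taken from the entry itself): if a triangle commutes, say $g \circ f = h$, and $g$ is an isomorphism, then composing both sides with $g^{-1}$ gives

$$f = g^{-1} \circ h,$$

so the triangle with $g$ replaced by the reversed arrow $g^{-1}$ commutes as well. Applying this to the top-right isomorphism of the large perimeter diagram recovers the commutativity of the original triangle in question.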
Author: John Baez
Added: every monoidal category is equivalent to a skeletal strict monoidal one. (FinSet, $\times$) is, but (Set, $\times$) is not.
Added related concepts:
monoidal category with diagonals
relevance monoidal category
Removed several duplicates and rearranged the list.
Aug 29th 2021
add link to page dedicated to coherence theorem
Antonin Delpeuch
Author: varkor
Corrected reference to Mac Lane's terminology.
Author: samwinnick
I think there is a mistake in the wording here:
"Note that, in accordance with the microcosm principle, just as defining a monoid in a 1-category requires that the 1-category carry its own monoidal structure, defining a monoidal category in the 2-category of categories requires that the 2-category carry a monoidal structure as well. In this case we are implicitly employing the cartesian monoidal structure on Cat, so"...
It should read ..."defining a monoidal category in a 2-category requires"... instead of ..."defining a monoidal category in the 2-category of categories requires"...
The rest I think makes sense since in this article one is defining monoidal categories not internal to some exotic 2-category but in the 2-category Cat, which uses the cartesian monoidal structure of Cat.
I don't want to change it myself because I am just learning about this stuff for the first time.
-Sam Winnick
Re #72:
It looks correct to me as stated. A monoid internal to other monoidal 2-categories would, in general, no longer be a monoidal category, so the suggested change in the 3rd paragraph of #72 wouldn't really work. The intention is as picked up in the 4th paragraph, and that is what the entry is saying.
Which is not to say that the wording in the entry could not be improved on.
Hereby moving the following old query box discussion out of the entry to here:
—- begin forwarded discussion —
+–{.query}
Ronnie Brown I entirely understand that most monoidal categories in nature are not strict, and CWM gives an example to show that you cannot even get strictness for the cartesian product. On the other hand, for the cartesian product we get coherence properties directly from the universal property.
Now the tensor product in many monoidal categories in nature comes from the cartesian product, but with more elaborate morphisms. Thus the tensor product of vector spaces comes from bilinear maps. The associativity of this tensor product comes from looking at trilinear maps, and so derives from the associativity of the cartesian product. In a sense, this tensor product is as coherently associative as the cartesian product, which could means that in a rough and ready way we do not need to worry.
My query is whether there is a study of this kind of argument in categorical generality?
Peter LeFanu Lumsdaine: The setting for a statement like this would presumably be the connections between monoidal categories and multicategories, which are discussed very nicely in Chapters 2 and 3 of Tom Leinster's book. As far as I remember he doesn't give anything that would quite make this argument, and I don't know the literature of these well enough to say whether it's been done elsewhere, but I'd guess it has, or at least that it would be fairly straightforward to give in that terminology. The statement would look something like:
"If $\mathbf{C}$ is a multicategory generated by its nullary, unary and binary arrows, $C$ its underlying category, and $\otimes$, $1$ are functors on $C$ representing the nullary and binary arrows of $C$, then $\otimes$ and $1$ form the tensor and unit of a monoidal structure on $C$."
The ugly part of this is the generation condition, which will be needed since we only start with $\otimes$ and $1$ (indeed, some stronger presentation condition might be needed, actually). The unbiased version, where we have not just $\otimes$ and $1$ but an $n$-ary tensor product for every $n$, is essentially given in Leinster's book, iirc, and doesn't require such a condition.
— end forwarded discussion —
Re #72, #73, I can see why that sounds odd to Sam's ear.
Note that, in accordance with the microcosm principle, just as defining a monoid in a 1-category requires that the 1-category carry its own monoidal category structure, defining a monoidal category in the 2-category of categories requires that the 2-category carry a monoidal structure as well.
Since just before it says "a monoidal category is a pseudomonoid in the cartesian monoidal 2-category Cat", how about:
Note that, in accordance with the microcosm principle, just as defining a monoid in a 1-category requires that the 1-category carry its own monoidal category structure, defining a monoidal category as a pseudomonoid in the 2-category of categories requires that this 2-category carry a pseudomonoidal structure as well.
If I were to express this thought I would erase the existing paragraph and start again from scratch, more directly to the point:
Notice how the very definition of monoidal categories above invokes the Cartesian product of categories, namely in the definition of the tensor product in categories. But the operation of forming product categories is itself a (Cartesian) monoidal structure one level higher up in the higher category theory ladder, namely on the ambient 2-category of categories. This state of affairs, where the definition of (higher) algebraic structures uses and requires analogous algebraic structure present on the ambient higher category is a simple instance of the general microcosm principle.
That sounds good to me. I'll make the change.
Okay, thanks. Since all this discussion was sitting inside one humongous Definition-environment, I have now taken it apart into several numbered Definitions and Remarks. Also added more cross-links between such items where they referred to each other, fixed a bunch of links (somebody once did a lot of work on this entry without knowing how to code links in Instiki…).
Also added missing subsection headers. (Previously, the discussion of the 2-category $MonCat$ was sitting in the subcategory for "Strict monoidal categories"…)
In the definition of strict monoidal categories I fixed the wording: Now the ambient $Cat$ is of course regarded as a 1-category, not as a 2-category, unless we are trying to defeat the point laboriously made further above.
Got it. My confusion was: I thought the intended meaning of the remark in question was that it's possible to define a monoidal category in any monoidal 2-category, not just in the Cartesian monoidal 2-category Cat. But now I see that no such claim is being made, rather, our attention is just being drawn to the fact that we use the monoidal product (cartesian product) in the 2-category Cat in our definition of 'monoidal category', an instance of the microcosm principle.
Yes! But if we have examples of (pseudo-)monoids in other monoidal 2-categories, then this would be a good point to mention them/link to them.
I've fiddled with the wording a little
The ability to define pseudomonoids in any monoidal 2-category is an example of the so-called microcosm principle, where the definition of (higher) algebraic structures uses and requires analogous algebraic structure present on the ambient higher category.
Maybe "necessity" instead of or in addition to "ability": I think the point being made is that in defining monoidal categories one (secretly, maybe) needs to appeal to monoidal 2-category structure.
I guess the "ability" is implicitly about "sufficiency", so that with the later "needs" both are covered, but yes there should be better wording. Why "secretly"?
But then we don't even say this at microcosm principle, which just mentions the sufficiency part:
In higher algebra/higher category theory one can define (generalized) algebraic structures internal to categories which themselves are equipped with certain algebraic structure, in fact with the same kind of algebraic structure. In (Baez-Dolan 97) this has been called the microcosm principle.
So what in fact is the case? Is it both necessary and sufficient that higher structure be in place?
I said "secretly" because the point of this discussion is (as far as I see) that when one looks at the standard definition of monoidal categories, it is typically not made explicit that an ambient 2-categorical monoidal structure is being used, this happens tacitly or secretly in the background. We are adding a remark highlighting this pedantic subtlety.
Regarding sufficiency or necessity: In Lurie's actual realization of the microcosm principle (here) it is both: algebras over an $\infty$-operad $\mathcal{O}$ are defined internal to $\mathcal{O}$-monoidal $\infty$-categories.

Incidentally, the $(\infty,1)$-categorical formulation resolves what in the original formulation of the principle looks like an infinite regression: To define monoidal categories we don't actually need the monoidal 2-category $Cat$ but just its $(2,1)$-category core (since the coherence 2-morphisms that it is to supply are all invertible).
Isn't this discussion of necessity vs sufficiency sort of analogous to the difference of structure and property?:
sufficiency : property :: necessity : structure
By that I mean, there exists a non-monoidal category C and a sense in which one can define a monoid object M in C, by specifying a functor \otimes, associator, unitors, etc. for M. In this sense, having C monoidal guarantees that this _can_ be done. So we are referring to "properties" of this object M.
On the other hand, we _need_ C to be a monoidal category in order to be able to define monoid objects whose monoidal structure is canonical. Here M is equipped with structure inherited from C.
If what I wrote makes sense, I wonder if it would be relevant to talk about this nuance between necessity and requirement in the stuff, structure, property page, say with a hyperlink on the word "ability" or "necessity" on this article.
I wonder if the first notion (sufficiency/property) is not so good from the perspective of category theory. In its defense, there are certainly categories where only certain objects have something special about them. For example, elliptic curves, among curves, have an addition law. But there is no biproduct for algebraic curves is there? Nevertheless, the notion of addition on elliptic curves isn't completely arbitrary; we still "can" define group objects here in a meaningful sense.
Thank you both for your replies. This has been really helpful for me (as is the entire website). Also I hope to understand the last remark (#85) by Urs in the not-so-distant future. In the meantime, I am content with the following non-circular recipe:

(1) Cat is a category;
(2) Cat has products (thinking of pairs of sets as (x,y)={{x},{x,y}} to prove existence, but rarely ever again thinking of pairs like this);
(3) monoidal categories are defined in terms of Cat and its finite Cartesian product operation;
(4) categories with finite products are monoidal categories with respect to these;
(5) define bicategory again using (Cat,\times);
(6) in addition to being monoidal under the Cartesian product, Cat is a strict bicategory when using natural transformations for its 2-morphisms;
(7) the 2-category Cat is a monoidal 2-category with respect to Cartesian products.

Out of curiosity, do people not like this way of thinking about things because of step (2)? What is the reason for preferring a different recipe, using the "(2,1)-category core", as you say? Is there an nLab page that has the answer to this?
Re #85, #86:
It was a wide-spread mistake of old-school higher category theorists to think that to obtain a good theory of $n$-categories one needs to first define $(n+1)$-categories, because, so the logic went, the collection of all $n$-categories is bound to form an $(n+1)$-category which is needed to provide the ambient context for dealing with $n$-categories, notably to discuss their coherence laws.
This perceived infinite regression was arguably one of the reasons why the field of higher category theory was, by and large, stuck and fairly empty, before the revolution.
The error in the above thinking was to miss the fact that coherences only ever take value in invertible higher morphisms, so that a decent theory of $n$-categories is available already inside the $(\infty,1)$-category of $n$-categories.

This insight breaks the impasse: First define $(\infty,1)$-categories all at once, and then find the tower of $(\infty,n)$-categories on that homotopy-theoretic foundation.

The microcosm principle is an archetypical example of the need for this perspective: The coherences (unitor, associator, triangle, pentagon) on a monoidal category are all invertible, hence can be made sense of already inside the $(2,1)$-category of categories, functors, and natural iso-morphisms between them.
Thank you for that chunk of wisdom! I was definitely on track to falling into that way of thinking. In response to #80, I wonder if certain combinatorial species (those closed under product, so not trees, but forests, for example) are monoidal category objects in the monoidal 2-category of combinatorial species, with product given by the "star product" of combinatorial species. I'll have to think about it a bit more in detail.
Author: Hurkyl
On the history lesson, when did the idea of $n$-categories get refined into the idea of $(n,m)$-categories? When I was first casually reading about higher categories, it took a long time before I really encountered the latter being given any serious attention, but that could very well just be an artifact of what I was reading.
Certainly by Lectures on n-Categories and Cohomology, but I think it was much earlier.
Regarding serious attention: This began with the use of $(\infty,n)$-categories by Lurie in the classification of TQFTs and the article on Goodwillie calculus.

I remember the revelation when opening this, having been brought up with the old-school ideas forever "towards an $n$-category of cobordisms" (tac:18-10). Suddenly there was a definition that worked.
The drama of the eventual lifting of the impasse of old-school higher category is also reflected in Voevodsky's "breakthrough" through his "greatest roadblock" by realizing that (my slight paraphrase): "categories are not higher sets but higher posets; the actual higher sets are groupoids" (here).
This is referring to old-school higher category theory folklore being fond of the fact that "groupoids are just certain categories". While true, it misled people into not recognizing that homotopy theory is the foundation of higher category theory, not the other way around. Only when this was turned around and put on its feet did higher category theory start to run.
We're wondering about such matters in a conversation from 2012 beginning here:
When I was learning about the higher dimensional program from John Baez all those years ago, I took it that n-categories were to be the basic entity. Then n-groupoids were to be thought of a special case of n-categories, particularly useful because homotopy theorists had worked out very powerful theories to deal with the former. The trick was to extend what they'd done, but to an environment with no inverses.
Do you think that what you're finding here about the difficulty of directed homotopy type theory suggests that in some sense n-groupoids shouldn't be thought of as a variant of something more basic?
I wonder if we have the points made in #85 and #87 on the nLab anywhere.
Interesting that old quote. Yes, that's the point.
I have a vague memory of digging out, in a similar conversation years ago, quotes that explicitly make the error mentioned in #87. I am pretty sure where to look for them, but would have to search again. Maybe it's not worthwhile.
I suppose if A. Joyal had been more into publishing his insights, the drama could have been shortcut by about two decades.
I felt this was all well-understood by now, but it wouldn't hurt to have an nLab entry on it. I might try to start something later on the weekend.
I was intrigued by the above and for the historical record, I looked back at my letters to Grothendieck from 1983. I pointed out there that Kan complexes were a good model for infinity groupoids and that there were several good candidates for infinity categories. (I do not seem to have explicitly mentioned weak Kan complexes / quasi-categories, but about that time Cordier and I started working on both fibrant SSet-categories and on quasicategories. We did not seem to appreciate the importance of the $(\infty,1)$-idea however.) We had a sketch of the theory of weak Kan complexes to include the analogues of limits and colimits, ends and coends, but never wrote that up, as Jean-Marc felt that the SSet-categories would be more acceptable to both homotopy theorists and category theorists. Our write up of the ends and coends stuff in that latter setting took a lot longer than we had expected due to health issues and excessive teaching loads. We put that SSet-category view forward in the paper Homotopy Coherent Category Theory, Trans. Amer. Math. Soc. 349 (1997) 1-54, but that paper, which had been essentially finished several years earlier, was initially rejected by another journal on the basis that 'homotopy theorists did not need such a categorical way of looking at homotopy coherence', or some such wording. It received a good report from the referee for TAMS however.
There were thus people who were looking at what eventually became quasi-category theory at about the same time as Joyal's lovely approach was being developed, and with the Bangor approach to strict omega categories etc. the idea of doing all dimensions at once was pushed quite firmly. It should be also mentioned that, of course, Ross Street, Dominic Verity, Michael Batanin, and others in Sydney were putting forward a parallel vision at that time; (Edit) see for instance here for the Australian view in 2004. In the category theory conferences of the time there were talks which were more top-down, doing all dimensions at once by concentrating on the coherence questions, as well as those which were approaching the definition from the bottom-up.
I also remember, I think it was Maxim Kontsevich, giving a talk (probably 1992), which used $A_\infty$-categories and this was clearly linked in his mind, and for many of the category theorists in the audience, to that of 'doing infinity category theory in all dimensions', albeit for him it was based on a more algebraic dg-cat like structure.

I think the idea that one could do all dimensions at once was therefore well represented in talks during the 1980s and 90s, but some people preferred to be cautious and to try to understand the low dimensional weak categories (bicategories, tricategories, etc.), which were combinatorially very tricky, and were therefore avoided by some. (I would say that if one uses homotopy coherence and in particular higher operads (which we missed completely in our approach in the 1980s), the combinatorics becomes more manageable, but can be hard work!)
By the way, the Grothendieck correspondence is due to be published some time next year I think.
Joyal did not just have an "approach" (nor just a "pursuit" "towards" a goal) as many had. He had seen and then worked out the theory, essentially what is now called $(\infty,1)$-category theory.

It wasn't as widely known as it should have been. I remember him opening a talk on quasi-categories in 2007 at the Fields Institute with the words "In this talk I want to convince you that higher category theory exists." An innocent sounding statement, but somewhat damning to a room full of people supposedly all working on higher categories.
Nowhere in what I wrote was I suggesting that André had not put in a lot of hard work in developing the theory, and I was agreeing with you, Urs, that there were some in the 1980s and 90s who were still trying to do the inductive process. You are remembering 2007, I am remembering 15 to 20 years earlier, so there is no inconsistency between what you are saying and what I wrote. What is disappointing is that after that 24 year period, André still felt he had to justify that higher category theory existed, especially after the Minnesota conference of 2004, where a large number of people had met to discuss the state of the theory, and there were many talks about the various approaches. It was not 100% certain at that time which of the many versions were going to survive the race, nor if they were all equivalent.
Added pointer to:

Francis Borceux, Section 6.1 of: Handbook of Categorical Algebra Vol. 2: Categories and Structures [doi:10.1017/CBO9780511525865], Encyclopedia of Mathematics and its Applications 50, Cambridge University Press (1994)
Author: J-B Vienney
Added a very explicit definition of a strict monoidal category.
Added a subsection on the rig of scalars $\mathcal{C}[I,I]$.
Modelling virtual radio resource management in full heterogeneous networks
Sina Khatibi ORCID: orcid.org/0000-0002-6704-33261,2 &
Luis M. Correia1
EURASIP Journal on Wireless Communications and Networking, volume 2017, Article number: 73 (2017)
Virtual radio access networks (RANs) are the candidate solution for 5G access networks, the concept of virtualised radio resources completing the virtual RAN paradigm. This paper proposes a new analytical model for the management of virtual radio resources in full heterogeneous networks. The estimation of network capacity and data rate allocation are the model's two main components. Based on the probability distribution of the signal-to-interference-plus-noise-ratio observed at the user terminal, the model leads to the probability distribution for the total network data rate. It considers different approaches for the estimation of the total network data rate, based on different channel qualities, i.e., optimistic, realistic and pessimistic. The second component uses the outcome of the first one in order to maximise the weighted data rate subject to the total network capacity, the SLAs (service level agreements) of Virtual Network Operators (VNOs), and fairness. The weights for services in the objective function of the resource allocation component enable the model to have prioritisation among services. The performance of the proposed model is evaluated in a practical heterogeneous access network. Results show an increase of 2.5 times in network capacity by implementing an access point at the centre of each cell of a cellular network. It is shown that the cellular network capacity itself can vary from 0.9 Gbps in the pessimistic approach up to 5.5 Gbps in the optimistic one. Finally, the isolation of service classes and VNOs by means of virtualisation of radio resources is clearly demonstrated.
The monthly global data traffic is going to surpass 10 EB in 2017, as the result of the proliferation of smart devices and of traffic-hungry applications [1]. In order to address this issue, operators have to find a practical, flexible and cost-efficient solution for their networks expansion and operation, using the scarce available radio resources. The increment of cellular networks' capacity by deploying dense base stations (BSs) is the groundwork in any candidate solution.
In addition, traffic offloading, e.g., to Wi-Fi access points (APs), has proven to be a valuable complementary approach. According to [2, 3], an acceptable portion of traffic can be offloaded to APs, just by deferring delay-tolerant services for a pre-specified maximum interval until reaching an AP. Offloading approaches are generally based on using other connectivity capabilities of mobile terminals, whenever it is possible, instead of using further expensive cellular bands. The authors in [4] discuss the economics of traffic offloading, and in [5] address an energy-saving analysis.
Nevertheless, drastic temporal and geographical variations of traffic, in addition to the shortage of network capacity, make the situation for operators even worse [6]. The usual provisioning of radio access networks (RANs) for busy hours leads to an inefficient resource usage with relatively high CApital and OPerational Expenditure (CAPEX and OPEX) costs, which is not acceptable anymore. Instead, operators are in favour of flexible and elastic solutions, where they can also share their infrastructure.
Lately, the sharing of network infrastructure using network function virtualisation (NFV) has become an active research topic to transform the way operators architect their networks [7]. In the same research path, the concept of virtualisation of radio resources for cellular networks is proposed in [8–10]. The key idea is to aggregate and manage all the available physical radio resources in a set of infrastructures, offering pay-as-you-go connectivity-as-a-service (CaaS) to virtual network operators (VNOs). Virtual radio resource management (VRRM) is a non-trivial task, since it has to serve multiple VNOs with different requirements and service level agreements (SLAs) over the same infrastructure. Furthermore, wireless links are always subject to fading and interference, hence, their performance is variable [11]. The proposed model for virtual radio resource management in [8–10] has two key parts: (i) estimation of available radio resources and (ii) allocation of the available resources estimated in the first step to the services of VNOs. In this paper, an analytical description of the model is provided, followed by its evaluation in a practical scenario. The novelty of this paper can be summarised as follows:
This paper extends the analytical model for the management of virtual radio resources, considering full heterogeneous access networks, and including both non-cellular (e.g., Wi-Fi) and cellular (e.g., GSM, UMTS, LTE and whatever comes next in 5G–5th generation) networks. The key point in extending the model to non-cellular networks is the consideration of the effect of collision on the total network throughput. Consequently, the model has to consider the number of terminals connected to the Wi-Fi network, while optimising the other objectives.
The techniques for estimating network capacity are improved with three extra approaches, i.e., optimistic, realistic and pessimistic ones, which bring the model closer to real network operation.
A comprehensive study of the proposed model for VRRM in full heterogeneous networks under different channel quality conditions and different traffic load scenarios is given.
This paper is organised as follows: Section II addresses the background and related works, and Section III describes the proposed model for VRRM. The scenario for model evaluation is stated in Section IV. In Section V, numeric results are presented and discussed. The paper is concluded in Section VI.
Background and related works
Based on [12, 13], infrastructure sharing solutions can be categorised into three main types: geographical, passive and active sharing. In geographical sharing or national roaming, a federation of operators can achieve full coverage in a short time, by dividing the service area into several regions, over which each of the operators provides coverage [14]. Passive sharing refers to the sharing agreement of fundamental infrastructures, such as tower masts, equipment houses and power supply, in order to reduce operational costs. Active sharing, however, is the sharing of transport infrastructures, radio spectrum and baseband processing resources. In [15], two types of sharing are introduced: multi-operator RAN and multi-core network. In the former, operators maintain a maximum level of independent control over their traffic quality and capacity, by splitting BSs and their controller nodes into logically independent units over a single physical infrastructure. In the latter, however, operators give up their independent control, by sharing the aforementioned entities in conjunction with the pooling of radio resources. Although the cost items in multi-core network are identical to multi-operator RAN, radio resources pooling leads to further savings in extremely low-traffic areas over equipment-related costs. Moreover, a network-wide radio resource management framework is proposed [16], in order to achieve isolation in addition to the optimal distribution of resources across the network.
Despite the benefits of RAN sharing, surprisingly few sharing agreements have been made, especially in mature markets. The reasons offered by operators for not engaging in sharing deals are often the up-front transformation costs, the potential loss of control over their networks and the challenge of operational complexity [17, 18]. Sharing deals may be too expensive, and the initial cost of a network-sharing deal can be daunting; hence, operators without a comfortable margin of funds to make the necessary investment are likely to assume that they simply cannot afford to participate in such an operation. 3rd Generation Partnership Project (3GPP) standards also limit the shared RAN to serve only four operators [19]. Moreover, many operators, particularly incumbent ones whose early entrance into markets has given them the best coverage and network qualities, assume that sharing their network with rivals would dilute their competitive advantage. Some of them may feel that they would not be able to control the direction for the development of their network in future rollout strategies and choices about hardware and vendors. Last, but not least, having a shared RAN running properly is an elaborate and complex task. Some operators believe that a shared network operation poses many technological and operational challenges, which may lead to little financial benefit and great potential for chaos [19]. However, some studies, e.g. [20], show that the pros of sharing outweigh the cons, and that this approach can really be seen as a very promising solution for the future, namely, in a broader perspective, i.e., looking at RAN virtualisation instead of RAN sharing as the candidate solution.
NFV has captured the attention of many researchers, e.g. [7], and some studies have considered the RAN as well. By introducing an entity called "hypervisor" on top of physical resources, the authors in [21] addressed the concept of a virtualised eNodeB. The hypervisor allocates the physical resources among various virtual instances and coordinates multiple virtual eNodeBs over the same physical one; the LTE spectrum is shared among them using the concept of RAN sharing, i.e., each virtual eNodeB receives a portion of the available frequency bands. The virtualisation of BSs in LTE is also addressed in [22], by considering the resource allocation to be static or dynamic spectrum sharing among virtual operators. The authors in [23] look into the advantages of a virtualised LTE system, via an analytical model for FTP (File Transfer Protocol) transmissions; the evaluation considers realistic situations to show the multiplexing gain, in addition to the analytical analysis.
As the next step in RAN virtualisation, the concept of radio resource virtualisation and a management model is proposed in [8], and the extension of the model to support the shortage of radio resources is presented in [9]. In the same research path, the current paper, as well as [10], considers the virtualisation of radio resources over a full heterogeneous access network (i.e., a combination of cellular networks and WLANs (wireless local area networks)), over which pay-as-you-go CaaS is offered to VNOs.
Radio resource management in virtual RANs
Figure 1 presents the hierarchy for the management of virtual radio resources, consisting of a VRRM entity on top of the usual radio resource management (RRM) entities of heterogeneous access networks [24], i.e., common RRM (CRRM) and local RRMs (for each of the physical networks), the latter managing different RATs (Radio Access Technologies).
Fig. 1 Radio resource management in virtual RANs [8]
VNOs, placed at the top of the hierarchy, require wireless connectivity to be offered to their subscribers, not owning any radio access infrastructure [8, 9]. VNOs ask for RAN-as-a-service (RANaaS) from the RAN provider with physical infrastructure [25]. VNOs do not have to deal with the management of virtual RANs; they just define requirements, such as contracted capacity, in their SLAs with RAN providers. The role of VRRM is to translate VNOs' requirements and SLAs into a set of policies for the lower levels [8]. These policies contain data rates for different services, in addition to their priorities. Although VRRM optimises the usage of virtual radio resources, it does not deal with physical ones. However, VRRM has to consider practical issues, such as the effect of the collision rate in WLANs on the network data rate, in order to have an effective management of virtual radio resources. Reports and monitoring information provided by CRRM enable VRRM to improve its policies. Load balancing among RANs, controlling the offloading procedure, is the duty of CRRM, also known as Joint RRM. Finally, the local RRMs are in charge of managing the physical resources based on the policies of VRRM and CRRM.
The VNOs' SLAs can generally be categorised into three main groups:
Guaranteed bitrate (GB), in which the RAN provider guarantees the VNO a minimum and a maximum level of data rates, regardless of the network status. Allocating the maximum guaranteed data rate to the VNO leads to its full satisfaction. The upper boundary in this type of SLA enables VNOs to have full control over their networks. For instance, a VNO offering VoIP (voice over IP) to its subscribers may foresee offering this service to only 30 up to 50% of its subscribers simultaneously; hence, the VNO can put this policy into practice by choosing a guaranteed SLA for its VoIP service. It is expected that subscribers always experience a good quality of service (QoS) in return for relatively more expensive services.
Best effort with minimum guaranteed (BG), where the VNO is guaranteed a minimum level of service. Requests for data rates higher than the guaranteed level are served in a best effort manner; hence, the minimum guaranteed data rate is the one received during busy hours. In this case, although VNOs do not invest as much as the former ones, they can still guarantee the minimum QoS to their subscribers. From the subscribers' viewpoint, the acceptable service (not as good as the previous ones) is offered at a relatively lower cost.
Best effort (BE), in which the VNO is served in the pure best effort approach. In this case, operators and their subscribers may suffer from low QoS and resource starvation during busy hours, but the associated cost will be lower as well.
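For concreteness, the following is a minimal data-structure sketch of these three SLA groups; the class name, fields and example figures are purely illustrative and not part of the original model.

```python
from dataclasses import dataclass

@dataclass
class ServiceSLA:
    """One service of one VNO, with its contracted data-rate guarantees [Mbps]."""
    vno: str
    service: str
    min_guaranteed: float = 0.0            # 0 for pure best effort (BE)
    max_guaranteed: float = float("inf")   # finite only for guaranteed bitrate (GB)

# Illustrative instances of the three SLA groups:
voip_gb  = ServiceSLA("VNO-A", "VoIP",  min_guaranteed=30.0, max_guaranteed=50.0)  # GB
video_bg = ServiceSLA("VNO-B", "Video", min_guaranteed=100.0)                      # BG
mail_be  = ServiceSLA("VNO-C", "Email")                                            # BE
```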
VRRM can be considered as a decision-making problem under uncertainty in a dynamic environment; the decision is on the allocation of resources to the different services of VNOs, by considering the set of available radio resources. In what follows, the VRRM problem is discussed in detail.
The first step is forming the set of radio resources containing the number of available radio resource units (RRUs) per RAT (e.g., time-slots in GSM, codes in UMTS, resource-blocks in LTE and channels in Wi-Fi), as follows:
$$ {s}_{\mathrm{t}}^{\mathrm{RRU}}=\left\{{N}_{\mathrm{SRRU}}^{\mathrm{RA}{\mathrm{T}}_1},\ {N}_{\mathrm{SRRU}}^{\mathrm{RA}{\mathrm{T}}_2}, \dots,\ {N}_{\mathrm{SRRU}}^{\mathrm{RA}{\mathrm{T}}_{{\mathrm{N}}_{\mathrm{RA}\mathrm{T}}}}\right\} $$
\( {s}_{\mathrm{t}}^{\mathrm{RRU}} \): the set of radio resources at t,
\( {N}_{\mathrm{SRRU}}^{\mathrm{RA}{\mathrm{T}}_{\mathrm{i}}} \): number of spare RRUs in the ith RAT,
N RAT: number of RATs.
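As a simple illustration of the set in Eq. (1), the spare RRUs at a given instant can be represented as a per-RAT count; the RAT list and the numbers below are hypothetical.

```python
# Hypothetical snapshot of the spare radio resource units per RAT at time t:
spare_rrus_t = {
    "GSM":   34,   # free time-slots
    "UMTS":  21,   # free codes
    "LTE":   87,   # free resource blocks
    "Wi-Fi":  3,   # free channels
}
```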
Mapping the set of radio resources onto the total network data rate is the next step. Since radio resources' performance is not deterministic, it is not possible to have a precise prediction of capacity, requiring the estimation of the total network data rate via a probability distribution, as a function of the available RRUs for further decisions,
$$ {s}_{\mathrm{t}}^{\mathrm{RRU}}\overset{\mathrm{mapping}}{\to }\ {R}_{\mathrm{b}\ \left[\mathrm{Mbps}\right]}^{\mathrm{CRRM}}(t) $$
\( {R}_{\mathrm{b}}^{\mathrm{CRRM}} \): total network data rate.
The third step is the allocation of the available resources to the services of VNOs. Sets of policies are designed in this step, in order to assign a portion of the total network capacity to each service of each VNO. Meeting the guaranteed service levels and increasing the resources' usage efficiency are the primary objectives, but other goals, such as fairness, may be considered. The resource allocation mapping of the total network capacity onto different services' data rates is given by:
$$ {R}_{\mathrm{b}\ \left[\mathrm{Mbps}\right]}^{\mathrm{CRRM}}(t)\overset{\mathrm{map}}{\to }\ \left\{{R}_{{\mathrm{b}}_{\mathrm{ji}}\ \left[\mathrm{Mbps}\right]}^{\mathrm{Srv}}(t)\Big| j=1, \dots,\ {N}_{\mathrm{VNO}},\ i=1, \dots,\ {N}_{\mathrm{srv}}\right\} $$
\( {R}_{{\mathrm{b}}_{\mathrm{ji}}}^{\mathrm{Srv}} \): serving (allocated) data rate for service j of VNO i;
N VNO: number of VNOs;
N srv: number of services.
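As a rough illustration of the mapping in Eq. (3), the toy sketch below first serves every minimum guarantee and then shares the remaining capacity in proportion to per-service weights; it is a simplified stand-in for the constrained optimisation described later, and all names are illustrative.

```python
def allocate(total_rate_mbps, guarantees, weights):
    """Toy allocation of the total network data rate to (VNO, service) pairs.

    guarantees : {(vno, service): (min_mbps, max_mbps)}  -- from the SLAs
    weights    : {(vno, service): weight}                -- service priorities
    """
    # Serve the minimum guarantees first.
    alloc = {k: lo for k, (lo, hi) in guarantees.items()}
    remaining = max(total_rate_mbps - sum(alloc.values()), 0.0)
    weight_sum = sum(weights.values()) or 1.0
    # Share what is left in proportion to the weights, capped by the maxima.
    for k, (lo, hi) in guarantees.items():
        alloc[k] = min(lo + remaining * weights[k] / weight_sum, hi)
    return alloc
```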
Finally, monitoring the used resources and updating their status is the observation part of this decision-making problem. It enables the manager to evaluate the accuracy of its decisions and to modify former ones. Updating changes in the set of radio resources helps the manager to cope with dynamic changes in the environment and in VNOs' requirements. In summary, it can be claimed that resource management solutions generally have two main components: estimation of available resources and optimisation of their allocation, which are addressed next.
Estimation of available resources
The estimation of available resources, and their allocation to the different services, are the two key steps in VRRM procedures. Obtaining a probabilistic relationship, in the form of a probability density function (PDF), between the set of available RRUs and network capacity is the goal of the first step; then, by having an estimation of network capacity, VRRM allocates a portion of this capacity to each service of each VNO.
Depending on various parameters, such as RAT, modulation and coding, the allocation of an RRU can provide different data rates to mobile terminals. However, the data rate of an RRU is generally a function of the signal-to-interference-plus-noise-ratio (SINR) [8, 9]. Since SINR is a random variable, given the channel characteristics from path loss, fading and mobility, among other things, one needs to express the data rate also as a random variable:
$$ {R_{\mathrm{b}}}_{\mathrm{RA}{\mathrm{T}}_{\mathrm{i}}\left[\mathrm{Mbps}\right]}\left({\rho}_{\mathrm{i}\mathrm{n}}\right)\in \left[0,\ {R_{\mathrm{b}}}_{\mathrm{RA}{\mathrm{T}}_{\mathrm{i}}\left[\mathrm{Mbps}\right]}^{\max}\right] $$
\( {R_{\mathrm{b}}}_{\mathrm{RA}{\mathrm{T}}_{\mathrm{i}}} \): data rate of an RRU from the ith RAT,
\( \rho_{\mathrm{in}} \): SINR,

\( {R_{\mathrm{b}}}_{\mathrm{RA}{\mathrm{T}}_{\mathrm{i}}}^{\max } \): maximum data rate of an RRU from the ith RAT.
Based on [8], the PDF of \( {R_{\mathrm{b}}}_{\mathrm{RA}{\mathrm{T}}_{\mathrm{i}}} \) can be given as:
$$ {p}_{{\mathrm{R}}_{\mathrm{b}}}\left({R_{\mathrm{b}}}_{\mathrm{R}\mathrm{A}{\mathrm{T}}_{\mathrm{i}}\left[\mathrm{Mbps}\right]}\right)=\frac{\frac{0.46}{\alpha_p}\left({\displaystyle {\sum}_{k=1}^5} k\ {a}_k\ {\left({R_{\mathrm{b}}}_{\mathrm{R}\mathrm{A}{\mathrm{T}}_{\mathrm{i}}}\right)}^{k-1}\right) \exp \left(-\frac{0.46}{\alpha_p}{\displaystyle {\sum}_{k=0}^5}{a}_k\ {\left({R_{\mathrm{b}}}_{\mathrm{R}\mathrm{A}{\mathrm{T}}_{\mathrm{i}}}\right)}^k\right)}{ \exp \left(-\frac{0.46}{\alpha_p}{a}_0\right)- \exp \left(-\frac{0.46}{\alpha_p}{\displaystyle {\sum}_{k=0}^5}{a}_k\ {\left({R_{\mathrm{b}}}_{\mathrm{R}\mathrm{A}{\mathrm{T}}_{\mathrm{i}}}^{\max}\right)}^k\right)} $$
\( \alpha_p \ge 2 \): path loss exponent,

\( a_k \): coefficients in a polynomial approximation of SINR, as a function of data rate in each RAT (presented in [8] for cellular networks, and in [10] for Wi-Fi).
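For reference, Eq. (5) can be evaluated numerically as in the sketch below, assuming the coefficients \( a_k \) of the RAT of interest are available (from [8] for cellular RATs or [10] for Wi-Fi); the function name and code organisation are ours.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def rru_rate_pdf(rb, a, alpha_p, rb_max):
    """PDF of the data rate of one RRU, Eq. (5).

    rb      : array of data-rate values [Mbps], 0 <= rb <= rb_max
    a       : coefficients a_0..a_5 of the SINR-vs-rate polynomial for the RAT
    alpha_p : path-loss exponent (>= 2)
    rb_max  : maximum data rate of one RRU [Mbps]
    """
    a = np.asarray(a, dtype=float)
    c = 0.46 / alpha_p
    poly = P.polyval(rb, a)                         # sum_k a_k rb^k
    dpoly = P.polyval(rb, a[1:] * np.arange(1, 6))  # sum_k k a_k rb^(k-1)
    num = c * dpoly * np.exp(-c * poly)
    den = np.exp(-c * a[0]) - np.exp(-c * P.polyval(rb_max, a))
    return num / den
```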
The total data rate for a single RAT pool is
$$ {R}_{{\mathrm{b}}_{\mathrm{tot}}}^{\mathrm{RA}{\mathrm{T}}_{\mathrm{i}}}={\displaystyle \sum_{n=1}^{N_{\mathrm{RRU}}^{\mathrm{RA}{\mathrm{T}}_{\mathrm{i}}}}}{R}_{{\mathrm{b}}_{\mathrm{n}}}^{\mathrm{RA}{\mathrm{T}}_{\mathrm{i}}} $$
\( {N}_{\mathrm{RRU}}^{\mathrm{RA}{\mathrm{T}}_{\mathrm{i}}} \): number of RRUs of the ith RAT,
\( {R}_{{\mathrm{b}}_{\mathrm{tot}}}^{\mathrm{RA}{\mathrm{T}}_{\mathrm{i}}} \): data rate from the ith RAT pool,
\( {R}_{{\mathrm{b}}_{\mathrm{n}}}^{\mathrm{RA}{\mathrm{T}}_{\mathrm{i}}} \): data rate from the nth RRU of the ith RAT.
The PDF of a RAT's data rate is equal to the convolution of all RRUs' PDFs when the channels, and consequently, the data rates' random variables (\( R_{\mathrm{b}_i} \)) are independent [26]. In the deployment of heterogeneous access networks, the resource pools of the different RATs can be aggregated under the supervision of CRRM. The total data rate aggregated from all RATs is then the summation of the total data rate from each individual:
$$ {R}_{\mathrm{b}\ \left[\mathrm{Mbps}\right]}^{\mathrm{CRRM}}={\displaystyle \sum_{i=1}^{N_{\mathrm{RA}\mathrm{T}}}}{R}_{{\mathrm{b}}_{\mathrm{tot}}\left[\mathrm{Mbps}\right]}^{\mathrm{RA}{\mathrm{T}}_{\mathrm{i}}} $$
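Numerically, the PDFs implied by Eqs. (6) and (7) can be obtained by discrete convolution of the individual PDFs sampled on a common data-rate grid; the sketch below is one straightforward way to do this, with illustrative names.

```python
import numpy as np

def sum_pdf(pdfs, step_mbps):
    """PDF of a sum of independent data rates, each given as a sampled PDF.

    pdfs      : iterable of 1-D arrays, all sampled with the same step [Mbps]
    step_mbps : grid resolution of the sampled PDFs
    """
    total = np.array([1.0 / step_mbps])  # Dirac delta at 0 Mbps (nothing summed yet)
    for pdf in pdfs:
        total = np.convolve(total, pdf) * step_mbps
    return total

# Eq. (6): convolve the PDFs of all RRUs of one RAT to get the pool PDF.
# Eq. (7): convolve the per-RAT pool PDFs to get the CRRM-level PDF.
```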
By having the number of available resources mapped onto probability functions, the VRRM has an estimation of the total network capacity. In the estimation procedure, the total network data rate is highly dependent on the channel quality at the mobile terminal, to which the radio resources are allocated. A higher network data rate can be achieved when the RRUs are allocated to mobile terminals with a high SINR. Thus, the allocation of the radio resources to mobile terminals with a low SINR leads to a lower network data rate. In a very low network capacity, VRRM may not be able to meet the minimum guaranteed requirements. The aforementioned estimation approach does not consider any assumption on the channel quality of the mobile terminals, this approach being referred to as the general (G) one. By adding assumptions about the mobile terminals' channel quality, three additional approaches for the estimation of network capacity are considered:
Optimistic approach (OP): all RRUs are assigned to users with very good channel quality (i.e., high SINR), therefore, it is assumed that the data rate of each RRU satisfies:
$$ 0.5\, R_{\mathrm{b}\,[\mathrm{Mbps}]}^{\max}\le R_{\mathrm{b}\,[\mathrm{Mbps}]}\le R_{\mathrm{b}\,[\mathrm{Mbps}]}^{\max} $$
Realistic approach (RL): it is assumed that the RRUs of each RAT are divided into two equal groups, and that the data rate of the RRU from each group is as follows:
$$ 0\le R_{\mathrm{b}\,[\mathrm{Mbps}]}<0.5\, R_{\mathrm{b}\,[\mathrm{Mbps}]}^{\max}\qquad \text{(low SINR group)} $$
$$ 0.5\, R_{\mathrm{b}\,[\mathrm{Mbps}]}^{\max}\le R_{\mathrm{b}\,[\mathrm{Mbps}]}<R_{\mathrm{b}\,[\mathrm{Mbps}]}^{\max}\qquad \text{(high SINR group)} $$
Pessimistic approach (PE): it is assumed that all the RRUs in the system are assigned to users with low SINR so that the boundaries are
$$ 0\le R_{\mathrm{b}\,[\mathrm{Mbps}]}\le 0.5\, R_{\mathrm{b}\,[\mathrm{Mbps}]}^{\max} $$
Equation (5) can be further developed for these special cases, where the data rate is bounded between low and high values, the conditional PDF of a single RRU then being calculated as follows [26]:
$$ p_{R_{\mathrm{b}}}\!\left(R_{\mathrm{b},\mathrm{RAT}_i\,[\mathrm{Mbps}]}\,\middle|\,R_{\mathrm{bLow}\,[\mathrm{Mbps}]}\le R_{\mathrm{b},\mathrm{RAT}_i\,[\mathrm{Mbps}]}\le R_{\mathrm{bHigh}\,[\mathrm{Mbps}]}\right)=\frac{\dfrac{0.46}{\alpha_p}\left(\sum_{k=1}^{5} k\,a_k\,R_{\mathrm{b},\mathrm{RAT}_i}^{\,k-1}\right)\exp\!\left(-\dfrac{0.46}{\alpha_p}\sum_{k=0}^{5} a_k\,R_{\mathrm{b},\mathrm{RAT}_i}^{\,k}\right)}{\exp\!\left(-\dfrac{0.46}{\alpha_p}\sum_{k=0}^{5} a_k\,R_{\mathrm{bLow}}^{\,k}\right)-\exp\!\left(-\dfrac{0.46}{\alpha_p}\sum_{k=0}^{5} a_k\,R_{\mathrm{bHigh}}^{\,k}\right)} $$
\( R_{\mathrm{bLow}} \): lower boundary for the RRU data rate;
\( R_{\mathrm{bHigh}} \): upper boundary for the RRU data rate.
One should note the relationship between the various data-rate parameters:
$$ 0\le R_{\mathrm{bLow}\,[\mathrm{Mbps}]}\le R_{\mathrm{bHigh}\,[\mathrm{Mbps}]}\le R_{\mathrm{b}\,[\mathrm{Mbps}]}^{\max} $$
Furthermore, one should note that this approach is not scenario-dependent, i.e., it can be applied to any network/system fitting into the general network architecture previously presented, which encompasses all current cellular networks and WLANs, as well as to other approaches for the estimation of network capacity (just by considering the corresponding conditions).
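As an illustration of the bounded approaches (OP, RL and PE), the conditional PDF above can be obtained numerically by truncating the unconditional PDF to the assumed boundaries and renormalising it. A minimal sketch, reusing the grid and PDF from the previous sketches:

```python
import numpy as np

def truncated_pdf(rb, pdf, rb_low, rb_high, drb):
    """Conditional PDF of the RRU data rate given rb_low <= Rb <= rb_high,
    i.e., the unconditional PDF truncated and renormalised over that interval."""
    mask = (rb >= rb_low) & (rb <= rb_high)
    cond = np.where(mask, pdf, 0.0)
    norm = cond.sum() * drb          # probability mass inside the boundaries
    return cond / norm

# The three approaches differ only in the boundaries, e.g. for the optimistic one:
# pdf_op = truncated_pdf(rb_grid, pdf, 0.5 * rb_max, rb_max, drb)
```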
Allocation of resources in cellular networks
In the next step, the services of the VNOs have to be granted a portion of the network capacity. The allocation of resources has to be based on the services' priority and on the SLAs. For instance, the services of the conversational (e.g., VoIP) and streaming (e.g., video) classes are delay-sensitive but have almost constant data rates, hence allocating data rates higher than the contracted ones does not increase their QoS, in contrast to the interactive (e.g., FTP) and background (e.g., email) classes; as a consequence, operators offering the former set of services are not interested in being allocated higher data rates.
The primary goal of the allocation procedure is to increase the total network data rate, while considering the priority of the different services, subject to the constraints. For this reason, the objective function for VRRM is the total weighted network data rate, expressed for cellular RATs as:
$$ f_{\mathbf{R}_{\mathbf{b}}}^{\mathrm{cell}}\!\left(\mathbf{R}_{\mathbf{b}}^{\mathrm{cell}}\right)=\sum_{i=1}^{N_{\mathrm{VNO}}}\sum_{j=1}^{N_{\mathrm{srv}}} W_{ji}^{\mathrm{Srv}}\, R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{cell}} $$
\( \mathbf{R}_{\mathbf{b}}^{\mathrm{cell}} \): vector of serving data rates from cellular networks,
\( N_{\mathrm{VNO}} \): number of VNOs served by this VRRM,
\( N_{\mathrm{srv}} \): number of services of each VNO,
\( W_{ji}^{\mathrm{Srv}} \): weight of serving a unit of data rate for service j of VNO i by VRRM, where \( W_{ji}^{\mathrm{Srv}}\in\left[0,1\right] \).
The weights in (14) are used to prioritise the allocation of data rates to services, it being common practice to have their summation equal to one. The choice of these weights is based on the SLAs between the VNOs and VRRM, and they can be modified at runtime depending on the agreed KPIs (key performance indicators).
Allocation of resources in WLAN
It is desirable that the services with the higher serving weights receive data rates higher than the ones with the lower serving weights. The equivalent function for WLANs is
$$ f_{\mathbf{R}_{\mathbf{b}}}^{\mathrm{WLAN}}\!\left(\mathbf{R}_{\mathbf{b}\,[\mathrm{Mbps}]}^{\mathrm{WLAN}}\right)=\sum_{i=1}^{N_{\mathrm{VNO}}}\sum_{j=1}^{N_{\mathrm{srv}}}\left( W_{ji}^{\mathrm{Srv}}\, R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{WLAN}} + W^{\mathrm{SRb}}\,\frac{\overline{R_{\mathrm{b},j}}}{\overline{R_{\mathrm{b}}^{\max}}}\, R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{WLAN}}\right) $$
\( \mathbf{R}_{\mathbf{b}}^{\mathrm{WLAN}} \): vector of serving data rates from APs,
\( W^{\mathrm{SRb}} \): weight for the session average data rate, where \( W^{\mathrm{SRb}}\in\left[0,1\right] \),
\( \overline{R_{\mathrm{b}}^{\max}} \): maximum average data rate among all services,
\( \overline{R_{\mathrm{b},j}} \): average data rate for service j.
In (15), W SRb is introduced to give priority to services with a higher data rate per session. Assigning these services to a Wi-Fi network reduces collision rates, leading to a higher network data rate. Obviously, assigning zero to this weight completely eliminates the average data rate effect (i.e., the effect of collision on network data rate) and converts the objective function of WLANs in (15) into the cellular one in (14). The aforementioned weight is chosen by the VRRM approach, based on the SLAs and the Wi-Fi and LTE coverage maps, in addition to applied network planning and load-balancing policies. It can also be subject to modifications during runtime, based on measurements and reports.
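To make the two objective functions concrete, the sketch below assembles the linear coefficient that multiplies each serving data rate in (14) and (15). It is only an illustration under our own variable names; the WLAN coefficient is the cellular one augmented by the \( W^{\mathrm{SRb}} \) term discussed above.

```python
import numpy as np

def objective_coefficients(w_srv, w_srb, rb_mean_session, rb_mean_max):
    """Linear objective coefficients for the serving data rates.

    w_srv            : (N_vno, N_srv) serving weights W^Srv_ji
    w_srb            : scalar weight W^SRb for the session average data rate (WLAN only)
    rb_mean_session  : (N_srv,) average data rate per session of each service
    rb_mean_max      : maximum average data rate among all services
    """
    c_cell = w_srv.copy()                                             # coefficient of R^cell in (14)
    c_wlan = w_srv + w_srb * rb_mean_session[None, :] / rb_mean_max   # coefficient of R^WLAN in (15)
    return c_cell, c_wlan
```

Setting `w_srb = 0` makes the two coefficient sets identical, which mirrors the remark above that a zero weight turns the WLAN objective into the cellular one.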
In addition to increasing the network data rate, a fair resource allocation is another objective of VRRM. On the one hand, the model is expected to allocate more resources to services with a higher serving weight; on the other hand, it is not acceptable for services with a lower weight not to be served at all or to be served in very poor conditions. A fair allocation of resources is achieved when the deviation from the weighted average over all services is minimised:
$$ \min_{R_{\mathrm{b},ji}^{\mathrm{Srv}}}\left\{\sum_{i=1}^{N_{\mathrm{VNO}}}\sum_{j=1}^{N_{\mathrm{srv}}}\left|\frac{R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{Srv}}}{W_{ji}^{\mathrm{Srv}}}-\frac{1}{N_{\mathrm{VNO}}\,N_{\mathrm{srv}}}\sum_{i=1}^{N_{\mathrm{VNO}}}\sum_{j=1}^{N_{\mathrm{srv}}}\frac{R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{Srv}}}{W_{ji}^{\mathrm{Srv}}}\right|\right\} $$
This concept is expressed as a fairness function, written as:
$$ f_{\mathbf{R}_{\mathbf{b}}}^{\mathrm{fr}}\!\left(\mathbf{R}_{\mathbf{b}}^{\mathrm{f}}\right)=\sum_{i=1}^{N_{\mathrm{VNO}}}\sum_{j=1}^{N_{\mathrm{srv}}} R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{f}} $$
\( R_{\mathrm{b},ji}^{\mathrm{f}} \): boundary for the deviation of the data rate from the normalised average for service j of VNO i, defined by:
$$ \left\{\begin{array}{l} \dfrac{R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{Srv}}}{W_{ji}^{\mathrm{Srv}}}-\displaystyle\sum_{i=1}^{N_{\mathrm{VNO}}}\sum_{j=1}^{N_{\mathrm{srv}}}\dfrac{R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{Srv}}}{N_{\mathrm{VNO}}\,N_{\mathrm{srv}}\,W_{ji}^{\mathrm{Srv}}}\le R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{f}} \\[2ex] -\dfrac{R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{Srv}}}{W_{ji}^{\mathrm{Srv}}}+\displaystyle\sum_{i=1}^{N_{\mathrm{VNO}}}\sum_{j=1}^{N_{\mathrm{srv}}}\dfrac{R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{Srv}}}{N_{\mathrm{VNO}}\,N_{\mathrm{srv}}\,W_{ji}^{\mathrm{Srv}}}\le R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{f}} \end{array}\right. $$
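The paired inequalities above are the standard linear-programming trick for the absolute deviation in the fairness objective: each auxiliary variable \( R_{\mathrm{b},ji}^{\mathrm{f}} \) upper-bounds the deviation and its negative, so minimising the sum of the auxiliaries minimises the sum of absolute deviations. A minimal sketch of how these inequality rows can be built (the decision-vector layout and function name are our own assumptions):

```python
import numpy as np

def fairness_rows(w_srv):
    """Build the two linear inequalities of the fairness constraints for every (VNO, service) pair.

    Decision vector assumed to be x = [R^Srv (flattened), R^f (flattened)];
    returns A, b such that A @ x <= b encodes |deviation| <= R^f.
    """
    n = w_srv.size                   # number of (VNO, service) pairs
    w = w_srv.ravel()
    # Row p of D gives: R^Srv_p / w_p  -  (1/n) * sum_q R^Srv_q / w_q
    D = np.eye(n) / w - np.ones((n, n)) / (n * w)
    A_upper = np.hstack([ D, -np.eye(n)])    #  deviation - R^f <= 0
    A_lower = np.hstack([-D, -np.eye(n)])    # -deviation - R^f <= 0
    A = np.vstack([A_upper, A_lower])
    b = np.zeros(2 * n)
    return A, b
```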
In order to better discuss the balance between these two objectives, i.e., the fairness and the total weighted network data rate, the boundaries of these two objectives have to be compared. The highest resource efficiency (i.e., the highest weighted data rate) is obtained when all resources are allocated to the service(s) with the highest serving weight, hence, the maximum of the first objective can be written as:
$$ \max_{R_{\mathrm{b},ji}^{\mathrm{Srv}}}\left\{\sum_{i=1}^{N_{\mathrm{VNO}}}\sum_{j=1}^{N_{\mathrm{srv}}} W_{ji}^{\mathrm{Srv}}\, R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{Srv}}\right\}=\max\left\{W_{ji}^{\mathrm{Srv}}\right\}\, R_{\mathrm{b}\,[\mathrm{Mbps}]}^{\mathrm{CRRM}} $$
This means that, as the network capacity increases, the summation of the weighted data rate in (14) increases as well. In the same situation, the fairness objective function also reaches its maximum:
$$ \max_{R_{\mathrm{b},ji}^{\mathrm{Srv}}}\left\{\sum_{i=1}^{N_{\mathrm{VNO}}}\sum_{j=1}^{N_{\mathrm{srv}}} R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{f}}\right\}=\frac{R_{\mathrm{b}\,[\mathrm{Mbps}]}^{\mathrm{CRRM}}}{\max\left\{W_{ji}^{\mathrm{Srv}}\right\}} $$
Based on (19) and (20), the complete objective function for the management of virtual radio resources is defined as:
$$ f_{\mathbf{R}_{\mathbf{b}}}^{\mathrm{v}}\!\left(\mathbf{R}_{\mathbf{b}}^{\mathrm{Srv}}\right)=f_{\mathbf{R}_{\mathbf{b}}}^{\mathrm{cell}}\!\left(\mathbf{R}_{\mathbf{b}}^{\mathrm{cell}}\right)+f_{\mathbf{R}_{\mathbf{b}}}^{\mathrm{WLAN}}\!\left(\mathbf{R}_{\mathbf{b}}^{\mathrm{WLAN}}\right)-\alpha_{\mathrm{f}}\!\left(W_{\mathrm{f}}\right)\, f_{\mathbf{R}_{\mathbf{b}}}^{\mathrm{f}}\!\left(\mathbf{R}_{\mathbf{b}}^{\mathrm{f}}\right) $$
\( \mathbf{R}_{\mathbf{b}}^{\mathrm{f}} \): vector of intermediate fairness variables,
\( \mathbf{R}_{\mathbf{b}}^{\mathrm{Srv}} \): vector of serving data rates,
\( \alpha_{\mathrm{f}} \): fairness coefficient, as a function of the fairness weight:
$$ \alpha_{\mathrm{f}}\!\left(W_{\mathrm{f}}\right)=\frac{W_{\mathrm{f}}\, R_{\mathrm{b}\,[\mathrm{Mbps}]}^{\mathrm{CRRM}}}{\left(1-W_{\mathrm{f}}\right) R_{\mathrm{b}\,[\mathrm{Mbps}]}^{\mathrm{CRRM}}\, N^{\mathrm{SmaxRb}}+W_{\mathrm{f}}\,\overline{R_{\mathrm{b}\,[\mathrm{Mbps}]}^{\max}}} $$
\( N^{\mathrm{SmaxRb}} \): number of subscribers using the service with the maximum data rate,
$$ N^{\mathrm{SmaxRb}}=\frac{R_{\mathrm{b}\,[\mathrm{Mbps}]}^{\mathrm{CRRM}}}{\overline{R_{\mathrm{b}\,[\mathrm{Mbps}]}^{\max}}} $$
In (18) and (21), the allocated data rate for a specific service is defined as:
$$ R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{Srv}}=R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{cell}}+R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{WLAN}} $$
In addition, there are further constraints that VRRM must not violate when allocating data rates to the various services. The most fundamental one is the total network capacity estimated in the last section: the summation of the data rates assigned to all services cannot be higher than the total estimated capacity of the network:
$$ \sum_{i=1}^{N_{\mathrm{VNO}}}\sum_{j=1}^{N_{\mathrm{srv}}} R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{Srv}}\le R_{\mathrm{b}\,[\mathrm{Mbps}]}^{\mathrm{CRRM}} $$
The data rate offered to GB and BG services imposes the next constraints. The data rate allocated to these services has to be higher than a minimum guaranteed level (for both GB and BG) and lower than the maximum guaranteed one (for GB only):
$$ R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{Min}}\le R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{Srv}}\le R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{Max}} $$
\( R_{\mathrm{b},ji}^{\mathrm{Min}} \): minimum data rate for service j of VNO i,
\( R_{\mathrm{b},ji}^{\mathrm{Max}} \): maximum data rate for service j of VNO i.
Based on this model, the objective function presented in (21) has to be optimised subject to constraints addressed in (18), (24), (25) and (26).
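The whole problem is a standard linear programme, so it can be assembled and solved with an off-the-shelf solver. The sketch below is a simplified illustration using SciPy's linprog (an analogue of the MATLAB linprog used by the authors, see below): it keeps the total-capacity constraint, the fairness rows from the previous sketch and simple per-service bounds, omits the cellular/WLAN split of the serving data rates, and uses toy dimensions and made-up numbers rather than the scenario values of the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Toy dimensions and illustrative numbers only (not the scenario values of the paper).
n_vno, n_srv = 3, 4
w_srv = np.tile([0.4, 0.3, 0.2, 0.1], (n_vno, 1))   # serving weights per service class
n = n_vno * n_srv
rb_crrm = 7200.0                                     # estimated network capacity [Mbps]
alpha_f = 0.5                                        # illustrative fairness coefficient

# Decision vector x = [R^Srv (n), R^f (n)]; linprog minimises, so negate the maximised part.
c = np.concatenate([-w_srv.ravel(), alpha_f * np.ones(n)])

# Capacity constraint: sum of all serving data rates <= R^CRRM.
A_cap = np.concatenate([np.ones(n), np.zeros(n)])[None, :]
b_cap = np.array([rb_crrm])

# Fairness constraints, reusing fairness_rows() from the previous sketch.
A_fair, b_fair = fairness_rows(w_srv)

A_ub = np.vstack([A_cap, A_fair])
b_ub = np.concatenate([b_cap, b_fair])

# Minimum/maximum guaranteed data rates per service; the fairness auxiliaries are non-negative.
rb_min, rb_max = 50.0, 400.0
bounds = [(rb_min, rb_max)] * n + [(0, None)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
rb_srv = res.x[:n].reshape(n_vno, n_srv)             # allocated serving data rates [Mbps]
```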
Resource allocation with violation
In the allocation process, there are situations where resources are not enough to meet all the guaranteed capacity, and the allocation optimisation is no longer feasible. A simple approach for these cases, introduced in [9], is to relax the constraints by introducing violation (also known as slack) variables. In the case of VRRM, the relaxed constraint is given by:
$$ R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{Min}}\le R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{Srv}}+\Delta R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{v}} $$
\( \Delta R_{\mathrm{b},ji}^{\mathrm{v}} \): non-negative violation variable for the minimum guaranteed data rate of service j of VNO i.
By introducing the violation variables, the formerly infeasible optimisation problem becomes feasible. The optimal solution maximises the objective function while minimising the weighted average constraint violation, defined as follows:
$$ \Delta\overline{R_{\mathrm{b}}^{\mathrm{v}}}_{[\mathrm{Mbps}]}=\frac{1}{N_{\mathrm{VNO}}\,N_{\mathrm{srv}}}\sum_{i=1}^{N_{\mathrm{VNO}}}\sum_{j=1}^{N_{\mathrm{srv}}} W_{ji}^{\mathrm{v}}\,\Delta R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{v}} $$
\( \Delta\overline{R_{\mathrm{b}}^{\mathrm{v}}} \): average constraint violation,
\( W_{ji}^{\mathrm{v}} \): weight of violating the minimum guaranteed data rate of service j of VNO i, where \( W_{ji}^{\mathrm{v}}\in\left[0,1\right] \).
The objective function presented in (21) has also to be changed as follows:
$$ f_{\mathbf{R}_{\mathbf{b}}}^{\mathrm{v}}\!\left(\mathbf{R}_{\mathbf{b}}\right)=f_{\mathbf{R}_{\mathbf{b}}}^{\mathrm{cell}}\!\left(\mathbf{R}_{\mathbf{b}}^{\mathrm{cell}}\right)+f_{\mathbf{R}_{\mathbf{b}}}^{\mathrm{WLAN}}\!\left(\mathbf{R}_{\mathbf{b}}^{\mathrm{WLAN}}\right)-\alpha_{\mathrm{f}}\!\left(W_{\mathrm{f}}\right)\, f_{\mathbf{R}_{\mathbf{b}}}^{\mathrm{f}}\!\left(\mathbf{R}_{\mathbf{b}}^{\mathrm{f}}\right)-f_{R_{\mathrm{b}}^{\mathrm{v}}}^{\mathrm{vi}}\!\left(\Delta\overline{R_{\mathrm{b}}^{\mathrm{v}}}\right) $$
where \( f_{R_{\mathrm{b}}^{\mathrm{v}}}^{\mathrm{vi}} \) is the constraint violation function, given by:
$$ f_{R_{\mathrm{b}}^{\mathrm{v}}}^{\mathrm{vi}}\!\left(\Delta\overline{R_{\mathrm{b}}^{\mathrm{v}}}\right)=\frac{R_{\mathrm{b}\,[\mathrm{Mbps}]}^{\mathrm{CRRM}}}{\overline{R_{\mathrm{b}\,[\mathrm{Mbps}]}^{\min}}}\,\Delta\overline{R_{\mathrm{b}}^{\mathrm{v}}}_{[\mathrm{Mbps}]} $$
However, the definition of fairness in a congestion situation is different: in this case, fairness means ensuring that the weighted violation is the same for all services. The fairness constraints then become:
$$ \left\{\begin{array}{l} W_{ji}^{\mathrm{v}}\,\Delta R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{v}}-\dfrac{1}{N_{\mathrm{VNO}}\,N_{\mathrm{srv}}}\displaystyle\sum_{i=1}^{N_{\mathrm{VNO}}}\sum_{j=1}^{N_{\mathrm{srv}}} W_{ji}^{\mathrm{v}}\,\Delta R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{v}}\le R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{f}} \\[2ex] -W_{ji}^{\mathrm{v}}\,\Delta R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{v}}+\dfrac{1}{N_{\mathrm{VNO}}\,N_{\mathrm{srv}}}\displaystyle\sum_{i=1}^{N_{\mathrm{VNO}}}\sum_{j=1}^{N_{\mathrm{srv}}} W_{ji}^{\mathrm{v}}\,\Delta R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{v}}\le R_{\mathrm{b},ji\,[\mathrm{Mbps}]}^{\mathrm{f}} \end{array}\right. $$
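A minimal sketch of the relaxed problem: the decision vector is extended with the non-negative violation variables introduced above and the weighted violations are penalised in the objective, with the `penalty` argument playing the role of the capacity-to-minimum-rate ratio in the constraint violation function. For brevity it reuses the fairness rows of the earlier sketch rather than the congestion-mode fairness constraints just given; all names and the layout of the decision vector are our own assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def solve_with_violations(c_srv, alpha_f, A_fair, b_fair, rb_crrm, rb_min, w_viol, penalty):
    """Relaxed allocation: x = [R^Srv (n), R^f (n), dRv (n)], all non-negative.

    c_srv  : (n,) objective coefficients of the serving rates (e.g., the serving weights)
    rb_min : (n,) minimum guaranteed data rates; relaxed to R^Srv + dRv >= rb_min
    w_viol : (n,) violation weights W^v; penalised in the objective
    """
    n = c_srv.size
    # Objective: maximise weighted rates, penalise fairness slack and weighted violations.
    c = np.concatenate([-c_srv, alpha_f * np.ones(n), penalty * w_viol / n])
    # Capacity: sum(R^Srv) <= R^CRRM.
    A_cap = np.concatenate([np.ones(n), np.zeros(2 * n)])[None, :]
    # Relaxed minimum guarantee: -R^Srv - dRv <= -rb_min.
    A_min = np.hstack([-np.eye(n), np.zeros((n, n)), -np.eye(n)])
    # Pad the fairness rows (built for [R^Srv, R^f]) with zeros for the dRv block.
    A_fair_pad = np.hstack([A_fair, np.zeros((A_fair.shape[0], n))])
    A_ub = np.vstack([A_cap, A_min, A_fair_pad])
    b_ub = np.concatenate([[rb_crrm], -rb_min, b_fair])
    bounds = [(0, None)] * (3 * n)
    return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
```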
The management of virtual radio resources is a complex optimisation problem, since network status and constraints vary in time. In [8, 9], it is proposed to divide the time axis into decision windows and to maximise the objective function in each of these intervals, independently. However, the output of partial VRRM may only be a local optimum and not the global one, since the effect of each decision on the network state and other dependencies are neglected. Nevertheless, partial VRRM is a simple solution, which can be used as the starting step and reference point.
The optimisation problem is solved using the MATLAB linear programming solver (i.e., the linprog function) [27] with the interior-point approach [28], a variant of Mehrotra's predictor-corrector algorithm [29], which is a primal-dual interior-point method. The termination tolerance on the function value is set to \( 10^{-8} \).
The realistic scenario for evaluating the proposed model offers cellular network coverage by means of a set of remote radio heads (RRHs) [14], supporting OFDMA (LTE-Advanced), CDMA (UMTS/HSPA+) and FDMA/TDMA (GSM/EDGE). VRRM is in charge of a service area, as described in Table 1:
Table 1 Different RAT cell radius
The OFDMA cells, with a 400 m radius, are the smallest ones; based on the 100 MHz LTE-Advanced feature, each cell has 500 RRUs to be assigned to traffic bearers.
The configurations of CDMA cells are chosen according to UMTS/HSPA+, at 2.1 GHz, each cell with a 1.2 km radius and 3 carriers (each carrier has 16 codes); only 45 codes, out of all 48 in each cell, are assigned to users' traffic.
The FDMA/TDMA cells are the biggest ones, with a 1.6 km radius, based on GSM900, each cell having 10 radio channels (each one has 8 timeslots), being assumed that 75 timeslots out of the total 80 available ones in each cell are used for users' traffic.
In addition to cellular networks, in this scenario Wi-Fi (OFDM) coverage is provided by means of IEEE 802.11ac APs, configured to work with an 80 MHz channel bandwidth. It is assumed that each AP covers a cell with an 80 m radius and is equipped with beamforming and MU-MIMO to support up to 8 spatial streams. Five radio channels are taken for the 80 MHz APs, following European Union regulations [30]. In contrast to the former RATs, APs use the same set of links for up- and downlink streams. In order to achieve consistency among the various RATs, the total data rate of the APs is equally divided between down- and uplinks; therefore, in Table 1, the number of RRUs in each Wi-Fi cell is indicated as half of the total number of available channels. This table also presents the maximum downlink data rate for each RAT. It is also assumed that the APs are only deployed at the OFDMA BSs, hence not providing full coverage; however, the entire Wi-Fi capacity can be used for traffic offloading.
The path-loss exponent, \( \alpha_p \), corresponding to signal propagation in the various environments, is considered to be 3.8 for regular urban environments, a value between 2 for free space and 5 for high-attenuation dense urban ones [31].
Three VNOs are assumed to operate in this area, each one with a different SLA, i.e., GB, BG and BE. All of them offer the same set of services, as in Table 2. The serving weights in (14) and (15) are based on the general service classes: 0.4 for conversational (Con), 0.3 for streaming (Str), 0.2 for interactive (Int) and 0.1 for background (Bkg). Besides the usual human-interaction services, several machine-to-machine (M2M) applications are also considered, as this is one of the areas foreseen for a large development of VNOs. In order not to compromise the objective function for achieving fairness, the fairness weight, \( W_{\mathrm{f}} \), in (22) is taken as one, leading to maximum fairness, while \( W^{\mathrm{SRb}} \) in (15) is heuristically chosen to be 0.02.
Table 2 Network traffic mixture
Each VNO is assumed to have 500 subscribers, each one requiring an average data rate of 6.375 Mbps [32]. Hence, the contracted data rate, \( R_{\mathrm{b}}^{\mathrm{Con}} \), for all operators is 3.11 Gbps, and each service receives a portion based on the volume percentages in Table 2. In a second step, the number of subscribers of each VNO is swept from 300 (low load) up to 1400 (high load), in order to observe how the VNOs' capacity and their services are affected by this increase of load.
Based on each service's data rate, the SLAs of these VNOs are defined as follows:
VNO GB: the data rates allocated to services are guaranteed to be in the range of 50 to 100% of the corresponding service data rate.
VNO BG: best effort with a minimum guaranteed, i.e., a minimum of 25% of the service data rate is guaranteed by the SLA.
VNO BE: all services are served in a best effort approach, without any guarantee.
Network capacity
For the network capacity estimation, in addition to the general approach, the other three, i.e., pessimistic, realistic and optimistic, are also considered. The minimum and maximum data rates from each RAT, considering different approaches, are presented in Table 3. Equation (7) is used to obtain the PDF for the general approach and (12) for the other ones.
Table 3 Minimum and maximum data rate of each RAT in different approaches
Using (5) and (12), the PDF and the cumulative distribution functions (CDFs) of the considered network capacity are obtained. Figure 2 compares the three approaches with the general one. As expected, the lowest network capacity estimate is obtained with the pessimistic (PE) approach, since it assumes the allocation of RRUs to mobile terminals with the lowest SINR. In the PE approach, the median capacity of the network for regular urban environments is 1.32 Gbps, while the general approach (G) leads to 1.76 Gbps, i.e., the former is 75.0% of the latter; however, the realistic (RL) and the optimistic (OP) approaches provide a considerably higher median network capacity of 3.60 and 5.93 Gbps, respectively, i.e., 2.0 and 3.4 times higher. Moreover, Fig. 2 also shows that a higher path-loss exponent yields a higher network capacity: the higher the path loss, the higher the attenuation, implying that interference is more attenuated than the signal, hence increasing the carrier-to-interference ratio and, ultimately, yielding a higher capacity.
CDF of cellular networks data rate for different approaches
When adding the capacity offered by traffic offloading to Wi-Fi APs in regular urban environments (i.e., with a path-loss exponent of 3.8 for all systems, which is an approximation for a real scenario), one gets the CDF of the network capacity plotted in Fig. 3. The comparison of Figs. 2 and 3 shows that the median values increase to 3.7 Gbps (1.8 times) in PE, 7.2 Gbps (3.0 times) in G, 19.5 Gbps (4.4 times) in RL and 35.3 Gbps (2.3 times) in OP, which is quite an increase in capacity. The total network capacity, according to Fig. 3, is 9.5 times higher in the OP approach than in the PE one, which without any doubt affects the allocation of resources to the different services of the VNOs. The interdecile intervals range between 2.14 Gbps in PE and 4.3 Gbps in G, with 2.93 Gbps and 3.52 Gbps for RL and OP, respectively, showing that the type of approach does not have a monotonic impact on the probability of obtaining a given capacity.
CDF of the total network data rate for different approaches
Resource allocation with traffic offloading
Figure 4 presents the data rates allocated to each of the services from cellular networks and WLANs in the G case, when there are 500 subscribers per VNO. As expected, conversational services (VoIP, Video Call and M2M/MMI), which are the ones with the highest serving weights, receive the highest data rates, streaming (music, M2M/MMS and video streaming) being placed second; the services of the background class (email and M2M/MMM) are the ones that are allocated the smallest portion of the available capacity.
Data rate allocated to different services from WLAN and cellular networks
The effect of the weight for the session average data rate in WLAN, \( W^{\mathrm{SRb}} \), can be observed in the data rate balance between WLANs and cellular networks, e.g., the data rate allocated to video streaming (ViS) from WLANs is 6.5 times higher than the one from cellular networks, while, in contrast, email, a service with a low average data rate, is allocated a higher data rate in cellular networks than in WLANs. However, VoIP does not follow the same rule, since it has a relatively high serving weight, which overcomes the effect of the average data rate in (15), hence being allocated a comparatively high data rate from both types of networks; the same phenomenon can be observed among M2M services, i.e., the ones with high serving weights, e.g., M2M/MMI, receive a relatively high capacity from WLANs compared to the other ones.
Figure 5 shows the allocation of virtual radio resources to the services of each VNO. Maximum guaranteed data rates are provided to almost all the services of VNO GB, e.g., VoIP and music, with assigned data rates of 31.87 and 95.62 Mbps (the maximum requested). The upper boundary in the allocation of virtual resources to the services is the primary difference between the services of VNOs GB and BG; in other words, while the data rates allocated to the services of the guaranteed VNO are bounded by the maximum guaranteed values, the services of VNO BG have no such limitation. In contrast, the capacity offered to VNO BG in a resource shortage situation can be smaller than that of VNO GB.
Data rate allocated to different services of the VNOs
Resource allocation for different number of subscribers
In this section, one analyses the performance of the proposed model under different network traffic loads. The number of subscribers is swept between 300 and 1400 per VNO (i.e., low and high loads). Figure 6 illustrates the distribution of the available virtual resources among the VNOs, in addition to the total network capacity (\( R_{\mathrm{b}}^{\mathrm{CRRM}} \)), the total minimum guaranteed and the contracted data rates (\( R_{\mathrm{b}}^{\mathrm{Con}} \)). The contracted data rate for each VNO increases from 1.91 Gbps (low load) to 8.92 Gbps (high load).
Variation of the data rate allocated to each VNO
The acceptable regions for VNOs GB and BG in the plot are shown by solid blue and light green colours. The total minimum guaranteed data rate, i.e., the summation of minimum guaranteed data rates of VNOs GB and BG, in the low load is 20.6% of the network capacity (i.e., 1.4 Gbps).
Since the network capacity is considerably higher than the minimum guaranteed data rates, best effort services are also served well, the allocation of 2.39 Gbps to VNO BE being evidence of this. It is worth noting that the share of VNO BE is 35.1% of the whole network capacity, which is 1.6 times higher than VNO GB's. The reason behind this observation is the maximum guaranteed data rate of the guaranteed services: although the portion of the available resources assigned to VNO GB is not as big as that of the other two VNOs, it is served up to its maximum satisfaction.
In contrast, VNO BG has a minimum guaranteed data rate, but the maximum it receives is 43.3% of the network capacity (2.94 Gbps). The guaranteed data rates grow up to 6.53 Gbps (i.e., 96.1% of the whole available capacity) as the load increases. Obviously, the share of the best effort services in this situation decreases considerably: the capacity allocated to VNO BE is reduced to only 65.6 Mbps, which is 0.9% of the total available capacity, a reduction of 97.3% relative to its initial value. As shown in Fig. 6, the total network capacity is only enough for serving the contracted data rate of one of the VNOs. In addition, the increase of the subscribers to 1400 makes the total minimum guaranteed data rate of the three VNOs equal to the total network capacity, which means that the data rates allocated to the services of VNO BE reach zero.
Furthermore, Fig. 7 illustrates the effect of demand variation on the allocation of data rates to the service classes of VNO GB: this VNO is a guaranteed one, therefore, each service class has a minimum and a maximum guaranteed data rate, presented in the figure with the solid colour. By increasing the number of subscribers, demand increases 4.7 times.
Data rate allocated to service classes of VNO GB
It can be seen that the streaming services are the ones with the highest volume, having the highest data rate; the minimum guaranteed data rate varies between 0.58 and 2.71 Gbps. The data rate allocated to this class in low load (when there are only 300 subscribers) is 67% of the maximum guaranteed one, but it reduces to the minimum one (i.e., 50% of the contracted data rate) for the maximum load case. The other service classes (i.e., interactive, conversational and background) are served according to capacity needs. It can be seen that, in the low load situation, the maximum guaranteed data rates are assigned, but as demand increases, data rates move towards the lower boundary. The interactive service class is a very good example for this behaviour: while it receives the maximum guaranteed data rate of 0.54 Gbps in low load, the allocated capacity for the high one is reduced from the maximum to 1.45 Gbps, the minimum guaranteed data rate. Considering the slope of allocated data rates in various services, the effect of serving weights and the service volume can be seen. Since the interactive class has a lower serving weight compared to the conversational one, it receives almost the minimum acceptable data rate with 1100 subscribers; in the same situation, conversational services are still provided by the highest acceptable data rate.
The effect of channel quality on VRRM
The effect of channel quality on the management of virtual radio resources by considering the three approaches (i.e., OP, RL and PE) is studied as well. Figure 8 presents the data rates allocated to VNO GB in conjunction with minimum and maximum guaranteed ones. As long as the data rates are in the acceptable region (shown by the solid colour), there is no violation of the SLAs and guaranteed data rates.
Data rate allocated to VNO GB in different approaches
Figure 8 shows that the maximum guaranteed data rate reaches 3.74 Gbps when there are 600 subscribers, it being possible to allocate all of it in the OP approach, and that for 1400 subscribers it reaches 8.7 Gbps, of which only 83.3% is allocated in the OP approach. However, VNO GB faces a violation of the minimum guaranteed data rate in the PE approach as the number of subscribers passes 1100: while at least 3.42 Gbps is required, only 97.3% of it is allocated to the VNO; this means that the network capacity in this approach is lower than the total minimum guaranteed data rates, and the resource management entity has to violate some of the minimum guarantees. The VNO requires at least 4.36 Gbps for the heavy load, the allocated data rate being 6.28 Gbps in RL and 4.42 Gbps in G, which are still enough to fulfil the SLA, but it goes down to 3.3 Gbps, i.e., 76.8% of the minimum required data rate, in PE, in clear violation of the SLA. This clearly shows the effect of the SINR on resource usage efficiency and on the QoS offered to VNOs.
The data rates allocated to VNOs BG and BE are plotted in Fig. 9. Just as for VNO GB, it can be seen that a high data rate is allocated to these VNOs in the OP and RL approaches (i.e., 18.46 and 9.68 Gbps). In these cases, the high SINR leads to a high network capacity, and the model is not only able to serve the minimum guaranteed data rates, but it can also offer acceptable data rates to the BE and BG VNOs. Consequently, VNOs BG and BE suffer more from resource shortage in the high load situations. The allocation of resources to VNO BE even stops for more than 1100 subscribers when the PE approach is considered.
The data rate allocated to VNOs (a) BG and (b) BE in different approaches
Regarding the distribution of data rates allocated to the service classes of VNO GB, Fig. 10 illustrates the variation of the capacity assigned to each one in the different approaches. It can be seen that the conversational class (i.e., the class with the highest serving weight) receives the maximum guaranteed data rate in the OP and RL approaches. The data rate allocated in the G and PE approaches is more than 50% of the contracted data rate. In the PE case, although for high-density situations the data rate decreases to the minimum guaranteed one, the services of this class never experience a violation of the guaranteed data rate. Likewise, the streaming class is always served with a data rate higher than the minimum guaranteed one. Its maximum guaranteed data rate in heavy load reaches 5.34 Gbps, of which 72.9% is allocated in OP, 63.2% in RL, 50.4% in G and 50.0% in PE (Fig. 10).
The data rate allocated to service classes of VNO GB. a Conversational service class. b Streaming service class. c Interactive service class. d Background service class
For the interactive and background classes, it is shown that they face a violation of the minimum guaranteed data rate in the PE approach. The violation in the background class is such that no capacity is allocated to its services when there are more than 1100 subscribers per VNO. The data rate allocated to the interactive class reaches 15.1% of the contracted data rate in heavy load, while the minimum guaranteed is 50%.
For the sake of comparison, the data rates allocated to the interactive and background classes of VNOs BG and BE are shown in Figs. 11 and 12, respectively. It can be seen that for VNO BG the situation is very similar to VNO GB, the main difference being the upper boundary of the allocated data rate: VNO BG does not have a maximum guaranteed data rate or upper boundary for the allocation of data rates. Consequently, the services of this VNO are served with comparatively higher data rates than VNO GB when a high network capacity is available, e.g., in the OP situation. As an example, consider the conversational class of both VNOs: in the OP approach with 400 subscribers, VNO GB is granted 0.1 Gbps while VNO BG receives 1.3 Gbps; on the other hand, in the case of resource shortage, VNO BG receives data rates lower than VNO GB, e.g., the share of the interactive class of VNO BG when there are 1200 subscribers in the PE approach is only 0.387 Gbps while VNO GB is allocated 0.908 Gbps.
The data rate allocated to the interactive class. a VNO BG. b VNO BE
The data rate allocated to the background class. a VNO BG. b VNO BE
In conclusion, the effect of channel quality on the total available resources, and consequently on the performance of VRRM, is studied in this section. Through numeric results, one shows that the proposed model for managing virtual radio resources can serve different service classes of VNOs with different requirements, while offering an acceptable level of isolation. As evidence of this claim, one can consider the services of the conversational class, i.e., the services with the highest priority and serving weight, which are always allocated a satisfactory amount of resources as the demands and the network capacity change. Likewise, the minimum guaranteed data rates are offered to the relevant VNOs. Moreover, the prioritisation of service classes offered by VRRM makes it possible to serve the more important services, even when there are not enough resources.
A model for the management of virtual radio resources in a fully heterogeneous network (i.e., a network with both cellular and WLANs) is proposed, which has two key components: the estimation of available resources and the allocation of resources. In the first step, the model maps the number of available RRUs from the different RATs onto the total network capacity by obtaining a probabilistic relationship. The model is able to consider multiple channel quality assumptions for the terminals through different estimation approaches. The allocation of resources to maximise the weighted data rate of the network, based on the estimated network capacity, is the next step. The serving weights in the objective function make it possible to prioritise services. The resource allocation in a shortage of resources (i.e., when there are not enough resources to meet the minimum guaranteed data rates) tries to minimise the violation of the guaranteed data rates. In addition, the model also considers fairness among services.
Moreover, the performance of the proposed model is evaluated for a set of practical scenarios, and numeric results are obtained. It is shown that, by adding an AP to each OFDMA cell, the network capacity increases up to 2.8 times. As a result of traffic offloading, the VRRM model is able to properly serve not only the guaranteed services, but also the best effort ones, which are allocated a relatively high data rate. Services with a higher serving weight, such as the services of the conversational class, are provided with a higher data rate.
Furthermore, the results show that the network capacity increases from 0.9 Gbps in the pessimistic approach to 5.5 Gbps in the optimistic one. The effect of these capacity changes on the data rates allocated to the different VNOs and their service classes is presented through a series of plots. It is shown that, when there is enough capacity, not only is the guaranteed VNO satisfied, but the best effort VNOs are also well served. However, as the network capacity decreases due to the channel quality (G and PE approaches), the best effort VNOs are affected more than the guaranteed one. The same situation is observed at the service class level too. The conversational and streaming classes are the ones with the highest serving priority (i.e., serving weights), being generally allocated higher data rates than the other two classes. When there is a shortage of resources, i.e., in the G and PE approaches, violations start with the background and interactive classes.
The model performance under different loads is also evaluated. The results confirm that the model is able to realise an acceptable level of isolation. When there is a shortage of radio resources, the model for resource management starts violating the guaranteed levels of the services with lower serving weights, which are the background and interactive services. As evidence of this claim, one can consider VNO GB, and particularly its conversational class, for which the requested service quality is offered regardless of the network situation. In addition, the flexibility of the applied model makes the equilibrium among different services or different VNOs possible.
In conclusion, it is shown that our model achieves the desired goals: (a) on-demand wireless capacity offering, by serving three VNOs with different SLAs; (b) isolation, by considering changes in network capacity down to almost one tenth of its original value (from OP to PE), in addition to changing the demand from 300 to 1400 UEs per VNO; (c) element abstraction and multi-RAT support, by providing wireless connectivity using both cellular networks and WLANs, while the VNOs do not have to deal with the details. In the future, the aforementioned concept of virtual radio resources and the proposed model will be implemented in realistic test-beds. In addition, the modelling of service demands and user behaviour in fully heterogeneous networks will be addressed in the next extension of the model.
Cisco Systems, Global mobile data traffic forecast update, 2012–2017, in from Visual Network Index (VNI) White Paper (Cisco Systems, CA, 2013)
L Kyunghan, L Joohyun, Y Yung, R Injong, C Song, Mobile data offloading: how much can Wi-Fi deliver? IEEE/ACM Trans Networking 21, 536–550 (2013)
A Balasubramanian, R Mahajan, A Venkataramani, Augmenting mobile 3G using Wi-Fi, in 8th international conference on Mobile, Systems, Applications, and Services (ACM, San Francisco, 2010), pp. 209–222
L Joohyun, Y Yung, C Song, J Youngmi, Economics of Wi-Fi offloading: trading delay for cellular capacity. IEEE Trans Wirel Commun 13, 1540–1554 (2014)
A Kliks, N Dimitriou, A Zalonis, O Holland, Wi-Fi traffic offloading for energy saving, in 20th International Conference on Telecommunications (IEEE, Casablanca, 2013), pp. 1–5
H Guan, T Kolding, P Merz, Discovery of cloud-RAN, in NSN Cloud-RAN Workshop (Nokia Siemens Networks, Beijing, 2010)
M Chiosi, D Clarke, P Willis, A Reid, J Feger, M Bugenhagen, W Khan, M Fargano, C Cui, H Deng, J Benitez, U Michel, H Damker, K Ogaki, T Matsuzaki, Network function virtualisation: an introduction, benefits, enabler, challenges, and call for action (European Telecommunications Standards Institute, Darmstadt, 2012)
S Khatibi, LM Correia, Modelling of virtual radio resource management for cellular heterogeneous access networks, in IEEE 25th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (Washington, 2014)
S Khatibi, LM Correia, A model for virtual radio resource management in virtual RANs. EURASIP J Wirel Commun Netw 68, 2015 (2015)
S Khatibi, LM Correia, Modelling virtual radio resource management with traffic offloading support, in IEEE 24th European Conference on Networks and Communications (IEEE, Paris, 2015)
S Khatibi, LM Correia, The effect of channel quality on virtual radio resource management, in IEEE 82nd Vehicular Technology Conference (IEEE, Boston, 2015)
JA Village, KP Worrall, DI Crawford, 3G shared infrastructure, in 3rd International Conference on 3G Mobile Communication Technologies (IEEE, London, 2002), pp. 10–16
X Costa-Perez, J Swetina, G Tao, R Mahindra, S Rangarajan, Radio access network virtualization for future mobile carrier networks. IEEE Commun Mag 51, 27–35 (2013)
L Samson, LTE network sharing: some operational and management aspects. ITU cross regional seminar for CIS, ASP and EUR regions on broadband access (fixed, wireless including mobile), Chisinau, Moldova, 2011
T Frisanco, P Tafertshofer, P Lurin, R Ang, Infrastructure sharing and shared operations for mobile network operators: from a deployment and operations view (IEEE International Conference on Communications, Beijing, 2008), pp. 2193–2200
R Mahindra, MA Khojastepour, Z Honghai, S Rangarajan, Radio access network sharing in cellular networks, in 21st IEEE International Conference on Network Protocols (IEEE, Göttingen, 2013), pp. 1–10
R Friedrich, S Pattheeuws, D Trimmel, H Geerdes, Sharing mobile networks—why the pros outweigh the cons (Booz & Company Inc., New York, 2012)
A Khan, W Kellerer, K Kozu, M Yabusaki, Network sharing in the next mobile network: TCO reduction, management flexibility, and operational independence. IEEE Commun Mag 49, 134–142 (2011)
3GPP, Universal Mobile Telecommunications System (UMTS); LTE; Network sharing; architecture and functional description (European Telecommunications Standards Institute (ETSI), France, 2013)
YK Song, H Zo, AP Ciganek, Multi-criteria evaluation of mobile network sharing policies in Korea. ETRI J 36, 572–580 (2014)
Y Zaki, Z Liang, C Goerg, A Timm-Giel, LTE wireless virtualization and spectrum management, in 3rd Joint IFIP Wireless and Mobile Networking Conference (IEEE, Budapest, 2010), pp. 1–6
Y Zaki, L Zhao, C Goerg, A Timm-Giel, LTE mobile network virtualization. Mob Netw Appl 16, 424–432 (2011)
Z Liang, L Ming, Y Zaki, A Timm-Giel, C Gorg, LTE virtualization: from theoretical gain to practical solution, in 23rd International Teletraffic Congress (IEEE, San Francisco, 2011), pp. 71–78
J Pérez-Romero, X Gelabert, O Sallent, Radio resource management for heterogeneous wireless access, in Heterogeneous wireless access networks, ed. by E Hossain (Springer US, New York, 2009), pp. 1–33
J Carapinha, C Parada (eds.), Reference scenarios and technical system requirements definition (Mobile Cloud Networking Project, 2013)
A Papoulis, SU Pillai, Probability, random variables, and stochastic processes (McGraw-Hill, NY, 2002)
MATLAB and Statistics Toolbox, The MathWorks, Inc., Natick, Massachusetts, United States, 2015
Y Zhang, Solving large-scale linear programs by interior-point methods under the MATLAB environment, in Technical Report TR96-01 (Department of Mathematics and Statistics, University of Maryland, Baltimore, 1995)
S Mehrotra, On the implementation of a primal-dual interior point method. Soc Ind Appl Math J Optim 2, 575–601 (1992)
O Bejarano, EW Knightly, P Minyoung, IEEE 802.11ac: from channelization to multi-user MIMO. IEEE Commun Mag 51, 84–90 (2013)
E Damosso, LM Correia (eds.), COST 231 Final report—digital mobile radio: evolution towards future generation systems (COST Secretariat, European Commission, Brussels, 1999)
Cisco Systems, The Zettabyte Era - Trends and Analysis, in from Visual Network Index (VNI) White Paper (Cisco Systems, CA, 2013)
The research leading to these results was partially funded by the European Union's Seventh Framework Programme Mobile Cloud Networking project (FP7-ICT-318109).
The authors have contributed jointly in all the parts for preparing this manuscript. All authors read and approved the final manuscript.
Nomor Research GmbH, Munich, Germany
Sina Khatibi & Luis M. Correia
University of Lisbon, Lisbon, Portugal
Sina Khatibi
Luis M. Correia
Correspondence to Sina Khatibi.
Khatibi, S., Correia, L.M. Modelling virtual radio resource management in full heterogeneous networks. J Wireless Com Network 2017, 73 (2017). https://doi.org/10.1186/s13638-017-0858-7
Virtualisation of radio resources
Virtual radio resource management
Radio access networks
Network function virtualisation | CommonCrawl |
Meridional heat transport variability induced by mesoscale processes in the subpolar North Atlantic
Jian Zhao ORCID: orcid.org/0000-0002-6458-51461,
Amy Bower1,
Jiayan Yang1,
Xiaopei Lin2 &
N. Penny Holliday3
Nature Communications volume 9, Article number: 1124 (2018)
Physical oceanography
An Author Correction to this article was published on 14 June 2018
This article has been updated
The ocean's role in global climate change largely depends on its heat transport. Therefore, understanding the oceanic meridional heat transport (MHT) variability is a fundamental issue. Prevailing observational and modeling evidence suggests that MHT variability is primarily determined by the large-scale ocean circulation. Here, using new in situ observations in the eastern subpolar North Atlantic Ocean and an eddy-resolving numerical model, we show that energetic mesoscale eddies with horizontal scales of about 10–100 km profoundly modulate MHT variability on time scales from intra-seasonal to interannual. Our results reveal that the velocity changes due to mesoscale processes produce substantial variability for the MHT regionally (within sub-basins) and the subpolar North Atlantic as a whole. The findings have important implications for understanding the mechanisms for poleward heat transport variability in the subpolar North Atlantic Ocean, a key region for heat and carbon sequestration, ice–ocean interaction, and biological productivity.
Ocean heat transport is fundamental to maintaining the earth's energy balance. While the time-mean oceanic heat transport has been reasonably well documented using hydrographic observations and air–sea fluxes (refs. 1,2,3), our knowledge of its temporal variability is less developed, in part, due to insufficient sampling of mesoscale processes in many regions. The large-scale ocean circulation, such as the Atlantic Meridional Overturning Circulation (AMOC), is found to be a major modulator of the oceanic meridional heat transport (MHT) (refs. 4,5,6). Some studies have shown, however, that mesoscale eddies also play an important role in the meridional transfer of heat. For example, observations and eddy-permitting models have indicated that eddy heat transport near the western boundary current (WBC) extensions and the Antarctic Circumpolar Current (ACC) is comparable to the time-mean heat transport (refs. 7,8,9,10,11,12,13).
The Atlantic Ocean dominates the global oceanic heat transport, and its northward heat transport reaches a maximum of 1.3 PW at 26.5°N, where 1 PW = 10^15 W (refs. 2,5). In the subpolar North Atlantic, northward moving warm waters release heat to the atmosphere and thereby are transformed into the deep and intermediate water masses that feed the deep limb of the AMOC. A transatlantic observing system (Overturning in the Subpolar North Atlantic Program, OSNAP) (ref. 14) was initiated in summer 2014 to continuously monitor the variability of the meridional volume, heat, and freshwater transport across ~58°N and investigate the relationship between meridional transport and dense water formation. OSNAP is configured with two sections: OSNAP West extends from southern Labrador to southwestern Greenland, and OSNAP East spans from southeastern Greenland to Scotland (Fig. 1a). Previous studies have shown that almost all of the relatively warm water from southern latitudes crosses OSNAP East and leads to a mean MHT of about 0.5 PW (refs. 6,15), while <0.05 PW crosses OSNAP West (ref. 16) (Labrador Sea). The temporal variability of the MHT along the OSNAP East section is much greater than that along OSNAP West (ref. 17). In addition, the warm Atlantic-origin waters flow across the OSNAP East line and further enter the high latitudes, consequently maintaining a relatively warm climate in Northern Europe and modulating the Arctic sea ice extent (refs. 18,19,20). Note that a meaningful heat transport value can only be estimated by measuring the temperature of all meridional currents in a basin (mass-conserving system). In reality, there is a net, albeit small, mass transport (about 1 Sv, where 1 Sv = 10^6 m^3 s^-1) across the OSNAP East and West sections, resulting from the Bering Strait throughflow to the Arctic (ref. 21). For convenience and consistency, hereafter, we will use heat transport to refer to the temperature transport relative to 0 °C in some local regions, so that their magnitude and variability can be evaluated within the framework of basin-wide heat transport (refs. 5,6).
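For reference, the quantity referred to here as heat transport is, in essence, the section-integrated temperature transport. A minimal statement of the definition assumed in this discussion, with rho_0 a reference seawater density, c_p the specific heat capacity, v the cross-section (meridional) velocity and theta the potential temperature relative to 0 °C, integrated over the section from Greenland (x_G) to Scotland (x_S) and over the full depth:

$$ \mathrm{MHT} = \rho_0\, c_p \int_{x_{\mathrm{G}}}^{x_{\mathrm{S}}}\!\int_{-H(x)}^{0} v(x,z)\,\theta(x,z)\,\mathrm{d}z\,\mathrm{d}x $$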
The major circulation elements and the corresponding meridional heat transport distribution in the subpolar North Atlantic Ocean. a The red and yellow mark the warm currents and blue and purple denote the cold currents. The map illustrates that the northward flow carries relatively warm water and southward flow generally transports colder water, leading to northward meridional heat transport in the subpolar North Atlantic Ocean. The labels are Denmark Strait (DS), Faroe Bank Channel (FBC), East and West Greenland Currents (EGC and WGC, respectively), North Atlantic Current (NAC), Denmark Strait overflow (DSO), Deep Western Boundary Current (DWBC), Iceland-Scotland Overflow (ISO), and Mid-Atlantic-Ridge (MAR). The figure is made by H. Furey, Woods Hole Oceanographic Institution and modified from Fig. 1 in Lozier et al. (2017). Green line denotes the OSNAP East section between Greenland and Scotland. b Meridional heat transport from surface to bottom zonally accumulated from Greenland towards Scotland along the OSNAP East line. Black solid line is the heat transport computed from in situ hydrographic observations in June 2014. Red solid line is the mean heat transport computed from 1/12 degree HYCOM simulation (1992–2014) and red shaded area represents the uncertainties measured by standard deviation. The vertical black dashed lines mark locations of the OSNAP glider transect endpoints
The time-mean MHT in the subpolar North Atlantic is set up by the large-scale circulation, which is actually a superposition of the cyclonic gyre circulation and the AMOC (Fig. 1a). An important element of this system is the North Atlantic Current (NAC), which plays a dual role of being both the upper limb of the AMOC and the southern and eastern limbs of the subpolar cyclonic gyre. The warm waters transported by the NAC originate in the Gulf Stream, then flow northward along the western boundary east of the Grand Banks as far as about 53°N, where the NAC makes a large anti-cyclonic meander to turn eastward toward the Mid-Atlantic Ridge (MAR). East of the MAR, the main streams of the NAC head northeastward into the Iceland Basin and the Rockall Trough, and then some flow farther north into the Nordic Seas, with the remainder flowing cyclonically around the topography of the subpolar region (refs. 22,23,24,25,26,27). It continues into the Irminger Sea on the west side of the Reykjanes Ridge (i.e., the Irminger Current) and runs parallel to the East Greenland Current (EGC) against the Greenland continental slope before flowing into the Labrador Sea (refs. 28,29).
The contributions of different currents to the MHT are reflected in the Zonally Accumulated Heat Transport (ZAHT) over the full water column starting from the Greenland coast towards Scotland (Fig. 1b). The mean ZAHT from observations and a high-resolution (1/12°) numerical simulation suggest that the relatively cold water carried by the southward EGC and deep WBC (DWBC) leads to about −0.5 PW MHT, which is gradually compensated by the northward transport of relatively warm waters in the east. After incorporating flows in the Irminger Sea and over the Reykjanes Ridge, the ZAHT increases to −0.2 PW, indicating that these regions transport about 0.3 PW heat northward. Moving further eastward to include the Iceland Basin, the ZAHT becomes positive and increases to about 0.1–0.2 PW. Adding the Rockall Plateau and Rockall Trough, the ZAHT now becomes the total poleward heat transport and reaches the magnitude of 0.4–0.6 PW. The overall structure of ZAHT shows that the three sub-basins —Irminger Sea, Iceland Basin, and Rockall Trough—each provides about 0.3 PW northward heat transport, which more than compensates for the southward heat transport and generates a net poleward heat transport.
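The ZAHT in Fig. 1b is simply the heat transport integrated over depth and accumulated zonally from the Greenland coast. A minimal sketch of how it can be computed from a gridded section of velocity and temperature; the constants and function name are our own assumptions, not values taken from the paper:

```python
import numpy as np

RHO0 = 1027.0   # reference seawater density [kg m^-3] (assumed)
CP = 3985.0     # specific heat capacity of seawater [J kg^-1 K^-1] (assumed)

def zonally_accumulated_heat_transport(v, theta, dx, dz):
    """Zonally accumulated heat transport (ZAHT) along a section.

    v, theta : (nz, nx) meridional velocity [m s^-1] and potential temperature [deg C]
               on the section, ordered from Greenland (west) to Scotland (east)
    dx, dz   : (nx,) zonal widths [m] and (nz,) layer thicknesses [m] of the grid cells
    Returns the ZAHT [W] at each longitude (full-depth integral, 0 deg C reference).
    """
    heat_per_column = RHO0 * CP * np.sum(v * theta * dz[:, None], axis=0) * dx
    return np.cumsum(heat_per_column)   # accumulate from the Greenland coast eastward
```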
This study utilizes new high-resolution hydrographic and velocity observations in the Iceland Basin and an eddy-resolving model to investigate the mesoscale processes there and quantify their influence on the MHT. The observational data identifies two circulation regimes: a mesoscale eddy-like circulation pattern and the northward NAC circulation pattern. The transition between the two regimes coupled with the strong temperature front in the Iceland Basin significantly modifies the local heat transport and is the dominant source for the MHT variability on time scales shorter than 1 year. The numerical model results also suggest that these mesoscale processes produce sizable interannual variability for the MHT in the subpolar North Atlantic Ocean.
Previous studies have shown that the MHT variability on seasonal to interannual time scales is more closely tied to variability in velocity or volume transport, rather than temperature (refs. 4,5,6). In the subpolar North Atlantic, where the currents have a relatively strong barotropic component (ref. 27), the surface eddy kinetic energy (EKE) provides valuable information about the spatial distribution of ocean velocity variability over the whole water column. Satellite altimetry data suggests that enhanced EKE is located in the eastern part of the subpolar region, especially in the Iceland Basin and Rockall Trough (Fig. 2), coincident with the branches of the NAC (refs. 26,30,31). Along the OSNAP East line, the EKE maximum is co-located with the MHT variability, with the highest values located in the Iceland Basin.
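Surface EKE is commonly derived from altimetric sea-level anomalies through the geostrophic relation. A minimal sketch of that calculation, using a simple centred-difference estimate on a regular grid and ignoring the equatorial singularity and any smoothing applied to the actual altimeter product:

```python
import numpy as np

G = 9.81            # gravitational acceleration [m s^-2]
OMEGA = 7.2921e-5   # Earth's rotation rate [rad s^-1]

def surface_geostrophic_eke(sla, lat, dx, dy):
    """Time-mean surface eddy kinetic energy from gridded sea-level anomaly maps.

    sla : (nt, ny, nx) sea-level anomaly [m] relative to the time mean
    lat : (ny,) latitudes [deg]; dx, dy : zonal and meridional grid spacings [m]
    Returns the time-mean EKE field [m^2 s^-2].
    """
    f = 2.0 * OMEGA * np.sin(np.deg2rad(lat))[None, :, None]   # Coriolis parameter
    dhdx = np.gradient(sla, axis=2) / dx
    dhdy = np.gradient(sla, axis=1) / dy
    ug = -(G / f) * dhdy        # geostrophic velocity anomalies
    vg = (G / f) * dhdx
    return 0.5 * np.mean(ug**2 + vg**2, axis=0)
```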
The eddy activity and meridional heat transport variability in the subpolar North Atlantic Ocean. a Mean surface eddy kinetic energy (EKE) from 1993 to 2015 from the satellite data. The magenta dashed line represents the OSNAP East section. Black diamonds denote the endpoints of the glider transect. The isobaths are illustrated by white contour lines. b Standard deviation of the meridional heat transport at each longitude in the numerical simulation (red). The mean surface geostrophic EKE from the altimeter observations (1992–2015) and the numerical model (1992–2014) are displayed in blue and black, respectively. The vertical black dashed lines mark the endpoints of the glider transect, where the meridional heat transport has the largest variability
To investigate the potentially important role of eddies in modulating the northward heat transport in this region, we successively deployed two gliders—autonomous buoyancy-driven underwater vehicles—in June and November 2015, respectively. The gliders profiled from the surface to about 1,000 m along the OSNAP East line at 58°N between 24.5°W and 21°W, where both the maximum EKE and the largest heat transport variability are located (Fig. 2b and Methods). Our analysis uses observed profiles of temperature, salinity, and depth-averaged velocity for the period between July 2015 and May 2016. In July 2015, a mesoscale anti-cyclonic eddy occupied the glider section (Fig. 3). The eddy had a radius of about 60 km and was characterized by a core of relatively homogeneous warm and salty water (Fig. 3c, e). Similar anti-cyclonic eddies are often found in this region32,33,34. Detailed examination of the 23-year altimeter-derived absolute dynamic topography (ADT) indicates that such an eddy usually occupies the glider transect for more than 2 months at a time, and that a new eddy is generated every few months, so that an eddy is apparent in the long-term mean ADT map (Supplementary Fig. 1). In October 2015, the eddy center moved to around 59°N, and a simpler frontal structure began to develop along 58°N, separating the warm, salty water to the east from the relatively cold, fresh, and oxygen-rich water to the west. The hydrographic features associated with the eddy and front circulation patterns also project onto the velocity field and consequently affect the MHT (see Methods and Supplementary Fig. 2). A new anti-cyclonic eddy emerged in March 2016, and its characteristics were quite similar to those observed in July–September 2015. During the observational period, the ocean circulation near the glider transect appears to be dominated by the alternation between eddy and front patterns.
Circulation and hydrographic properties in the Iceland Basin for the mesoscale eddy and frontal circulation patterns near 58°N. The left panels show the ocean state during 3–13 August 2015: absolute dynamic topography (a), glider potential temperature (c), and glider salinity (e). The corresponding ocean state during 14–20 December 2015 is displayed in the right panels (b, absolute dynamic topography; d, potential temperature; f, salinity). The glider transect is marked by black lines in (a) and (b). The isobaths in (a) and (b) are represented by gray lines. The gray contour lines in (c–f) display the relative potential density (unit: kg m−3)
The glider observations were used to generate monthly MHT estimates over the top 1,000 m between July 2015 and May 2016. The mean heat transport for the monthly time series was 0.23 PW, with a standard deviation of 0.07 PW. Using the surface circulation pattern identified in the maps of ADT, the heat transport estimates have been separated into "eddy" (6) and "front" (3) groups (Fig. 4a). The mean heat transport is lower when the eddy is present, 0.19 PW, and increases to 0.30 PW when the eddy is replaced by a frontal pattern. These means, differing by 0.11 PW, are statistically different at the 95% confidence level according to Student's t test.
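The comparison of the two groups can be reproduced with a standard two-sample test; the sketch below is a minimal illustration using SciPy, with the monthly MHT values for the eddy and front months supplied by the user (no attempt is made here to reproduce the actual glider numbers).

```python
from scipy import stats

def patterns_differ(eddy_mht, front_mht, alpha=0.05):
    """Student's t test on the monthly MHT estimates of the two groups.

    eddy_mht, front_mht : sequences of monthly heat transport estimates (PW),
    e.g. the six 'eddy' and three 'front' months from the glider record.
    Returns the t statistic, the p value, and whether the group means differ
    at the (1 - alpha) confidence level.
    """
    t_stat, p_val = stats.ttest_ind(eddy_mht, front_mht, equal_var=True)
    return t_stat, p_val, p_val < alpha
```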
Meridional heat transport from observations and numerical model results. a Estimates of meridional heat transport for the upper 1,000 m across the glider section at 58°N between 24.5 and 21°W using glider observations. The estimated values, representing monthly averaged ocean states, are shown together with error bars illustrating the uncertainties due to the depth-averaged velocity from the glider data. The results are separated into eddy (blue) and frontal (red) patterns. The transitional periods between eddy and front are shown in black. The magenta lines show the heat transport induced by velocity change in the glider observations (a) and the numerical model (b). The black line in (b) denotes the simulated monthly time series of meridional heat transport for the upper 1,000 m along the glider section. For comparison, the simulated mean heat transport across the glider section is 0.24 PW in the upper 1,000 m. Blue and red dots mark the eddy and front scenarios in the model. The months between those dots are transitional periods. c The interannual anomalies of the heat transport induced by the large-scale (black solid) and mesoscale processes (black dashed) in the Iceland Basin, respectively, are displayed. The interannual heat transport anomalies across the Iceland Basin (29–19°W, including both large-scale and mesoscale processes) are shown in blue. Unit: PW
To further identify the underlying physical processes associated with the eddy and frontal patterns, we break the observed heat transport (Qtotal) down into several components using a standard Reynolds decomposition, which individually represent the heat transport variability induced by temperature (Qtemp), velocity (Qvel), and correlations between the two (Qeddy; see Methods). Qvel is the dominant term, and its standard deviation is 0.06 PW, very close to the variability of Qtotal (0.07 PW). This indicates that the observed MHT variability is mainly driven by the ocean velocity change, which results from the alternating mesoscale eddy and frontal patterns. After examining the ADT structure in the Iceland Basin between 1992 and 2015, we conclude that the alternating mesoscale eddy and frontal structure is a common occurrence, suggesting that the mesoscale processes and the corresponding MHT variability captured by the 1-year glider record are generally representative of long-term conditions.
Model results
To put the limited observational results in a larger context, the MHT variability on different time scales is evaluated using monthly output from a high-resolution (1/12°) numerical simulation35,36. The simulated mean MHT across the glider transect in the top 1,000 m between 1992 and 2014 is about 0.24 PW, and its variability, in terms of standard deviation, reaches about 0.1 PW. These long-term statistics are not directly comparable with the glider observations, collected over only 1 year. However, when the simulated monthly mean MHT in the top 1,000 m is separated into eddy and front cases (Fig. 4b), we found that the maximum MHT mostly occurs under the frontal pattern when the local flow is mainly northward, and the minimum is mostly associated with the eddy structure when the local circulation is dramatically modified by the rotational currents of the eddy. The mean MHT estimates during the front and eddy patterns are 0.38 ± 0.07 and 0.11 ± 0.06 PW, respectively, yielding a difference of 0.27 PW. This difference is statistically significant at the 95% confidence level. Even though this difference is larger than that estimated from the gliders (0.11 PW), the tendency for higher heat transport with the frontal pattern and lower with the eddy pattern suggests that the impacts of eddy and front on the MHT variability are successfully captured by the model. Similar to the observations, the role of eddy and frontal patterns is quantified by Qvel, which has a variability of 0.09 PW and is significantly correlated with Qtotal (correlation coefficient of 0.97). In contrast, the variations of the temperature-induced heat transport (Qtemp) and the eddy heat transport (Qeddy) are only 0.02 and 0.01 PW, respectively. In addition, the comparison between Qvel and Qtotal indicates that the variability of Qtotal on time scales from subseasonal to interannual is mostly induced by the velocity change (i.e., Qvel).
In addition to modifying the velocity structure along the glider transect (Supplementary Fig. 2), the alternating eddy and front events can also alter the velocity field for the regions surrounding the glider track. To quantify the broader influence of mesoscale features on MHT variability, a spatial filter is applied to the numerical model output to separate the large-scale and mesoscale variability in the temperature and velocity fields. The spatially low-pass and high-pass temperature and velocity are used to compute the MHT induced by large-scale and mesoscale processes, respectively (see Methods and Supplementary Fig. 3).
Focusing first on the Iceland Basin, the standard deviation for the unfiltered monthly mean MHT across the section 29–19°W between 1992 and 2014 is 0.11 PW. The standard deviation associated with just the large-scale variability is 0.09 PW, and for the mesoscale, 0.06 PW. So it appears that the MHT variability in the Iceland Basin is almost equipartitioned between large-scale and mesoscale processes.
One might expect that the mesoscale processes dominate the MHT variability on shorter time scales, that is, <1 year, and that the larger spatial scale variability dominates on interannual and longer time scales. However, we found that mesoscale processes also contribute significantly to MHT variability on these longer time scales. To demonstrate this, we time filtered the unfiltered (i.e., the raw MHT), mesoscale, and large-scale time series of MHT for the Iceland Basin (Fig. 4c). The MHT interannual variability associated with mesoscale phenomena is about 0.03 PW, more than half of that induced by the large-scale circulation (0.05 PW). In fact, the model results show that the MHT anomalies produced by mesoscale processes are larger than that due to large-scale processes in some years (e.g., 2000 and 2006; Fig. 4c). The superposition of the individual processes at different spatial scales recovers the total MHT interannual variability in the Iceland Basin, and its standard deviation reaches about 0.06 PW. This indicates that both large-scale and mesoscale processes need to be fully resolved to accurately recover the MHT variability in the Iceland Basin, even on interannual and longer time scales.
Subpolar mesoscale processes are not limited to the Iceland Basin, and they also contribute to substantial MHT variability in the Irminger Sea and Rockall Trough (Supplementary Fig. 3). To evaluate the impact of mesoscale processes on MHT variability across the entire OSNAP East section, the unfiltered, mesoscale, as well as large-scale time series of MHT across the whole East section are obtained in a similar way to those in the Iceland Basin. Not surprisingly, the time-mean MHT (0.61 PW) is dominated by the large spatial scales (mean of 0.72 PW), and the mesoscale actually generates a southward MHT across the section (mean of −0.11 PW), induced by mesoscale activity east of Greenland (Supplementary Fig. 3).
Of particular interest here is how the mesoscale and large-scale processes contribute not just to the mean, but to the interannual MHT variability across the OSNAP East section (Fig. 5). While the large-scale component dominates the total MHT interannual change, mesoscale processes also lead to sizable interannual variability, for example, in 2006 and 2010 (Fig. 5). Similar to the Iceland Basin, velocity changes at the mesoscale are the leading mechanism generating the mesoscale MHT variability. Here the mesoscale MHT reflects the integral effects of all the different types of mesoscale phenomena along the OSNAP East section. Its standard deviation is about 0.01 PW, or about 20% of the basin-wide MHT variability (about 0.05 PW). Therefore, the overall impact of mesoscale processes on the MHT variability in the subpolar North Atlantic is non-negligible.
The interannual variability of the total meridional heat transport and its large-scale and mesoscale components. The interannual anomalies of the meridional heat transport along the entire OSNAP East section are shown in red. The heat transport anomalies induced by large-scale and mesoscale processes are illustrated by solid and dashed black lines, respectively. Unit: PW
It is widely accepted that mesoscale processes have critical consequences for the global climate through the redistribution of heat and other properties in various ocean regions. For example, eddies in the tropics, the Southern Ocean, and western boundary current (WBC) extensions were found to contribute significantly to both the time mean and the variability of the total heat transport8,10,11,12,37,38. Here, results from new in situ observations in the Iceland Basin provide a fresh perspective on the dynamics responsible for the poleward heat transport in the subpolar North Atlantic Ocean, revealing that the alternating eddy and front patterns contribute significantly to the total poleward heat transport variability on time scales from subseasonal to interannual. For the Iceland Basin, the MHT variability induced by velocity changes associated with mesoscale processes can produce about 50% of the total heat transport variability. Similarly, mesoscale processes in the Irminger Sea and Rockall Trough also play important roles in producing MHT variability. The overall mesoscale MHT variability in the different sub-basins accounts for about 20% of the MHT variability across the OSNAP East section. This differs from the conventional understanding of the mechanisms for oceanic heat transport variability, in which large-scale circulation changes are believed to be the main driver5,6. Our results emphasize the importance of resolving mesoscale processes in observations and numerical simulations to realistically capture their roles in modulating heat transport variability in the northern North Atlantic. High-resolution observational arrays capable of capturing both large-scale and mesoscale variability, such as the OSNAP observing system (which includes moorings, gliders, Argo floats, and satellite altimetry), are needed to measure the basin-wide ocean MHT in the subpolar North Atlantic.
The ADT and surface geostrophic velocity fields between 1993 and 2015 were derived from satellite altimetry. The Ssalto/Duacs altimeter products are produced and distributed by the Copernicus Marine and Environment Monitoring Service (http://www.marine.copernicus.eu). The eddy kinetic energy is defined as EKE = [(u′)2 + (v′)2]/2, where u′ and v′ are derived by removing the long-term mean from the original surface geostrophic velocity. These data are used to make Figs. 2, 3 and Supplementary Fig. 1.
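A minimal sketch of the EKE calculation described above is given below; the array shapes and variable names are assumptions, and the altimeter fields themselves must be obtained from the Copernicus service.

```python
import numpy as np

def surface_eke(u_geo, v_geo):
    """Time-mean surface eddy kinetic energy, EKE = (u'^2 + v'^2)/2.

    u_geo, v_geo : (time, lat, lon) arrays of surface geostrophic velocity (m s^-1).
    Primes are deviations from the long-term time mean at each grid point.
    Returns the time-mean EKE map in m^2 s^-2.
    """
    u_prime = u_geo - np.nanmean(u_geo, axis=0, keepdims=True)
    v_prime = v_geo - np.nanmean(v_geo, axis=0, keepdims=True)
    return np.nanmean(0.5 * (u_prime**2 + v_prime**2), axis=0)
```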
During the cruises in May–June 2014 and June–July 2015, conventional conductivity/temperature/depth (CTD) profiles were acquired using a SeaBird SBE-911plus pumped system, and direct velocity profiles were measured using a dual-ADCP system mounted on the CTD package (lowered ADCP (LADCP)).
Since summer 2015, G2 Slocum gliders have been jointly operated by the Woods Hole Oceanographic Institution and the Ocean University of China (OUC) and serve as an important element of OSNAP to monitor the meridional volume and heat transport in the energetic Iceland Basin. The data analyzed here were collected by two gliders deployed in June and November 2015, respectively. Moving at approximately 0.2 m s−1, the gliders "fly" through the ocean from the surface to 1,000 m. In each dive-climb cycle, they navigate along a sawtooth trajectory and measure temperature, conductivity (salinity), and pressure with a Seabird pumped CTD sensor package. The horizontal sample spacing averages about 3 km, but near the surface and the 1,000 m turnaround points the spacing ranges from hundreds of meters to 6 km. The collected data are binned to profiles with a vertical resolution of 1 m (Supplementary Fig. 4). The surveyed section is along 58°N with endpoints at 24.5°W and 21°W, respectively. The section is about 200 km in length, and a one-way transect is usually completed in 7–10 days.
The barotropic, or depth-averaged, component of the velocity is calculated directly from the gliders using both the glider surfacing positions and a glider flight model with calibrated parameters. This depth-averaged velocity contains all motions induced by the different processes occurring during each cycle. The contributions from these processes are split into three types: geostrophic, tidal, and wind-driven Ekman currents. The motions induced by other phenomena are treated as errors. Therefore, the depth-averaged velocity is vav = vek + vtide + vgeos.
The tidal current, vtide, is extracted using two ADCPs deployed at 300 and 500 m on the two OSNAP moorings at the western and eastern endpoints of the glider section, respectively. Each ADCP provides hourly ocean velocities in the upper ocean. The 36-h low-pass filtered velocities are removed from the original measurements to obtain the tidal current.
The meridional wind-driven Ekman current in the Ekman layer was derived from the zonal wind stress: \(v_{{\mathrm{ek}}}\left( x,y,t \right) = \frac{{ - 1}}{{\rho fh}}\tau _{x}\left( {x,y,t} \right)\), where τx, ρ, h, and f are the zonal wind stress, reference density, Ekman layer depth, and Coriolis parameter, respectively. The Ekman layer depth is assumed to be 50 m (ref. 39), and 1,027 kg m−3 is used for the reference density. The zonal wind stress comes from the daily ERA-Interim product. The estimated wind-driven current is further weighted by the fraction of time the gliders spent in the top 50 m.
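The slab Ekman estimate can be coded in a few lines; the sketch below assumes the 50 m layer depth and 1,027 kg m−3 reference density quoted above, with the wind stress supplied from the ERA-Interim daily fields.

```python
import numpy as np

def ekman_meridional_velocity(tau_x, lat_deg=58.0, h=50.0, rho=1027.0):
    """Meridional Ekman current v_ek = -tau_x / (rho * f * h) in a slab layer.

    tau_x   : zonal wind stress (N m^-2), e.g. daily ERA-Interim values.
    lat_deg : latitude of the section in degrees (58N for the glider line).
    h, rho  : assumed Ekman layer depth (m) and reference density (kg m^-3).
    """
    omega = 7.2921e-5                                # Earth's rotation rate (s^-1)
    f = 2.0 * omega * np.sin(np.deg2rad(lat_deg))    # Coriolis parameter
    return -np.asarray(tau_x) / (rho * f * h)
```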
The residual after removing vek and vtide from vav is used as the reference for the geostrophic calculation, that is, \(v_{{\mathrm{geos}}}\left( {x,z,t} \right)|_{{\mathrm{refer}}} = v_{{\mathrm{av}}} - v_{{\mathrm{ek}}} - v_{{\mathrm{tide}}}\). As mentioned above, \(v_{{\mathrm{geos}}}\left (x,z,t \right)|_{{\mathrm{refer}}}\) still includes motions due to processes not explicitly considered here, and these are treated as errors.
The geostrophic velocity relative to 1,000 m is computed from the density difference between pairs of density profiles according to:
$$v_{\mathrm{geos}}\left( x,z,t \right)|_{1,000} = \frac{-g}{\rho f}\int_{1,000}^{z} \frac{\rho_{\mathrm{e}}\left( x,z,t \right) - \rho_{\mathrm{w}}\left( x,z,t \right)}{D}\,{\mathrm{d}}z,$$
where g is the gravitational acceleration, D is the horizontal distance between the paired density profiles, and ρe and ρw are the eastern and western density profiles of each pair, respectively. \(v_{{\mathrm{geos}}}\left( x,z,t \right)|_{1,000}\) is further averaged over the top 1,000 m to match \(v_{{\mathrm{geos}}}\left( x,z,t \right)|_{{\mathrm{refer}}}\). The absolute geostrophic velocity is computed by adding the offset between the depth-averaged \(v_{{\mathrm{geos}}}\left( x,z,t \right)|_{1,000}\) and \(v_{{\mathrm{geos}}}\left( x,z,t \right)|_{{\mathrm{refer}}}\) to \(v_{{\mathrm{geos}}}\left( x,z,t \right)|_{1,000}\).
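A simplified sketch of this referencing procedure for one profile pair is shown below. It assumes a common depth grid with the deepest level at about 1,000 m and a single reference density, and is meant only to illustrate the order of operations (thermal-wind integration, depth averaging, and offset to the glider-derived reference velocity).

```python
import numpy as np

G = 9.81        # gravitational acceleration (m s^-2)
RHO0 = 1027.0   # assumed reference density (kg m^-3)

def absolute_geostrophic_velocity(rho_e, rho_w, z, dist, f, v_refer):
    """Geostrophic velocity relative to 1,000 m, referenced to the depth-averaged
    glider velocity v_refer = v_av - v_ek - v_tide.

    rho_e, rho_w : density profiles (kg m^-3) of the eastern and western members
                   of a profile pair on a common depth grid z (m, surface first,
                   deepest level ~1,000 m).
    dist         : horizontal separation D of the pair (m).
    f            : Coriolis parameter (s^-1).
    """
    integrand = -G / (RHO0 * f) * (rho_e - rho_w) / dist
    dz = np.gradient(z)
    # integral from the 1,000 m reference level up to each depth
    v_rel = -np.cumsum((integrand * dz)[::-1])[::-1]
    v_rel -= v_rel[-1]                                # force v_rel = 0 at the reference level
    offset = v_refer - np.average(v_rel, weights=dz)  # match the depth mean to v_refer
    return v_rel + offset
```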
The MHT is defined by \(Q\left( t \right) = \mathop {\int }\limits_{x_{\mathrm{w}}}^{x_{\mathrm{e}}} \mathop {\int }\limits_{1,000}^0 \rho C_{\mathrm{p}}v(x,z,t)\theta (x,z,t){\mathrm{d}}z{\mathrm{d}}x\), where θ is the potential temperature derived from the observed temperature using the SeaWater Matlab library, Cp is the specific heat of seawater, v(x,z,t) is the meridional velocity, and xw and xe are the western and eastern endpoints of the section along 58°N, respectively. v(x,z,t) equals the absolute geostrophic velocity at depths between 50 and 1,000 m. In the top 50 m, v(x,z,t) is set to the sum of the absolute geostrophic velocity and the Ekman current. Two examples of the calculated meridional velocity in the top 1,000 m along the surveyed section are shown in Supplementary Fig. 2.
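Putting the pieces together, the heat transport across the glider section could be evaluated as sketched below; the grid variables are assumptions, and the Ekman contribution is simply added in the top 50 m as described above.

```python
import numpy as np

RHO, CP = 1027.0, 3985.0   # assumed reference density and specific heat

def glider_section_mht(v_geos, v_ek, theta, z, dz, dx):
    """Heat transport across the glider section (PW), Q = double integral of rho*Cp*v*theta.

    v_geos : absolute geostrophic velocity, (depth x longitude) array (m s^-1)
    v_ek   : meridional Ekman velocity per longitude bin (m s^-1), applied above 50 m
    theta  : potential temperature (deg C), same shape as v_geos
    z, dz  : level depths and thicknesses (m); dx: longitude bin widths (m)
    """
    v = v_geos.copy()
    v[z <= 50.0, :] += v_ek                      # total velocity in the top 50 m
    cell_area = dz[:, None] * dx[None, :]
    return np.nansum(RHO * CP * v * theta * cell_area) * 1e-15
```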
Uncertainties in the obtained absolute geostrophic velocity and heat transport are estimated in the following ways:
Measurement errors: All sensors were calibrated before and after the cruise. No drift was found in the conductivity measurements. According to the calibration results, the measurement uncertainties of temperature, salinity, and pressure are estimated to be 0.001 °C, 0.002, and 0.02 dbar, respectively. Incorporating these into the estimation of the geostrophic velocity relative to 1,000 m (\(v_{{\mathrm{geos}}}\left( x,z,t \right)|_{1,000}\)), the corresponding uncertainty is <1 cm s−1.
The largest uncertainty in the depth-averaged velocity is caused by the errors in the records of pitch, roll and heading when the glider is underwater. According to our calibrations, the uncertainties of pitch, roll and heading are about 10°–15°. The accuracy of GPS positions is about 10 m, but this only contributes to <0.1 cm s−1 error for a 6-h dive. Overall, the uncertainty in the depth-averaged velocity is about 1 to 2 cm s−1, which is consistent with other glider observations40.
Temporal variability not observed by gliders: It took 7–10 days for a glider to completely survey the 200 km long section; therefore, the variability due to the processes on the time scales shorter than 7–10 days can induce uncertainties. Observed currents from ADCPs are used to estimate the variability on time scales shorter than the period of each complete glider transect. They are taken as the uncertainties induced by the time variability not observable in glider surveys.
Errors in the meridional velocity calculations: Tidal currents were assumed to be uniform and barotropic in the surveyed region. According to the analysis using the two ADCPs deployed at the endpoints of the glider section, their difference is <1 cm s−1 at the tidal frequencies. We thus take this number to be the uncertainty associated with the predicted tidal currents.
The wind-driven Ekman transport is assumed to be uniformly distributed in the Ekman layer, which is taken to be 50 m deep. These assumptions are imperfect because observations show that wind-driven Ekman currents have a spiral-like structure and are strongly surface-trapped41. However, during a 6-h dive, a glider spends only several minutes in the top 50 m. Therefore, the errors induced by the wind-driven Ekman current are negligible.
We also note that there are non-Ekman ageostrophic currents, such as the motions induced by sub-mesoscale processes near the eddy edge. These motions are irregularly distributed in space and time, so their overall impact on the density profiles used in the geostrophic velocity calculation is assumed to be small.
The numerical simulation was performed using the eddy-resolving high-resolution (1/12°) HYbrid Coordinate Ocean Model (HYCOM). The model domain spans from 28°S to 80°N and was configured originally by Xu et al.35 The initial state was taken from experiment E026 in Xu et al.36, in which monthly climatological forcing from the European Center for Medium-Range Weather Forecasts reanalysis (ERA40) was used to spin up the model for 25 years. Starting from model year 25 of E026, our HYCOM simulation is further spun up for 25 years using daily National Centers for Environmental Prediction (NCEP) Climate Forecast System Reanalysis (CFSR) data. After spin-up, the model was integrated from 1992 to 2014, forced by daily NCEP CFSR data. The daily model outputs are used to construct the monthly mean fields that are analyzed in this study.
The 1/12° HYCOM simulations were found to successfully reproduce both the long-term mean and the variations of the subpolar North Atlantic circulation, particularly the AMOC, the boundary currents in the Labrador Sea, and the NAC36. As shown in Figs. 1 and 4c, the monthly time series of total poleward heat transport across the OSNAP East line between Greenland and Scotland has a mean value of about 0.6 PW and a standard deviation of 0.14 PW, with a minimum of 0.3 PW and a maximum of 1.0 PW. These numbers are in line with the estimates using synoptic trans-basin hydrographic measurements near similar latitudes6,15. Near the glider transect in the Iceland Basin, an anti-cyclonic eddy can be found in the model mean surface height between 1992 and 2014 (Supplementary Fig. 5a), which is quite similar to the satellite ADT results (Supplementary Fig. 1). The EKE in the model is calculated in the same way as for the altimetry observations. The simulated mean EKE pattern also closely resembles the main features in the satellite altimetry data (Fig. 2 and Supplementary Fig. 5b). The simulated meridional velocity associated with the anti-cyclonic eddy composite in the Iceland Basin is shown in Supplementary Fig. 2, and its vertical structure agrees well with the in situ observations. Based on these comparisons, we conclude that the eddy-resolving HYCOM has reasonably good skill in simulating not only the basin-wide features but also the eddies in the Iceland Basin.
Mesoscale eddies in the Iceland Basin are detected in the numerical results following the algorithm developed by Nencioli et al.42. The Nencioli algorithm consists of four constraints: first, a reversal of the meridional velocity (v) along an east–west section; second, a reversal of the zonal velocity (u) along a north–south section; third, a local minimum of the velocity magnitude at the eddy center; and last, a constant sense of rotation along the four quadrants of the eddy. The eddy scenarios in the Iceland Basin are defined using the criterion that the eddy boundary falls within the glider section. In order to identify the frontal structure, anomalies of the monthly total meridional volume transport across the glider section are calculated. The standard deviation of the anomalies is 5.4 Sv (1 Sv = 106 m3 s−1). The frontal structures are assumed to be established when the anomalies are positive and exceed the standard deviation of 5.4 Sv. The eddy and frontal structures are marked in Fig. 4. Their corresponding sea surface height patterns are shown in Supplementary Fig. 6.
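A strongly reduced sketch of this month-by-month classification is given below. It only checks the front criterion (transport anomaly larger than one standard deviation) and the first two Nencioli et al. constraints (reversals of v along an east–west line and of u along a north–south line through the box centre); the full algorithm, with the velocity-minimum and rotation-sense checks, is described in ref. 42.

```python
import numpy as np

def classify_month(u, v, transport_anom, std_sv=5.4):
    """Label a model month as 'front', 'eddy' or 'transition'.

    u, v           : monthly surface velocity (m s^-1) on a small lat x lon box
                     centred on the glider section.
    transport_anom : anomaly of the meridional volume transport across the
                     glider section for that month (Sv).
    """
    if transport_anom > std_sv:
        return "front"
    v_row = v[v.shape[0] // 2, :]        # east-west line through the box centre
    u_col = u[:, u.shape[1] // 2]        # north-south line through the box centre
    v_reverses = np.any(np.sign(v_row[:-1]) * np.sign(v_row[1:]) < 0)
    u_reverses = np.any(np.sign(u_col[:-1]) * np.sign(u_col[1:]) < 0)
    return "eddy" if (v_reverses and u_reverses) else "transition"
```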
Reynolds decomposition
To reveal the physical processes behind the MHT variability, a standard Reynolds decomposition is used to separate the heat transport into several components: Qtotal = Qvel + Qtemp + Qeddy, where the terms on the right-hand side represent the heat transport induced by velocity anomalies, temperature anomalies, and their correlation (the eddy term), respectively.
$$Q_{{\mathrm{total}}} = \mathop {\int }\limits_{x_{\mathrm{w}}}^{x_{\mathrm{e}}} \mathop {\int }\limits_h^0 \rho C_{\mathrm{p}}v(x,z,t)\theta (x,z,t){\mathrm{d}}z{\mathrm{d}}x,$$
where h is the depth over which the heat transport is integrated.
Velocity and potential temperature are decomposed as follows: \(v= \bar v + v\prime\); \(\theta = \bar \theta + \theta \prime\), where the overbar denotes the time average and primes denote fluctuations about the time mean. Therefore,
$$Q_{{\mathrm{vel}}} = \mathop {\int }\limits_{x_{\mathrm{w}}}^{x_{\mathrm{e}}} \mathop {\int }\limits_h^0 \rho C_{\mathrm{p}}v\prime (x,z,t)\bar \theta (x,z,t){\mathrm{d}}z{\mathrm{d}}x,$$
$$Q_{{\mathrm{temp}}} = \mathop {\int }\limits_{x_{\mathrm{w}}}^{x_{\mathrm{e}}} \mathop {\int }\limits_h^0 \rho C_{\mathrm{p}}\bar v(x,z,t)\theta \prime (x,z,t){\mathrm{d}}z{\mathrm{d}}x,$$
$$Q_{{\mathrm{eddy}}} = \mathop {\int }\limits_{x_{\mathrm{w}}}^{x_{\mathrm{e}}} \mathop {\int }\limits_h^0 \rho C_{\mathrm{p}}v\prime (x,z,t)\theta \prime (x,z,t){\mathrm{d}}z{\mathrm{d}}x.$$
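The decomposition can be evaluated directly from the monthly section fields; the sketch below assumes (time, depth, longitude) arrays and the same constants as above, and returns the four heat transport time series in PW.

```python
import numpy as np

RHO, CP = 1027.0, 3985.0   # assumed reference density and specific heat

def reynolds_heat_transport(v, theta, dz, dx):
    """Q_total and its Reynolds components Q_vel, Q_temp and Q_eddy (PW).

    v, theta : (time, depth, longitude) meridional velocity (m s^-1) and
               potential temperature (deg C) on the section.
    dz, dx   : (depth,) and (longitude,) cell sizes in metres.
    """
    area = dz[:, None] * dx[None, :]
    v_bar, th_bar = v.mean(axis=0), theta.mean(axis=0)       # time means
    v_pr, th_pr = v - v_bar, theta - th_bar                  # fluctuations

    def integrate(field):                                    # section integral, W -> PW
        return np.nansum(RHO * CP * field * area, axis=(-2, -1)) * 1e-15

    q_total = integrate(v * theta)
    q_vel = integrate(v_pr * th_bar)    # velocity-driven part
    q_temp = integrate(v_bar * th_pr)   # temperature-driven part
    q_eddy = integrate(v_pr * th_pr)    # correlated (eddy) part
    return q_total, q_vel, q_temp, q_eddy
```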
Spatial filter
In order to separate the large-scale and mesoscale features, a spatial Butterworth filter with a cutoff length scale of 10° in longitude (about 600 km) is applied to the velocity and temperature fields of the monthly HYCOM results along the OSNAP East section. The cutoff length scale is determined by the spatial scale of the zonal shifts of the NAC and the eddy diameters in the Iceland Basin, estimated from the satellite altimetry maps. The low-pass spatially filtered velocity and temperature are defined as the large-scale fields. The mesoscale fields are obtained by removing the low-pass filtered fields from the original model outputs and constitute the high-pass filtered dataset. The unfiltered (i.e., the original), low-pass, and high-pass spatially filtered variables are used to compute the MHT for the total, large-scale, and mesoscale processes, respectively.
In addition, the time series of the three different MHTs are further split into interannual and shorter time scales (intra-seasonal to seasonal). This is achieved by applying a temporal Butterworth filter with a cutoff period of 2 years to all the time series. The interannual changes of the MHT induced by large-scale and mesoscale processes are shown in Supplementary Fig. 3.
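The two filtering steps can be sketched with SciPy's Butterworth routines as below. The filter order, the 1/12° grid spacing, and the use of a zero-phase (filtfilt) implementation are assumptions; only the cutoff scales (10° of longitude and 2 years) are taken from the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def scale_separation(field, dlon=1.0 / 12.0, cutoff_deg=10.0, order=4):
    """Split a section field into large-scale and mesoscale parts.

    field : array whose last axis is longitude along OSNAP East.
    dlon  : grid spacing in degrees; cutoff_deg: cutoff wavelength (about 600 km).
    """
    wn = 2.0 * dlon / cutoff_deg                 # cutoff as a fraction of Nyquist
    b, a = butter(order, wn, btype="low")
    large_scale = filtfilt(b, a, field, axis=-1)
    return large_scale, field - large_scale      # (low-pass, high-pass) parts

def interannual(series, dt_months=1.0, cutoff_years=2.0, order=4):
    """Low-pass filter a monthly time series with a 2-year cutoff period."""
    wn = 2.0 * dt_months / (cutoff_years * 12.0)
    b, a = butter(order, wn, btype="low")
    return filtfilt(b, a, series)
```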
Code availability
The source codes for HYCOM can be downloaded online (https://hycom.org/hycom/source-code).
Observations collected by gliders and synoptic ship surveys are archived at OSNAP (http://www.o-snap.org/observations/data/). The satellite altimeter products are distributed by the Copernicus Marine and Environment Monitoring Service (http://www.marine.copernicus.eu). The data that support the findings of this study are available from J.Z. upon reasonable request.
The original version of this Article omitted the author N. Penny Holliday from the National Oceanography Centre, European Way, Southampton SO14 3ZH, UK. Consequently, the following was originally omitted from the Acknowledgements: 'N.P.H. and the JR302 cruise were funded through the UK Natural Environment Research Council programmes UK OSNAP (NE/K010875/1), RAGNARRoCC (NE/K002511/1) and the Extended Ellett Line (National Capability)'. The corrected version of the Acknowledgements also removes the following from the original version: 'The Zonally Accumulated Heat Transport in observation was calculated by N.P.H. from the National Ocean Center, United Kingdom'. Additionally, the following was originally omitted from the Author Contributions: 'N.P.H. calculated the observed Zonally Accumulated Heat Transport along the OSNAP east section'. This has been corrected in both the PDF and HTML versions of the Article.
Ganachaud, A. & Wunsch, C. Improved estimates of global ocean circulation, heat transport and mixing from hydrography data. Nature 408, 453–456 (2000).
Trenberth, K. E. & Caron, J. M. Estimates of meridional atmosphere and ocean heat transports. J. Clim. 14, 3433–3443 (2001).
Talley, L. D. Shallow, intermediate, and deep overturning components of the global heat budget. J. Phys. Oceanogr. 33, 530–560 (2003).
Biastoch, A., Böning, C. W., Getzlaff, J., Molines, J.-M. & Madec, G. Causes of interannual–decadal variability in the meridional overturning circulation of the midlatitude North Atlantic Ocean. J. Clim. 21, 6599–6615 (2008).
Johns, W. E. et al. Continuous, array-based estimates of Atlantic Ocean heat transport at 26.5°N. J. Clim. 24, 2429–2449 (2011).
Mercier, H. Variability of the meridional overturning circulation at the Greenland-Portugal OVIDE section from 1993 to 2010. Prog. Oceanogr. 132, 250–261 (2015).
Wunsch, C. Where do ocean eddy heat fluxes matter? J. Geophys. Res. 104, 13235–13249 (1999).
Jayne, S. R. & Marotzke, J. The oceanic eddy heat transport. J. Phys. Oceanogr. 32, 3328–3345 (2002).
Gille, S. T. Float observations of the Southern Ocean. Part II: eddy fluxes. J. Phys. Oceanogr. 33, 1182–1196 (2003).
Phillips, H. E. & Rintoul, S. R. Eddy variability and energetics from direct current measurements in the Antarctic Circumpolar Current south of Australia. J. Phys. Oceanogr. 30, 3050–3076 (2000).
Volkov, D. L., Lee, T. & Fu, L.-L. Eddy-induced meridional heat transport in the ocean. Geophys. Res. Lett. 35, L20601 (2008).
Bishop, S. P., Watts, D. R. & Donohue, K. A. Divergent eddy heat fluxes in the Kuroshio Extension at 144°–148°E. Part I: Mean structure. J. Phys. Oceanogr. 43, 1533–1550 (2013).
Dong, C., McWilliams, J., Liu, Y. & Chen, D. Global heat and salt transports by eddy movement. Nat. Commun. 5, 3294 (2014).
Lozier, S. et al. Overturning in the Subpolar North Atlantic Program: a new international ocean observing system. Bull. Am. Meteorol. Soc. 98, 737–752 (2017).
Lumpkin, R. & Speer, K. Global ocean meridional overturning. J. Phys. Oceanogr. 37, 2550–2562 (2007).
Pickart, R. S. & Spall, M. A. Impact of Labrador Sea convection on the North Atlantic meridional overturning circulation. J. Phys. Oceanogr. 37, 2207–2227 (2007).
Li, F., Lozier, M. S. & Johns, W. E. Calculating the meridional volume, heat and freshwater transports from an observing system in the Subpolar North Atlantic: observing system simulation experiment. J. Atmos. Ocean Technol. 34, 1483–1500 (2017).
Häkkinen, S., Rhines, P. B. & Worthen, D. L. Warm and saline events embedded in the meridional circulation of the northern North Atlantic. J. Geophys. Res. 116, C03006 (2011).
Vellinga, M. & Woods, R. A. Global impacts of a collapse of the Atlantic thermohaline circulation. Clim. Change 54, 251–267 (2002).
Zhang, R. Mechanisms for low-frequency variability of summer Arctic sea ice extent. Proc. Natl. Acad. Sci. USA 112, 4570–4575 (2015).
Serreze, M. C. et al. The large-scale fresh water cycle of the Arctic. J. Geophys. Res. 111, C11010 (2006).
Bower, A. S. et al. Directly measured mid-depth circulation in the northeastern North Atlantic Ocean. Nature 419, 603–607 (2002).
Flatau, M., Talley, L. & Niiler, P. P. The North Atlantic Oscillation, surface current velocities, and SST changes in the subpolar North Atlantic. J. Clim. 16, 2355–2369 (2003).
Lavender, K. L., Owens, W. B. & Davis, R. E. The mid-depth circulation of the subpolar North Atlantic as measured by subsurface floats. Deep Sea Res. I 52, 767–785 (2005).
Sarafanov, A. et al. Mean full-depth summer circulation and transports at the northern periphery of the Atlantic Ocean in the 2000s. J. Geophys. Res. 117, C01014 (2012).
Chafik, L., Rossby, T. & Schrum, C. On the spatial structure and temporal variability of poleward transport between Scotland and Greenland. J. Geophys. Res. Oceans 119, 824–841 (2014).
Daniault, N. et al. The northern North Atlantic Ocean mean circulation in the early 21st century. Prog. Oceanogr. 146, 142–158 (2016).
Holliday, N., Bacon, S., Allen, J. & McDonagh, E. Circulation and transport in the western boundary currents at Cape Farewell, Greenland. J. Phys. Oceanogr. 39, 1854–1870 (2009).
Våge, K. et al. The Irminger Gyre: circulation, convection, and interannual variability. Deep Sea Res. Part I 58, 590–614 (2011).
White, M. A. & Heywood, K. J. Seasonal and interannual changes in the North Atlantic subpolar gyre from Geosat and TOPEX/Poseidon altimetry. J. Geophys. Res. 100, 24931–24941 (1995).
Volkov, D. L. Interannual variability of the altimetry-derived eddy field and surface circulation in the extratropical North Atlantic Ocean in 1993–2001. J. Phys. Oceanogr. 35, 405–426 (2005).
Martin, A. P., Wade, I. P., Richards, K. J. & Heywood, K. J. The PRIME eddy. J. Mar. Res. 56, 439–462 (1998).
Read, J. F. & Pollard, R. T. A long-lived eddy in the Iceland Basin 1998. J. Geophys. Res. 106, 11411–11421 (2001).
Shoosmith, D. R., Richardson, P. L., Bower, A. S. & Rossby, H. T. Discrete eddies in the northern North Atlantic as observed by looping RAFOS floats. Deep Sea Res. Part II 52, 627–650 (2005).
Xu, X., Schmitz, W. J. Jr, Hurlburt, H. E., Hogan, P. J. & Chassignet, E. P. Transport of Nordic Seas overflow water into and within the Irminger Sea: an eddy-resolving simulation and observations. J. Geophys. Res. Oceans 115, C12048 (2010).
Xu, X. et al. On the currents and transports connected with the Atlantic meridional overturning circulation in the subpolar North Atlantic. J. Geophys. Res. Oceans 118, 502–506 (2013).
Roemmich, D. & Gilson, J. Eddy transport of heat and thermocline waters in the North Pacific: a key to interannual/decadal climate variability? J. Phys. Oceanogr. 31, 675–687 (2001).
Rhein, M. et al. Deep water formation, the subpolar gyre, and the meridional overturning circulation in the subpolar North Atlantic. Deep Sea Res. Part II 58, 1819–1832 (2011).
Rio, M.-H. & Hernandez, F. High-frequency response of wind-driven currents measured by drifting buoys and altimetry over the world ocean. J. Geophys. Res. 108, 3283 (2003).
Todd, R. E., Rudnick, D. L. & Davis, R. E. Monitoring the greater San Pedro Bay region using autonomous underwater gliders during fall of 2006. J. Geophys. Res. 114, C06001 (2009).
Price, J. F., Weller, R. A. & Schudlich, R. R. Wind-driven ocean currents and Ekman transport. Science 238, 1534–1538 (1987).
Nencioli, F., Dong, C., Dickey, T., Washburn, L. & McWilliams, J. C. A vector geometry based eddy detection algorithm and its application to a high-resolution numerical model product and high-frequency radar surface velocities in the Southern California Bight. J. Atmos. Ocean Technol. 27, 564–579 (2010).
Our field experiment was enabled by the Cooperative Research Initiative between the Ocean University of China and the Woods Hole Oceanographic Institution (WHOI). We wish to acknowledge B. Hodges and H. Furey at WHOI and the captains and crews of the research vessels Pelagia and Discovery for their expert assistance in the deployment, recovery, and operations of the gliders. Dr. R. E. Todd helped with the glider velocity processing. The CTD, LADCP, and ADCP data used in this study were collected by the OSNAP project and generously provided by Dr. W. Johns and Dr. A. Papapostolou from the University of Miami. We thank Dr. X. Xu at Florida State University for providing the initial model configuration. J.Z. was financially supported by the Postdoctoral Scholar Program at WHOI, with funding provided by the Ocean and Climate Change Institute. This work was also supported by the US National Science Foundation (OCE-1258823 and OCE-1634886), as well as by China's national key research and development projects (2016YFA0601803), the National Natural Science Foundation of China (41521091 and U1606402), the Qingdao National Laboratory for Marine Science and Technology (2015ASKJ01), and the Fundamental Research Funds for the Central Universities (201424001 and 201362048). N.P.H. and the JR302 cruise were funded through the UK Natural Environment Research Council programmes UK OSNAP (NE/K010875/1), RAGNARRoCC (NE/K002511/1) and the Extended Ellett Line (National Capability).
Woods Hole Oceanographic Institution, Woods Hole, MA, 02543, USA
Jian Zhao, Amy Bower & Jiayan Yang
Physical Oceanography Laboratory/CIMST, Ocean University of China and Qingdao National Laboratory for Marine Science and Technology, Qingdao, 266100, China
Xiaopei Lin
National Oceanography Centre, European Way, Southampton, SO14 3ZH, UK
N. Penny Holliday
J.Z. conceived the research, conducted the model simulations, and performed the data analyses. J.Z. wrote the manuscript with improvement by all co-authors. A.B., J.Y., and X.L. designed and led the field experiments. All authors contributed to the project. N.P.H. calculated the observed Zonally Accumulated Heat Transport along the OSNAP east section.
Correspondence to Jian Zhao or Xiaopei Lin.
Zhao, J., Bower, A., Yang, J. et al. Meridional heat transport variability induced by mesoscale processes in the subpolar North Atlantic. Nat Commun 9, 1124 (2018). https://doi.org/10.1038/s41467-018-03134-x
EURASIP Journal on Wireless Communications and Networking
Research | Open | Published: 01 February 2017
Link residual lifetime-based next hop selection scheme for vehicular ad hoc networks
Siddharth Shelly1 &
A. V. Babu1
EURASIP Journal on Wireless Communications and Networking volume 2017, Article number: 23 (2017)
In Vehicular Ad Hoc Networks (VANETs), geographic routing protocols rely on a greedy strategy for hop-by-hop packet forwarding by selecting the vehicle closest to the destination as the next hop forwarding node. However, in a high-mobility network such as a VANET, the greedy forwarding strategy may lead to packet transmission failures since it does not consider the reliability of the newly formed link when next hop forwarding nodes are chosen. In this paper, we propose a scheme for next hop selection in VANETs that takes into account the residual lifetime of the communication links. In the proposed approach, a source vehicle selects a forwarding vehicle from a given set of candidate vehicles by estimating the residual lifetime of the corresponding links and finding the link with the maximum residual lifetime. We first present a Kalman filter-based approach for estimating the link residual lifetime in VANETs. We then present the details of the proposed next hop selection method. Simulation results show that the proposed scheme exhibits better performance in terms of packet delivery ratio and average end-to-end delay as compared to the conventional method.
Vehicular Ad Hoc Networks (VANETs), an integral component of intelligent transportation systems (ITS), aim to provide support for road safety, traffic management, and comfort applications by enabling communication in two distinct modes: vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) [1]. Since the nodes in VANETs (i.e., vehicles with on-board units) move with very high speed, the network topology is highly dynamic and, consequently, the inter-vehicle communication links are highly unstable and may even become disconnected frequently. A route that is established between a source-destination pair through a sequence of road segments becomes invalid when at least one communication link along the route fails. Hence, it is very important and desirable for the routing algorithm to choose an optimal route consisting of highly reliable links in the network [2].
Generally, routing within a road segment is performed using a greedy forwarding approach in which the tagged node carrying a data packet selects, from among its neighboring set, a vehicle that is closer to the destination or to the next junction for forwarding the data packet. The greedy forwarding approach is continued until the next junction or the destination is reached. Geographic routing, which is the preferred means of routing in VANETs, also employs the greedy forwarding approach [3]. Adoption of greedy forwarding reduces the number of hops for a packet to move from the source to the destination, leading to a decrease of the end-to-end delay experienced by the packet. However, greedy forwarding does not take into account the quality and reliability of the link that is chosen for forwarding the packet. In VANETs, since the established link may become highly unreliable from time to time, the probability of packet transmission failure may become very high when greedy forwarding is employed. This in turn can result in more retransmissions, leading to a reduction of the network throughput and a significant increase of the end-to-end delay.
The mean link lifetime is defined as the mean time period for which two vehicles are within the communication range of each other, while the residual lifetime of an existing link is defined as the time duration from the current time until the time the link breaks. Both these quantities have direct impact on many performance metrics such as route reliability, packet delivery ratio, throughput, and end-to-end delay of the network. Accurate knowledge of mean link lifetime and the residual lifetime of existing links will aid the design of reliability based routing protocols to improve the routing performance, and to achieve the desired network performance.
In this paper, we propose a method for the selection of the next hop forwarding node in VANETs that improves the reliability of communication links along the path from source to destination. In the proposed method, a packet-carrying vehicular node (i.e., the source vehicle) selects a forwarding vehicle from a given set of candidate vehicles by estimating the residual lifetime of the corresponding individual communication links. We present an algorithm to predict the link residual lifetime in VANETs by making use of a Kalman filter-based prediction technique. The proposed method relies on predicting the relative location and speed of vehicular nodes using the Kalman filter. Once the estimates for the residual lifetimes of all the probable one-hop links are available, the vehicle belonging to the forwarding set that results in the maximum value of the link residual lifetime is chosen as the forwarding vehicle. Simulation results reveal that the proposed scheme significantly improves the packet delivery ratio. The rest of the paper is organized as follows: In Section 2, we briefly describe the related work. The system model employed for the analysis is presented in Section 3.1. In Section 3.2, we describe a procedure for the prediction of the link residual lifetime based on the Kalman filter. In Section 3.3, we present the residual lifetime-based approach for packet forwarding. The simulation results are presented in Section 4, and finally, the paper is concluded in Section 5.
Several papers have recently appeared that deal with the design of reliable routing protocols for VANETs [4–20]. In [4], Taleb et al. describe a reliable routing protocol in which vehicles are grouped according to their velocity vectors and the routing algorithm dynamically searches for the most stable route that includes only hops from the same group of vehicles. S. Wan et al. [5] propose a reliable routing protocol for V2I networks on rural highways based on prediction of the link lifetime. However, the proposed protocol requires the exchange of a large number of route request (RREQ) and route reply (RREP) packets. Namboodiri et al. [6] describe a routing algorithm, specifically tailored to the mobile gateway scenario, that predicts how long a route will last and creates a new route before the failure of the existing route. In [7], Menouar et al. describe a routing algorithm that can predict the future coordinates of a vehicle and build new stable routes. In [8], the same authors propose movement prediction-based routing (MOPR), in which each vehicle estimates the link stability for each neighboring vehicle before selecting the next hop for data forwarding. In the above-mentioned papers, the link lifetime is computed by assuming the vehicle speed to be a constant. Sofra et al. [9] discuss an algorithm capable of finding reliable routes in VANETs. In [10], Rao et al. present a protocol called GPSR-L, an improved version of the greedy perimeter stateless routing (GPSR) protocol that takes into account the link lifetime to ensure reliable routing. However, the authors assume the vehicle velocity to be a constant for finding the link lifetime. In [11], Eiza et al. propose a reliable routing protocol known as AODV-R by incorporating a link reliability metric in the original AODV routing protocol. In [12], Niu et al. describe a QoS routing algorithm based on the AODV protocol and a criterion for link reliability. In [13], Yu et al. present a routing procedure, AODV-VANET, that uses the vehicle's movement information in the route discovery process. Notice that the protocols described in [11–13] are based on AODV. Recently, several studies have reported that topology-based routing schemes such as AODV perform poorly in VANETs compared with geographic routing protocols [3].
In [14], Eiza and Ni propose a reliable routing algorithm that exploits the evolving characteristics of VANETs on highways. Naumov et al. in [15] propose connectivity aware routing (CAR), which adapts to current network conditions to find a route with sufficient connectivity, so as to maximize the chance of successful packet delivery. In [16], Boukerche et al. describe a routing approach for providing QoS in VANETs in which the link reliability is estimated based on the exchange of beacons among vehicles. Shelly et al. [17] propose an enhancement of the well-known GPSR protocol, which exploits information about link reliability for the selection of the forwarding node. In [18], Yu et al. propose a routing protocol for VANETs based on vehicle density so as to provide fast and reliable message delivery. In [19], Cai et al. propose a link state aware geographic opportunistic (LSGO) routing protocol, in which the forwarding nodes are selected based on their geographic location and the link quality. Here, the link quality is expressed in terms of a metric known as the expected transmission count (ETX), which is the expected number of data transmissions required to send a packet over the source-destination link. However, the computation of ETX involves the exchange of Hello packets across each link, leading to a significant increase in overhead. Further, ETX is computed by considering the transmission of Hello packets during a window of w seconds, leading to higher end-to-end delay. Wang et al. [20] propose a Stochastic Minimum-hops Forwarding Routing (SMFR) algorithm for VANETs with heterogeneous types of vehicles that minimizes the number of hops to the destination. However, the work reported in [20] does not consider link reliability for the selection of the end-to-end route. Since VANETs are poised to support critical road safety-related applications in a highly dynamic environment, communication reliability along the end-to-end route is of prime importance as compared to other design criteria, such as the number of hops along the route, as investigated in [20]. Accordingly, it is desirable for the routing protocol to consider link reliability when vehicles are chosen for forwarding the packet.
When routing in VANETs is considered, the main disadvantage of the traditional greedy forwarding method is that the next hop selection procedure does not consider the quality and reliability of the resulting link. While the source vehicle forwards the data packet to the vehicle closest to the destination node under traditional greedy forwarding, it is very important to consider the residual lifetime of the link formed by the source vehicle and the selected one-hop neighbor. This is because, if the residual lifetime of the newly formed link is very low, the probability of packet transmission failure is very high, which will lead to more retransmissions and deterioration of the network throughput. In this paper, we investigate the problem of improving communication reliability when a source vehicle selects next hop nodes for data forwarding. We propose a method for the selection of a reliable one-hop neighbor based on the residual lifetime of the corresponding communication link. To meet this objective, we present an algorithm to predict the residual lifetime of links in VANETs by making use of a Kalman filter-based prediction technique. In this case, a source vehicle tries to predict the residual lifetime of the one-hop links to all the available neighbor vehicles. The neighbor with the maximum value of the link residual lifetime is chosen as the next-hop forwarding vehicle. The Kalman filter is a recursive filter that can be used to estimate the state of a linear dynamic system from a series of noisy measurements [21]. A major advantage of Kalman filters is that they can quickly and efficiently compute estimates and can be used for both state estimation and prediction. The Kalman filter is a convenient tool for online real-time processing of data. The optimal estimate is derived by the Kalman filter by minimizing the mean square error [22]. Due to the simplicity and robustness of the Kalman filter, it is extensively used for velocity and location prediction in ad hoc networks [23–25].
Proposed method
In this section, we describe the procedure for the selection of next-hop forwarding vehicle that relies on estimates of link residual lifetime. We begin this section by introducing the system model employed throughout the paper and then describe the residual lifetime estimation procedure. This is followed by a description of the proposed method for the next hop selection.
We consider a scenario in which vehicles move on a straight highway and drivers can drive independently of the other vehicles on the highway. Further, we assume all the vehicles to move in the same direction as shown in Fig. 1. We make use of the vehicle's effective transmission range $R_{eff}$ for the analysis of residual lifetime. Under the distance-dependent path loss model, the received power at distance d away from a given transmitter is given by $P_{r}(d)=P_{t}\beta(d_{0}/d)^{\alpha}$, where α is the path loss exponent, $d_{0}$ is a reference distance close to the transmitter, and $P_{t}$ is the transmit power of the node. Here $\beta=(G_{T}G_{R}\lambda^{2})/(2\pi d_{0})^{2}$, where $G_{T}$ and $G_{R}$, respectively, represent the gains of the transmitting and receiving antennas (assumed to be equal to 1) and λ is the wavelength. The received power at distance d, incorporating the effects of path loss, shadowing, and multipath fading, can be written as [26]:
$$\begin{array}{@{}rcl@{}} P_{r}(d) &=& P_{t} + 10{log}_{10}\beta - 10\alpha {log}_{10}(d/d_{0})\\ &&- 10{log}_{10}E\left[\chi^{2}\right] - \psi_{s} \end{array} $$
Fig. 1 Highway scenario considered
where $10\log_{10}E[\chi^{2}]$ is the average power due to multipath fading in dB and $\psi_{s}$ is a log-normally distributed random variable with mean zero and variance $\sigma_{s}^{2}$. The outage probability at distance d, $P_{out}(P_{min},d)$, is defined as the probability that the received power at distance d, $P_{r}(d)$, falls below $P_{min}$. Thus the outage probability is given by
$$P_{out}\left(P_{min},d\right) = P\left(P_{r}(d)\le P_{min}\right) = 1 - Q\left[\frac{P_{min}-\left(P_{t} + 10\log_{10}\beta - 10\alpha \log_{10}(d/d_{0}) - 10\log_{10}E\left[\chi^{2}\right]\right)}{\sigma_{s}}\right]$$
where Q[·] is the Gaussian Q function. The probability that the received power at distance $R_{eff}$ is greater than the minimum required threshold $P_{min}$ is given by:
$$ {\begin{aligned} &P\left(P_{r}\left(R_{eff}\right)\succeq P_{min}\right)\\ &= Q\left[\frac {P_{min}-\left(P_{t} + 10{log}_{10}\beta - 10\alpha {log}_{10}\left(R_{eff}/d_{0}\right) - 10{log}_{10}E\left[\chi^{2}\right]\right)}{\sigma_{s}}\right] \end{aligned}} $$
We define $R_{eff}$ as the distance at which the above probability is equal to 0.99. Assuming the reference distance for the antenna far field, $d_{0}$, to be 1 m, we have:
$$\begin{array}{@{}rcl@{}} R_{eff} = 10^{\frac{-2.33\sigma_{s} + P_{t} - P_{min} + 10{log}_{10}\beta -10{log}_{10}E\left[\chi^{2}\right]}{10\alpha}} \end{array} $$
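For reference, Eq. (4) translates directly into a one-line calculation; the sketch below is only illustrative, and the choice of parameter values (transmit power, receiver threshold, shadowing standard deviation, path loss exponent, and the linear factor β defined above) is left to the user.

```python
import math

def effective_range(p_t_dbm, p_min_dbm, sigma_s, alpha, beta, e_chi2=1.0):
    """Effective transmission range R_eff from Eq. (4).

    p_t_dbm, p_min_dbm : transmit power and receiver threshold (dBm).
    sigma_s            : shadowing standard deviation (dB).
    alpha              : path loss exponent.
    beta               : linear factor G_T*G_R*lambda^2/(2*pi*d_0)^2 from the text.
    e_chi2             : mean multipath fading power E[chi^2] (1.0 for unit-mean fading).
    R_eff is the distance at which P(P_r >= P_min) = 0.99, i.e. the Q-function
    argument equals -2.33.
    """
    exponent = (-2.33 * sigma_s + p_t_dbm - p_min_dbm
                + 10.0 * math.log10(beta)
                - 10.0 * math.log10(e_chi2)) / (10.0 * alpha)
    return 10.0 ** exponent
```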
To describe the mobility of vehicles on the highway, we consider time as partitioned into small equal-length time steps of duration Δt, with each time epoch represented as $t_{k}=\Delta t+t_{k-1}$, $k=1,2,\dots,n$. The vehicles are assumed to move according to a Gauss-Markov (GM) mobility model [27]. In this case, the vehicle speed at any time slot is a function of its previous speed, i.e., the model incorporates the temporal variation of vehicle speed. The degree of temporal dependency is determined by the parameter τ, known as the time correlation factor. By adjusting τ, we can generate various mobility scenarios for the vehicles on the highway. Let $v_{Ak}$ and $v_{Bk}$, respectively, be the speeds of vehicles A and B at the k th instant of time. Then, at the (k+1)th instant, the speeds are computed as:
$$ v_{Ak+1}=\tau v_{Ak} + (1-\tau)\mu_{A} + \sqrt{1-\tau^{2}}~y_{Ak} $$
$$ v_{Bk+1}=\tau v_{Bk} + (1-\tau)\mu_{B} + \sqrt{1-\tau^{2}}~y_{Bk} $$
In (5) and (6), $\mu_{A}$ and $\mu_{B}$ are the mean speeds of vehicles A and B, respectively. For the single-lane case, we consider $\mu_{A}=\mu_{B}=\mu$. Further, $y_{Ak}$ and $y_{Bk}$ are independent, uncorrelated, and stationary Gaussian random variables with zero mean and standard deviation σ, where σ is the standard deviation of $v_{Ak}$ and $v_{Bk}$ [27]. Further, τ represents the time correlation factor of the speed, which lies in the range 0≤τ≤1. In other words, τ indicates how much the speed varies between two consecutive epochs. When τ=0, the time correlation disappears and the vehicle speed becomes a Gaussian random variable. When the Gauss-Markov model has strong memory, i.e., τ=1, the vehicle speed at time slot t is exactly the same as its previous speed, which is equivalent to a fluid-flow model. The degree of randomness in the speed is adjusted by the parameter τ. As τ increases, the current speed is more likely to be influenced by its previous speed. The Gauss-Markov mobility model can thus be used to represent different mobility scenarios in VANETs. Since both vehicles are assumed to be moving in the same direction, the relative speed between the two vehicles at the (k+1)th instant is calculated as follows:
$$\begin{array}{*{20}l} v_{Rk+1}&=v_{Ak+1}-v_{Bk+1} \\ &= \tau\left(v_{Ak} - v_{Bk}\right) + \sqrt{1-\tau^{2}}\left(y_{Ak} - y_{Bk}\right) \end{array} $$
Define $v_{Rk}=v_{Ak}-v_{Bk}$ and $y_{Rk}=y_{Ak}-y_{Bk}$, $k=1,\ldots,n$. Notice that $\{y_{Rk}\}$ are independent Gaussian random variables with zero mean and standard deviation $\sigma_{v_{R}}=\sqrt{2}\sigma$. Thus, the relative speed at the $(k+1)$th instant of time can be expressed as:
$$ v_{Rk+1}=\tau v_{Rk} + \sqrt{1-\tau^{2}}y_{Rk} $$
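For concreteness, the Gauss-Markov speed recursion and the resulting relative speed can be simulated as in the following Python sketch (the paper itself generates mobility with SUMO/MOVE; the values of μ, σ and τ here are placeholders chosen only for illustration).

```python
import numpy as np

def gauss_markov_speeds(v0, mu, sigma, tau, n_steps, rng):
    """Gauss-Markov speed trace: v_{k+1} = tau*v_k + (1-tau)*mu + sqrt(1-tau^2)*y_k."""
    v = np.empty(n_steps + 1)
    v[0] = v0
    y = rng.normal(0.0, sigma, size=n_steps)      # zero-mean Gaussian perturbations
    for k in range(n_steps):
        v[k + 1] = tau * v[k] + (1.0 - tau) * mu + np.sqrt(1.0 - tau**2) * y[k]
    return v

rng = np.random.default_rng(0)
v_A = gauss_markov_speeds(v0=20.0, mu=20.0, sigma=5.0, tau=0.9, n_steps=100, rng=rng)
v_B = gauss_markov_speeds(v0=22.0, mu=20.0, sigma=5.0, tau=0.9, n_steps=100, rng=rng)
v_R = v_A - v_B   # relative speed; its driving noise has standard deviation sqrt(2)*sigma
```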
Residual lifetime prediction using Kalman filter
The Kalman filter is a recursive filter that can be used to estimate the state of a linear dynamic system from a series of noisy measurements [21]. Consider two vehicles A and B moving in the network as shown in Fig. 2. Even though both A and B move according to the Gauss-Markov mobility model, for simplicity we assume that vehicle A is static and that vehicle B moves with the relative speed defined in Eq. (7). Further, we assume that vehicle A is placed at the origin (0,0) of the Cartesian system. Since we consider a straight-line highway scenario for the analysis, the y coordinate is of no importance. Whenever vehicle B is within the coordinates $(-R_{eff},0)$ and $(R_{eff},0)$, we say that the link between vehicles A and B is alive. When vehicle B enters the communication range of vehicle A, the latter receives beacon messages and predicts the distance travelled by vehicle B and the relative speed using a Kalman filter. The predicted location and relative speed are then used to estimate the residual lifetime.
Considering a single link scenario
The Kalman filter recursively predicts the state variables at each time step $t_{k}$: the relative speed and the x coordinate of vehicular node B, i.e., the distance travelled by node B. Thus the process equations used to predict the state of the system at a given time instant $k+1$ are defined as follows:
$$\begin{array}{@{}rcl@{}} v_{Rk+1}=\tau v_{Rk} + \sqrt{1-\tau^{2}}y_{Rk} \\ x_{k+1}=x_{k} + \varDelta t v_{Rk} + y_{xk} \end{array} $$
Here, $x_{k+1}$ and $x_{k}$ are the locations of vehicle B at the $(k+1)$th and $k$th time instants, respectively; $v_{Rk+1}$ and $v_{Rk}$ are the relative speeds between the two vehicles at the $(k+1)$th and $k$th time instants, respectively; and $y_{xk}$ is the process noise, which is assumed to be Gaussian with zero mean and standard deviation $\sigma_{x}$. Thus, the process equation can be written in matrix form as:
$$\begin{array}{@{}rcl@{}} \left[\begin{array}{c} v_{Rk+1}\\ x_{k+1} \end{array}\right] = \left[\begin{array}{cc} \tau & 0 \\ \varDelta t & 1 \end{array}\right] \left[\begin{array}{c} v_{Rk} \\[0.3em] x_{k} \end{array}\right] + \left[\begin{array}{c} \sqrt{1-\tau^{2}}y_{Rk} \\ y_{xk} \end{array}\right] \end{array} $$
Notice that Eq. (9) has the general form of the process equation $X_{k+1}=AX_{k}+w_{k}$, where $X_{k+1}$ is the state vector describing the dynamic behaviour of the system at the $(k+1)$th instant of time, $A$ is the state transition matrix at time $k$, and $w_{k}$ is the system error, which is assumed to be Gaussian with zero mean and covariance matrix $Q$. To start the Kalman filter recursion, the process noise covariance matrix $Q$ must be known; it can be obtained as
$$\begin{array}{*{20}l} Q &= E\left[w_{k}w_{k}^{\ast}\right] \\ &= \left[\begin{array}{cc} (1-\tau)^{2} \sigma_{Rk} & 0 \\ 0 & \sigma_{xk} \end{array}\right] \end{array} $$
where $\ast$ denotes the complex conjugate. In the measurement update stage, we adjust the estimate of the unknown state $X_{k}$ based on the measurement values $Z_{k}$. Here, the position and the speed of the neighboring vehicles are obtained from the beacon messages received at vehicular node A. The measurement equation at the $k$th instant of time is $Z_{k}=HX_{k}+u_{k}$, where $Z_{k}$ is the measurement vector, $H$ is the measurement matrix and $u_{k}$ is the measurement noise, which is also Gaussian with zero mean and covariance matrix $R$. From the measurement equation, the measurement matrix and the covariance matrix $R$ are given by [25]
$$ H= \left[\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right] ; R= \left[\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right] $$
In the Kalman filter, the recursive estimate of $X_{k}$ is based on the measurement values $Z_{k}$ up to the time instant $k$. Let $\hat {X}_{k/k-1}$ be the a priori estimate of $X_{k}$ and $\hat {X}_{k/k}$ be its a posteriori estimate. Further, let $P_{k/k-1}$ and $P_{k/k}$ respectively be the a priori and a posteriori error covariance matrices. The estimation begins with no prior measurements, so the initial state is fixed unconditionally as follows
$$ \hat{X}_{0/-1}=0 $$
For the recursive steps of the Kalman filter to start, we need the a priori error covariance matrix $P_{k/k-1}$. Its initial value, i.e., at $k=0$, is chosen such that the diagonal elements are very large and the off-diagonal elements are zero. Thus, the initial value of $P_{k/k-1}$ at $k=0$ is given by [25]
$$ P_{0/-1}= \left[\begin{array}{cc} 1000 & 0 \\ 0 & 1000 \end{array}\right] $$
Once the initial a priori estimates are obtained, the a posteriori estimate and the a posteriori error covariance matrix can be calculated. The a posteriori estimate $\hat {X}_{k/k}$ is given by [25]:
$$ \hat{X}_{k/k}=\hat{X}_{k/k-1} + K_{k}\left(Z_{k}-H\hat{X}_{k/k-1}\right) $$
where K k is the Kalman gain given by [25]:
$$ K_{k}=P_{k/k-1}H^{T}\left({HP}_{k/k-1}H^{T} + R\right)^{-1} $$
With the Kalman gain $K_{k}$ and the a priori error covariance matrix defined, the a posteriori error covariance matrix can be determined as [25]:
$$ P_{k/k}=\left(I-K_{k}H\right)P_{k/k-1} $$
The one-step-ahead estimate and the one-step-ahead error covariance matrix are then given by [22, 25]:
$$ \hat{X}_{k+1/k}=A\hat{X}_{k/k} $$
$$ P_{k+1/k} = {AP}_{k/k}A^{T}+Q $$
Based on Eqs. (14)–(17), the recursive one-step prediction of the location of vehicle B and of the relative speed can be carried out.
Next, we describe the prediction of the residual lifetime using the information obtained from the Kalman filter. As mentioned earlier, the residual lifetime of a link at a given instant of time is defined as the remaining amount of time during which the two vehicles stay within transmission range of each other. Once the measurement value $Z_{k}$ is obtained from the beacon messages, this information is used to predict the residual lifetime of the link formed by vehicles A and B in the $(k+1)$th time interval. Figure 3 shows the algorithm for calculating the link residual lifetime. Here, vehicle B moves with the relative speed described earlier. Its location and relative speed with respect to A at a particular instant of time $k+1$ are predicted using the Kalman filter available at A. The predicted residual lifetime at the $(k+1)$th time instant is then given by
$$ \widehat{RLT}_{k+1} = \frac{R_{eff}-s~\hat{x}_{k+1}}{\left|\hat{v}_{Rk+1}\right|} $$
Algorithm for calculating link residual lifetime
where $R_{eff}$ is the effective transmission range; $\hat {x}_{k+1}$ and $\hat {v}_{Rk+1}$ are the predicted relative position and relative speed of vehicle B with respect to A obtained from the Kalman filter prediction, and $s$ is given by
$$s=\left\{ \begin{array}{ll} -1 & ;\ v_{Rk+1}> 0 \\ \ \ 1 & ;\ v_{Rk+1}< 0 \end{array}\right.$$
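The complete prediction loop — process model, measurement update, one-step-ahead prediction and residual-lifetime computation — can be summarised in code. The sketch below is an assumed Python implementation written for illustration only; the state ordering, the noise variances and R_eff are placeholders, and the process-noise covariance is taken directly from the process equation (so it may differ in detail from the matrix quoted above).

```python
import numpy as np

class LinkLifetimePredictor:
    """One-step Kalman prediction of the state [v_R, x] and of the link residual lifetime."""

    def __init__(self, tau, dt, sigma_vR, sigma_x, R_eff):
        self.A = np.array([[tau, 0.0],
                           [dt,  1.0]])              # state transition matrix
        self.H = np.eye(2)                           # measurement matrix
        self.R = np.eye(2)                           # measurement noise covariance
        self.Q = np.diag([(1.0 - tau**2) * sigma_vR**2,
                          sigma_x**2])               # process noise covariance (from the process equation)
        self.x_hat = np.zeros(2)                     # initial a priori state estimate
        self.P = np.diag([1000.0, 1000.0])           # initial a priori error covariance
        self.R_eff = R_eff

    def step(self, z):
        """Measurement update with z = [v_R, x], then one-step-ahead prediction."""
        K = self.P @ self.H.T @ np.linalg.inv(self.H @ self.P @ self.H.T + self.R)
        x_post = self.x_hat + K @ (z - self.H @ self.x_hat)   # a posteriori estimate
        P_post = (np.eye(2) - K @ self.H) @ self.P            # a posteriori covariance
        self.x_hat = self.A @ x_post                          # one-step-ahead estimate
        self.P = self.A @ P_post @ self.A.T + self.Q          # one-step-ahead covariance
        return self.x_hat

    def residual_lifetime(self):
        """Predicted residual lifetime from the current one-step-ahead estimate."""
        v_R, x = self.x_hat
        s = -1.0 if v_R > 0 else 1.0
        return (self.R_eff - s * x) / max(abs(v_R), 1e-9)
```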
Next-hop selection based on link residual lifetime
In this subsection, we describe the proposed method for next-hop selection, which relies on the prediction algorithm discussed previously. It is assumed that all vehicles are equipped with GPS, so that they know their own location and speed. Each vehicle generates a beacon every $\Delta t$ time units containing its location coordinates and speed. From the beacon messages, a vehicle obtains the measurement values for each neighbor node. A tagged vehicle, on receiving the beacon message from a node, can perform the one-step-ahead prediction of the location and relative speed of that node, from which the residual lifetime of the corresponding link can be calculated. The tagged vehicle then forms the neighbor list by including all one-hop neighbors, their IDs and the residual lifetimes of the corresponding links. Since the tagged vehicle receives beacons from its one-hop neighbors every $\Delta t$ time units, the entries in the neighbor list are updated periodically every $\Delta t$. The neighbor list is also updated when a new vehicle enters the effective transmission range of the tagged vehicle or when the tagged vehicle fails to receive a beacon from a node in the neighbor list.
Figure 4 shows how forwarding proceeds in the proposed protocol. On receiving a packet, the tagged node checks whether it is a beacon or a data packet. If it is a beacon, it is used to update the neighbor list. When the tagged node receives a data packet, it has to find the next-hop forwarding node and immediately consults the neighbor list. The forwarding node is selected such that the corresponding link has the maximum residual lifetime. If two or more such nodes are available, the node closer to the destination is chosen as the forwarding node. Since the next-hop forwarding node is selected based on residual lifetime, the probability of link breakage is reduced compared to a greedy selection, and hence the proposed scheme can improve communication reliability. In the next section, we present the results of our implementation of the proposed method.
Flowchart: residual lifetime based forwarding
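The forwarding rule described above — choose the neighbor whose link has the largest predicted residual lifetime, breaking ties by proximity to the destination — reduces to a simple selection over the neighbor list. The following Python fragment is only an illustrative sketch; the field names and the one-dimensional distance computation are hypothetical.

```python
def select_next_hop(neighbor_list, dest_pos):
    """Pick the neighbor whose link has the largest predicted residual lifetime.

    neighbor_list: iterable of dicts with keys 'id', 'rlt' (predicted residual
    lifetime) and 'pos' (current position on the highway axis).
    Ties are broken by proximity to dest_pos. Returns None if no neighbor exists.
    """
    if not neighbor_list:
        return None
    best = max(neighbor_list, key=lambda n: (n['rlt'], -abs(dest_pos - n['pos'])))
    return best['id']
```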
In this section, we present the results of our investigation. Initially, we perform a detailed simulation study in Matlab to find the $R_{eff}$ of a vehicular node for a given set of parameters such as transmit power, path loss exponent, etc. We simulate a realistic channel environment with lognormal shadow fading and a Rayleigh distribution for the multipath fading model, and measure $R_{eff}$ for various channel conditions. It is observed that $R_{eff}$ is significantly affected by the path loss exponent, the shadow fading standard deviation and multipath fading. Later, we use these values of $R_{eff}$ for the computation of the residual lifetime of the communication links.
We evaluate the performance of the proposed packet forwarding strategy and compare the results against those of the conventional greedy forwarding approach. We use Network Simulator 2.33 (NS2.33) to conduct the simulation experiments [28]. Our simulation has two components: a mobility simulator and a wireless network simulator, which are connected by trace files that specify the vehicle mobility during simulation. A realistic vehicular mobility scenario is generated using MOVE (mobility model generator for vehicular networks) [29] built on top of SUMO (simulation of urban mobility) [30], which is an open-source micro-traffic simulation package. We simulate a 2 km long highway in which 75 vehicles are kept uniformly distributed. Each vehicle is initially assigned a random speed chosen from a Gaussian distribution with mean μ=20 m/s and standard deviation σ=5 m/s. Then, we analyse the network for each beacon interval (i.e., every Δt s). The individual movement of the vehicles is based on the Gauss-Markov mobility model, where the speed is updated every Δt time units; accordingly, the positions of the vehicles are also updated every Δt s. The values of Δt and the time correlation factor τ are fixed for each simulation. The mobility trace file from MOVE contains information about realistic vehicle movements (such as location, speed and direction), which can be fed into discrete event simulators for network simulation. We record the trace files corresponding to vehicle mobility from SUMO, convert them to NS2-compatible files using MOVE and use them for network simulation in NS 2.33. Each node in the network simulation represents one vehicle of the mobility simulation, moving according to the represented vehicle's movement history in the trace file. In our simulations, IEEE 802.11e EDCA has been assumed as the MAC protocol and the implementation of EDCA in NS-2 from the TKN group at the Technical University of Berlin has been used [31]. Currently, the IEEE 802.11p draft amendments have been proposed as the PHY and MAC protocols for VANETs [32]. The IEEE 802.11p MAC uses the 802.11e EDCA scheme with some modifications, while the physical layer is similar to that of the IEEE 802.11a standard. For the current simulations, even though the IEEE 802.11 EDCA protocol has been used, we have not simulated multiple queues for different access categories (ACs) at each node. Instead, we assume each node to implement one queue only, i.e., each node handles one AC and a single type of traffic, and we assume nodes to be always saturated, i.e., there is always a packet ready for transmission at the MAC layer of the node. The minimum contention window has been set to 15, while its maximum value has been chosen as 255. Further, we use some of the parameters of IEEE 802.11p for the simulations, as given in Table 1 [31, 33].
Table 1 System parameters [31, 33]
Each vehicle transmits its location and speed information to its neighbor vehicles through the beacons, which are transmitted every Δt time duration. On receiving this beacon, a tagged vehicle will calculate the relative position and relative speed with the neighboring nodes, which forms the measurement data for the Kalman filter. In the simulations, every tagged vehicle predicts the residual lifetime of the link formed with every other node that enters the communication range of the tagged vehicle. We consider the data traffic to be constant bit rate (CBR) that is attached to each source vehicle to generate packets of fixed size. We further assume user datagram protocol (UDP) as the transport layer protocol for the simulation studies. A total of 10 source-destination pairs are identified in the simulation which generate packets of size 512 bytes for every 0.25 s (we consider the case of variable packet size as well). Total time duration for the simulation is set as 200 s. The source vehicle will start generating the data packet after the first 10 s of the simulation time and stops generating the data packet at 150 s. For each simulation experiment, the sender/receiver node pairs are randomly selected.
We evaluate the performance of the prediction algorithm in terms of prediction inaccuracy which is defined as follows:
$$ \eta_{k+1} = \frac{\left|{RLT}_{k+1} - \widehat{RLT}_{k+1}\right|}{RLT_{k+1}} $$
where $RLT_{k+1}$ and $\widehat {RLT}_{k+1}$, respectively, are the actual and predicted residual lifetime at time instant $k+1$. We plot the empirical CDF of the prediction inaccuracy of the residual lifetime after sorting the sample values. Figure 5 shows the CDF of the prediction inaccuracy for different values of Δt (i.e., the beacon interval), with the time correlation factor set to τ=0.9. The results show that when Δt is 0.6 s or less, 70% of all predictions have an inaccuracy of less than 20%. When Δt increases, fewer measurement values are available, which reduces the accuracy of the prediction. Figure 6 shows the CDF of the prediction inaccuracy for different values of the time correlation factor τ, for a fixed value of Δt=0.6 s. When τ is 0.8 or 0.9, more than 60% of all predictions have an inaccuracy of less than 20%. The time correlation factor τ determines how much the speed of the nodes varies in each Δt interval. When τ=1, a node does not change its speed between intervals, i.e., it moves at constant speed; when τ=0, the speed of a node in an epoch does not depend on its past speed, i.e., the speed is highly random. Thus, when τ is reduced, the randomness in the speed between epochs increases, resulting in an increase of the prediction inaccuracy.
CDF of prediction inaccuracy of the residual lifetime for different values of Δt for τ=0.9s
CDF of prediction inaccuracy of the residual lifetime for different values of τ for Δt=0.6s
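The per-prediction inaccuracy samples behind Figs. 5–7 and their empirical CDF can be computed as in the short Python sketch below (an illustrative helper, not code from the paper).

```python
import numpy as np

def prediction_inaccuracy(rlt_actual, rlt_predicted):
    """eta = |RLT - RLT_hat| / RLT for one prediction instant."""
    return abs(rlt_actual - rlt_predicted) / rlt_actual

def empirical_cdf(samples):
    """Return the sorted inaccuracy samples and their empirical CDF values."""
    x = np.sort(np.asarray(samples))
    return x, np.arange(1, len(x) + 1) / len(x)
```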
The time correlation factor τ can be used to model the operation of VANETs in three different traffic flow conditions: uncongested (i.e., free-flow traffic with low vehicle density), near capacity (i.e., intermediate vehicle density) and congested (i.e., high vehicle density). Higher values of τ result in negligible temporal variations of the vehicle speed, which represents an uncongested highway scenario where drivers can drive independently of other vehicles. However, in the uncongested highway scenario, the link lifetime deteriorates because of frequent disconnections and the non-availability of neighbor nodes for forwarding packets. When the time correlation factor is small, the vehicle speed exhibits very high temporal variations, which is equivalent to a congested traffic state. We have selected τ=0.9 for some of our simulation experiments so that the performance of the protocol can be studied in a free-flow traffic state. The beacon interval determines the frequency with which measurement values are taken for the prediction. It has been observed that the beacon interval Δt has a strong influence on the accuracy of the residual link lifetime prediction, as shown in Fig. 5. When Δt is reduced, more measurement values are available, resulting in more accurate prediction at the cost of an increase in complexity. For some of our simulation experiments, we have chosen Δt=0.6 s.
Figure 7 shows the prediction inaccuracy of the residual lifetime for different values of the normalized time interval and for different values of the time correlation parameter τ. We define the normalized time as the ratio between the estimated residual time and the total lifetime of the link. During the initial time periods after the link is formed, the predicted residual lifetime is less accurate; at later stages, as more measurement values become available, the correction and prediction process of the Kalman filter reduces the prediction inaccuracy. At the same time, as the time correlation parameter τ decreases from 0.9 to 0.7, the randomness of the speed between epochs increases, resulting in an increase of the prediction inaccuracy.
Prediction inaccuracy of the residual lifetime for different values of normalized time interval for different values of τ
We consider the following performance metrics for the evaluation of the proposed protocol.
Packet delivery ratio (PDR): the ratio of the average number of successfully received data packets at the destination vehicle to the number of packets generated by the source.
Average end-to-end (E2E) delay: the time interval between the sending of a packet at the source and its reception at the destination, averaged over all source-destination pairs. Only data packets that are successfully delivered to their destinations are considered in this calculation.
We investigate the impact of packet size on the performance of the two forwarding strategies in VANETs. The packet delivery ratio is analysed for packet sizes from 512 to 3072 bytes in Fig. 8, and the average end-to-end delay is plotted against packet size in Fig. 9. As the packet size increases, the packet delivery ratio is reduced for both routing protocols. Larger packets may be fragmented, and a fragment can be lost during a link failure, resulting in the failure of the entire packet. Even though the PDR of both protocols degrades as the packet size increases, the PDR reduction is smaller for the proposed residual lifetime based routing than for greedy forwarding, since in the proposed method the one-hop neighbor is selected based on maximum residual lifetime, which reduces link breakage. Similar results can be observed in Fig. 9. When link failures occur, fragmented smaller packets are lost, affecting the delivery of the original packet. Hence, in conventional greedy forwarding, the end-to-end delay increases as the packet size exceeds the fragmentation threshold. In the proposed residual lifetime based routing, since the forwarding nodes are selected based on link residual lifetime, there is a high probability that all fragments of a larger packet will be delivered successfully. Accordingly, the delay performance of residual lifetime based routing is not affected significantly by varying the packet size. However, we find that the end-to-end delay for the link residual lifetime based next-hop selection and forwarding method is higher than that of the conventional greedy forwarding approach. In the residual lifetime based method, the forwarding node is selected such that the newly formed link has maximum residual lifetime; the selected next-hop vehicle therefore need not be the one closest to the destination. This procedure continues until the data packet reaches the destination. The immediate consequence of this approach is that a data packet may need to travel over more hops to reach the destination than with the conventional greedy forwarding procedure, in which the node closest to the destination is always selected for forwarding. Since the proposed scheme requires more hops, the data packets suffer a higher end-to-end delay than with the greedy forwarding protocol.
Average packet delivery ratio versus packet size
Average end to end delay versus packet size
Figures 10 and 11, respectively, show the packet delivery ratio and the average end-to-end delay for different values of Δt, or beacon arrival time (since we assume that exactly one beacon is obtained in each Δt interval). When Δt increases, fewer measurement values are available, which reduces the accuracy of the predicted value and, in turn, the packet delivery ratio. The average end-to-end delay for different values of the beacon interval for both protocols is shown in Fig. 11; here the data packet size is 512 bytes and the time correlation factor is 0.9. As the beacon interval increases, the end-to-end delay increases owing to the increase in prediction inaccuracy mentioned earlier. At the same time, the proposed method leads to a higher delay than greedy forwarding, owing to the higher number of hops to the destination in the former case.
Average packet delivery ratio versus beacon arrival time
Average end to end delay versus beacon arrival time
In Fig. 12, we plot the packet delivery ratio for varying values of the time correlation factor τ. Here, we keep the data packet size at 512 bytes and the beacon interval at 0.6 s. The time correlation factor τ reflects the randomness of the speed between successive Δt intervals. With a high correlation factor, the randomness of the speed is low, i.e., the time variation of the speed is smooth. Conversely, when τ is reduced, the randomness in the speed between epochs increases, resulting in an increase of prediction inaccuracy. When τ is comparatively small, the prediction accuracy is affected, leading to deterioration of the PDR. In this case, the end-to-end delay is also affected adversely, as shown in Fig. 13, since inaccurate prediction of the residual lifetime results in link failures.
Average packet delivery ratio versus time correlation factor τ
Average end to end delay versus time correlation factor τ
In Figs. 14 and 15, we evaluate the performance of the proposed scheme when the path loss exponent α is varied. As α increases, the effective transmission range decreases. This leads to a deterioration of the PDR and an increase of the end-to-end delay: as the effective transmission range decreases, the number of hops increases, resulting in a higher end-to-end delay. However, the proposed method shows significant improvement in terms of PDR and end-to-end delay compared to greedy forwarding.
Average packet delivery ratio versus path loss exponent α
Average end to end delay versus path loss exponent α
In Fig. 16, we plot the packet delivery ratio for varying average vehicle speeds. We fix the time correlation factor τ=0.9 and the data packet size at 512 bytes. When the average speed of the vehicles increases, the network topology changes more frequently, resulting in a reduction of the packet delivery ratio. In residual lifetime based forwarding, since the forwarding node is chosen based on link residual lifetime, the probability of link breakage is reduced, leading to a higher packet delivery ratio. Figure 17 shows the impact of average speed on the end-to-end delay for greedy forwarding and residual lifetime based forwarding. For both forwarding schemes, the end-to-end delay increases with average speed, since the network becomes more dynamic and the chance of link breakage increases. Greedy forwarding selects the node closest to the destination rather than the node having maximum residual lifetime, which results in a lower end-to-end delay for greedy forwarding than for residual lifetime based forwarding.
Average packet delivery ratio versus average vehicle velocity
Average end to end delay versus average vehicle velocity
In Figs. 18 and 19, we compare the performance of the proposed residual lifetime based scheme against that of LSGO [19], AODV-R [11], GPSR-R [17], and MOPR-GPSR [8]. Figure 18 shows the comparison of the packet delivery ratio of the network for all of the above schemes. We select two distinct values for the average vehicle speed, μ=20 and 30 m/s, and set the packet size to 512 bytes, the path loss exponent to α=3 and the vehicle density to 0.038 veh/m. The simulation results show that our proposed scheme has the highest packet delivery ratio. The topology based routing protocol AODV-R gives the lowest packet delivery ratio of all the schemes under consideration, since AODV-R requires the exchange of several route request and route reply messages, which is not suitable for high-mobility applications like VANETs. The LSGO scheme requires the exchange of Hello packets and ACK packets for the computation of the ETX metric, which also increases the overhead. In residual lifetime based forwarding, a one-step-ahead prediction of the position and speed of the vehicles is performed, based on which the residual lifetime is calculated for selecting an appropriate forwarding node; this results in a higher packet delivery ratio. In the case of MOPR-GPSR, the packet delivery ratio is higher than that of AODV-R, since the link lifetime is estimated before selecting the next hop for data forwarding; however, it assumes the speed of the vehicles to be deterministic, which is not realistic for VANETs. Figure 19 shows the comparison of the end-to-end delay for all of the above schemes for the two average speed values, with the packet size set to 512 bytes, the path loss exponent to α=3 and the vehicle density to 0.038 veh/m. The AODV-R protocol incurs the highest delay of the protocols under consideration, owing to the exchange of RREQ and RREP packets. In LSGO, control packets need to be exchanged between the nodes to compute the ETX metric, leading to a significant increase of the end-to-end delay. The packet delivery ratio is analysed for different vehicle densities in Fig. 20 for residual lifetime based forwarding and GPSR-R. Here we keep the mean speed at 20 m/s, the packet size at 512 bytes and the path loss exponent at α=3. In GPSR-R, the forwarding node is selected from among a set of nodes whose reliability factor is greater than a given threshold. Though the protocol works well in a high-density scenario, its performance degrades when the vehicle density is reduced, since the probability of finding a forwarding vehicle with a reliability factor greater than the given threshold decreases. It is observed that the performance of the proposed residual lifetime based forwarding scheme is not affected significantly by the vehicle density. This is because, in the proposed scheme, a forwarding node will always be chosen from among the available links based on the residual lifetime criterion.
Packet delivery ratio comparison between various protocols
End to end delay comparison for various protocols
Packet delivery ratio comparison between residual lifetime based forwarding and GPSR-R against vehicle density
Even though this paper provides results for a single-lane unidirectional scenario alone, the results are valid for multilane unidirectional highways as well. This is because the transmission range of a vehicle is usually much greater than the highway width, and a vehicle can always communicate with any other vehicle located within its range [34]. Consider a scenario where two vehicles A and B, with equal transmission range R, move along two distinct lanes of a multilane highway of width L. If vehicle A intends to transmit a message to vehicle B along the direction of the highway, it must use a slightly larger transmission range R′; that is, for interlane transmissions, the transmission range of vehicle A must be R′, which is slightly larger than R. However, the standard highway lane width is approximately 3.6 m, whereas the vehicle transmission range can be increased to 500 m or so, as suggested by the dedicated short-range communication (DSRC) standard [32]. Therefore, the difference between R and R′ is negligible, and the scenario of two vehicles travelling in the same direction on different lanes of a multilane highway can be considered equivalent to both of them moving on the same lane: they can communicate with each other whenever they are within each other's transmission range. This means that the highway width does not introduce major changes in the calculations. However, vehicles moving on different lanes will have different mean speeds (i.e., the dynamic ranges of their speeds could differ), and this should be considered in the problem formulation. Further, the results can be immediately extended to the bidirectional scenario as well. When the vehicles are assumed to move in the same direction, the relative speed of a pair of vehicles is the difference of their individual speeds, as given by Eq. (6); in the bidirectional scenario, the relative speed of a pair of vehicles would be the sum of their speeds, and appropriate changes have to be made in the measurement equations accordingly.
In this paper, we have proposed a new scheme for the selection of the next-hop link based on knowledge of the link residual lifetime. We assumed the vehicle speed to follow the Gauss-Markov mobility model, and the notion of effective transmission range was used for the analysis and evaluation. We have also described a method for the prediction of the residual lifetime using a Kalman filter. The proposed next-hop selection method ensures that links with maximum residual lifetime are chosen for forwarding the data packets. Through extensive simulation results, we have shown that the proposed selection scheme is superior to the conventional method of greedy forwarding. Even though the end-to-end delay was observed to be slightly higher for the proposed scheme, a significant improvement in communication reliability (expressed as PDR) was obtained compared to greedy forwarding.
G Karagiannis, O Altintas, E Ekici, G Heijenk, B Jarupan, K Lin, T Weil, Vehicular networking: a survey and tutorial on requirements, architectures, challenges, standards and solutions. IEEE Commun. Surv. Tutorials. 13(4), 584–616 (2011).
M Boban, G Misek, OK Tonguz, in IEEE GLOBECOM Workshops. What is the best achievable qos for unicast routing in vanets? (IEEE, New Orleans, 2008), pp. 1–10.
F Li, Y Wang, Routing in vehicular ad hoc networks: a survey. IEEE Veh. Technol. Mag.2(2), 12–22 (2007).
T Taleb, M Ochi, A Jamalipour, N Kato, Y Nemoto, An efficient vehicle-heading based routing protocol for vanet networks. IEEE Wireless Commun. Netw. Conf. WCNC 2006. 4:, 2199–2204 (2006).
S Wan, J Tang, RS Wolff, in IEEE International Conference on Communications ICC'08. Reliable routing for roadside to vehicle communications in rural areas (IEEE, Beijing, 2008), pp. 3017–21.
V Namboodiri, L Gao, Prediction-based routing for vehicular ad hoc networks. IEEE Trans. Veh. Technol.56(4), 2332–2345 (2007).
M Menouar, M Lenardi, F Filali, A movement prediction based routing protocol for vehicle-to-vehicle communications. Communications. 21:, 07–2005 (2005).
H Menouar, M Lenardi, F Filali, in IEEE 66th Vehicular Technology Conference, VTC-2007. Movement prediction-based routing (mopr) concept for position-based routing in vehicular networks (IEEE, Maryland, 2007), pp. 2101–2105.
N Sofra, A Gkelias, KK Leung, Route construction for long lifetime in vanets. IEEE Trans. Veh. Technol.60(7), 3450–3461 (2011).
SA Rao, M-C Pai, M Boussedjra, J Mouzna, in 8th International Conference on ITS Telecommunications - ITST 2008. Gpsr-l: greedy perimeter stateless routing with lifetime for vanets (IEEE, Phuket, 2008), pp. 299–304.
MH Eiza, Q Ni, T Owens, G Min, Investigation of routing reliability of vehicular ad hoc networks. EURASIP J. Wirel. Commun. Netw. 2013(1), 1–15 (2013).
Z Niu, W Yao, Q Ni, Y Song, in Proceedings of the 2007 International Conference on Wireless Communications and Mobile Computing. Dereq: a qos routing algorithm for multimedia communications in vehicular ad hoc networks (ACM, Hawaii, 2007), pp. 393–398.
X Yu, H Guo, W-C Wong, in 7th International Wireless Communications and Mobile Computing Conference (IWCMC). A reliable routing protocol for vanet communications (IEEE, Istanbul, 2011), pp. 1748–1753.
MH Eiza, Q Ni, An evolving graph-based reliable routing scheme for vanets. IEEE Trans. Veh. Technol. 62(4), 1493–1504 (2013).
V Naumov, TR Gross, in 26th IEEE International Conference on Computer Communications INFOCOM 2007. Connectivity-aware routing (car) in vehicular ad-hoc networks (IEEE, Anchorage, 2007), pp. 1919–1927.
A Boukerche, C Rezende, RW Pazzi, in IEEE International Conference on Communications ICC'09. A link-reliability-based approach to providing qos support for vanets (IEEE, Dresden, 2009), pp. 1–5.
S Shelly, A Babu, Link reliability based greedy perimeter stateless routing for vehicular ad hoc networks. Intl. J. Veh. Technol. 2015:, 1–16 (2015).
H Yu, S Ahn, J Yoo, A stable routing protocol for vehicles in urban environments. Intl. J. Distributed Sensor Netw. 2013(759261), 9 (2013).
X Cai, Y He, C Zhao, L Zhu, C Li, Lsgo: link state aware geographic opportunistic routing protocol for vanets. EURASIP J. Wirel. Commun. Netw. 2014(1), 1–10 (2014).
C-F Wang, Y-P Chiou, G-H Liaw, Nexthop selection mechanism for nodes with heterogeneous transmission range in vanets. Comput. Commun. 55:, 22–31 (2015).
J Petit, M Feiri, F Kargl, in IEEE Vehicular Networking Conference (VNC). Spoofed data detection in vanets using dynamic thresholds (IEEE, Amsterdam, 2011), pp. 25–32.
S Yang, T Liu, State estimation for predictive maintenance using kalman filter. Reliab. Eng. Syst. Safety. 66(1), 29–39 (1999).
S Ammoun, F Nashashibi, C Laurgeau, Crossroads risk assessment using gps and inter-vehicle communications. IET Intell. Transp. Syst. 1(2), 95–101 (2007).
C Barrios, H Himberg, Y Motai, A Sadek, in IEEE Intelligent Transportation Systems Conference ITSC'06. Multiple model framework of adaptive extended kalman filtering for predicting vehicle location (IEEE, Toronto, 2006), pp. 1053–1059.
H Feng, C Liu, Y Shu, O Yang, Location prediction of vehicles in vanets using kalman filter. Wirel. Pers. Commun. Springer Publications. 80(2), 543–559 (2015).
A Goldsmith, Wireless communications (Cambridge University Press, UK, 2005).
B Liang, ZJ Haas, in IEEE Proceedings Eighteenth Annual Joint Conference of the IEEE Computer and Communications Societies INFOCOM'99, 3. Predictive distance-based mobility management for pcs networks (IEEE, New York, 1999), pp. 1377–1384.
The network simulator ns-2, http://www.isi.edu/nsnam/ns/ns-documentation.html. Accessed 17 Mar 2015.
FK Karnadi, ZH Mo, K-C Lan, in IEEE Wireless Communications and Networking Conference WCNC 2007. Rapid generation of realistic mobility models for vanet (IEEE, Kowloon, 2007), pp. 2506–2511.
D Krajzewicz, J Erdmann, M Behrisch, L Bieker, Recent development and applications of sumo–simulation of urban mobility. Intl. J. Adv. Syst. Meas. 5(3&4), 128–138 (2012).
S Wiethölter, M Emmelmann, C Hoene, A Wolisz, TKN EDCA model for ns-2, TKN Technical Report TKN-06-003, Technical University Berlin, Telecommunication Networks Group, (Berlin, 2006).
IEEE 802.11p/D5.0, draft amendment to standard for information technology telecommunications and information exchange between systems LAN/MAN specific requirements part 11: WLAN medium access control (MAC) and physical layer (PHY) specifications: wireless access in vehicular environments (WAVE) (2008).
G Bianchi, Performance analysis of the ieee 802.11 distributed coordination function. IEEE J. Selected Areas Commun. 18(3), 535–547 (2000).
S Yousefi, E Altman, R El-Azouzi, M Fathy, Analytical model for connectivity in vehicular ad hoc networks. IEEE Trans. Veh. Technol. 57(6), 3341–3356 (2008).
Department of Electronics and Communication Engineering, National Institute of Technology Calicut, Calicut, 673 601, India
Siddharth Shelly
& A. V. Babu
Correspondence to Siddharth Shelly.
https://doi.org/10.1186/s13638-017-0810-x
Vehicular Ad Hoc networks
Residual lifetime | CommonCrawl |
RSM based expert system development for cutting force prediction during machining of Ti–6Al–4V under minimum quantity lubrication
R. Shetty ORCID: orcid.org/0000-0002-8256-59661,
C. R. Sanjeev Kumar2 &
M. R. Ravindra3
International Journal of System Assurance Engineering and Management (2021)
In recent years, manufacturing processes have become more precise and cost-efficient due to advances in the field of computer technology. Information technology has been integrated with manufacturing practice, reducing the time from the concept of a product to its marketing. The cutting force generated is a central manufacturing issue for industry, as it clearly affects the quality and cost of the final product; hence, optimum cutting parameters are selected using the extensive literature and database knowledge. This paper therefore focuses on a response surface methodology (RSM) based expert system, developed in JAVA with the help of a response surface second-order model, that automatically generates values of the cutting force during machining of Ti–6Al–4V alloy under minimum quantity lubrication (MQL) for different process input parameters. From the RSM analysis it was observed that the calculated F value (20.36) was greater than the F-table value (3.02), and hence the model developed can be used effectively for machining of Ti–6Al–4V alloy. Further, the developed RSM based expert system model can be used successfully to predict the force generated during the cutting process while machining Ti–6Al–4V alloy under MQL conditions.
The automobile and aerospace sectors are well known for operating under extreme mechanical and temperature conditions. It is desirable to understand the mechanical and machinability qualities of a material in order to obtain efficient and long-term performance under these conditions (Tolga 2005). Titanium alloys have been regarded as difficult-to-machine materials because of their high hardness, high chemical reactivity, and low thermal conductivity during machining, despite their exceptional characteristics. When comparing alpha-phase titanium alloy to beta-phase titanium alloy, there is a noticeable increase in cutting force (Donachie 2000). Considerable research has been carried out on machining of titanium alloys. Siekmann (1955) suggested that machining of titanium alloy would be a challenge to the manufacturing industry. The cutting forces generated during machining of titanium alloys are similar to those obtained while machining steels (Nagi et al. 2008). Arrazola et al. (2009) carried out research on machining of titanium alloy and suggested that chatter was the major problem during machining, owing to the low modulus of elasticity of titanium alloy. CaR and Milwain (1968) and Konig et al. (1980) revealed that the high temperature and high stresses induced at the tool cutting edge are the major problems while machining titanium alloys; hence the proper selection of a cutting fluid to minimise this problem is of primary importance. Kaymakci et al. (2012) developed a unified cutting force model for various conventional machining processes. Yang and Liu (1999) and Ahmed et al. (2007) suggested that cryogenic cooling and minimum quantity lubrication (MQL) can be considered as cutting-fluid strategies to increase tool life. Ibrahim et al. (2014) carried out research on surface roughness under different lubrication modes. Wyen and Wegener (2010) studied the effects of cutting speed, feed and cutting edge radius on cutting forces while orthogonally turning a Ti–6Al–4V titanium alloy workpiece. Islam et al. (2013) studied the effects of input parameters such as cutting speed, feed and coolants on the dimensional accuracy of cylindrical titanium alloy parts and suggested that, by proper selection of cutting parameters, the spring-back problem can be nullified. Raviraj et al. (2014) analysed the surface roughness during turning of Ti–6Al–4V under almost dry machining using RSM, and concluded that by selecting proper machining parameters a better performance at lower time and cost could be achieved. Mookherjee and Bhattacharyya (2001) developed an expert system, namely EXTOOL, which, based on the customer's required material and geometry, selects inserts for turning and milling processes automatically. Sapuan et al. (2002) developed an expert tool material selection system for machining of automobile components. Chee et al. (2012) developed an expert system for the selection of carbide cutting tools for computer numerical control (CNC) lathe machines. Chougule et al. (2014) developed an expert system to optimally select carbide cutting tools for turning operations. However, in spite of this extensive research on machining of titanium alloy, researchers still face challenges in titanium alloy part manufacture (Ezugwu and Wang 1997).
The application of MQL during machining of titanium alloy, combined with the response surface methodology (RSM) approach, has proved to be a highly effective technique for identifying the process output parameters (Dixit et al. 2012).
Hence, in this paper an RSM based expert system has been developed using JAVA programming, with the help of an RSM model, to predict the cutting forces generated during machining of Ti–6Al–4V alloy under MQL conditions.
The cutting experiments have been carried out on a PSG A141 lathe (2.2 kW) using a cubic boron nitride (CBN) tool under minimum quantity lubrication (MQL), as shown in Fig. 1.
Experimental set up
In MQL, coconut oil is used as the lubricant during machining. The specification of the cutting tool used to machine Ti–6Al–4V alloy is shown in Table 1. Tables 2 and 3 show the chemical composition and the room-temperature mechanical properties of Ti–6Al–4V. The Ti–6Al–4V alloy specimen and its microstructure are shown in Fig. 2. The cutting forces generated during turning of Ti–6Al–4V alloy are measured with a KISTLER 9257BA dynamometer.
Table 1 Cutting tool specification
Table 2 Chemical composition (wt%) of Ti–6Al–4V
Table 3 Mechanical properties of Ti–6Al–4V
Ti–6Al–4V specimen and its microstructure
Response surface methodology
Cutting force evolution during machining is challenging to analyse in the metal cutting industry. Evaluating cutting force has been the most effective way of determining the machining properties of any metal or alloy. As a result, an expert system based on RSM can anticipate cutting force as a function of cutting conditions in advance. RSM is a tool that uses a combination of mathematical and statistical techniques to develop a model, analyse the problem, and select the optimal cutting conditions (Montgomery 2005).
"In RSM the relationship between a process output variable of interest 'y' and a set of controllable variables {x1, x2.... xn} can be written in the form
$$y = f(x_{1} ,x_{2} , \ldots ,x_{n} ) + \varepsilon$$
where ε represents noise or error observed in the response y. If we denote the expected response be
$$E(y) = f(x_{1} ,x_{2} ,\ldots,x_{n} ) = \hat{y}$$
then the surface represented by
$$\hat{y} = f(x_{1} ,x_{2} ,\ldots,x_{n} )$$
is called the response surface. The first step in RSM is to find a suitable approximation for the true functional relationship between $y$ and the set of independent variables employed. Usually a second-order model is utilized in response surface methodology.
$$\hat{y} = \beta_{o} + \sum\limits_{i = 1}^{k} \beta_{i} x_{i} + \sum\limits_{i = 1}^{k} \beta_{ii} x_{i}^{2} + \sum\limits_{i} \sum\limits_{j} \beta_{ij} x_{i} x_{j} + \varepsilon$$
As proposed by Raviraj et al. (2008), the β coefficients used in the above model can be calculated by means of the least squares method. A second-order model is normally used when the response function is unknown or nonlinear. A face-centred central composite design (FCCD) is used for conducting the experiments, with α = 1, three controllable process factors (p = 3) and the region of interest coded as {−1, 1}. The resulting design of 20 experimental runs — a two-level full factorial with 8 factorial points, augmented with 6 centre points and 6 axial points — is presented in Table 4 and illustrated in Fig. 3.
Table 4 Experimental design matrix
Representation of a 2^3 central composite design
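The least-squares step that produces the β coefficients of the second-order model can be sketched as follows. This Python/NumPy fragment is illustrative only: the design matrix X and response vector y stand for the 20 FCCD runs of Table 4, which are not reproduced here.

```python
import numpy as np

def fit_second_order_model(X, y):
    """Least-squares fit of y = b0 + sum(bi xi) + sum(bii xi^2) + sum(bij xi xj).

    X: (n_runs, 3) array with the factors A, B, C of each run;
    y: (n_runs,) measured cutting forces.
    Returns the 10 coefficients [b0, b1, b2, b3, b11, b22, b33, b12, b13, b23].
    """
    A, B, C = X[:, 0], X[:, 1], X[:, 2]
    D = np.column_stack([np.ones(len(y)), A, B, C,
                         A**2, B**2, C**2, A*B, A*C, B*C])
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    return beta
```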
A second-order model has been established for cutting force using response surface methodology. The selected levels and factors in machining of Ti–6Al–4V under MQL using response surface methodology are shown in Table 5.
Table 5 Levels and factors
Because of the widespread use of Ti–6Al–4V in modern manufacturing industries, expert systems have evolved for the process planning of machine tools, cutting conditions, and operating sequences. Because cutting conditions differ from material to material, appropriate guidance for selecting the cutting conditions must be provided. In the final section of the study, expert-system-based software was developed using JAVA programming to analyse the cutting force generated at various cutting parameters during Ti–6Al–4V machining under MQL. Figure 4 shows the general structure of the expert system.
General structure of the expert system
Cutting force (RSM)
Response surface methodology is a popular technique for identifying the best process output variables with the fewest number of trials. This technique can be used to construct a second-order mathematical model of the metal cutting process.
The relationship between the cutting parameters and cutting force has been expressed as follows:
$$\begin{aligned} Cutting \, force \, \left( N \right) & = 194.63 - 1.258A + 0.958B + 25.26C + 0.0062A^{2} \\ & \quad + 172.45B^{2} - 2.98C^{2} - 0.181AB - 0.043AC - 53.72BC \\ \end{aligned}$$
From the analysis of variance (ANOVA) (Table 6), it was observed that the calculated F value (20.36) was greater than the F-table value (3.02), and the P value was less than 0.05 (95% confidence level); hence the developed model is quite adequate.
Table 6 ANOVA table for response function of the cutting force
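The adequacy check can be reproduced independently: with 9 regression terms and 20 runs, the residual degrees of freedom are 10, and the tabulated critical value F(0.95; 9, 10) is indeed approximately 3.02. A small verification sketch (assuming SciPy is available):

```python
from scipy.stats import f

# 9 regression degrees of freedom and 20 - 9 - 1 = 10 residual degrees of freedom
F_critical = f.ppf(0.95, dfn=9, dfd=10)
F_calculated = 20.36                      # value reported in the ANOVA (Table 6)
print(f"F critical = {F_critical:.2f}")   # approximately 3.02
print("model adequate:", F_calculated > F_critical)
```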
The contour and surface plots for each of the response surfaces at various cutting conditions are plotted using the second-order model (Figs. 5, 6 and 7). The cutting force can be predicted from the response contours and surface plots in any zone of the machining domain. Scanning electron microscope (SEM) photographs of the machined surface at various cutting speeds are shown in Fig. 8. Figure 9 shows the mean cutting force profile at 101 m/min (cutting speed), 0.11 mm/rev (feed) and 0.25 mm (depth of cut).
Cutting force contour plot and surface plot in cutting speed-feed planes at different depth of cut
Cutting force contour plot and surface plot in depth of cut-feed planes at different cutting speed
Cutting force contour plot and surface plot in cutting speed-depth of cut planes at different feed
SEM images of machined surface under different cutting speed a 45 m/min, b 73 m/min, c 101 m/min
Cutting force profile at 101 m/min (cutting speed), 0.11 mm/rev (feed) and 0.25 mm (depth of cut)
Cutting force (RSM based expert system)
The cutting force during Ti–6Al–4V machining has a significant impact on manufacturing costs. Therefore, a JAVA-based expert system model based on RSM has been created; NetBeans was the development environment used in this work. Figure 10 shows the cutting force component representation file.
Cutting force component representation file
The RSM-based expert system first calculates the cutting force generated during Ti–6Al–4V machining under the specified cutting conditions. The method for calculating the cutting force is based on the response surface mathematical model given in Eq. 1. Figure 11 shows the cutting parameters chosen using the RSM-based expert system under MQL lubrication conditions. From the RSM-based expert system it is observed that at 101 m/min (cutting speed), 0.11 mm/rev (feed) and 0.25 mm (depth of cut), the cutting force generated is 134.565 N.
$${\text{Cutting}}{\mkern 1mu} {\text{force}}{\mkern 1mu} \left( N \right) = {\beta _o} + {\beta _1}A + {\beta _2}B + {\beta _3}C + {\beta _4}{A^2} + {\beta _5}{B^2} + {\beta _6}{C^2} + {\beta _7}AB + {\beta _8}AC + {\beta _9}BC + \varepsilon$$
Cutting parameter selection using RSM based expert system
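At its core, the expert system simply evaluates the fitted second-order polynomial at the user-supplied cutting conditions. The original tool was implemented in Java (NetBeans); the Python sketch below mirrors that calculation using the coefficients of the fitted model above, and evaluating it at 101 m/min, 0.11 mm/rev and 0.25 mm reproduces the 134.565 N value reported above.

```python
def cutting_force(A, B, C):
    """Predicted cutting force (N) from the fitted second-order model:
    A = cutting speed (m/min), B = feed (mm/rev), C = depth of cut (mm)."""
    return (194.63 - 1.258*A + 0.958*B + 25.26*C
            + 0.0062*A**2 + 172.45*B**2 - 2.98*C**2
            - 0.181*A*B - 0.043*A*C - 53.72*B*C)

print(f"{cutting_force(101, 0.11, 0.25):.3f} N")   # prints 134.565 N
```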
Confirmatory experiments for cutting force
The cutting force generated during machining of Ti–6Al–4V is predicted and verified using MQL application verification tests. Figure 12 depicts the validation of the experimental results against the RSM-based expert system results, and shows that the experimental and predicted values for all of the tests were fairly similar.
Validation of experimental results and RSM based Expert system results
The following findings can be made from the study of RSM and RSM-based expert systems for machining Ti–6Al–4V under MQL:
The calculated F value (20.36) was found to be more than the F-table value (3.02) from RSM, indicating that the proposed model may be used successfully for machining Ti–6Al–4V.
It was discovered from the response contours and surface plot that increasing cutting speed resulted in a reduction in cutting force.
The developed RSM-based expert system allows the user to analyse the cutting force and select cutting parameters in order to get the desired result in a short amount of time and at a low cost.
During the validation of experimental data using an RSM-based expert system, it was discovered that the experimental and projected values for all of the selected tests of experiments were relatively similar.
To begin with, only a few variables were selected for the RSM based expert system model; further study is required to investigate the effects of other variables. Although values and methods recommended in the literature were adopted, some important factors such as lubricant pressure, nozzle diameter, lubricant impinging angle, nose radius, tool material, rake angle, clearance angle and shank size were treated as constant input factors in the RSM based expert system model. An experiment aimed at determining the influence of these factors would be appropriate. Further, the RSM-based expert system model developed in this paper can be utilized to develop a mechanistic model, which would include other elements such as machine dynamics and tool geometry and could forecast cutting force outputs for a larger variety of cutting situations.
Ahmed MI, Ismail AI, Abakr YAN, Amin AKM (2007) Effectiveness of cryogenic machining with modified tool holder. J Mater Process Technol 185:91–96
Arrazola PJ, Garay A, Iriarte LM, Armendia M, Marya S, le Maître F (2009) Machinability of titanium alloys (Ti6Al4V and Ti555.3). J Mater Process Technol 209:2223–2230
CaR EJ, Milwain D (1968) ISI Special Report 94. London, pp 143–150
Chee F, Ranjit S, Kher V (2012) An expert carbide cutting tools selection system for CNC lathe machine. Int Rev Mech Eng 6(7):1402–1405
Chougule PD, Kumar S, Raval HK (2014) An expert system for selection of carbide cutting tools for turning operations. In: 5th International and 26th All India manufacturing technology. Design and research conference, IIT Guwahati, Assam, India
Dixit US, Sarma DK, Davim JP (2012) Environmentally friendly machining. Springer, Berlin
Donachie JMJ (2000) Titanium—a technical guide, 2nd edn. ASM International, New York, pp 79–84
Ezugwu EO, Wang ZM (1997) Titanium alloys and their machinability—a review. J Mater Process Technol 68:262–272
Ibrahim D, Syed Waqar R, Salman P (2014) Analysis of lubrication strategies for sustainable machining during turning of titanium Ti–6Al–4V alloy. Procedia CIRP 17:766–771
Islam MN, Anggono JM, Pramanik A, Boswell B (2013) Effect of cooling methods on dimensional accuracy and surface finish of a turned titanium part. Int J Adv Manuf Technol 69(9–12):2711–2722
Kaymakci M, Kilic ZM, Altintas Y (2012) Unified cutting force model for turning, boring, drilling and milling operations. Int J Mach Tools Manuf 54(55):34–45
Konig W, Schroder B, Treffert H (1980) High speed grinding of any contour using CBN wheels, Friction and Lubrication in the fabrication of titanium and its alloys. Metal Inter Newslett 5(3):14–21
Montgomery DC (2005) Design and analysis of experiments, 6th edn. Wiley, New York
Mookherjee R, Bhattacharyya B (2001) Development of an expert system for turning and rotating tool selection in a dynamic environment. J Mater Process Technol 113(1–3):306–311
Nagi E, Che HCH, Jaharah AG, Shuaeib FM (2008) High speed milling of Ti–6Al–4V using coated carbide tools. Eur J Sci Res 22(2):153–162
Raviraj S, Tony KJ, Goutam DR, Srikanth SR, Diwakar S (2014) Surface roughness analysis during turning of Ti–6Al–4V under near dry machining using statistical tool. Int J Curr Eng Technol 4(3):2061–2067
Raviraj S, Raghuvir P, Kamath V, Rao SS (2008) Study on surface roughness minimization in turning of DRACs using surface roughness methodology and Taguchi under pressurized stream jet approach. ARPN J Eng Appl Sci 3(1):1819–6608
Sapuan SM, Jacob MSD, Mustapha F, Ismail N (2002) A prototype knowledge-based system for material selection of ceramic matrix composites of automotive engine components. Mater Des 23(8):701–708
Siekmann HJ (1955) How to machine titanium. The Tool Engineer 34:78–82
Tolga B (2005) On the mechanical surface enhancement techniques in aerospace industry—a review of technology. Aircraft engineering and aerospace technology. Int J 77(4):279–292
Wyen CF, Wegener K (2010) Influence of cutting edge radius on cutting forces in machining titanium. CIRP Ann Manuf Technol 59:93–96
Yang X, Liu CR (1999) Machining titanium and its alloys. Mach Sci Technol 3(1):107–139
Open access funding provided by Manipal Academy of Higher Education, Manipal.
Faculty of Mechanical and Manufacturing Engineering, Manipal Academy of Higher Education, Manipal, Karnataka, India
R. Shetty
Faculty of Mechanical Engineering, Government Polytechnic, Belagavi, India
C. R. Sanjeev Kumar
Faculty of Product Design and Manufacturing, Visvesvaraya Technological University, Belagavi, India
M. R. Ravindra
Correspondence to R. Shetty.
The authors declare that they have no conflict of interests.
Research involving human participants and/or animals
The authors declare that this project does not involve Human Participants and/or animals in any capacity.
The authors declare that this research does not involve any surveys or participants in any capacity.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Shetty, R., Kumar, C.R.S. & Ravindra, M.R. RSM based expert system development for cutting force prediction during machining of Ti–6Al–4V under minimum quantity lubrication. Int J Syst Assur Eng Manag (2021). https://doi.org/10.1007/s13198-021-01495-z
Revised: 27 October 2021
Accepted: 03 November 2021
Ti–6Al–4V
Cutting force | CommonCrawl |
Caterloopillar WIP (all speeds < c/4)
Re: Caterloopillar WIP (all speeds < c/4)
by simsim314 » July 4th, 2016, 12:17 pm
muzik wrote: are corderships actually considered engineered?
My answer is definitely yes. Even if they can occur in soup with high probability, they were found by using simpler components and "logical manipulations" of those components, so they are by definition engineered. This is why the Gosper gun is engineered, in my opinion.
In other words: everything is "natural" in the sense that for every pattern there is some very rare soup that precedes it. The definition of naturality should be limited by some reasonable probability (like the probability of appearing in a random soup > 1/10^12 or something).
Another point is that in my view symmetric soups should also be included in the definition of natural objects, just as the symmetrical ships are considered "elementary" although they're symmetrical (and symmetry is definitely a "designed" feature in those ships). Symmetry is a natural feature.
by dvgrn » July 4th, 2016, 2:00 pm
simsim314 wrote:
Seems like a reasonable argument. Coming at it from a slightly different direction: Corderships are built out of natural puffers -- not the bare switch engine per se, but block-laying and glider-producing switch engines show up in C1 soups very regularly. So while they may be engineered, they're much less engineered than other types of macro-spaceships.
Is it worth having different terminology for minimally-engineered (Cordership), moderately-engineered (Caterpillar), and maximally-engineered (Demonoid) objects? The current distinction between "engineered" and "engineerable" seems a little too subtle, when it's really trying to distinguish "non-adjustable" vs. "adjustable".
What I've been using for the last several years is "engineered" for stuff like the various Corderships, high-period rakes, and so on... but then "self-supporting" for the Caterpillar, Centipede, HBKs, and waterbear, and "self-constructing" for any object that carries around a construction recipe for itself. Caterpillars and HBKs and so on don't contain their own construction recipe -- the positions of the pi climbers or the half-bakeries or whatever aren't encoded anywhere. They do the work of closing the cycle so that the pi climbers can keep climbing indefinitely, or the half-bakeries will continue to get activated by the right gliders... but the spaceship as a whole is definitely not self-constructing.
But you could convert a Demonoid glider stream into, say, a stream of loafers, and feed it into a completely different-looking loafer-based replicator unit, and it would still construct a Demonoid replicator unit. The construction data is important, but the particular encoding is irrelevant.
simsim314 wrote: Another point is that in my view symmetric soups should also be included in the definition natural objects, like the symmetrical ships are considered "elementary" although they're symmetrical (and symmetry is definitely a "designed" feature in those ships). Symmetry is natural feature.
People have been pretty consistent about calling symmetric-soup objects like the pufferfish "almost natural". Really big areas of perfect symmetry happen very-nearly-never in random soup, so there is something highly artificial about starting with a symmetric soup. But yes, the products of symmetric soup are still elementary and not engineered.
by simsim314 » July 4th, 2016, 2:40 pm
I think in a way the glider stream is preparing the "rail" for itself in the Demonoid, similarly to the way the pi climbers prepare their own rail in the Caterpillar. It's just that the rail in the case of the pi climbers is simple blinkers, while in the case of the Demonoid it is a more complex conduit. No spaceship "contains" its own tape, so you can always say the tape is "gliding" over some sort of "self constructible" rails. The only question is how complex the rails are. In the case of more complex ships the rails can be very complex and the tape can be pretty simple (gliders, or an array of SLs), or the rail can be simple and the tape can contain all the logic (as in the case of the Centipede/Caterpillar/waterbear).
What's unique in the case of a complex rail and a simple tape is that, due to this feature, the speed is usually adjustable (Geminoids and caterloopillars have adjustable speeds), while when you place the logic into the tape and the rail stays simple, you usually have very limited options for adjusting the speed.
EDIT: Another way to look at it is the option to add some artificial artifact (like an SL array in the shape of Santa Claus). All the designed ships can add such a feature.
About constructability I agree - it should be considered a continuous property. I would say the probability of appearing in a random soup is a good estimate of the degree of entropy some component has; we should obviously use a logarithmic scale for this one.
Well, why do people use "almost natural" for objects that appeared in a symmetrical soup, but not "almost elementary" for ships found in a symmetrical search? If the term elementary is meant to distinguish designed from non-designed ships, symmetry is definitely an artificially designed property introduced into the search.
More than that, are ships found using zfind designed or elementary (and this goes for any new search utility)? Is a ship found by iterating over different parameters during the search, like the new 3c/7, designed? Because it's anything but elementary: Tim Coe did a lot of tweaks and played with a huge number of parameters to find that one, so it definitely looks more "designed" to me than "elementary".
I think we currently have two distinguishable technologies to "find" ships. One technology helps us find very artificial and small ships, the other uses natural reactions to generate "big" ships. This is just the current state; we could get into some "grey" area where medium-size, somewhat artificial ships are built using simple components (as we have in the case of Corderships).
I don't like this terminology - or the assumption that designed ships are somehow worse. They're big, yes, and because they're big they're less usable and less elegant, yes - but I do think all the current ships are "designed" and artificial in some way, except maybe the *WSSes, the glider and the SE puffers.
To make my point more clear: what is more probable, to find a Gemini in a random soup, or the spaghetti monster? If someone could show the spaghetti monster is more probable than the Gemini, I would agree to the distinction between designed and "elementary".
by muzik » July 4th, 2016, 2:42 pm
I take it that corderships are now officially classed as engineered spaceships then?
Also, "elementary" spaceships are spaceships which cannot be broken down into any smaller spaceships, that actually take place on certain ship-driving reactions (like blinker syntheses as in the caterpillar). So the gliders in the c/7 diagonal lobster do not make the lobster an engineered spaceship. The spaghetti monster therefore is not engineered but elementary, despite its size.
Engineered spaceships don't have adjustable slopes or speeds; they are engineered by their own design.
Engineerable spaceships can have either controllable speeds or directions; their components are engineerable.
muzik wrote: I take it that corderships are now officially classed as engineered spaceships then?
Well, I'm not actually official enough to say for sure. I wouldn't mind throwing them into that category, but then again I wouldn't mind if they got their own category, along with pufferfish spaceships, and puffer-based spaceships in other rules. If they were just "puffer-based spaceships", would that make everyone happy?
Sticking a few puffers together to suppress each other's exhaust, and maybe adding an extra tagalong or so, is certainly a kind of engineering. But it's much simpler engineering, because the component pieces are capable of traveling at that speed without any external support.
muzik wrote: Engineered spaceships don't have adjustable slopes or speeds; they are engineered by their own design.
Like Sphenocorona, I do understand the distinction -- but the two terms are so similar, and the underlying concepts are actually different enough, that I have to think about it every time.
When it comes right down to it, I'm unlikely to ever use "engineerable" as a category, just because "adjustable" seems so much clearer. Subcategories seem easier with "adjustable", too: HBKs and Demonoids (and eventually Orthogonoids) are adjustable-speed but fixed-slope. With Geminis the speed and direction are independently adjustable.
Maybe you could say something like "fixed-vector" for spaceships where both speed and direction are fixed. But that sounds kind of contrived. Maybe there's a better new term out there to be found... but usually this kind of classification causes fewer terminology wars, when all the terms have already come into common use.
[I've been trying to get the "self-supporting" vs. "self-constructing" distinction into common use for several years now, but I'm not sure that I've actually succeeded yet...!]
Elementary, engineered and adjustable it is then, if everyone else agrees!
I think that puffer-based should come under "engineered" though
by A for awesome » July 4th, 2016, 3:56 pm
simsim314 wrote: To make my point more clear: what is more probable, to find Gemini in random soup, or the spaghetti monster? If someone could show spaghetti monster is more probable than Gemini - I would agree to the distinction between designed and "elementary".
I would say the spaghetti monster, unless there's some precursor to an entire Gemini that fits inside a 27x137 bounding box.
I've done some analysis on the size of caterloopillar, and it turns out I'm very close to optimum.
For starters I should say the smallest caterloopillar is of the size of ~54K and speed c/92.
1. First of all the critique of the long front - the front costs only ~2K (less than 4%).
2. The recipe consists of ~20 *WSS recipes (or 40 in both directions), so we get ~ 1.35K per *WSS recipe.
3. Out of the 1.35K:
~500 is only the reading heads (front and back).
~650 is the recipe "unfolded", SLs for the recipe + slow salvo in progress.
~200 is the skip operation "vanil blocks".
Obviously these are pretty rough estimates on average. But looking at where we can cut here:
1. The front ~4%
2. The movement recipes ~ 5%-10% (they're close to optimal)
3. Balancing SKIP operations and movement recipes ~ 5% as well.
All the estimates are somewhat optimistic, as some work has already been done to optimize all these aspects (except the front). So all in all I can cut by at most 20% even in the best scenario, and only after a lot of hard work. So I still don't get to the 25K range as I hoped. But I think it's cool the caterloopillar got so close to the optimum. 54K is definitely well below any other constructible ship except the tiniest of them all - the demonoid. I can probably optimize it to beat the 10-hd demonoid, but not the 0-hd.
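A quick sanity check of the arithmetic in the estimates above (just echoing the rough per-component numbers quoted in this post; nothing here is measured from the actual pattern):

```python
# Back-of-the-envelope check of the size budget quoted above (all numbers are the
# rough averages from this post, not measurements of the actual pattern).
recipes = 40              # ~20 *WSS recipes, doubled for both directions
reading_heads = 500       # front and back reading heads
unfolded_recipe = 650     # SLs encoding the recipe + slow salvo in progress
skip_blocks = 200         # blocks used by the SKIP operation

per_recipe = reading_heads + unfolded_recipe + skip_blocks
total = recipes * per_recipe
print(per_recipe, total)          # 1350 cells/recipe, 54000 cells overall

# Optimistic cuts quoted above: front ~4%, movement recipes ~5-10%, SKIP balancing ~5%
best_case = total * (1 - 0.20)    # at most ~20% savings
print(best_case)                  # 43200 -- still far from the hoped-for ~25K
```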
A for awesome wrote: I would say the spaghetti monster, unless there's some precursor to an entire Gemini that fits inside a 27x137 bounding box.
I see no reason to limit the bounding box.
In other words I can formulate my question as: how big should a universe filled with random stuff be, so that the probability of a spaghetti monster appearing in it would be > 0.5? Using this formulation we start from an extremely large universe, where every known pattern could fit.
Can you prove that the spaghetti monster's universe is smaller than the Gemini's in such a formulation?
Challenge: find a predecessor for a Geminoid-type spaceship that fits in a smaller bounding box than the spaghetti monster itself.
@muzik - I've added a few speeds and min. counts to your page. I must admit this is very tedious and boring for me... I would say that everyone should build some caterloopillar using the script, say they were the first to build a ship of this particular speed, and post it to your page.
I actually started a page in my user page to list purely caterloopillars, but gave up on it.
Yes, I'm kinda lazy. I just want to know where the 2c/x and such would fit in, adding them is the easy part
by BlinkerSpawn » July 4th, 2016, 4:56 pm
Would it be possible for someone to convert the script for generating caterloopillars into Lua?
Well, yes, it should be possible with some effort, but for what purpose? Python works on most OSes...
by Scorbie » July 4th, 2016, 7:49 pm
simsim314 wrote: Well yes it should be possible with some effort, but for what purpose? Python is working for most OS...
One reason I can think of is speed. (Not for this case.) There are situations where a for loop in Python is the bottleneck and you can't really do anything about it... Then I guess you would have to rewrite the whole thing in Lua. I would rather write it in Lua in the first place.
Of course I don't think there is any need to port an already working Python script to Lua.
by muzik » July 5th, 2016, 10:11 am
I have listed all of the caterloopillar speeds posted on this thread, although knowing my luck there's probably a bunch more of them somewhere that I don't know about. (Also, come to think of it, are there any more Gemini/Demonoid ships that have been posted on this forum?)
Anyway, can the c/8 or 31c/240 be shrunk any more simply by running the script again? I can't test this myself due to not having access to a computer.
by Apple Bottom » July 5th, 2016, 2:31 pm
simsim314 wrote: I think copperhead had appeared in some soup (although it might be symmetric, yet it's still soup friendly).
Aye, it's appeared twice in D2_+2 so far.
@muzik - great (on adding all the existing pillars)! I'll try to fill the missing squares, and add some additional pillars later on...
'Twas a tedious job (English translation: prisons of the world, please use this as your form of execution).
Back when I ran drc's "smaller c/8", I got what didn't look like a caterloopillar at all...
Also, why can't speeds like c/14, c/18 and c/26 exist?
by drc » July 5th, 2016, 3:52 pm
muzik wrote: Back when I ran drc's "smaller c/8", I got what didn't look like a caterloopillar at all...
The script was giving errors at that time
drc wrote:
so you didn't actually get a new c/8... sad
by gameoflifemaniac » June 7th, 2017, 11:03 am
What's up with the c/92 caterloopillar? You didn't upload it.
by calcyman » July 18th, 2018, 9:03 am
calcyman wrote:
dvgrn wrote: Of course, the Caterloopillar is an order of magnitude ahead on the bounding box already -- there's no way the Demonoid is going to catch up on that metric.
What about an orthogonal Demonoid which uses a tape of MWSSes? I think that has a very good chance of beating the smallest Caterloopillar, not least because H-to-MWSS and MWSS-to-G technology is so cheap.
H-to-MWSS technology is either quick to recover and expensive to construct, or Spartan and slow to recover and inexpensive to construct -- right? We don't have a small Spartan H-to-MWSS with a sub-100-tick recovery time. Presumably we'd also need a constructible G-to-H if we're using an MWSS-to-G, and even the cheaper 135-degree MWSS-to-G isn't quite Spartan and might cause expensive troubles with construction order.
It seems you were correct after all: according to the LifeWiki, the Orthogonoid is 707-by-868856 and the c/8 Caterloopillar is 734-by-514927. I suspect your boustrophedonic design could bring the Orthogonoid down below the Caterloopillar, however.
(What is the contest, exactly? To find the smallest-bounding-box spaceship that's slower than c/12?)
by gameoflifemaniac » July 18th, 2018, 10:14 am
I was waiting for an answer for over a year, and nobody noticed my post. When nobody was answering my question, I deleted my post, and still nobody answered! THAT ISN'T FAIR!!!
Also, why didn't anyone answer?
I wrote: What's up with the c/92 caterloopillar? You didn't upload it.
Last edited by gameoflifemaniac on July 18th, 2018, 11:24 am, edited 1 time in total.
by dvgrn » July 18th, 2018, 10:35 am
calcyman wrote: It seems you were correct after all: according to the LifeWiki, the Orthogonoid is 707-by-868856 and the c/8 Caterloopillar is 734-by-514927. I suspect your boustrophedonic design could bring the Orthogonoid down below the Caterloopillar, however.
Not sure why it would -- the length of the MWSS recipe would stay about the same, and every time you fold it over and reduce the width, you'll increase the height proportionally. Not much change to the bounding box, except that an additional width has to be added, a fair fraction of the height of the square Orthogonoid (because the elbows stick out so much farther).
There are various optimizations that could be applied to the Orthogonoid that would probably make it competitive with the Caterloopillar. But then there are optimizations that could be applied to the Caterloopillar that would make it a good bit smaller, too! No idea who would win that race in the end.
calcyman wrote: (What is the contest, exactly? To find the smallest-bounding-box spaceship that's slower than c/12?)
Don't know for sure, it's not really my contest... these days I'm trying to avoid getting fixated on metrics and new-record-smallest things as much as possible. However, there's some documentary evidence that the current target to beat is the waterbear. 13295×28010 is 372392950, with about 198K cells, where the c/8 Caterloopillar is 734×514927 = 377956418 with 233K cells.
gameoflifemaniac wrote: Also, why didn't you (that 'you' is in the plural form) answer?
I can't answer in the plural. But just speaking for myself, I didn't answer because I couldn't find anything useful anywhere about a c/92 Caterloopillar.
EDIT: The two mentions of the c/92 in this thread are here and here.
The second of these links gives the current state of the art for smallest Caterloopillars.
However, this isn't entirely relevant to calcyman's contest question -- simsim314 seems to have been talking about "smallest" in terms of population, where the 0hd Demonoid wins hands down (27250 cells, almost an order of magnitude smaller) rather than bounding box (where it definitely doesn't -- 55010×54964 = 3023569640, almost an order of magnitude larger.)
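For convenience, the bounding-box and population figures being compared in this exchange, as quoted above (a small illustrative calculation, not new measurements):

```python
# Bounding-box areas and approximate populations quoted in this thread.
ships = {
    "waterbear":          (13295, 28010, 198_000),
    "c/8 Caterloopillar": (734, 514927, 233_000),
    "0hd Demonoid":       (55010, 54964, 27_250),
}
for name, (w, h, pop) in ships.items():
    print(f"{name:20s} bbox = {w*h:>13,d}   population ~ {pop:>8,d}")
# The Demonoid wins on population by nearly an order of magnitude,
# but loses on bounding box by nearly an order of magnitude.
```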
Can farmers in Bolivia safely irrigate non-edible crops with treated wastewater?
Chapter info
This publication is available in Open Access under the Attribution-ShareAlike 3.0 IGO (CC-BY-SA 3.0 IGO) license (http://creativecommons.org/licenses/by-sa/3.0/igo). By using the content of this publication, the users accept to be bound by the terms of use of the UNESCO Open Access Repository (http://www.unesco.org/openaccess/terms-use-ccbysa-en).
The designations employed and the presentation of material throughout this publication do not imply the expression of any opinion whatsoever on the part of UNESCO concerning the legal status of any country, territory, city or area or of its authorities, or concerning the delimitation of its frontiers or boundaries. The ideas and opinions expressed in this publication are those of the authors; they are not necessarily those of UNESCO and do not commit the Organization.
Symonds, E., Verbyla, M.E. and Mihelcic, J.M. 2019. Can farmers in Bolivia safely irrigate non-edible crops with treated wastewater? In: J.B. Rose and B. Jiménez-Cisneros, (eds) Global Water Pathogen Project. http://www.waterpathogens.org (S. Petterson and G. Medema (eds) Part 5 Case Studies) http://www.waterpathogens.org/book/can-farmers-in-Bolivia-safely-irrigate-non-edible-crops-with-treated-wastewater Michigan State University, E. Lansing, MI, UNESCO.
https://doi.org/10.14321/waterpathogens.71
Acknowledgements: K.R.L. Young, Project Design editor; Website Design: Agroknow (http://www.agroknow.com)
Last published: March 11, 2019
Erin Symonds (University of South Florida) , Matthew Verbyla (San Diego State University) , James Mihelcic (University of South Florida)
Exemplifies how to safely farm with treated wastewater in a rural setting
Necessary precautions can be identified from limited virus data, which can differ from faecal indicator bacteria
Safe wastewater reuse in agriculture addresses SDG targets 2.3, 2.4, 3.2, 3.3, and 6.3
Distinct approaches needed for adults and children to ensure safe reuse on the farm
A multibarrier approach is necessary to ensure safe wastewater reuse in the fields
Risk Management Objective
This case study aimed to determine whether farmers in low-income countries can safely reuse treated wastewater from an existing waste stabilization pond (WSP) system for irrigation, or whether additional control measures or treatment processes are required to reduce exposure to viral pathogens and meet a specified health target.
Location and Setting
The study took place in a town, located in a culturally diverse region of the Caranavi province of Bolivia near the Alto Beni River, an important inland fishery system in the Amazon River basin. The local economy is driven by citrus fruit production for domestic sale and cacao beans for factories that manufacture chocolate. Many farmers chew coca leaves while working, resulting in frequent hand-to-mouth contact. Reclaimed wastewater can provide a local source of irrigation water that contains valuable nutrients and may be less carbon intensive than other sources. Like many areas of the world, most population growth will occur in small cities, such as the one studied here, that are closely linked to agricultural zones.
Figure 1. Community-operated waste stabilization pond (WSP) system with (a) a facultative pond and (b-c) two maturation ponds in series (left); case study site location (right; photo by M.E. Verbyla).
Description of the System
The wastewater treatment system serves 780 people and consists of flush toilets, a gravity-driven conveyance network, and three WSP in series. While it provided high removal of faecal coliforms, limited virus removal was measured. Treated effluent is discharged to a nearby surface water, but some farmers would like to use the effluent for irrigation. This sanitation system is managed and operated by a volunteer community water committee.
Outcome and Recommendations
Minor additional control measures are needed to reduce the risk of virus exposure during farming and meet the specified health target for this study. It is better to use at least two of these measures in combination to create "multiple barriers" for pathogen control. If one barrier fails, others will still provide some protection.
Additional Treatment. The hydraulic performance of the ponds could be improved by installing baffles and/or regular desludging of accumulated solids in the first pond. Also, the treated effluent can be stored in shallow, on-farm ponds prior to irrigation, where it will receive additional treatment.
Restrictive Measures. Children should not be allowed to play in irrigated fields.
Personal Protective Equipment. Farmers should use gloves to handle tools and equipment, and remove them to handle food or coca leaves. They should also have access to hand washing facilities.
Read more?
scroll down for a more detailed case study description
Wastewater use in agriculture facilitates water and nutrient recovery, offsetting energy needs for food production and reducing the degradation of aquatic ecosystems (Hamilton et al., 2007). Currently, 20 million hectares of land are irrigated with wastewater (Raschid-Sally and Jayakody, 2008). The extent of wastewater irrigation will likely increase in the future because of water scarcity, population growth, and the adoption of the Sustainable Development Goals (SDGs), which include a target to increase water recycling and safe reuse globally. Reclaiming treated wastewater is also beneficial because it applies nitrogen and phosphorus to land instead of surface water, which reduces the eutrophication potential of the sanitation system. Reusing treated wastewater may also lower the carbon footprint and embodied energy of sanitation systems, especially systems with high material and energy inputs (Cornejo et al., 2013). The World Health Organization (WHO) recommends a systematic risk-based approach to assess wastewater reuse via Sanitation Safety Planning (SSP; WHO, 2016), with a maximum health burden of 10⁻⁶ disability-adjusted life years (DALYs) lost per person per year. Since it has been suggested that 10⁻⁴ DALYs may be a more appropriate initial target for regions with high diarrheal disease burdens (Mara et al., 2010), the target of 10⁻⁴ DALYs was selected to evaluate the risk of reusing water from a three-pond waste stabilization pond (WSP) system in Bolivia.
While there are many ways to reduce pathogen concentrations in wastewater prior to reuse, WSPs are extremely prevalent worldwide and facilitate natural disinfection and removal processes without requiring high energy or material inputs (Kumar and Asolekar, 2016; Maynard et al., 1999; Oakley, 2005; Verbyla and Mihelcic, 2015; Verbyla et al., 2013a). Pathogen reduction is primarily achieved in tertiary maturation or polishing ponds. Based on Verbyla et al. (2013a), this system provided an average 3.4-log10 removal of faecal coliforms. Since enteric viruses are often more resistant to treatment, enteric virus reference pathogens were directly measured. This case study highlights a quantitative microbial risk assessment (QMRA) of agricultural irrigation with treated effluent from a community-managed wastewater treatment system in Bolivia consisting of three WSPs in series (Figure 1; Symonds et al., 2014). The QMRA determines the additional log10 enteric virus reductions required to safely reuse the treated effluent and considers the health risks to adult farmers as well as children at play in irrigation fields. The setting is like many areas of the world, where most population growth will occur in small cities closely linked to agricultural zones (Verbyla et al., 2013a).
Problem Formulation
The purpose of the QMRA was to determine the additional log10 enteric virus reductions necessary to ensure the safe reuse of effluent from a three-pond community-managed wastewater treatment system for irrigation. The work is based on a previously published study (Symonds et al., 2014).
The scope was defined by:
Hazard identification: Enteric viruses, represented by norovirus (measured by RT-qPCR) for adult farmers and rotavirus (measured by RT-qPCR) for children <5 years.
Exposure pathways: two exposure pathways were considered:
Accidental ingestion of irrigation water by farmers working and
Accidental ingestion of soil by children playing in fields irrigated with treated effluent.
Health outcome: DALYs lost per person per year was selected as the health outcome, with a target of 10⁻⁴ DALYs per person, since Bolivia has a high diarrheal disease burden (Mara et al., 2010).
Source: The concentrations of norovirus and rotavirus were determined by molecular methods (RT-qPCR) from composite samples of treated wastewater collected over a 24-hour period in June 2012. Since this study used molecular methods to determine rotavirus concentrations and culture-based methods were used to develop the dose-response relationship (Ward et al. 1986), it was necessary to harmonize rotavirus concentrations using a ratio 1:1000 to 1:1900 gene copies to focus-forming units (Mok and Hamilton, 2014). Such an adjustment was not needed for norovirus due to congruent methods used in this study and in the dose-response studies.
Barriers/controls: The risk of enteric virus illness from wastewater reuse for a three-pond wastewater treatment system was executed with respect to farmers and children playing in fields irrigated with treated effluent (Symonds et al., 2014).
Exposure: The assumed amount of virus ingested during exposure to treated wastewater effluent was determined based upon the assumed volume of effluent ingested and the concentration of enteric viruses in the effluent. It was assumed that adult farmers and children playing in fields ingested the equivalent of 1.0 mL of wastewater effluent per day (Ottoson and Stenström, 2003), during 75 days/year for farmers and 150 days/year for children (Mara et al., 2007; Seidu et al., 2008). Log-normal distributions of virus concentrations were assumed, based on those measured in the treated wastewater effluents (Table 1).
Table 1. The distributions of norovirus and rotavirus concentrations (copies/mL) used in the QMRA assessment to determine if the effluent from the wastewater treatment pond system could be safely reused for restricted agricultural irrigation.
| Population at risk | Reference enteric virus | Assumed distribution of reference enteric virus concentrations (copies/mL) in treated effluent |
| --- | --- | --- |
| Adult farmers | Norovirus | lognormal (mean = 363, sd = 1.86) |
| Children <5 years at play | Rotavirus | lognormal (mean = 1622, sd = 3.55) |
Health Effects Assessment
Dose-response models were used to determine the additional virus removal necessary to safely reuse of the wastewater treatment system effluent with respect to farmers and children in fields. The hypergeometric model (Teunis et al., 2008) with a Pfaff transformation (Barker et al., 2013; Mok et al., 2014) was used for norovirus, where the probability of infection was calculated as:
$$p_{inf_{NV}} = 1 - {}_2F_1\left(\beta_{NV},\, \frac{c_{NV}V(1-a_{NV})}{a_{NV}},\, \alpha_{NV}-\beta_{NV};\, a_{NV}\right)\left(\frac{1}{1-a_{NV}}\right)^{-\frac{c_{NV}V(1-a_{NV})}{a_{NV}}}$$ (1)
where $\alpha_{NV}=0.04$, $\beta_{NV}=0.055$, $a_{NV}=0.9997$ (Teunis et al., 2008), and where $c_{NV}$ is the concentration of norovirus and $V$ is the volume of water ingested. Not everyone who becomes infected develops an illness (there is the possibility that some become 'silent carriers'); therefore, a conditional probability of norovirus illness (the proportion of infected individuals developing symptoms of an illness) was calculated using:
$$p_{ill\mid inf_{NV}}=1-(1-\eta_{NV}c_{NV}V)^{-r_{NV}}$$ where $\eta_{NV}=0.00255$; $r_{NV}=0.086$ (2)
Rotavirus probability of infection was calculated using the exact beta-Poisson model (Teunis and Havelaar, 2000):
$$p_{inf_{RV}}=1-{}_{1}F_{1}(\alpha_{RV},\alpha_{RV}+\beta_{RV},-c_{RV}V)$$ where $\alpha_{RV}=0.167$; $\beta_{RV}=0.191$ (3)
and the conditional probability of rotavirus illness given infection was determined assuming a simple ratio of 0.9 (Havelaar and Melse, 2003):
$$p_{ill\mid inf_{RV} }=p_{inf_{RV}}\cdot0.9$$ (4)
The probability of contracting an illness that would cause some type of disease burden was calculated as:
$$p_{ill}=p_{inf}\cdot p_{ill\mid inf}$$ (5)
To normalize the probability of illness per year for the two groups exposed for a different number of days per year, the following equation was used, where n is the number of days per year of exposure:
$$p_{ill_{annual}}=1-(1-p_{ill_{daily}})^n$$ (6)
Risk Characterization
Annual risks were expressed in terms of DALYs, assuming uniformly-distributed ranges for the average disease burden per case of illness from norovirus (3.71 × 10⁻⁴ to 6.23 × 10⁻³ DALYs per case; Mok et al., 2014) and rotavirus (1.50 × 10⁻² to 2.60 × 10⁻² DALYs per case; Havelaar and Melse, 2003; Prüss-Üstün et al., 2008), using the following equation:
$$DB=p_{ill_{annual}}\cdot B$$ (7)
For norovirus, it was assumed that a fraction of the population may have genetic resistance to infection; for this study, this fraction was assumed to be uniformly distributed from 0 to 0.2 (Mok et al., 2014). Risk of rotavirus infection was only calculated for children under the age of five and took into account the effect of vaccination programs by multiplying the disease burden (DB) by the fraction of children with susceptibility (due to the fact that they have not received the vaccine or the vaccine may have not been effective), calculated as:
$$S_f=1-e\cdot p_v$$ (8)
where $p_v = 0.78$ (78% of children vaccinated; WHO, 2014) and $e$ is the vaccine efficacy ~ uniform(0.54, 0.79) (Patel et al., 2013).
QMRA was used to determine the additional log10 enteric virus reductions necessary to ensure a disease burden of <10⁻⁴ DALYs per person per year, which has been considered a more appropriate target for regions with high diarrheal disease burdens (Mara et al., 2010), for both adult farmers working and for children playing in wastewater-irrigated fields. To incorporate uncertainty and variability, a Monte Carlo simulation with 10,000 iterations was implemented, using the distributional assumptions described above. Then, descriptive statistics (mean, median, percentiles) of the estimated log10 reduction values (LRV) required to achieve the health target of 10⁻⁴ DALYs were determined. The effluent required additional enteric virus reductions to ensure safe reuse for restricted irrigation (Figure 2). The median additional treatment required if children are exposed was 4.0-log10 units; therefore, it is not recommended that children have access to fields where effluent is used for irrigation. The median additional treatment required to protect adult farmers was approximately 0.9-log10 unit.
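The Monte Carlo calculation described above can be sketched as follows for the children/rotavirus pathway (Eqs. 3–8). This is a minimal illustration, not the original analysis code: it treats the lognormal "mean" and "sd" of Table 1 as a geometric mean and geometric standard deviation (an assumption), and the norovirus/farmer pathway would follow the same pattern using the hypergeometric model of Eq. 1.

```python
# Minimal Monte Carlo sketch of the children/rotavirus pathway (Eqs. 3-8).
# Assumption: Table 1's lognormal "mean" and "sd" are read as geometric mean and
# geometric standard deviation of the concentration in gene copies/mL.
import numpy as np
from scipy.special import hyp1f1

rng = np.random.default_rng(1)
N = 10_000                                     # Monte Carlo iterations

c_gc = rng.lognormal(np.log(1622), np.log(3.55), N)   # rotavirus gene copies/mL
gc_per_ffu = rng.uniform(1000, 1900, N)        # harmonization ratio (Mok and Hamilton, 2014)
c_ffu = c_gc / gc_per_ffu                      # focus-forming units/mL

V, days = 1.0, 150                             # mL ingested per day; exposure days/year
alpha, beta = 0.167, 0.191                     # exact beta-Poisson parameters (Eq. 3)
B = rng.uniform(1.50e-2, 2.60e-2, N)           # DALYs per case of rotavirus illness
S_f = 1 - rng.uniform(0.54, 0.79, N) * 0.78    # susceptible fraction of children (Eq. 8)

def annual_dalys(conc_ffu):
    p_inf = 1 - hyp1f1(alpha, alpha + beta, -conc_ffu * V)   # Eq. 3
    p_ill = 0.9 * p_inf                                      # Eqs. 4-5
    p_ill_annual = 1 - (1 - p_ill) ** days                   # Eq. 6
    return p_ill_annual * B * S_f                            # Eqs. 7-8

# Smallest additional log10 reduction whose median burden meets 1e-4 DALYs/person/year
for lrv in np.arange(0.0, 6.05, 0.1):
    if np.median(annual_dalys(c_ffu / 10**lrv)) <= 1e-4:
        print(f"additional reduction needed (median): ~{lrv:.1f} log10")
        break
```

Re-running the same sketch with a doubled ingestion volume (V = 2.0 mL) gives the kind of sensitivity check discussed below.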
Figure 2. The additional virus concentration log10 reduction required for safe wastewater reuse in agriculture with respect to farmers (norovirus infection) and children at play in fields (rotavirus infection; adapted from Symonds et al., 2014).
It is important to consider the local context of exposure when completing QMRAs, especially when locally-derived exposure data is not available. This can be done using a sensitivity analysis. For this study, it was assumed that farmers accidentally ingest 1.0 mL of irrigation water per day while working. However, this assumption came from a publication written within the context of irrigation practices in Sweden (Ottoson and Stenström, 2003). In Bolivia, some farmers chew coca leaves while working, a practice that implies frequent hand-to-mouth contact and creates the possibility that greater volumes of irrigation water and/or soil are accidentally ingested. A sensitivity analysis revealed that if the amount of water accidentally ingested were doubled (increased from 1.0 mL to 2.0 mL per day), an additional virus reduction of 0.3-log10 units (in addition to the log10 reductions presented in Figure 2) would be required.
The reuse of the effluent from both wastewater treatment systems for restricted agricultural irrigation exceeded the health benchmark of 10⁻⁴ DALYs for adult farmers and children. Based upon a conservative interpretation of the QMRA (the upper 97.5% confidence interval), an additional 5.2-log10 rotavirus reduction would be required to ensure the safety of young children playing in irrigation fields. To ensure the safety of farmers irrigating with treated effluent, up to 1.6-log10 of additional norovirus reduction would be required.
Therefore, the following interventions are recommended:
Children should not be allowed to play in fields irrigated with treated effluent from the wastewater treatment system described in this case study
To protect farmers, additional treatment of the WSP effluent is recommended, as well as the use of personal protective equipment and practices.
The additional required reduction of norovirus risk can be achieved by adding an additional treatment unit to the end of the system or at the point of reuse. For example, an additional pond or constructed wetland cell with 50 cm depth and a hydraulic retention time of 10 days should achieve approximately 1-log10 reduction (Silverman et al., 2014; Silverman et al., 2015). Alternatively, a sand filter followed by a UV disinfection lamp could be used (see Chapters on Disinfection). The installation of baffles on the two maturation ponds may prevent short-circuiting, which has been shown to reduce pathogen removal efficiency in WSP systems (Verbyla et al., 2013b). Exposure can be reduced by implementing practices that limit farmers' exposure to the water while working on the farm (e.g., personal protective equipment; subsurface irrigation; mechanization of farming activities; WHO, 2016). Although the enteric virus removal observed for this WSP system was slightly lower than those previously reported for similar-sized systems, the virus removal performance observed herein may have been impacted by the lack of maintenance (Symonds et al., 2014). Increased investments in the maintenance of the system (e.g., removal of floating algae on the pond surfaces; Verbyla and Mihelcic, 2015) as well as increased stakeholder participation (Verbyla et al., 2015) may help to provide more efficient pathogen removal and ensure safe wastewater reuse.
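As a rough illustration of what the suggested extra pond implies in practice, its size follows directly from the flow, the retention time and the depth. The per-capita wastewater flow below is an assumed value for illustration only; it is not reported in this case study.

```python
# Illustrative sizing of the suggested extra pond: 0.5 m deep, 10-day HRT.
# The per-capita wastewater flow is an assumed value, not reported in this study.
population = 780
flow_lpcd = 100                     # L/person/day (assumption)
depth_m, hrt_days = 0.5, 10

q_m3d = population * flow_lpcd / 1000      # ~78 m^3/day
volume_m3 = q_m3d * hrt_days               # V = Q x HRT   -> ~780 m^3
area_m2 = volume_m3 / depth_m              # A = V / depth -> ~1560 m^2
print(q_m3d, volume_m3, area_m2)
```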
Evaluation of the QMRA
The QMRA executed in this study provided a framework to assess the additional log10 enteric virus reductions necessary to ensure the safe reuse of WSP effluent with respect to adult farmers and children at play in the irrigation fields. The additional virus reductions required could be easily achieved through a combination of additional tertiary treatment of effluents and the use of personal protective equipment by farmers. Although this model used actual virus measurements from the field together with dose-response curves for the health assessment, the results are limited by the model assumptions. While uncertainty and variability of virus concentrations were considered by using distributional assumptions for virus concentrations, the distribution parameters were estimated based on virus concentrations measured from only two composite sampling events in June 2012. If an outbreak of any of the reference pathogens were to occur, the amount of virus removal necessary for safe reuse may be much greater (Barker et al., 2013). For many community-managed wastewater treatment systems, the regular monitoring of pathogens (and even faecal indicators) may not be practical due to the training required for community operators, the cost of the service and the lack of laboratories capable of providing it. Thus, there is a need for alternative indicators of microbial risk. In the present case study, we had the opportunity to quantify the concentrations of norovirus and rotavirus in the wastewater. A semi-quantitative approach, such as the one presented in the WHO's SSP guidelines (WHO, 2016), can be guided by quantitative information about pathogen concentrations and exposure (such as the information presented in Part Three of GWPP about pathogen concentrations in raw sewage, feces, and sludge, as well as the information presented in the GWPP Sanitation Technologies chapters about the removal of pathogens in sanitation systems using different technologies). The approach presented in this case study, together with reference values from GWPP, can be used to assess risk for wastewater reuse systems like the one presented here, in data-scarce and/or resource-limited regions. Professional judgement and knowledge of local practices (e.g. coca leaf chewing by farmers in Bolivia) are essential to appropriately assess risk and make subsequent management decisions in different contexts.
This case study was derived from a research project, the results of which are published in the following journal article:
Symonds, E.M., Verbyla, M.E., Lukasik, J.O., Kafle, R.C., Breitbart, M., Mihelcic, J.R. (2014). A case study of enteric virus removal and insights into the associated risk of water reuse for two wastewater treatment pond systems in Bolivia. Water Research. 65: 257-270.
E.M.S. was supported by US NSF grant OCE-1566562. Any opinions, findings, and conclusions or recommendations expressed here are those of the authors and do not necessarily reflect the views of the US NSF.
The full paper can be found here: Symonds et al. 2014
Alves, M., Ribeiro, A.M., Neto, C., Ferreira, E., Benoliel, M.J., Antunes, F. et al. (2006). Distribution of Cryptosporidium species and subtypes in water samples in Portugal: a preliminary study. Journal of Eukaryotic Microbiology. 2006/12/16 ed.53 Suppl 1, pp. S24-5.
Barker, S.F., Packer, M., Scales, P.J., Gray, S., Snape, I. and Hamilton, A.J. (2013). Pathogen reduction requirements for direct potable reuse in Antarctica: evaluating human health risks in small communities. Science of the Total Environment. 461, Elsevier. pp. 723–733.
Cornejo, P.K., Zhang, Q. and Mihelcic, J.R. (2013). Quantifying benefits of resource recovery from sanitation provision in a developing world setting. Journal of Environmental Management. 131, pp. 7–15.
Hamilton, A.J., Stagnitti, F., Xiong, X., Kreidl, S.L., Benke, K.K. and Maher, P. (2007). Wastewater irrigation: the state of play. Vadose zone journal. 6, Soil Science Society. pp. 823–840.
Havelaar, A.H. and Melse, J.M. (2003). Quantifying public health risk in the WHO guidelines for drinking-water quality: a burden of disease approach. Rijksinstituut voor Volksgezondheid en Milieu RIVM.
Kumar, D. and Asolekar, S.R. (2016). Significance of natural treatment systems to enhance reuse of treated effluent: A critical assessment. Ecological Engineering. 94, Elsevier. pp. 225–237.
Mara, D.D., Sleigh, P.A., Blumenthal, U.J. and Carr, R.M. (2007). Health risks in wastewater irrigation: comparing estimates from quantitative microbial risk analyses and epidemiological studies. Journal of water and health. 5, IWA Publishing. pp. 39–50.
Mara, D., Hamilton, A., Sleigh, A. and Karavarsamis, N. (2010). Discussion paper: options for updating the 2006 WHO guidelines. World Health Organization. Geneva.
Maynard, H.E., Ouki, S.K. and Williams, S.C. (1999). Tertiary lagoons: a review of removal mecnisms and performance. Water Research. 33, Elsevier. pp. 1–13.
Mok, H.F., Barker, S.F. and Hamilton, A.J. (2014). A probabilistic quantitative microbial risk assessment model of norovirus disease burden from wastewater irrigation of vegetables in Shepparton, Australia. Water research. 54, Elsevier. pp. 347–362.
Mok, H.F. and Hamilton, A.J. (2014). Exposure factors for wastewater-irrigated Asian vegetables and a probabilistic rotavirus disease burden model for their consumption. Risk Analysis. 34, Wiley Online Library. pp. 602–613.
Oakley, S.M. (2005). The Need for Wastewater Treatment in Latin America: A Case Study of the Use of Wastewater. Small Flows Quarterly. 6, pp. 36–51.
Ottoson, J. and Stenström, T.A. (2003). Faecal contamination of greywater and associated microbial risks. Water research. 37, Elsevier. pp. 645–655.
Patel, M.M., Patzi, M., Pastor, D., Nina, A., Roca, Y., Alvarez, L. et al. (2013). Effectiveness of monovalent rotavirus vaccine in Bolivia: case-control study. Bmj. 346, pp. f3726.
Raschid-Sally, L. and Jayakody, P. (2008). Drivers and characteristics of wastewater agriculture in developing countries: Results from a global assessment. 127, IWMI.
Seidu, R., Heistad, A., Amoah, P., Drechsel, P., Jenssen, P.D. and Stenström, T.A. (2008). Quantification of the health risk associated with wastewater reuse in Accra, Ghana: a contribution toward local guidelines. Journal of water and health. 6, IWA Publishing. pp. 461–471.
Silverman, A.I., Akrong, M.O., Drechsel, P. and Nelson, K.L. (2014). On-farm treatment of wastewater used for vegetable irrigation: bacteria and virus removal in small ponds in Accra, Ghana. Journal of Water Reuse and Desalination. 4, IWA Publishing. pp. 276–286.
Silverman, A.I., Nguyen, M.T., Schilling, I.E., Wenk, J. and Nelson, K.L. (2015). Sunlight inactivation of viruses in open-water unit process treatment wetlands: modeling endogenous and exogenous inactivation rates. Environmental science & technology. 49, ACS Publications. pp. 2757–2766.
Symonds, E.M., Verbyla, M.E., Lukasik, J.O., Kafle, R.C., Breitbart, M. and Mihelcic, J.R. (2014). A case study of enteric virus removal and insights into the associated risk of water reuse for two wastewater treatment pond systems in Bolivia. Water Research. 65, Elsevier. pp. 257–270.
Teunis, P.F.M. and Havelaar, A.H. (2000). The beta Poisson dose-response model is not a single-hit model. Risk Analysis. 20, Wiley Online Library. pp. 513–520.
Teunis, P.F.M., Moe, C.L., Liu, P., Miller, S.E., Lindesmith, L., Baric, R.S. et al. (2008). Norwalk virus: how infectious is it?. Journal of medical virology. 80, Wiley Online Library. pp. 1468–1476.
Verbyla, M.E., Cairns, M.R., Gonzalez, P.A., Whiteford, L.M. and Mihelcic, J.R. (2015). Emerging challenges for pathogen control and resource recovery in natural wastewater treatment systems. Wiley Interdisciplinary Reviews: Water. 2, Wiley Online Library. pp. 701–714.
Verbyla, M.E. and Mihelcic, J.R. (2015). A review of virus removal in wastewater treatment pond systems. Water research. 71, Elsevier. pp. 107–124.
Verbyla, M.E., Oakley, S.M., Lizima, L.A., Zhang, J., Iriarte, M., Tejada-Martinez, A.E. et al. (2013). Taenia eggs in a stabilization pond system with poor hydraulics: concern for human cysticercosis?. Water Science and Technology. 68, IWA Publishing. pp. 2698–2703.
Verbyla, M.E., Oakley, S.M. and Mihelcic, J.R. (2013). Wastewater infrastructure for small cities in an urbanizing world: integrating protection of human health and the environment with resource recovery and food security. Environmental science & technology. 47, ACS Publications. pp. 3598–3605.
Ward, R.L., Bernstein, D.I. and Young, E.C. (1986). Human rotavirus studies in volunteers: Determination of infectious dose and serological response to infection. Journal of Infectious Diseases. 154, pp. 871.
WHO (2016). Immunization, vaccines, and biologicals. Data, statistics, and graphics. World Health Organization.
Bayes theorem with errors
billket
I have a situation in which I want to calculate, for a given $y$ (which I measure experimentally), the probability distribution of $x$ i.e. $p(x|y)$ (actually what I need is the value of x for which this is maximized). Using Bayes theorem I have $p(x|y) = \frac{p(y|x)p(x)}{p(y)}$. I know both $p(x)$ and $p(y|x)$ which are both Gaussians. I don't know $p(y)$, but given that $y$ is constant for a given measurement, and all that I need is the maximum value over x, that shouldn't matter. However, in practice, I have an error associated to $y$, call it $dy$ ($y$ is Gaussian distributed). How can I account for this uncertainty on $y$ when trying to find the best $x$ (and the uncertainty on $x$)? Thank you!
katxt
Perhaps Deming Regression is what you are looking for.
I'm not really entirely clear on what is going on. But you said y is constant but x has some error associated with it. Is there a particularly reason you aren't regressing x on y instead of y on x? Seems like you're trying to best fit x from y so... why not just build the model in that direction in the first place?
Also, can you fix your LAtEx, please. My dumbass can't fully appreciate the question in its current state.
Just ignore the dollar signs. It's pretty much readable without them and they wouldn't actually modify how anything really looks except for the one case where they use frac but that should be pretty understandable too.
billket said:
I have a situation in which I want to calculate, for a given y (which I measure experimentally), the probability distribution of x i.e. p(x|y) (actually what I need is the value of x for which this is maximized). Using Bayes theorem I have p(x|y) = {p(y|x)p(x)} /{p(y)}. I know both p(x) and p(y|x) which are both Gaussians. I don't know p(y), but given that y is constant for a given measurement, and all that I need is the maximum value over x, that shouldn't matter. However, in practice, I have an error associated to y, call it dy (y is Gaussian distributed). How can I account for this uncertainty on y when trying to find the best x (and the uncertainty on x)? Thank you!
Seems like you're trying to best fit x from y so... why not just build the model in that direction in the first place?
Is this a calibration problem where you measure y from known x's, then use the equation in reverse to predict x from measured y's?
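A minimal numerical sketch of the marginalization being discussed in this thread (all parameter values are made up, and it assumes p(y|x) is Gaussian in y with mean x, which the original post does not actually specify):

```python
# Toy illustration: MAP of x given a measured y_obs with Gaussian uncertainty dy.
# Assumes p(y|x) = N(y; x, sigma_m^2) and p(x) = N(mu0, sigma0^2); numbers are made up.
import numpy as np

mu0, sigma0 = 2.0, 1.5          # prior p(x)
sigma_m = 0.8                   # width of p(y|x) (assumption)
y_obs, dy = 3.2, 0.4            # measured y and its uncertainty

# Analytic: marginalizing over the true y inflates the likelihood variance to
# sigma_m^2 + dy^2, so the posterior stays Gaussian and the MAP is the
# precision-weighted mean of the prior and the (inflated) likelihood.
var_like = sigma_m**2 + dy**2
var_post = 1 / (1 / sigma0**2 + 1 / var_like)
x_map = var_post * (mu0 / sigma0**2 + y_obs / var_like)
print(f"analytic: x_MAP = {x_map:.3f}, posterior sd = {var_post**0.5:.3f}")

# Numerical check: p(x|y_obs) is proportional to p(x) * integral of p(y_obs|y) p(y|x) dy
gauss = lambda z, m, s: np.exp(-0.5 * ((z - m) / s) ** 2) / s
x = np.linspace(-3, 8, 2201)
y = np.linspace(-5, 12, 1501)[:, None]
like = (gauss(y_obs, y, dy) * gauss(y, x, sigma_m)).sum(axis=0)
post = gauss(x, mu0, sigma0) * like
print(f"numeric:  x_MAP ~ {x[np.argmax(post)]:.3f}")
```

The practical upshot under these assumptions: the uncertainty on y simply adds in quadrature to the likelihood width, so the best x and its uncertainty come out of the usual Gaussian combination.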
Of Curves and their enclosed Areas:
A Tale of Two Inequalities
Today we will talk about the story of a simple problem that had far-reaching consequences in modern mathematics. It's the reason drops of water are spherical. It's what mathematicians call the "isoperimetric" problem (iso = same, perimeter = length of the boundary).
The problem is this: you are given a rope of a certain length. How would you enclose the maximum area possible with your rope? This is the 2 dimensional isoperimetric problem. Naturally, you can pose the same problem in 3 or more dimensions too. Say you have a surface of a fixed surface area, what's the maximum volume you can enclose?
The problem dates back to around 800 BC and, over the millennia, has inspired a lot of developments in various fields of modern analysis and geometry.
The ancient Greeks knew that on the $2d$ plane, the circle would enclose the maximum area. Zenodorus of ancient Greece had a solution, but unbeknownst to him, his solution wasn't complete. Nearly 1900 years would pass without any significant progress on the problem. In 1744, the isoperimetric problem inspired the great Euler (read Oiler) to create the foundations of the field of calculus of variations together with Lagrange.
Jacob Steiner (1796-1863) put forward many beautiful geometric arguments to solve the isoperimetric problem. Soon after, there was a complete proof of the isoperimetric problem for the 2 dimensional case.
In the late 1800's, Hermann Brunn and Hermann Minkowski discovered a nice little geometric inequality from which the isoperimetric problem could be solved rather elegantly, for all dimensions. This inequality, which would come to be known as the Brunn-Minkowski inequality (BM), and the related ideas gave birth to an entire new field in convex geometry. It has many applications throughout geometry and analysis and over the years has made its presence felt in many disparate areas of mathematics.
Our focus for this article would be to take a peek behind the curtain and investigate the inner workings of the BM inequality in the $2d$ plane and see how it follows from simple and natural geometric ideas. We will also see how the isoperimetric problem and the related isoperimetric inequality follow easily from the BM inequality.
Dido and her oxhide rope
Our story begins with the legend of Dido circa 814 BC. Imagine you're the daughter of the king of Tyre (in present day Lebanon). Imagine after your dad's death, your brother who has seized the throne killed your husband in hopes of getting his hidden treasures. What do you do? Well, you throw your husband's treasures in the sea, gather a group of followers and run away as far as possible. At least, according to Virgil (famous ancient Roman poet), that's exactly what Dido did.
In the course of her journey, Dido found herself on the coast of North Africa and asked the local king Iarbas for a bit of help. She asked for a tiny bit of land for temporary refuge for her and her followers. She didn't ask for much, only as much as could be enclosed by an oxhide. The request seemed very modest to the king and he readily agreed.
Dido had a plan. She cut the oxhide into really thin strips, joined them together and decided to enclose the largest tract of land she could with them. And in doing so, she was faced with the so-called isoperimetric problem.
Q. Of all closed curves of length L, which is the one that encloses the maximum area?
As the above figure shows, there are many ways one can enclose an area with a given perimeter $L$. In fact, infinitely many. But according to legend, Dido did manage to find the answer. She figured that the maximum area is enclosed when the curve takes the shape of a circle. And with that land she founded the city of Carthage, which would go on to become a powerful and prosperous kingdom. We do not know exactly how Dido solved the problem; it's likely she derived the answer from intuition.
If $L$ happens to be the perimeter of a circle with radius $r$, then we can write $L= 2\pi r $. That would give us the radius to be $r = L/2\pi$ and the area would be $$A_c =\pi r^2 = \frac{L^2}{4\pi} \tag{1}$$ And if we were to believe Dido and accept that yes, the circle is the figure with perimeter $L$ that maximizes the enclosed area, we can then say that for any other figure with perimeter $L$, if its area is $A$, then $A$ must be less than or equal to $A_c$. That is, we can say, for any figure that has area $A$ and perimeter $L$, $$A \le \frac{L^2}{4\pi} $$ Often written as $$ L^2 - 4\pi A \ge 0 \tag{2}$$ this inequality is called the isoperimetric inequality. And equality holds in the above inequality, that is, $L^2 - 4\pi A = 0$ when the curve takes the form of a circle of perimeter $L$.
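For a quick sanity check, take a square of perimeter $L$: its side is $L/4$ and its area is $L^2/16 \approx 0.063\,L^2$, while the circle of the same perimeter encloses $L^2/4\pi \approx 0.080\,L^2$. So the square does satisfy the inequality, with room to spare.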
Next we will take a look into the ideas behind Brunn and Minkowski's neat little inequality that, for one, proves the isoperimetric inequality not only for the $2d$ case but for all higher dimensions.
We will mainly focus on the $2d$ case where it's easy to see the inner workings of the inequality. But the same arguments can be used without much work for the general case too. We'll also see a sketch of the proof of the Brunn-Minkowski inequality and finally use it to prove the isoperimetric inequality.
Adding two shapes together
Any point on the $2d$ plane, say $u = (u_1,u_2)$ can be thought to represent a vector, namely, the vector joining the origin and the point $u$. That is, it has the direction of the line starting from the origin, $O$ and passing through the point $u$ and has a length equal to the length of the segment $Ou$.
This is usually called a position vector and we can identify the point with the vector and vice-versa.
Say we have another point $v = (v_1, v_2)$. If we define the sum of the two points as the sum of each co-ordinate, that is, $u + v = (u_1 +v_1, u_2 + v_2)$, then $u+v$ is the same as the vector sum of the vectors represented by $u$ and $v$.
Now, take a disc, say $B$, on the $2d$ plane. By disc, we mean the circle along with the region it encloses. Can we define anything meaningful by $B+u$, where $u$ is a vector, say $(u_1,u_2)$?
A natural way to define it would be to consider each point $b = (b_1,b_2)$ in $B$ and adding to it the vector $u = (u_1,u_2)$, that is, we are doing $b+u = (b_1+u_1, b_2+u_2)$. That amounts to simply moving the disc $B$ along the direction of $u$ and by a distance equal to the length of the position vector $u$ like so
This makes sense. And similarly, it can be defined for any shape on the plane. Adding the vector $u$ to a shape simply means moving, or translating, the shape by the vector $u$.
Adding a square and a disc
Let's take a square, $K$ and a disc, $B$ centered at the origin. Does it make sense to define the vector sum of $K$ and $B$?
Well, yes. Just like in the previous step, where we translated a disc by a specific vector, here $K+B$ is the set of all points of the form $k+b$ where $k$ is any point in $K$ and $b$ is any point in $B$. Basically, we take all possible pairs of points, one in $K$ and the other in $B$, and take their vector sums.
That sounds a bit complicated. What would the shape $K+B$ look like?
The points on the boundary of the square will be translated/moved to the boundary of the new shape.
The new shape would be as if we put discs of the same radius as $B$ at each point of the boundary of the square. The resulting shape would be a square with rounded corners as shown above.
But what if the square and the circle were not placed with their centers at the origin but placed somewhere else like this.
The center of the square is at the position vector $u$ and the center of the circle is at the position vector $v$. What would $K+B$ look like now? What would the shape be?
Here we can do a nifty little trick:
$1.$ Translate $K$ by $-u$ which takes the center of $K$ to the origin. The resulting shape is $K-u$.
$2.$ Translate $B$ so that its center now lies at the origin too, that is, translate it by $-v$. The resulting shape is $B-v$.
And now, we are back to the case where we have a square and a disc centered at the origin and their Minkowski sum would be the familiar square with rounded corners.
So, $K-u + B -v $ is a square with rounded corners. But $K-u + B-v$ is the same as $K+B -(u+v)$. Then to get $K+B$, we simply need to translate the rounded square with center at the origin to the position vector $u+v$. Notationally, it's the trivial cancellation $K+B -(u+v) + u+v = K+B$
And since the translation doesn't change the shape, $K+B$ is still the same rounded square, just located at $u+v$. Moving a piece of paper lying on your table wouldn't change its size. Similarly, translating a shape on the plane won't change its shape or area.
This vector addition of shapes is commonly known in geometry as the Minkowski sum, after the German mathematician Hermann Minkowski of course. The square and the disc are nothing special; the same Minkowski sum can be formed for any two sets of points. In set notation, one would write this as $$K + C = \{\, k+c \ |\ k \text{ is in } K,\ c \text{ is in } C \,\} $$
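For instance, if $K$ is the horizontal segment from $(0,0)$ to $(a,0)$ and $C$ is the vertical segment from $(0,0)$ to $(0,b)$, then $K+C$ is the filled rectangle $[0,a]\times[0,b]$: every point $(x,y)$ with $0\le x\le a$ and $0\le y\le b$ arises as $(x,0)+(0,y)$.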
For a shape (set of points) $K$ on the $2$ dimensional plane, we will denote the area of $K$ by $V(K)$. The area is nothing but the $2$ dimensional volume, and that's what the notation $V$ secretly refers to. At this point, it might be good to recall the very intuitive properties of area.
If a shape $C$ is contained in the shape $K$, then $V(K) \ge V(C)$. The total area of two non-overlapping shapes is the sum of their individual areas; notationally, if $K$ and $C$ are the shapes under consideration and they are non-overlapping, $V(K \cup C) = V(K) + V(C)$. It also follows that if the two shapes have some overlap, then $V(K \cup C) < V(K) + V(C)$. The figure above might help see this.
As we noted earlier, if we simply translate a set by a certain vector, the shape remains unchanged and so too does the area. We can write this notationally, for any vector $w$, like so: $$V(K+w) = V(K) $$
What we will do next is try to see how the area of the Minkowski sum of two shapes relates to the areas of the individual shapes. And since the area doesn't change if we simply translate shapes, whenever we want to visualize Minkowski sums of shapes that have a center of symmetry, like a square, circle or rectangle, we will translate them so that their center lies at the origin $(0,0)$.
Let's go back to our example of figuring out $K+B_r$ where $K$ is a square of side-length $l$ and $B_r$ is a disc of radius $r$. We found that $K+ B_r$ is a larger square with rounded corners. But what about the area $V(K+B_r)$?
From the above figure, we can see that
\begin{align*} &V(K+B_r) \\ &= V(K) + V(B_r) + 4lr\\ &\ge V(K) + V(B_r) + 2\sqrt{\pi}lr\\ &= V(K) + V(B_r) + 2\sqrt{V(K)\cdot V(B_r)}\\ &= \left( V(K)^{\frac{1}{2}} + V(B_r)^{\frac{1}{2}} \right)^2 \end{align*}
And here, we notice a curious thing, that $$V(K+B_r)^{\frac{1}{2}} \ge V(K)^{\frac{1}{2}} + V(B_r)^{\frac{1}{2}}$$
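For instance, with $l = 2$ and $r = 1$: $V(K+B_r) = 4 + \pi + 8 \approx 15.14$, while $\left( V(K)^{\frac{1}{2}} + V(B_r)^{\frac{1}{2}} \right)^2 = (2+\sqrt{\pi})^2 \approx 14.23$, so the inequality indeed holds, and it is strict, since a square and a disc are not the same shape.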
Let's go back to the expression for $V(K+B_r)$.
\begin{align*} V(K+B_r) &= V(K) + V(B_r) + 4lr \\ &= V(K)+\pi r^2 + 4lr \end{align*}
$$V(K+B_r) - V(K) =\pi r^2 + 4lr$$ and $$\frac{V(K+B_r)-V(K)}{r} = \pi r + 4l $$ From here we can see that if we keep decreasing $r$, the expression $\frac{V(K+B_r) - V(K)}{r}$ gets closer and closer to $4l$. If you are familiar with the concept of limits, we can write it concisely as
\begin{align*} &\lim_{r \to 0+} \frac{V(K+B_r) - V(K)}{r} \\ & \text{ } = \lim_{r \to 0+} \pi r + 4l\\ & \text{ }= 0 + 4l\\ & \text{ }= 4l\\ & \text{ } = s(K) \end{align*}
$s(K)$ here denotes the length of the perimeter of $K$. By $r \to 0+$, we are emphasizing the fact that $r$ is a non-negative quantity, being the radius of a disc, and that it approaches $0$ from the right-hand side as it is continually decreased.
Turns out this is not only true for the Minkowski sum $K+B_r$ when $K$ is a square, but for any nice enough set with a smooth boundary.
Let's take an arbitrary nice shape $K$ on the plane with a smooth boundary. Once again, to see what $K+B_r$ looks like, refer to the pic above. At each point on the boundary of $K$, imagine a disc of radius $r$ and take the resulting figure.
$V(K+B_r)- V(K)$ gives the area of the shaded strip as in the pic above.
If we cut the strip and stretch it out, we get approximately a rectangle with one side of length $L$, the perimeter of the shape $K$, and the other side of length $r$. It very closely resembles a thin rectangular strip when $r$ is very small.
So we have approximately, $$ V(K+B_r) - V(K) \approx rL$$ and so, $$\frac{V(K+B_r) - V(K)}{r} \approx L $$ This formula $$\lim_{r\to 0+} \frac{V(K+B_r)-V(K)}{r} = L=s(K)$$ where $s(K)$ denotes the perimeter of $K$ is known as the Minkowski-Steiner formula and relates the area of a figure to its perimeter. It also holds in dimensions $3$ and higher for nice enough objects and relates the $n$ - dimensional volume to the $n-1$ -dimensional volume of the boundary (surface area).
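As a sanity check, take $K$ to be a disc of radius $R$. Then $K+B_r$ is a disc of radius $R+r$, and $$\frac{V(K+B_r)-V(K)}{r} = \frac{\pi(R+r)^2 - \pi R^2}{r} = 2\pi R + \pi r \to 2\pi R$$ as $r \to 0+$, which is exactly the perimeter of $K$.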
Adding rectangles together, the Minkowski way
Now let's take a look at another example of the Minkowski sum, this time for two rectangles, $K$ and $C$, whose sides are parallel to the $x$ and $y$ axes and hence also to each other. Let the corresponding sides of $K$ and $C$ be $a_1,a_2$ and $b_1,b_2$ respectively.
As before, we can translate both rectangles so that they are centered at the origin, since translation changes neither the shape nor the area.
Then $K + C$ would be the shape obtained by placing the rectangle $C$ at each point of the boundary of $K$ and taking the composite shape so obtained as in the figure above. Turns out $K+C$ is simply a larger rectangle with side lengths $a_1+b_1$, $a_2+b_2$.
\begin{align*} &V(K+C) \\ &= (a_1 + b_1)(a_2 + b_2)\\ &= a_1a_2 + b_1b_2 + a_1b_2 + a_2b_1\\ &\ge a_1a_2 + b_1b_2 + 2\sqrt{a_1a_2b_1b_2}\\ &= V(K) + V(C) + 2\sqrt{V(K)\cdot V(C)}\\ &= \left( V(K)^{\frac{1}{2}} + V(C)^{\frac{1}{2}} \right)^2 \end{align*}
That is, $$V(K+C)^{\frac{1}{2}} \ge V(K)^{\frac{1}{2}} + V(C)^{\frac{1}{2}} \tag{$\circ$}$$ We encountered this inequality in the other example too. And it's no coincidence. In fact, it turns out this inequality holds for all "nice enough" sets/shapes. By "nice enough", we mean sets for which the area is defined.
This inequality is known as the Brunn-Minkowski inequality. Just as it holds here in $2$ dimensions, it also holds in higher dimensions. That is, if $K$ and $C$ are $n$-dimensional objects and $V(\cdot)$ denotes the $n$-dimensional volume, then $$V(K+C)^{\frac{1}{n}} \ge V(K)^{\frac{1}{n}} + V(C)^{\frac{1}{n}} $$ It turns out that equality holds precisely when $K$ and $C$ are the same after possible translation and/or scaling.
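A quick numerical check: take $K$ to be a $1\times 4$ rectangle and $C$ a $4\times 1$ rectangle. Then $K+C$ is a $5\times 5$ square, and indeed $V(K+C)^{\frac{1}{2}} = 5 \ge V(K)^{\frac{1}{2}} + V(C)^{\frac{1}{2}} = 2 + 2 = 4$. The inequality is strict here because the two rectangles are not translates or scalings of one another.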
Brunn-Minkowski inequality: sketch of proof
We already saw that the Brunn-Minkowski inequality holds for two rectangles with sides parallel to each other and to the $x$ and $y$ axes. As we will see, this is about all the information we need to conclude that it holds for all $2$-dimensional sets (well, technically all of those for which the concept of area is defined; mathematicians call them "measurable sets").
Minkowski sum of a collection of rectangles in a grid
Suppose $A$ is a collection of $2$ non-overlapping rectangles in a grid, as shown in the figure above, and $B$ is $1$ rectangle from another rectangular grid. What can we say about the Minkowski sum of $A$ and $B$ and its area, $V(A+B)$?
For starters, note that $A$ is basically a composite shape made up of two grid rectangles $A_1$ and $A_2$ taken together. In set notation, we can write that as a union: $$A = A_1 \cup A_2 $$ Recall that $$A + B = \{\, a+b \ |\ a \text{ is in } A,\ b \text{ is in } B \,\} $$ Since $A_1$ and $A_2$ do not overlap, if we take any point $a$ in $A$, it must be either in $A_1$ or in $A_2$; it can't be in both.
So $A+B$ is actually just $A_1 + B$ and $A_2 +B$ taken together (their union, that is).
Remember that translating $A$ and $B$ won't change the shape of $A +B$ as we saw before. So, now, we can safely shift $A$ horizontally till we have the $y$-axis as one of the grid-lines and $A_2$ just to the right of it. That is, all points in $A_2$ have their $x$-coordinates non-negative.
We also shift $B$ so that the $y$- axis passes somewhere through $B$ dividing it into $2$ smaller rectangles, $B_1$ to the left and $B_2$ to the right such that $$\frac{V(A_1)}{V(A_2)} = \frac{V(B_1)}{V(B_2)} $$
The motivation behind shifting and having the $y$-axis ($x=0$) dividing $A$ and $B$ in such a way is to ensure that $A_1 + B_1$ will always lie to the left of the $y$ - axis and $A_2$ + $B_2$ will always lie to the right of the $y$- axis. And because of that they will not overlap (except possibly only on a line segment that is part of the $y$-axis, but the area of a line, a 1 dimensional figure is $0$, so we can disregard that overlap).
To see this note that the $x$-coordinates of all points in $A_1$ and $B_1$ are always negative or zero (non-positive) and when they are added together, the result will also be non-positive. Similarly, the $x$-coordinates of all points in $A_2 + B_2$ are all non-negative.
Now, the Minkowski sum, $A + B$, of course contains $A_1 + B_1$ and $A_2 + B_2$ in it, so the area of $A+B$ is greater than the sum of the areas of $A_1 +B_1$ and $A_2+B_2$. That is, $$V(A+B) \ge V(A_1+B_1) + V(A_2+B_2) $$ $A_1, B_1, A_2, B_2$ are rectangles and we already know from before that the Brunn-Minkowski inequality is true for rectangles. So, of course, we can write
$$V(A_1+B_1)^{\frac{1}{2}} \ge V(A_1)^{\frac{1}{2}} +V(B_1)^{\frac{1}{2}} $$ $$V(A_2+B_2)^{\frac{1}{2}} \ge V(A_2)^{\frac{1}{2}} +V(B_2)^{\frac{1}{2}} $$
And in turn, we have
\begin{align*} &V(A+B) \\ &\ge V(A_1 +B_1) + V(A_2 + B_2)\\ &\ge \left(V(A_1)^{1/2} + V(B_1)^{1/2}\right)^2 \\ &+ (V(A_2)^{1/2} + V(B_2)^{1/2})^2\\ &= V(A_1) \left( 1 + \left(\frac{V(B_1)}{V(A_1)} \right)^{\frac{1}{2}} \right)^2 \\ &+V(A_2) \left( 1 + \left(\frac{V(B_2)}{V(A_2)} \right)^{\frac{1}{2}} \right)^2 \tag{$\ast$} \end{align*}
Remember that we had chosen $B_1$ and $B_2$ such that $$\frac{V(A_1)}{V(A_2)} = \frac{V(B_1)}{V(B_2)} $$ That is, for some number $c$, $$\frac{V(B_1)}{V(A_1)} = \frac{V(B_2)}{V(A_2)} = c $$
$A_1$ and $A_2$ are non-overlapping and so are $B_1$ and $B_2$, so we can write $$V(A) = V(A_1) + V(A_2) $$ $$V(B) = V(B_1) + V(B_2) $$ Then
\begin{align*} \frac{V(B)}{V(A)} &= \frac{V(B_1) + V(B_2)}{V(A_1) + V(A_2)} \\ &= \frac{cV(A_1) +cV(A_2)}{V(A_1) +V(A_2)}\\ &= c \\ &= \frac{V(B_1)}{V(A_1)} = \frac{V(B_2)}{V(A_2)} \end{align*}
Using this in the inequality in $(\ast)$, we get
\begin{align*} &V(A+B) \\ &\ge V(A_1) \left(1 + \left(\frac{V(B)}{V(A)}\right)^{\frac{1}{2}}\right)^2 \\ &+ V(A_2) \left(1 + \left(\frac{V(B)}{V(A)}\right)^{\frac{1}{2}}\right)^2\\ &= (V(A_1) + V(A_2))\left(1 + \left(\frac{V(B)}{V(A)}\right)^{\frac{1}{2}}\right)^2\\ &= V(A)\left(\frac{V(A)^{\frac{1}{2}}+ V(B)^{\frac{1}{2}} }{V(A)^{\frac{1}{2}}} \right)^2\\ & = (V(A)^{\frac{1}{2}} + V(B)^{ \frac{1}{2}} )^2 \end{align*}
That is, $$V(A+B)^{\frac{1}{2}} \ge V(A)^{\frac{1}{2}} + V(B)^{ \frac{1}{2}} $$ and we have Brunn-Minkowski inequality for the sum of a shape composed of $2$ grid rectangles and another shape composed of $1$ grid rectangle.
An entirely similar argument, along with induction, can be used to show the Brunn-Minkowski inequality for two collections of any number of grid rectangles. It is a straightforward exercise to try it out yourself.
Suppose we try to cover two arbitrary shapes $K$ and $C$ with rectangular strips with sides parallel to the $x$ and $y$ axes. Let's denote the collection of rectangular strips covering $K$ by $A$, and let $B$ denote the collection of rectangles covering $C$. See the figure above.
We have already seen that the Brunn-Minkowski inequality holds for collections of rectangles with sides parallel to the $x$ and $y$ axes. Here that would imply $$V(A+B)^{\frac{1}{2}} \ge V(A)^{\frac{1}{2}} + V(B)^{\frac{1}{2}} $$
The areas of $K$ and $C$ can be approximated better and better by finer and finer (and more numerous) rectangular strips. And at each step, the Brunn-Minkowski inequality is true for those collections of rectangular strips.
Passing onto the limit, we would have that the Brunn-Minkowski inequality holds for $K$ and $C$ as well.
A very similar argument can be made in dimensions $3$ and higher, dealing with $n$-dimensional boxes instead of rectangles, and it can be shown that the Brunn-Minkowski inequality holds in higher dimensions too.
Isoperimetric inequality revisited
With the Brunn-Minkowski inequality in our toolbox, tackling the isoperimetric inequality becomes a straightforward application.
Say we have a shape $K$ that has perimeter $L$ and area $A$, i.e., $V(K) = A$. Then if we take the Minkowski sum of $K$ and a disc of radius $r$, say $B_r$, we have $$\frac{V(K+B_r) -V(K)}{r} \to L$$ as $r \to 0+$ (from the Minkowski-Steiner formula we saw before).
Also, the Brunn-Minkowski inequality gives us
\begin{align*} &V(K+B_r) \\ &\ge V(K) + V(B_r) + 2\sqrt{V(K)\cdot V(B_r)} \end{align*}
\begin{align*} &\frac{V(K+B_r) - V(K)}{r} \\ &\ge \frac{\pi r^2 + 2\sqrt{V(K)}\sqrt{\pi}r}{r} \\ &=\pi r + 2\sqrt{A}\sqrt{\pi} \end{align*}
And then, passing to the limit as $r \to 0+$,
\begin{align*} L &= \lim_{r\to 0+} \frac{V(K+B_r)- V(K)}{r} \\ &\ge 2\sqrt{A}\sqrt{\pi} \end{align*}
That is, we have $$L \ge 2\sqrt{A}\sqrt{\pi} $$ And squaring it, we have our beloved isoperimetric inequality $$L^2 \ge 4\pi A $$
Equality holds in the Brunn-Minkowski inequality if the two objects are the same up to translation and scaling, which is to say that they have the same shape. For example, if one of them is a circle centered at the origin, then for equality to hold the other must be a circle as well, though possibly of a different radius or with its center located away from the origin.
And equality in $L^2 \ge 4\pi A$ implies that we also have equality in the Brunn-Minkowski inequality
\begin{align*} &V(K+B_r) \\ &\ge V(K) + V(B_r) + 2\sqrt{V(K) \cdot V(B_r)} \end{align*}
Then $K$ must be the same shape as the disc $B_r$ up to scaling and translation; and scaling and translating a disc always gives us a disc, so $K$ must be a disc.
That is, equality holds in the isoperimetric inequality when the shape is a disc of perimeter $L$ and area $A$.
$\blacksquare$
Operating Leverage
By Adam Hayes
What Is Operating Leverage?
Operating leverage is a cost-accounting formula that measures the degree to which a firm or project can increase operating income by increasing revenue. A business that generates sales with a high gross margin and low variable costs has high operating leverage.
The higher the degree of operating leverage, the greater the potential danger from forecasting risk, in which a relatively small error in forecasting sales can be magnified into large errors in cash flow projections.
The Formula for Operating Leverage Is
$$\text{Degree of operating leverage} = \frac{\text{Contribution margin}}{\text{Profit}}$$
This can be restated as:
$$\begin{aligned} &\text{Degree of operating leverage} = \frac{Q \times CM}{Q \times CM - \text{Fixed operating costs}}\\ &\textbf{where:}\\ &Q = \text{unit quantity}\\ &CM = \text{contribution margin (price − variable cost per unit)} \end{aligned}$$
Operating leverage is a measure of how much of a company's cost structure consists of fixed rather than variable costs; firms with proportionally high fixed costs have high operating leverage.
Companies with high operating leverage must cover a larger amount of fixed costs each month regardless of whether they sell any units of product.
Low-operating-leverage companies may have high costs that vary directly with their sales but have lower fixed costs to cover each month.
Calculating Operating Leverage
For example, Company A sells 500,000 products for a unit price of $6 each. The company's fixed costs are $800,000. It costs $0.05 in variable costs per unit to make each product.
Calculate company A's degree of operating leverage as follows:
$$\begin{aligned} &\frac{500{,}000 \times (\$6.00 - \$0.05)}{500{,}000 \times (\$6.00 - \$0.05) - \$800{,}000}\\ &= \frac{\$2{,}975{,}000}{\$2{,}175{,}000}\\ &= 1.37 \text{ or } 137\%. \end{aligned}$$
A 10% revenue increase should result in a 13.7% increase in operating income (10% x 1.37 = 13.7%).
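To see where the 13.7% figure comes from using the same numbers: at 500,000 units, operating income is $2,975,000 − $800,000 = $2,175,000. If unit sales rise 10% to 550,000, the contribution margin grows to 550,000 × $5.95 = $3,272,500, so operating income becomes $2,472,500, an increase of $297,500, or about 13.7% of $2,175,000.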
What Does Operating Leverage Tell You?
The operating leverage formula is used to calculate a company's break-even point and help set appropriate selling prices to cover all costs and generate a profit. The formula can reveal how well a company is using its fixed-cost items, such as its warehouse and machinery and equipment, to generate profits. The more profit a company can squeeze out of the same amount of fixed assets, the higher its operating leverage.
One conclusion companies can learn from examining operating leverage is that firms that minimize fixed costs can increase their profits without making any changes to the selling price, contribution margin or the number of units they sell.
High and Low Operating Leverage
It is important to compare operating leverage between companies in the same industry, as some industries have higher fixed costs than others. The concept of a high or low ratio is then more clearly defined.
Most of a company's costs are fixed costs that recur each month, such as rent, regardless of sales volume. As long as a business earns a substantial profit on each sale and sustains adequate sales volume, fixed costs are covered and profits are earned.
Other company costs are variable costs that are only incurred when sales occur. This includes labor to assemble products and the cost of raw materials used to make products. Some companies earn less profit on each sale but can have a lower sales volume and still generate enough to cover fixed costs.
For example, a software business has greater fixed costs in developers' salaries and lower variable costs in software sales. As such, the business has high operating leverage. In contrast, a computer consulting firm charges its clients hourly and doesn't need expensive office space because its consultants work in clients' offices. This results in variable consultant wages and low fixed operating costs. The business thus has low operating leverage.
Examples of Operating Leverage
Most of Microsoft's costs are fixed, such as expenses for upfront development and marketing. With each dollar in sales earned beyond the break-even point, the company makes a profit, but Microsoft has high operating leverage.
Conversely, Walmart retail stores have low fixed costs and large variable costs, especially for merchandise. Because Walmart sells a huge volume of items and pays upfront for each unit it sells, its cost of goods sold increases as sales increase. Because of this, Walmart stores have low operating leverage.
Second-generation PLINK: rising to the challenge of larger and richer datasets
Christopher C Chang1,2,
Carson C Chow3,
Laurent CAM Tellier2,4,
Shashaank Vattikuti3,
Shaun M Purcell5,6,7,8 &
James J Lee3,9
GigaScience volume 4, Article number: 7 (2015)
PLINK 1 is a widely used open-source C/C++ toolset for genome-wide association studies (GWAS) and research in population genetics. However, the steady accumulation of data from imputation and whole-genome sequencing studies has exposed a strong need for faster and scalable implementations of key functions, such as logistic regression, linkage disequilibrium estimation, and genomic distance evaluation. In addition, GWAS and population-genetic data now frequently contain genotype likelihoods, phase information, and/or multiallelic variants, none of which can be represented by PLINK 1's primary data format.
To address these issues, we are developing a second-generation codebase for PLINK. The first major release from this codebase, PLINK 1.9, introduces extensive use of bit-level parallelism, \(O\left (\sqrt {n}\right)\)-time/constant-space Hardy-Weinberg equilibrium and Fisher's exact tests, and many other algorithmic improvements. In combination, these changes accelerate most operations by 1-4 orders of magnitude, and allow the program to handle datasets too large to fit in RAM. We have also developed an extension to the data format which adds low-overhead support for genotype likelihoods, phase, multiallelic variants, and reference vs. alternate alleles, which is the basis of our planned second release (PLINK 2.0).
The second-generation versions of PLINK will offer dramatic improvements in performance and compatibility. For the first time, users without access to high-end computing resources can perform several essential analyses of the feature-rich and very large genetic datasets coming into use.
Because of its broad functionality and efficient binary file format, PLINK is widely employed in data-processing pipelines that are established for gene-trait mapping and population-genetic studies. However, the five years since the final first-generation update (v1.07) have witnessed the introduction of new algorithms and analytical approaches, growth in the size of typical datasets, and the wide deployment of multicore processors.
In response, we have developed PLINK 1.9, a comprehensive performance, scaling, and usability update. Our data indicate that its speedups frequently exceed two, and sometimes even three, orders of magnitude for several commonly used operations. PLINK 1.9's core functional domains are unchanged from those of its predecessor—data management, summary statistics, population stratification, association analysis, identity-by-descent estimation [1]—and it is usable as a drop-in replacement in most cases, requiring no changes to existing scripts. To support easier interoperation with newer software, for example BEAGLE 4 [2], IMPUTE2 [3], GATK [4], VCFtools [5], BCFtools [6] and GCTA [7], features such as the import/export of VCF and Oxford-format files and an efficient cross-platform genomic relationship matrix (GRM) calculator have been introduced. Most pipelines currently employing PLINK 1.07 can expect to benefit from upgrading to PLINK 1.9.
A major problem remains: PLINK's core file format can only represent unphased, biallelic data; however, we are developing a second update, PLINK 2.0, to address this.
Improvements in PLINK 1.9
Bit-level parallelism
Modern ×86 processors are designed to operate on data in (usually 64-bit) machine word or (≥ 128-bit) vector chunks. The PLINK 1 binary file format supports this well: the format's packed 2-bit data elements can, with the use of bit arithmetic, easily be processed 32 or 64 at a time. However, most existing programs fail to exploit opportunities for bit-level parallelism; instead their loops painstakingly extract and operate on a single data element at a time. Replacement of these loops with bit-parallel logic is, by itself, enough to speed up numerous operations by more than one order of magnitude.
For example, when comparing two DNA segments, it is frequently useful to start by computing their Hamming distance. Formally, define two sequences {a1,a2,…,a m } and {b1,b2,…,b m } where each a i and b i has a value in {0,1,2,ϕ}, representing either the number of copies of the major allele or (ϕ) the absence of genotype data. Also define an intersection set Ia,b:={i:a i ≠ϕ and b i ≠ϕ}. The "identity-by-state" measure computed by PLINK can then be expressed as
$$1 - \frac{\sum_{i\in I_{a,b}}|a_{i} - b_{i}|}{2|I_{a,b}|}. $$
where $|I_{a,b}|$ denotes the size of the set $I_{a,b}$, while $|a_i - b_i|$ is the absolute value of $a_i$ minus $b_i$. The old calculation proceeded roughly as follows:
IBS0 := 0; IBS1 := 0; IBS2 := 0. For $i \in \{1,2,\ldots,m\}$:
If $a_i = \phi$ or $b_i = \phi$, skip;
otherwise, if $a_i = b_i$, increment IBS2;
otherwise, if ($a_i = 2$ and $b_i = 0$), or ($a_i = 0$ and $b_i = 2$), increment IBS0;
otherwise, increment IBS1.
Return \(\frac {0\cdot \text {IBS}0 + 1\cdot \text {IBS}1 + 2\cdot \text {IBS}2}{2\cdot (\text {IBS}0 + \text {IBS}1 + \text {IBS}2)}\)
We replaced this with roughly the following, based on bitwise operations on 960-marker blocks:
$$ m^{\prime} := 960\left\lceil \frac{m}{960}\right\rceil $$
Pad the ends of $\{a_i\}$ and $\{b_i\}$ with $\phi$s, if necessary. Then set $A_i := \{01_2$ if $a_i=\phi$, $00_2$ if $a_i=0$, $10_2$ if $a_i=1$, $11_2$ if $a_i=2\}$; $B_i := \{01_2$ if $b_i=\phi$, $00_2$ if $b_i=0$, $10_2$ if $b_i=1$, $11_2$ if $b_i=2\}$; $C_i := \{00_2$ if $a_i=\phi$, $11_2$ otherwise$\}$; $D_i := \{00_2$ if $b_i=\phi$, $11_2$ otherwise$\}$; diff := 0; obs := 0. For $i \in \{1, 961, 1921, \ldots, m'-959\}$:
$E := A_{i..i+959}$ XOR $B_{i..i+959}$
$F := C_{i..i+959}$ AND $D_{i..i+959}$
diff := diff + popcount($E$ AND $F$)
obs := obs + popcount($F$)
Return \(\frac {\text {obs} - \text {diff}}{\text {obs}}\).
The idea is that ($\{C_i\}$ AND $\{D_i\}$) yields a bit vector with two ones for every marker where genotype data is present for both samples, and two zeros elsewhere, so $2|I_{a,b}|$ is equal to the number of ones in that bit vector; while (($\{A_i\}$ XOR $\{B_i\}$) AND $\{C_i\}$ AND $\{D_i\}$) yields a bit vector with a 1 for every nucleotide difference. Refer to Additional file 1 [8] for more computational details. Our timing data (see "Performance comparisons" below) indicate that this algorithm takes less than twice as long to handle a 960-marker block as PLINK 1.07 takes to handle a single marker.
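To make the structure concrete, a minimal per-word version of this kernel might look like the following C sketch. This is an illustration only, not PLINK's actual source (which processes 960-marker blocks with vector instructions); __builtin_popcountll is the GCC/Clang builtin.

```c
#include <stdint.h>

/* One 64-bit word holds 32 packed genotypes per sample (the A/B encoding
   above) plus "presence" masks (the C/D encoding: 11 where a genotype is
   non-missing, 00 where it is missing).  Adds this word's contribution to
   the running allele-mismatch and observation counts. */
static inline void ibs_accumulate_word(uint64_t a_enc, uint64_t b_enc,
                                       uint64_t a_mask, uint64_t b_mask,
                                       uint64_t* diff, uint64_t* obs) {
    uint64_t present = a_mask & b_mask;             /* F in the pseudocode */
    *diff += (uint64_t)__builtin_popcountll((a_enc ^ b_enc) & present);
    *obs  += (uint64_t)__builtin_popcountll(present);
}
```

The identity-by-state value is then (obs − diff)/obs, exactly as in the block pseudocode above.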
Bit population count
The "popcount" function above, defined as the number of ones in a bit vector, merits further discussion. Post-2008 x86 processors support a specialized instruction that directly evaluates this quantity. However, thanks to 50 years of work on the problem, algorithms exist which evaluate bit population count nearly as quickly as the hardware instruction while sticking to universally available operations. Since PLINK is still used on some older machines, we took one such algorithm (previously discussed and refined by [9]), and developed an improved SSE2-based implementation. (Note that SSE2 vector instructions are supported by even the oldest x86-64 processors).
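For reference, the portable word-at-a-time population count that such implementations build upon can be written as the following textbook "SWAR" sketch, shown here only to illustrate the idea; PLINK 1.9's actual routine applies the same strategy with SSE2 vectors.

```c
#include <stdint.h>

/* Classic divide-and-conquer population count for one 64-bit word. */
static inline uint64_t popcount64(uint64_t x) {
    x = x - ((x >> 1) & 0x5555555555555555ULL);              /* 2-bit sums */
    x = (x & 0x3333333333333333ULL) + ((x >> 2) & 0x3333333333333333ULL);
    x = (x + (x >> 4)) & 0x0F0F0F0F0F0F0F0FULL;              /* 8-bit sums */
    return (x * 0x0101010101010101ULL) >> 56;                /* add bytes  */
}
```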
The applications of bit population count extend further than might be obvious at first glance. As another example, consider computation of the correlation coefficient r between a pair of genetic variants, where some data may be missing. Formally, let n be the number of samples in the dataset, and {x1,x2,…,x n } and {y1,y2,…,y n } contain genotype data for the two variants, where each x i and y i has a value in {0,1,2,ϕ}. In addition, define
$${\fontsize{9}{6}\begin{aligned} I_{x,y} & := \{i: x_{i}\ne \phi\ \text{and}\ y_{i}\ne \phi \}, \\ v_{i} & := \{0\mathrm{ if }x_{i}=\phi, (x_{i}-1)\text{otherwise}\}, \\ w_{i} & := \{0\mathrm{ if }y_{i}=\phi, (y_{i}-1)\text{otherwise}\}, \\ \overline{v} & := |I_{x,y}|^{-1}\sum_{i\in I_{x,y}}v_{i}, \\ \overline{w} & := |I_{x,y}|^{-1}\sum_{i\in I_{x,y}}w_{i}, \\ \end{aligned}} $$
$${\fontsize{9}{6}\begin{aligned} \overline{v^{2}} & := |I_{x,y}|^{-1}\sum_{i\in I_{x,y}}{v_{i}^{2}},\text{and} \\ \overline{w^{2}} & := |I_{x,y}|^{-1}\sum_{i\in I_{x,y}}{w_{i}^{2}}. \end{aligned}} $$
The correlation coefficient of interest can then be expressed as
$$\begin{array}{@{}rcl@{}} r & = & \frac{|I_{x,y}|^{-1}\sum_{i\in I_{x,y}}\left(v_{i} - \overline{v}\right)\left(w_{i} - \overline{w}\right)}{\sqrt{\left(\overline{v^{2}} - \overline{v}^{2}\right)\left(\overline{w^{2}} - \overline{w}^{2}\right)}} \\ & = & \frac{|I_{x,y}|^{-1}\sum_{i=1}^{n}v_{i}w_{i} - \overline{v}\cdot \overline{w}}{\sqrt{\left(\overline{v^{2}} - \overline{v}^{2}\right)\left(\overline{w^{2}} - \overline{w}^{2}\right)}} \end{array} $$
Given PLINK 1 binary data, $|I_{x,y}|$, \(\overline{v}\), \(\overline{w}\), \(\overline{v^{2}}\), and \(\overline{w^{2}}\) can easily be expressed in terms of bit population counts. The dot product \(\sum_{i=1}^{n}v_{i}w_{i}\) is trickier; to evaluate it, we preprocess the data so that the genotype bit vectors $X$ and $Y$ encode homozygote minor calls as $00_2$, heterozygote and missing calls as $01_2$, and homozygote major calls as $10_2$, and then proceed as follows:
Set $Z$ := ($X$ OR $Y$) AND $01010101\ldots_2$
popcount2((($X$ XOR $Y$) AND ($10101010\ldots_2 - Z$)) OR $Z$),
where popcount2() sums 2-bit quantities instead of counting set bits. (This is actually cheaper than PLINK's regular population count; the first step of software popcount() is reduction to a popcount2() problem).
Subtract the latter quantity from n.
The key insight behind this implementation is that each $v_iw_i$ term is in $\{-1,0,1\}$, and can still be represented in 2 bits in an addition-friendly manner. (This is not strictly necessary for bitwise parallel processing—the partial sum lookup algorithm discussed later handles 3-bit outputs by padding the raw input data to 3 bits per genotype call—but it allows for unusually high efficiency). The exact sequence of operations that we chose to evaluate the dot-product terms in a bitwise parallel fashion is somewhat arbitrary.
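As an illustration of how these steps combine, a hypothetical per-word helper (not PLINK's source) could look like this, where x and y each hold 32 preprocessed genotypes:

```c
#include <stdint.h>

/* Assumes the preprocessing described above: 00 = homozygote minor,
   01 = heterozygote or missing, 10 = homozygote major (11 never occurs).
   Returns this word's contribution to (n - sum_i v_i*w_i); the caller
   subtracts the accumulated total from n to obtain the dot product. */
static inline uint64_t dotprod2_word(uint64_t x, uint64_t y) {
    const uint64_t lo = 0x5555555555555555ULL;   /* 01 repeated */
    const uint64_t hi = 0xAAAAAAAAAAAAAAAAULL;   /* 10 repeated */
    uint64_t z = (x | y) & lo;
    uint64_t t = ((x ^ y) & (hi - z)) | z;
    /* popcount2(): sum the 32 2-bit fields of t */
    t = (t & 0x3333333333333333ULL) + ((t >> 2) & 0x3333333333333333ULL);
    t = (t + (t >> 4)) & 0x0F0F0F0F0F0F0F0FULL;
    return (t * 0x0101010101010101ULL) >> 56;
}
```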
We note that when computing a matrix of correlation coefficients between all pairs of variants, if no genotype data is absent, then $|I_{x,y}|$ is invariant, \(\overline{v}\) and \(\overline{v^{2}}\) do not depend on $y$, and \(\overline{w}\) and \(\overline{w^{2}}\) do not depend on $x$. Thus, these five values would not need to be recomputed for each variant pair at $O(m^2n)$ total time cost; they could instead be precomputed outside the main loop at a total cost of $O(mn)$ time and $O(m)$ space. PLINK 1.9 optimizes this common case.
See popcount_longs() in plink_common.c for our primary bit population count function, and plink_ld.c for several correlation coefficient evaluation functions.
Multicore and cluster parallelism
Modern x86 processors also contain increasing numbers of cores, and computational workloads in genetic studies tend to contain large "embarrassingly parallel" steps which can easily exploit additional cores. Therefore, PLINK 1.9 autodetects the number of cores present in the machine it is running on, and many of its heavy-duty operations default to employing roughly that number of threads. (This behavior can be manually controlled with the –threads flag.) Most of PLINK 1.9's multithreaded computations use a simple set of cross-platform C functions and macros, which compile to pthread library idioms on Linux and OS X, and OS-specific idioms like _beginthreadex() on Windows.
PLINK 1.9 also contains improved support for distributed computation: the –parallel flag makes it easy to split large matrix computations across a cluster, while –write-var-ranges simplifies splitting of per-variant computations.
Graphics processing units (GPUs) remain as a major unexploited computational resource. We have made the development of GPU-specific code a low priority since their installed base is much smaller than that of multicore processors, and the speedup factor over well-written multithreaded code running on similar-cost, less specialized hardware is usually less than 10x [10,11]. However, we do plan to build out GPU support for the heaviest-duty computations after most of our other PLINK 2 development goals are achieved.
To make it possible for PLINK 1.9 to handle the huge datasets that benefit the most from these speed improvements, the program core no longer keeps the main genomic data matrix in memory; instead, most of its functions only load data for a single variant, or a small window of variants, at a time. Sample × sample matrix computations still normally require additional memory proportional to the square of the sample size, but –parallel gets around this:
for example, adding "–parallel 3 40" to the command line makes each run calculate only 1/40th of the genomic relationship matrix (the third fortieth, in this case), with correspondingly reduced memory requirements.
Other noteworthy algorithms
Partial sum lookup
Each entry of a weighted genomic distance matrix between pairs of individuals is a sum of per-marker terms. Given PLINK 1 binary data, for any specific marker, there are seven distinct cases at most:
Both genotypes are homozygous for the major allele.
One is homozygous major, and the other is heterozygous.
One is homozygous major, and the other is homozygous minor.
Both are heterozygous.
One is heterozygous, and the other is homozygous minor.
Both are homozygous minor.
At least one genotype is missing.
For example, the GCTA genomic relationship matrix is defined by the following per-marker increments, where q is the minor allele frequency and each genotype is coded by its minor-allele dosage (0, 1, or 2), matching the seven cases above in order:

\(\frac{(0-2q)(0-2q)}{2q(1-q)}\)

\(\frac{(0-2q)(1-2q)}{2q(1-q)}\)

\(\frac{(0-2q)(2-2q)}{2q(1-q)}\)

\(\frac{(1-2q)(1-2q)}{2q(1-q)}\)

\(\frac{(1-2q)(2-2q)}{2q(1-q)}\)

\(\frac{(2-2q)(2-2q)}{2q(1-q)}\)

0; subtract 1 from the final denominator instead, in another loop
This suggests the following matrix calculation algorithm, as a first draft:
Initialize all distance/relationship partial sums to zero.
For each marker, calculate and save the seven possible increments in a lookup table, and then refer to the table when updating partial sums. This replaces several floating point adds/multiplies in the inner loop with a single addition operation.
We can substantially improve on this by handling multiple markers at a time. Since seven cases can be distinguished by three bits, we can compose a sequence of operations which maps a pair of padded 2-bit genotypes to seven different 3-bit values in the appropriate manner. On 64-bit machines, 20 3-bit values can be packed into a machine word—for example, let bits 0-2 describe the relation at marker #0, bits 3-5 describe the relation at marker #1, and so forth, all the way up to bits 57-59 describing the relation at marker #19—so this representation lets us instruct the processor to act on 20 markers simultaneously.
Then, we need to perform the update
$$A_{jk} := A_{jk} + f_{0}(x_{0}) + f_{1}(x_{1}) + \ldots + f_{19}(x_{19}) $$
where the $x_i$'s are bit trios, and the $f_i$'s map them to increments. This could be done with 20 table lookups and floating point addition operations. Or, the update could be restructured as
$$A_{jk} := A_{jk} + f_{\{0-4\}}(x_{\{0-4\}}) + \ldots + f_{\{15-19\}}(x_{\{15-19\}}) $$
where $x_{\{0-4\}}$ denotes the lowest-order 15 bits, and $f_{\{0-4\}}$ maps them directly to $f_0(x_0)+f_1(x_1)+f_2(x_2)+f_3(x_3)+f_4(x_4)$; similarly for $f_{\{5-9\}}$, $f_{\{10-14\}}$, and $f_{\{15-19\}}$. In exchange for some precomputation—four tables with $2^{15}$ entries each; total size 1 MB, which is not onerous for modern L2/L3 caches—this restructuring licenses the use of four table lookups and adds per update instead of twenty. See fill_weights_r() and incr_dists_r() in plink_calc.c for source code.
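The resulting inner loop is little more than shifts, masks, and table indexing. A simplified sketch is shown below (hypothetical code, not the cited PLINK source); four tables of 2^15 doubles occupy exactly 1 MiB, matching the figure quoted above.

```c
#include <stdint.h>

/* w packs 20 3-bit relation codes (60 bits used).  tbl[t][bits] holds the
   precomputed sum of the five per-marker increments encoded by 'bits'. */
static inline double partial_sum_update(double a_jk, uint64_t w,
                                        const double tbl[4][1 << 15]) {
    a_jk += tbl[0][w & 0x7fff];
    a_jk += tbl[1][(w >> 15) & 0x7fff];
    a_jk += tbl[2][(w >> 30) & 0x7fff];
    a_jk += tbl[3][(w >> 45) & 0x7fff];
    return a_jk;
}
```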
Hardy-Weinberg equilibrium and Fisher's exact tests
Under some population genetic assumptions such as minimal inbreeding, genotype frequencies for a biallelic variant can be expected to follow the Hardy-Weinberg proportions
$$\begin{aligned} &\text{freq} (A_{1}A_{1}) = p^{2} \qquad \text{freq} (A_{1}A_{2}) = 2pq\qquad\\ &\text{freq} (A_{2}A_{2}) = q^{2} \end{aligned} $$
where p is the frequency of allele A1 and q=1−p is the frequency of allele A2 [12]. It is now common for bioinformaticians to use an exact test for deviation from Hardy-Weinberg equilibrium (HWE) to help detect genotyping error and major violations of the Hardy-Weinberg assumptions.
PLINK 1.0 used the SNP-HWE algorithm in a paper by Wigginton et al. [13] to execute this test. SNP-HWE exploits the fact that, while the absolute likelihood of a contingency table involves large factorials which are fairly expensive to evaluate, the ratios between its likelihood and that of adjacent tables are simple since the factorials almost entirely cancel out [14]. More precisely, given \(n\) diploid samples containing a total of \(n_1\) copies of allele \(A_1\) and \(n_2\) copies of allele \(A_2\) (so \(n_1+n_2=2n\)), there are \(\frac{(2n)!}{n_{1}!n_{2}!}\) distinct ways for the alleles to be distributed among the samples, and \(\frac{(2^{n_{12}})(n!)}{((n_{1}-n_{12})/2)!n_{12}!((n_{2}-n_{12})/2)!}\) of those ways correspond to exactly \(n_{12}\) heterozygotes when \(n_{12}\) has the same parity as \(n_1\) and \(n_2\). Under Hardy-Weinberg equilibrium, each of these ways is equally likely. Thus, the ratio between the likelihoods of observing exactly \(n_{12}=k+2\) heterozygotes and exactly \(n_{12}=k\) heterozygotes, under Hardy-Weinberg equilibrium and fixed \(n_1\) and \(n_2\), is
$$\begin{array}{@{}rcl@{}} & \left(\frac{(2^{k+2})(n!)}{(\frac{n_{1}-k}{2}-1)!(k+2)!(\frac{n_{2}-k}{2}-1)!} \middle/ \frac{(2^{k})(n!)}{\frac{n_{1}-k}{2}!k!\frac{n_{2}-k}{2}!} \right) \\ = & \frac{2^{k+2}}{2^{k}}\cdot \frac{n!}{n!}\cdot \frac{\frac{n_{1}-k}{2}!}{(\frac{n_{1}-k}{2}-1)!}\cdot \frac{k!}{(k+2)!}\cdot \frac{\frac{n_{2}-k}{2}!}{(\frac{n_{2}-k}{2}-1)!} \\ = & 4\cdot 1\cdot \frac{n_{1}-k}{2}\cdot \frac{1}{(k+1)(k+2)}\cdot \frac{n_{2}-k}{2} \\ = & \frac{(n_{1}-k)(n_{2}-k)}{(k+1)(k+2)}. \end{array} $$
SNP-HWE also recognizes that it is unnecessary to start the computation with an accurate absolute likelihood for one table. Since the final p-value is computed as
$${\fontsize{8.5}{6}\begin{aligned} \frac{[\text{sum of null hypothesis likelihoods of at-least-as-extreme tables}]} {[\text{sum of null hypothesis likelihoods of all tables}]}, \end{aligned}} $$
it is fine for all computed likelihoods to be relative values off by a shared constant factor, since that constant factor will cancel out. This eliminates the need for log-gamma approximation.
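For concreteness, the relative-likelihood recurrence can be implemented as the following compact C sketch (essentially a restatement of the published SNP-HWE routine, without the early-termination refinement described in the observations below; shown for illustration, not PLINK's source):

```c
#include <stdlib.h>

/* Two-sided Hardy-Weinberg exact test p-value, full O(n) enumeration. */
double hwe_exact_p(int obs_hets, int obs_hom1, int obs_hom2) {
    int n = obs_hets + obs_hom1 + obs_hom2;
    if (n == 0) return 1.0;
    int rare = 2 * (obs_hom1 < obs_hom2 ? obs_hom1 : obs_hom2) + obs_hets;
    int common = 2 * n - rare;
    double* probs = calloc((size_t)rare + 1, sizeof(double));
    /* start near the most probable heterozygote count, matching parity */
    int mid = (int)(((long long)rare * common) / (2LL * n));
    if ((mid % 2) != (rare % 2)) mid++;
    probs[mid] = 1.0;
    double total = 1.0;
    for (int k = mid; k > 1; k -= 2) {          /* L(k-2) from L(k) */
        probs[k - 2] = probs[k] * k * (k - 1.0)
            / ((rare - k + 2.0) * (common - k + 2.0));
        total += probs[k - 2];
    }
    for (int k = mid; k <= rare - 2; k += 2) {  /* L(k+2) from L(k) */
        probs[k + 2] = probs[k] * (rare - k) * (common - k)
            / ((k + 1.0) * (k + 2.0));
        total += probs[k + 2];
    }
    double p = 0.0;       /* sum of likelihoods of at-least-as-extreme tables */
    for (int k = rare % 2; k <= rare; k += 2) {
        if (probs[k] <= probs[obs_hets]) p += probs[k];
    }
    free(probs);
    p /= total;
    return p > 1.0 ? 1.0 : p;
}
```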
While studying the software, we made two additional observations:
Its size- O(n) memory allocation (where n is the sum of all contingency table entries) could be avoided by reordering the calculation; it is only necessary to track a few partial sums.
Since likelihoods decay super-geometrically as one moves away from the most probable table, only \(O(\sqrt {n})\) of the likelihoods can meaningfully impact the partial sums; the sum of the remaining terms is too small to consistently affect even the 10th significant digit in the final p-value. By terminating the calculation when all the partial sums stop changing (due to the newest term being too tiny to be tracked by IEEE-754 double-precision numbers), computational complexity is reduced from O(n) to \(O(\sqrt {n})\) with no loss of precision. See Figure 1 for an example.
2 × 2 contingency table log-frequencies. This is a plot of relative frequencies of 2 × 2 contingency tables with top row sum 1000, left column sum 40000, and grand total 100000, reflecting a low-MAF variant where the difference between the chi-square test and Fisher's exact test is relevant. All such tables with upper left value smaller than 278, or larger than 526, have frequency smaller than \(2^{-53}\) (dotted horizontal line); thus, if the obvious summation algorithm is used, they have no impact on the p-value denominator due to numerical underflow. (It can be proven that this underflow has negligible impact on accuracy, due to how rapidly the frequencies decay.) A few more tables need to be considered when evaluating the numerator, but we can usually skip at least 70%, and this fraction improves as problem size increases.
PLINK 1.0 also has association analysis and quality control routines which perform Fisher's exact test on 2×2 and 2×3 tables, using the FEXACT network algorithm from Mehta et al. [15,16]. The 2×2 case has the same mathematical structure as the Hardy-Weinberg equilibrium exact test, so it was straightforward to modify the early-termination SNP-HWE algorithm to handle it. The 2×3 case is more complicated, but retains the property that only \(O(\sqrt {\mathrm {\# of tables}})\) relative likelihoods need to be evaluated, so we were able to develop a function to handle it in O(n) time; see Figure 2 for more details. Our timing data indicate that our new functions are consistently faster than both FEXACT and the update to the network algorithm by Requena et al. [17].
Computation pattern for our 2 × 3 Fisher's exact test implementation. This is a plot of the set of alternative 2 × 3 contingency tables explicitly considered by our algorithm when testing the table with 65, 136, 324 in the top row and 81, 172, 314 in the bottom row. Letting \(\ell\) denote the relative likelihood of observing the tested table under the null hypothesis, the set of tables with null hypothesis relative likelihoods between \(2^{-53}\ell\) and \(\ell\) has an ellipsoidal annulus shape, with area scaling as O(n) as the problem size increases; while the set of tables with relative likelihood greater than \(2^{-53}\ell_{\max}\) (where \(\ell_{\max}\) is the maximal single-table relative likelihood) has an elliptical shape, also with O(n) area. Summing the relative likelihoods in the first set, and then dividing that number by the sum of the relative likelihoods in the second set, yields the desired p-value to 10+ digit accuracy in O(n) time. In addition, we exploit the fact that a "row" of 2 × 3 table likelihoods sums to a single 2 × 2 table likelihood; this lets us essentially skip the top and bottom of the annulus, as well as all but a single row of the central ellipse.
Standalone source code for early-termination SNP-HWE and Fisher's 2×2/ 2×3 exact test is posted at [18]. Due to recent calls for use of mid-p adjustments in biostatistics [19,20], all of these functions have mid-p modes, and PLINK 1.9 exposes them.
We note that, while the Hardy-Weinberg equilibrium exact test is only of interest to geneticists, Fisher's exact test has wider application. Thus, we are preparing another paper which discusses these algorithms in more detail, with proofs of numerical error bounds and a full explanation of how the Fisher's exact test algorithm extends to larger tables.
Haplotype block estimation
It can be useful to divide the genome into blocks of variants which appear to be inherited together most of the time, since observed recombination patterns are substantially more "blocklike" than would be expected under a model of uniform recombination [21]. PLINK 1.0's –blocks command implements a method of identifying these haplotype blocks by Gabriel et al. [22]. (More precisely, it is a restricted port of Haploview's [23] implementation of the method).
This method is based on 90% confidence intervals (as defined by Wall and Pritchard [21]) for Lewontin's D′ disequilibrium statistic for pairs of variants. Depending on the confidence interval's boundaries, a pair of variants is classified as "strong linkage disequilibrium (LD)", "strong evidence for historical recombination", or "inconclusive"; then, contiguous groups of variants where "strong LD" pairs outnumber "recombination" pairs by more than 19 to 1 are greedily selected, starting with the longest base-pair spans.
PLINK 1.9 accelerates this in several ways:
Estimation of diplotype frequencies and maximum-likelihood D′ has been streamlined. Bit population counts are used to fill the contingency table; then we use the analytic solution to Hill's diplotype frequency cubic equation [24,25] and only compute and compare log likelihoods in this step when multiple solutions to the equation are in the valid range.
90% confidence intervals were originally estimated by computing relative likelihoods at 101 points (corresponding to D′=0,D′=0.01,…,D′=1) and checking where the resulting cumulative distribution function (cdf) crossed 5% and 95%. However, the likelihood function rarely has more than one extreme point in (0,1) (and the full solution to the cubic equation reveals the presence of additional extrema); it is usually possible to exploit this unimodality to establish good bounds on key cdf values after evaluating just a few likelihoods. In particular, many confidence intervals can be classified as "recombination" after inspection of just two of the 101 points; see Figure 3.
Rapid classification of "recombination" variant pairs. This is a plot of 101 equally spaced D' log-likelihoods for (rs58108140, rs140337953) in 1000 Genomes phase 1, used in Gabriel et al.'s method of identifying haplotype blocks. Whenever the upper end of the 90% confidence interval is smaller than 0.90 (i.e. the rightmost 11 likelihoods sum to less than 5% of the total), we have strong evidence for historical recombination between the two variants. After determining that L(D′=x) has only one extreme value in [0, 1] and that it's between 0.39 and 0.40, confirming L(D′=0.90)<L(D′=0.40)/220 is enough to finish classifying the variant pair (due to monotonicity: L(D′=0.90)≥L(D′=0.91)≥…≥L(D′=1.00)); evaluation of the other 99 likelihoods is now skipped in this case. The dotted horizontal line is at L(D′=0.40)/220.
Instead of saving the classification of every variant pair and looking up the resulting massive table at a later point, we just update a small number of "strong LD pairs within last k variants" and "recombination pairs within last k variants" counts while processing the data sequentially, saving only final haploblock candidates. This reduces the amount of time spent looking up out-of-cache memory, and also allows much larger datasets to be processed.
Since "strong LD" pairs must outnumber "recombination" pairs by 19 to 1, it does not take many "recombination" pairs in a window before one can prove no haploblock can contain that window. When this bound is crossed, we take the opportunity to entirely skip classification of many pairs of variants.
Most of these ideas are implemented in haploview_blocks_classify() and haploview_blocks() in plink_ld.c. The last two optimizations were previously implemented in Taliun's "LDExplorer" R package [26].
Coordinate-descent LASSO
PLINK 1.9 includes a basic coordinate-descent LASSO implementation [27] (–lasso), which can be useful for phenotypic prediction and related applications. See Vattikuti et al. for discussion of its theoretical properties [28].
Newly integrated third-party software
PLINK 1.0 commands
Many teams have significantly improved upon PLINK 1.0's implementations of various commands and made their work open source. In several cases, their innovations have been integrated into PLINK 1.9; examples include
Pahl et al.'s PERMORY algorithm for fast permutation testing [29],
Wan et al.'s BOOST software for fast epistasis testing [30],
Ueki, Cordell, and Howey's –fast-epistasis variance correction and joint-effects test [31,32],
Taliun, Gamper, and Pattaro's optimizations to Gabriel et al.'s haplotype block identification algorithm (discussed above) [26], and
Pascal Pons's winning submission to the GWAS Speedup logistic regression crowdsourcing contest [33]. (The contest was designed by Po-Ru Loh, run by Babbage Analytics & Innovation and TopCoder, and subsequent analysis and code preparation were performed by Andrew Hill, Ragu Bharadwaj, and Scott Jelinsky. A manuscript is in preparation by these authors and Iain Kilty, Kevin Boudreau, Karim Lakhani and Eva Guinan.)
In all such cases, PLINK's citation instructions direct users of the affected functions to cite the original work.
Multithreaded gzip
For many purposes, compressed text files strike a good balance between ease of interpretation, loading speed, and resource consumption. However, the computational cost of generating them is fairly high; it is not uncommon for data compression to take longer than all other operations combined. To make a dent in this bottleneck, we have written a simple multithreaded compression library function based on Mark Adler's excellent pigz program [34], and routed most of PLINK 1.9's gzipping through it. See parallel_compress() in pigz.c for details.
Import and export of Variant Call Format (VCF) and Oxford-formatted data
PLINK 1.9 can import data from Variant Call Format (–vcf), binary VCF (–bcf), and Oxford-format (–data, –bgen) files. However, since it cannot handle genotype likelihoods, phase information or variants with more than two alleles, the import process can be quite lossy. Specifically,
With Oxford-format files, genotype likelihoods smaller than 0.9 are normally treated as missing calls, and the rest are treated as hard calls. –hard-call-threshold can be used to change the threshold, or request independent pseudorandom calls based on the likelihoods in the file.
Phase is discarded.
By default, when a VCF variant has more than one alternate allele, only the most common alternate is retained; all other alternate calls are converted to missing. –biallelic-only can be used to skip variants with multiple alternate alleles.
Export to these formats is also possible, via –recode vcf and –recode oxford.
Unplaced contig and nonhuman species support
When the –allow-extra-chr or –aec flag is used, PLINK 1.9 allows datasets to contain unplaced contigs or other arbitrary chromosome names, and most commands will handle them in a reasonable manner. Also, arbitrary nonhuman species (with haploid or diploid genomes) can now be specified with –chr-set.
Command-line help
To improve the experience of using PLINK interactively, we have expanded the –help flag's functionality. When invoked with no parameters, it now prints an entire mini-manual. Given keyword(s), it instead searches for and prints mini-manual entries associated with those keyword(s), and handles misspelled keywords and keyword prefixes in a reasonable manner.
A comment on within-family analysis
Most of our discussion has addressed computational issues. However, there is one methodological issue that deserves a brief comment. The online documentation of PLINK 1.07 weighed the pros and cons of its permutation procedure for within-family analysis of quantitative traits (QFAM) with respect to the standard quantitative transmission disequilibrium test (QTDT) [35]. It pointed out that likelihood-based QTDT enjoyed the advantages of computational speed and increased statistical power. However, a comparison of statistical power is only meaningful if both procedures are anchored to the same Type 1 error rate with respect to the null hypothesis of no linkage with a causal variant, and Ewens et al. has shown that the QTDT is not robust against certain forms of confounding (population stratification) [36]. On the other hand, the validity of a permutation procedure such as QFAM only depends on the applicability of Mendel's laws. When this nicety is combined with the vast speedup of permutation in PLINK 1.9, a given user may now decide to rate QFAM more highly relative to QTDT when considering available options for within-family analysis.
Performance comparisons
In the following tables, running times are collected from seven machines operating on three datasets.
"Mac-2" denotes a MacBook Pro with a 2.8 Ghz Intel Core 2 Duo processor and 4GB RAM running OS X 10.6.8.
"Mac-12" denotes a Mac Pro with two 2.93 Ghz Intel 6-core Xeon processors and 64GB RAM running OS X 10.6.8.
"Linux32-2" denotes a machine with a 2.4 Ghz Intel Core 2 Duo E6600 processor and 1GB RAM running 32-bit Ubuntu Linux.
"Linux32-8" denotes a machine with a 3.4 Ghz Intel Core i7-3770 processor (8 cores) and 8GB RAM running 32-bit Ubuntu Linux.
"Linux64-512" denotes a machine with sixty-four AMD 8-core Opteron 6282 SE processors and 512GB RAM running 64-bit Linux.
"Win32-2" denotes a laptop with a 2.4 Ghz Intel Core i5-2430 M processor (2 cores) and 4GB RAM running 32-bit Windows 7 SP1.
"Win64-2" denotes a machine with a 2.3 Ghz Intel Celeron G1610T processor (2 cores) and 8GB RAM running 64-bit Windows 8.
"synth1" refers to a 1000 sample, 100000 variant synthetic dataset generated with HAPGEN2 [37], while "synth1p" refers to the same dataset after one round of –indep-pairwise 50 5 0.5 pruning (with 76124 markers remaining). For case/control tests, PLINK 1.9's –tail-pheno 0 command was used to downcode the quantitative phenotype to case/control.
"synth2" refers to a 4000 case, 6000 control synthetic dataset with 88025 markers on chromosomes 19-22 generated by resampling HapMap and 1000 Genomes data with simuRare [38] and then removing monomorphic loci. "synth2p" refers to the same dataset after one round of –indep-pairwise 700 70 0.7 pruning (with 71307 markers remaining).
"1000g" refers to the entire 1092 sample, 39637448 variant 1000 Genomes project phase 1 dataset [39]. "chr1" refers to chromosome 1 from this dataset, with 3001739 variants. "chr1snp" refers to chromosome 1 after removal of all non-SNPs and one round of –indep-pairwise 20000 2000 0.5 pruning (798703 markers remaining). Pedigree information was not added to these datasets before our tests.
All times are in seconds. To reduce disk-caching variance, timing runs are preceded by "warmup" commands like plink --freq. PLINK 1.07 was run with the --noweb flag. "nomem" indicates that the program ran out of memory and there was no low-memory mode or other straightforward workaround. A tilde indicates that runtime was extrapolated from several smaller problem instances.
Initialization and basic I/O
Table 1 displays execution times for plink --freq, one of the simplest operations PLINK can perform. These timings reflect fixed initialization and I/O overhead. (Due to the use of warmup runs, they do not include disk latency.)
Table 1 --freq times (sec)
Identity-by-state matrices, complete linkage clustering
The PLINK 1.0 --cluster --matrix flag combination launches an identity-by-state matrix calculation and writes the result to disk, and then performs complete linkage clustering on the data; when --ppc is added, a pairwise population concordance constraint is applied to the clustering process. As discussed earlier, PLINK 1.9 employs an XOR/bit population count algorithm which speeds up the matrix calculation by a large constant factor; the computational complexity of the clustering algorithm has also been reduced, from O(n³) to O(n² log n). (Further improvement of clustering complexity, to O(n²), is possible in some cases [40].)
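The core of the matrix speedup is easy to sketch. Assuming two samples' genotypes have been packed into machine words (the real implementation, described in Additional file 1, additionally handles PLINK's 2-bit genotype encoding and missing calls, and uses vectorized SSE2 popcounts), a Hamming-type distance reduces to an XOR followed by a bit population count:

def hamming_popcount(words_a, words_b):
    # XOR leaves a 1 bit wherever the two bit patterns differ;
    # counting those bits gives the number of mismatches.
    return sum(bin(x ^ y).count("1") for x, y in zip(words_a, words_b))

a = [0b1010101010101010, 0b1111000011110000]
b = [0b1010101010100110, 0b1111000011110000]
print(hamming_popcount(a, b))  # -> 2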
In Table 2, we compare PLINK 1.07 and PLINK 1.9 execution times under three scenarios: identity-by-state (IBS) matrix calculation only (--cluster --matrix --K [sample count - 1] in PLINK 1.07, --distance ibs square in PLINK 1.9), IBS matrix + standard clustering (--cluster --matrix for both versions), and identity-by-descent (IBD) report generation (--Z-genome).
Table 2 Identity-by-state (Hamming distance) and complete linkage clustering times (sec)
(Note that newer algorithms such as BEAGLE's fastIBD [41] generate more accurate IBD estimates than PLINK --Z-genome. However, the --Z-genome report contains other useful information.)
Genomic relationship matrices
GCTA's --make-grm-bin command (--make-grm in early versions) calculates the variance-standardized genomic relationship matrix used by many of its other commands. The latest implementation as of this writing (v1.24) is very fast, but only runs on 64-bit Linux, uses single- instead of double-precision arithmetic, and has a high memory requirement.
PLINK 1.9's implementation of this calculation is designed to compensate for GCTA 1.24's limitations—it is cross-platform, works in low-memory environments, and uses double-precision arithmetic while remaining within a factor of 2–5 of GCTA's speed. See Table 3 for timing data. The comparison is with GCTA 1.24 on 64-bit Linux, and v1.02 elsewhere.
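For readers unfamiliar with the statistic itself, the variance-standardized relationship matrix can be written down in a few lines of NumPy (a toy sketch only: it ignores missing genotypes, assumes every variant is polymorphic, and is not how either GCTA or PLINK implements the computation):

import numpy as np

def grm(G):
    # G: n_samples x n_variants matrix of allele counts (0, 1, 2)
    p = G.mean(axis=0) / 2.0                          # per-variant allele frequency
    X = (G - 2.0 * p) / np.sqrt(2.0 * p * (1.0 - p))  # standardize each variant
    return X @ X.T / G.shape[1]                       # n_samples x n_samples matrix

G = np.array([[0, 1, 2, 1],
              [1, 1, 0, 2],
              [2, 0, 1, 1]], dtype=float)
print(grm(G).round(3))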
Table 3 Genomic relationship matrix calculation times (sec)
Linkage disequilibrium-based variant pruning
The PLINK 1.0 --indep-pairwise command is frequently used in preparation for analyses which assume approximate linkage equilibrium. In Table 4, we compare PLINK 1.07 and PLINK 1.9 execution times for some reasonable parameter choices. The r² threshold for "synth2" was chosen to make the "synth1p" and "synth2p" pruned datasets contain a similar number of SNPs, so Tables 2 and 3 could clearly demonstrate scaling with respect to sample size.
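The pruning logic itself can be outlined briefly (a rough Python sketch only: PLINK computes r² from maximum-likelihood haplotype frequencies rather than the raw genotype correlation used here, and its rule for choosing which variant of an offending pair to drop differs in detail):

import numpy as np

def indep_pairwise(G, window, step, r2_max):
    # G: n_samples x n_variants allele-count matrix; returns a keep-mask.
    m = G.shape[1]
    keep = np.ones(m, dtype=bool)
    for start in range(0, m, step):
        idx = [i for i in range(start, min(start + window, m)) if keep[i]]
        for a in range(len(idx)):
            for b in range(a + 1, len(idx)):
                i, j = idx[a], idx[b]
                if keep[i] and keep[j]:
                    r = np.corrcoef(G[:, i], G[:, j])[0, 1]
                    if r * r > r2_max:
                        keep[j] = False  # drop the later variant of the pair
    return keep

rng = np.random.default_rng(1)
G = rng.integers(0, 3, size=(50, 200)).astype(float)
# Aggressively low threshold, only so that random data shows some pruning.
print(int(indep_pairwise(G, window=50, step=5, r2_max=0.05).sum()), "variants kept")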
Table 4 --indep-pairwise runtimes (sec)
Table 5 demonstrates the impact of our rewrite of --blocks. Due to a minor bug in PLINK 1.0's handling of low-MAF variants, we pruned each dataset to contain only variants with MAF ≥0.05 before running --blocks. 95506 markers remained in the "synth1" dataset, and 554549 markers remained in "chr1". A question mark indicates that the extrapolated runtime may not be valid since we suspect Haploview or PLINK 1.07 would have run out of memory before finishing.
Table 5 --blocks runtimes (sec)
Association analysis max(T) permutation tests
PLINK 1.0's basic association analysis commands were quite flexible, but the powerful max(T) permutation test suffered from poor performance. PRESTO [42] and PERMORY introduced major algorithmic improvements (including bit population count) which largely solved the problem. Table 6 demonstrates that PLINK 1.9 successfully extends the PERMORY algorithm to the full range of PLINK 1.0's association analyses, while making Fisher's exact test practical to use in permutation tests. (There is no 64-bit Windows PERMORY build, so the comparisons on the Win64-2 machine are between 64-bit PLINK and 32-bit PERMORY.)
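The adjustment step at the heart of max(T) is compact (sketch only: the expensive part, recomputing per-variant statistics for thousands of label permutations, is precisely what the bit population count machinery mentioned above accelerates, and adjusted p-values are conventionally reported as (R+1)/(N+1) rather than the raw fraction used here):

import numpy as np

def maxT_adjusted_p(observed, perm_stats):
    # observed: statistic per variant; perm_stats: n_perm x n_variants array of
    # statistics recomputed after permuting phenotype labels.
    perm_max = perm_stats.max(axis=1)            # one max(T) value per permutation
    return np.array([(perm_max >= t).mean() for t in observed])

rng = np.random.default_rng(0)
observed = np.array([2.5, 0.8, 4.1])
perm_stats = np.abs(rng.normal(size=(1000, 3)))  # stand-in for permuted statistics
print(maxT_adjusted_p(observed, perm_stats))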
Table 6 Association analysis max(T) permutation test times (sec)
PLINK 2.0 design
Despite its computational advances, we recognize that PLINK 1.9 can ultimately still be an unsatisfactory tool for working with imputed genomic data, due to the limitations of the PLINK 1 binary file format. To address this, we designed a new core file format capable of representing most of the information emitted by modern imputation tools, which is the cornerstone of our plans for PLINK 2.0.
Multiple data representations
As discussed earlier, PLINK 1 binary is inadequate in three ways: likelihoods strictly between 0 and 1 cannot be represented, phase information cannot be stored, and variants are limited to two alleles. This can be addressed by representing all calls probabilistically, and introducing a few other extensions. Unfortunately, this would make PLINK 2.0's representation of PLINK 1-format data so inefficient that it would amount to a serious downgrade from PLINK 1.9 for many purposes.
Therefore, our new format defines several data representations, one of which is equivalent to PLINK 1 binary, and allows different files, or even variants within a single file, to use different representations. To work with this, PLINK 2.0 will include a translation layer which allows individual functions to assume a specific representation is used. As with the rest of PLINK's source code, this translation layer will be GPLv3-licensed open source; and unlike most of the other source code, we are explicitly designing it to be usable as a standalone library. PLINK 2.0 will also be able to convert files/variants from one data representation to another, making it practical for third-party tools lacking access to the library to demand a specific representation.
Reference vs. alternate alleles
The now-ubiquitous VCF file format requires reference alleles to be distinguished from alternate alleles, and an increasing number of software tools and pipelines do not tolerate scrambling of the two. This presents an interoperability problem for PLINK: while it was theoretically possible to handle binary data with PLINK 1.0 in a manner that preserved the reference vs. alternate allele distinction when it was originally present, with constant use of --keep-allele-order and related flags, doing so was inconvenient and error-prone, especially since the accompanying native .ped/.map and .tped/.tfam text formats had no place to store that information. PLINK 1.9's --a2-allele flag, which can import that information from a VCF file, provides limited relief, but it is still necessary for users to fight against the program's major/minor-allele based design.
We aim to solve this problem for good in PLINK 2.0. The file format explicitly defines reference vs. alternate alleles, and this information will be preserved across runs by default. In addition, the file format will include a flag distinguishing provisional reference allele assignments from those derived from an actual reference genome. When PLINK 2.0 operates on .ped/.map or similar data lacking a reference vs. alternate distinction, it will treat a highest-frequency allele as the reference, while flagging it as a provisional assignment. When a file with flagged-as-provisional reference alleles is merged with another file with unflagged reference alleles, the unflagged reference allele assignments take precedence. (Merges involving conflicting unflagged reference alleles will fail unless the user specifies which source file takes precedence.) It will also be straightforward to import real reference allele assignments with an analogue of --a2-allele.
PLINK 1.9 demonstrates the power of a weak form of compressive genomics [43]: by using bit arithmetic to perform computation directly on compressed genomic data, it frequently exhibits far better performance than programs which require an explicit decompression step. But its "compressed format" is merely a tight packing which does not support the holy grail of true sub-linear analysis.
To do our part to make "strong" sub-linear compressive genomics a reality, the PLINK 2 file format will introduce support for "deviations from most common value" storage of low-MAF variants. For datasets containing many samples, this captures much of the storage efficiency benefit of having real reference genomes available, without the drawback of forcing all programs operating on the data to have access to a library of references. Thanks to PLINK 2.0's translation layer and file conversion facilities, programmers will be able to ignore this feature during initial development of a tool, and then work to exploit it after basic functionality is in place.
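The idea behind the "deviations from most common value" representation can be conveyed with a toy encoder/decoder (the actual PLINK 2 byte-level layout is more involved and is not reproduced here):

def encode_rare(genotypes, common=0):
    # Store only (sample index, genotype) pairs that differ from the common value.
    return [(i, g) for i, g in enumerate(genotypes) if g != common]

def decode_rare(deviations, n_samples, common=0):
    out = [common] * n_samples
    for i, g in deviations:
        out[i] = g
    return out

calls = [0, 0, 0, 1, 0, 0, 2, 0]
enc = encode_rare(calls)                   # [(3, 1), (6, 2)]
assert decode_rare(enc, len(calls)) == calls

For a low-MAF variant the deviation list is short, so storage shrinks roughly in proportion to the minor allele count rather than the sample count.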
We note that LD-based compression of variant groups is also possible, and Sambo's SNPack software [44] applies this to the PLINK 1 binary format. We do not plan to support this in PLINK 2.0 due to the additional software complexity required to handle probabilistic and multiallelic data, but we believe this is a promising avenue for development and look forward to integrating it in the future.
Remaining limitations
PLINK 2.0 is designed to meet the needs of tomorrow's genome-wide association studies and population-genetics research; in both contexts, it is appropriate to apply a single genomic coordinate system across all samples, and preferred sample sizes are large enough to make computational efficiency a serious issue.
Whole-exome and whole-genome sequencing also enables detailed study of structural variations which defy clean representation under a single coordinate system; and the number of individuals in such studies is typically much smaller than the tens or even hundreds of thousands which are sometimes required for effective GWAS. There are no plans to make PLINK suitable for this type of analysis; we strongly recommend the use of another software package, such as PLINK/SEQ [45], which is explicitly designed for it. This is why the PLINK 2 file format will still be substantially less expressive than VCF.
An important consequence is that, despite its ability to import and export VCF files, PLINK should not be used for management of genomic data which will be subject to both types of analysis, because it discards all information which is not relevant for its preferred type. However, we will continue to extend PLINK's ability to interpret VCF-like formats and interoperate with other popular software.
Availability and requirements
Project name: Second-generation PLINK
Project (source code) home page: https://www.cog-genomics.org/plink2/ (https://github.com/chrchang/plink-ng)
Operating systems: Linux (32/64-bit), OS X (64-bit Intel), Windows (32/64-bit)
Programming language: C, C++
Other requirements (when recompiling): GCC version 4; a few functions also require LAPACK 3.2
License: GNU General Public License version 3.0 (GPLv3)
Any restrictions to use by non-academics: none
Availability of supporting data
The test data and the source code snapshots supporting the results of this article are available in the GigaScience repository, GigaDB [8].
PLINK:
The software toolset that is the main subject of this paper. The name was originally shorthand for "population linkage"
BEAGLE:
A software package capable of high-accuracy haplotype phasing, genotype imputation, and identity-by-descent estimation, developed by Browning [2]
GCTA:
Genome-wide Complex Trait Analysis. This refers to both the statistical method and the software implementation discussed in [7]
VCF:
Variant Call Format [5]
x86:
A family of backward compatible instruction set architectures based on the Intel 8086 CPU
IBS:
Identity-by-state. A simple measure of genomic similarity, equal to the number of identical alleles divided by the number of observations
popcount:
Bit population count. The number of '1' bits in a bit vector
XOR:
Exclusive-or. A binary logical operation that evaluates to true if exactly one of its arguments is true
SSE2:
Streaming SIMD Extensions 2. A SIMD (single instruction, multiple data) processor supplementary instruction set first introduced by Intel with the initial version of the Pentium 4 in 2001
GPU:
Graphics processing unit
HWE:
Hardy-Weinberg equilibrium
SNP:
Single-nucleotide polymorphism
FEXACT:
A network algorithm for evaluating Fisher's exact test p-values, developed by Mehta et al. [15,16]
LD:
Linkage disequilibrium
PERMORY:
A software package designed to perform efficient permutation tests for large-scale genetic data sets, developed by Pahl et al. [29]
GWAS:
Genome-Wide Association Study
QFAM:
A family-based quantitative trait association analysis procedure, introduced by PLINK 1.0, which combines a simple linear regression of phenotype on genotype with a special permutation test which corrects for family structure
QTDT:
Quantitative Transmission Disequilibrium Tests, developed primarily by Abecasis et al. [35]
GHz:
Gigahertz
GB:
Gigabyte
I/O:
Input/output
MAF:
Minor allele frequency. Frequency of the least common allele that is still present in a population
GPLv3:
GNU General Public License, version 3
Purcell S, Neale B, Todd-Brown K, Thomas L, Ferreira M, Bender D, et al. PLINK: a tool set for whole-genome association and population-based linkage analyses. Am J Hum Genet. 2007; 81:559–75.
Browning B, Browning S. Improving the accuracy and efficiency of identity by descent detection in population data. Genetics. 2013; 194:459–71.
Howie B, Donnelly P, Marchini J. A flexible and accurate genotype imputation method for the next generation of genome-wide association studies. PLoS Genet. 2009; 5:1000529.
McKenna A, Hanna M, Banks E, Sivachenko A, Cibulskis K, Kernytsky A, et al. The Genome Analysis Toolkit: a MapReduce framework for analyzing next-generation DNA sequencing data. Genome Res. 2010; 20:1297–303.
Danecek P, Auton A, Abecasis G, Albers C, Banks E, DePristo M, et al. The variant call format and VCFtools. Bioinformatics. 2011; 27:2156–8.
Li H, Handsaker B, Wysoker A, Fennell T, Ruan J, Homer N, 1000 Genome Project Data Processing Subgroup, et al. The Sequence Alignment/Map format and SAMtools. Bioinformatics. 2009; 25:2078–9.
Yang J, Lee S, Goddard M, Visscher P. GCTA: a tool for genome-wide complex trait analysis. Am J Hum Genet. 2011; 88:76–82.
Chang C, Chow C, Tellier L, Vattikuti S, Purcell S, Lee J. Software and Supporting Material for "Second-generation PLINK: Rising to the Challenge of Larger and Richer Datasets". GigaScience Database. http://dx.doi.org/10.5524/100116.
Dalke A. Update: Faster Population Counts. http://www.dalkescientific.com/writings/diary/archive/2011/11/02/faster_popcount_update.html.
Lee V, Kim C, Chhugani J, Deisher M, Kim D, Nguyen A, et al. Debunking the 100x gpu vs. cpu myth: an evaluation of throughput computing on cpu and gpu. In: Proceedings of the 37th Annual International Symposium on Computer Architecture: 19-23 June 2010. Saint-Malo, France,: ACM: 2010. p. 451–460.
Haque I, Pande V, Walters W. Anatomy of high-performance 2d similarity calculations. J Chem Inf Model. 2011; 51:2345–51.
Hardy H. Mendelian proportions in a mixed population. Science. 1908; 28:49–50.
Wigginton J, Cutler D, Abecasis G. A note on exact tests of Hardy-Weinberg equilibrium. Am J Hum Genet. 2005; 76:887–93.
Guo S, Thompson E. Performing the exact test of Hardy-Weinberg proportion for multiple alleles. Biometrics. 1992; 48:361–72.
Mehta C, Patel N. Algorithm 643: FEXACT: a FORTRAN subroutine for Fisher's exact test on unordered r×c contingency tables. ACM Trans Math Softw. 1986; 12:154–61.
Clarkson D, Fan Y, Joe H. A remark on algorithm 643: FEXACT: an algorithm for performing Fisher's exact test in r×c contingency tables. ACM Trans Math Softw. 1993; 19:484–8.
Requena F, Martín Ciudad N. A major improvement to the network algorithm for Fisher's exact test in 2×c contingency tables. J Comp Stat & Data Anal. 2006; 51:490–8.
Chang C. Standalone C/C++ Exact Statistical Test Functions. https://github.com/chrchang/stats.
Lydersen S, Fagerland M, Laake P. Recommended tests for association in 2 ×2 tables. Statist Med. 2009; 28:1159–75.
Graffelman J, Moreno V. The mid p-value in exact tests for hardy-weinberg equilibrium. Stat Appl Genet Mol Bio. 2013; 12:433–48.
Wall J, Pritchard J. Assessing the performance of the haplotype block model of linkage disequilibrium. Am J Hum Genet. 2003; 73:502–15.
Gabriel S, Schaffner S, Nguyen H, Moore J, Roy J, Blumenstiel B, et al. The structure of haplotype blocks in the human genome. Science. 2002; 296:2225–9.
Barrett J, Fry B, Maller J, Daly M. Haploview: analysis and visualization of LD and haplotype maps. Bioinformatics. 2005; 21:263–5.
Hill W. Estimation of linkage disequilibrium in randomly mating populations. Heredity. 1974; 33:229–39.
Gaunt T, Rodríguez S, Day I. Cubic exact solutions for the estimation of pairwise haplotype frequencies: implications for linkage disequilibrium analyses and a web tool 'cubex'. BMC Bioinformatics. 2007; 8:428.
Taliun D, Gamper J, Pattaro C. Efficient haplotype block recognition of very long and dense genetic sequences. BMC Bioinformatics. 2014; 15:10.
Friedman J, Hastie T, Höfling H, Tibshirani R. Pathwise coordinate optimization. Ann Appl Stat. 2007; 1:302–32.
Vattikuti S, Lee J, Chang C, Hsu S, Chow C. Applying compressed sensing to genome-wide association studies. GigaScience. 2014; 3:10.
Steiß V, Letschert T, Schäfer H, Pahl R. PERMORY-MPI: a program for high-speed parallel permutation testing in genome-wide association studies. Bioinformatics. 2012; 28:1168–9.
Wan X, Yang C, Yang Q, Xue H, Fan X, Tang N, et al. BOOST: a fast approach to detecting gene-gene interactions in genome-wide case-control studies. Am J Hum Genet. 2010; 87:325–40.
Ueki M, Cordell H. Improved statistics for genome-wide interaction analysis. PLoS Genet. 2012; 8:1002625.
Howey R. CASSI: Genome-Wide Interaction Analysis Software. http://www.staff.ncl.ac.uk/richard.howey/cassi.
GWASSpeedup Problem Statement. http://community.topcoder.com/longcontest/?module=ViewProblemStatement&rd=15637&pm=12525.
Adler M. Pigz: Parallel Gzip. http://zlib.net/pigz/.
Abecasis G, Cardon L, Cookson W. A general test of association for quantitative traits in nuclear families. Am J Hum Genet. 2000; 66:279–92.
Ewens W, Li M, Spielman R. A review of family-based tests for linkage disequilibrium between a quantitative trait and a genetic marker. PLoS Genet. 2008; 4:1000180.
Su Z, Marchini J, Donnelly P. HAPGEN2: simulation of multiple disease SNPs. Bioinformatics. 2011; 27:2304–5.
Xu Y, Wu Y, Song C, Zhang H. Simulating realistic genomic data with rare variants. Genet Epidemiol. 2013; 37:163–72.
The 1000 Genomes Project Consortium. An integrated map of genetic variation from 1,092 human genomes. Nature. 2012; 491:56–65.
Defays D. An efficient algorithm for a complete link method. Comput J. 1977; 20:364–6.
Browning B, Browning S. A fast, powerful method for detecting identity by descent. Am J Hum Genet. 2011; 88:173–82.
Browning B. PRESTO: rapid calculation of order statistic distributions and multiple-testing adjusted p-values via permutation for one and two-stage genetic association studies. BMC Bioinformatics. 2008; 9:309.
Loh P, Baym M, Berger B. Compressive genomics. Nat Biotechnol. 2012; 30:627–30.
Sambo F, Di Camillo B, Toffolo G, Cobelli C. Compression and fast retrieval of SNP data. Bioinformatics. 2014; 30:495.
PLINK/SEQ: A Library for the Analysis of Genetic Variation Data. https://atgu.mgh.harvard.edu/plinkseq/.
We thank Stephen D.H. Hsu for helpful discussions. We also continue to be thankful to PLINK 1.9 users who perform additional testing of the program, report bugs, and make useful suggestions.
Christopher Chang and Laurent Tellier were supported by BGI Hong Kong and Shenzhen Municipal Government of China grant CXB201108250094A. Carson Chow and Shashaank Vattikuti were supported by the Intramural Research Program of the NIH, NIDDK.
Complete Genomics, 2071 Stierlin Court, Mountain View, 94043, CA, USA
Christopher C Chang
BGI Cognitive Genomics Lab, Building No. 11, Bei Shan Industrial Zone, Yantian District, Shenzhen, 518083, China
Christopher C Chang & Laurent CAM Tellier
Mathematical Biology Section, NIDDK/LBM, National Institutes of Health, Bethesda, 20892, MD, USA
Carson C Chow, Shashaank Vattikuti & James J Lee
Bioinformatics Centre, University of Copenhagen, Copenhagen, 2200, Denmark
Laurent CAM Tellier
Stanley Center for Psychiatric Research, Broad Institute of MIT and Harvard, Cambridge, 02142, MA, USA
Shaun M Purcell
Division of Psychiatric Genomics, Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, 10029, NY, USA
Institute for Genomics and Multiscale Biology, Icahn School of Medicine at Mount Sinai, New York, 10029, NY, USA
Analytic and Translational Genetics Unit, Psychiatric and Neurodevelopmental Genetics Unit, Massachusetts General Hospital, Boston, 02114, MA, USA
Department of Psychology, University of Minnesota Twin Cities, Minneapolis, 55455, MN, USA
James J Lee
Carson C Chow
Shashaank Vattikuti
Correspondence to Christopher C Chang.
SMP and Ch C designed the software. Ch C drafted the manuscript and did most of the v1.9 C/C++ programming. Ca C, SV, and JJL drove early v1.9 feature development and wrote MATLAB prototype code. Ca C, LCAMT, SV, SMP, and JJL assisted with v1.9 software testing. All authors read and approved the final manuscript.
Additional file 1
Detailed description of software bit population count, as applied to identity-by-state computation.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Chang, C.C., Chow, C.C., Tellier, L.C. et al. Second-generation PLINK: rising to the challenge of larger and richer datasets. GigaSci 4, 7 (2015). https://doi.org/10.1186/s13742-015-0047-8
Whole-genome sequencing
High-density SNP genotyping
Computational statistics
Selected articles from the 17th Asia Pacific Bioinformatics Conference (APBC 2019): bioinformatics
Automatic localization and identification of mitochondria in cellular electron cryo-tomography using faster-RCNN
Ran Li1 na1,
Xiangrui Zeng2 na1,
Stephanie E. Sigmund3,
Ruogu Lin2,
Bo Zhou4,
Chang Liu5,
Kaiwen Wang5,
Rui Jiang1,
Zachary Freyberg6,
Hairong Lv1 &
Min Xu2
BMC Bioinformatics volume 20, Article number: 132 (2019) Cite this article
Cryo-electron tomography (cryo-ET) enables the 3D visualization of cellular organization in a near-native state, and it plays an important role in the field of structural cell biology. However, due to the low signal-to-noise ratio (SNR), large volume and high content complexity within cells, it remains difficult and time-consuming to localize and identify different components in cellular cryo-ET. To automatically localize and recognize in situ cellular structures of interest captured by cryo-ET, we proposed a simple yet effective automatic image analysis approach based on Faster-RCNN.
Our experimental results were validated using in situ cryo-ET-imaged mitochondria data. Our experimental results show that our algorithm can accurately localize and identify important cellular structures on both the 2D tilt images and the reconstructed 2D slices of cryo-ET. When run on the mitochondria cryo-ET dataset, our algorithm achieved Average Precision >0.95. Moreover, our study demonstrated that our customized pre-processing steps can further improve the robustness of our model performance.
In this paper, we proposed an automatic cryo-ET image analysis algorithm for localization and identification of different structures of interest in cells. It is the first Faster-RCNN-based method for localizing a cellular organelle in cryo-ET images, and it demonstrated high accuracy and robustness on the detection and classification tasks of intracellular mitochondria. Furthermore, our approach can be easily applied to detection tasks of other cellular structures as well.
In cells, most biological processes are dominated by intricate molecular assemblies and networks. Analyzing the structural features and spatial organization of those assemblies is essential for understanding cellular functions. Recently, cellular cryo-Electron Tomography (cryo-ET) has been developed as an approach to obtain 3D visualization of cellular structures at submolecular resolution and in a close-to-native state [1]. Cryo-ET has been proven to be a powerful technique for structural biology in situ and has been successfully applied to the study of many important structures, including vaults [2], Integrin Linked Kinase (ILK) [3], and the nuclear pore complex (NPC) [4]. However, the systematic structural analysis of cellular components in cryo-ET images remains challenging due to several factors including low signal-to-noise ratio (SNR), limited projection range (leading to the missing wedge effect) and a crowded intracellular environment composed of complex intracellular structures.
Given the critical roles played by mitochondria within mammalian cells, and the distinctive morphology of these organelles, we chose to examine mitochondria imaged by in situ cryo-ET [5]. The 3D visualization of mitochondria can provide insights into mitochondrial structure and functionalities. Therefore, methodological improvements in the detection and localization of mitochondria within complex in situ cryo-ET datasets may significantly improve accuracy of detection of these organelles and directly impact further structural analyses.
Localization of the subcellular structures of interest can facilitate subsequent study of specific macromolecular components within the selected structures [6]. Such localization can be performed through image segmentation, which is usually performed manually or by specifically designed heuristics. Although some visualization tools have been developed to facilitate these approaches, manual segmentation in cryo-ET images still requires large amounts of repetitive labor from researchers, and the results are subjective. On the other hand, automatic methods are fast and can produce consistent results. Contour-based methods like Watershed yield great results when the image complexity is low, but appear to be sensitive to noise [7]. Threshold-based methods, which usually generate a mask according to the density threshold, can be applied to foreground-background segmentation but still have difficulty in identifying different cellular components [8]. Recently, segmentation methods focusing on specific types of structures, including membranes, microtubules and filaments [9–11], have drawn a lot of attention. These methods perform well on specific cellular structures, but lack generality. To date, machine learning approaches to identify intracellular structures appear promising. Along these lines, we have developed unsupervised segmentation methods based on manually designed heuristic rules [12] and on clustering representative features [13]. Luengo et al. [14] proposed a supervised approach to classify each voxel with a trained classification model. However, both of these methods require manually designed features or rules, which might be time- and effort-consuming while having various limitations. Chen et al. developed another supervised segmentation method, taking advantage of the excellent feature-extraction capability of convolutional neural networks (CNN) [15]. But in this way, a separate CNN has to be trained for each type of structural feature, and the precise contours need to be manually annotated in the training data, which may not be trivial.
Our goal is to design a simple and generic method for automatic identification and localization of subcellular structures of interest within in situ cryo-ET images with weak annotations, which is different from existing segmentation-type methods and can greatly reduce the time and effort cost of detailed manual annotation. We aim to detect all objects of interest in an image and output the corresponding bounding boxes with class predictions simultaneously. The region-based convolutional neural network (RCNN) [16] lays the foundation for our goal: it generates region proposals using Selective Search, extracts features from all the proposals after normalization with CNNs, and finally feeds the features to a classifier and a regression layer simultaneously to obtain both classification scores and bounding box coordinates as output. Its latest incarnation, Faster RCNN [17], achieves almost real-time detection with a high degree of accuracy. Faster RCNN-based localization methods have been applied to biomedical imaging data such as breast mammography [18] and cellular fluorescence imaging [19].
In this work, we proposed an automatic identification and localization method based on Faster-RCNN, which is the first Faster-RCNN-based method for localizing a cellular organelle in cryo-ET images. Our algorithm is trained and validated on 2D projection images of a cryo-ET tomogram for localization and classification tasks of mitochondria. Our experimental results show that our algorithm is able to robustly predict the object's bounding box with classification scores. Moreover, we extended our study to 3D tomogram slices and achieved accurate and robust performance.
Our mitochondria identification and localization method is comprised of two main parts: (1) pre-processing to improve the quality of samples, and (2) object detection using Faster-RCNN. The input of our system is 2D projection images of a tomogram, and the output includes coordinates of the bounding boxes of objects of interest, the class of each object and the probability of the classification. A flowchart of our method is shown in Fig. 1. In this section, we will describe each part of our system in detail.
Flowchart of our Faster-RCNN model. The denoised input image is fed into the Conv layers to generate the feature map. Then, the region proposal network proposes potential regions that contain the object of interest. The proposal regions are passed to 1) a classifier for classification, and 2) a regressor to refine the bounding box location
Since biological samples are sensitive to radiation damage, only low-dose electrons can be used for electron microscopy imaging [6]. Compared to normal images, electron tomography images are usually noisier and have lower contrast. To make the images suitable for subsequent processing, we first perform noise reduction and contrast enhancement. To reduce noise, considering the edge features are often important for subcellular structures, we chose Bilateral Filtering [20], a nonlinear filtering method that preserves the original edges as much as possible. Bilateral Filtering considers the effects of both spatial distance and gray scale distance, and can be implemented by combining two Gaussian Filters. To improve local contrast and the definition of details, we use Histogram Equalization, which can also balance the brightness of different images.
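A minimal version of this preprocessing chain can be written with OpenCV. Note that the mapping of the σr/σd values above onto OpenCV's sigmaColor/sigmaSpace arguments, and the fixed neighborhood diameter used here for speed, are our assumptions for illustration rather than the exact code used in this work:

import cv2
import numpy as np

def preprocess(img_u8, sigma_color=1.2, sigma_space=100):
    # img_u8: 8-bit single-channel projection image.
    denoised = cv2.bilateralFilter(img_u8, d=9, sigmaColor=sigma_color,
                                   sigmaSpace=sigma_space)  # edge-preserving denoising
    return cv2.equalizeHist(denoised)                        # contrast enhancement

img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in for a real slice
out = preprocess(img)
print(out.shape, out.dtype)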
Object detection in 2D images
The main idea of our method is based on Faster RCNN [17], in which the four modules of feature extraction, proposal generation, RoI Pooling, classification and regression are organically combined to form an end-to-end object detection system.
Feature extraction is the first step of our method. The input of the deep convolutional neural network is the image I, and the output is the extracted feature map. These features will be shared by subsequent modules. The basic feature extraction network in our model, Resnet-50, is based on [21]. He et al. proposed this deep residual learning method in 2015 to make the deeper network train properly. The architecture of our network is shown in Fig. 2. The original Resnet-50 network is split into two parts in our model: part one including layers conv1 to conv4_x is used for extraction of shared features, and part two including layer conv5_x and upper layers further extracts features of proposals for the final classification and regression. The implementation of the model refers to the work of Yann Henon in 2017 [22].
Detailed Architecture of the Faster-RCNN model. The basic feature extraction network Resnet-50 is split into two parts in our model: 1) layers conv1 to conv4_x is used for extraction of shared features (in the shared layers), 2) layer conv5_x and upper layers further extracts features of proposals for the final classification and regression (in the classifier). And the RPN implemented with three convolutional layers generates proposals from the shared feature map
The feature extraction network is followed by a region proposal network (RPN). A window of size n×n slides over the feature map, and at each location where it stays, the features in the window are mapped to a low-dimensional vector, which will be used for object-background classification and proposal regression. At the same time, k region proposals centered on the sliding window in the original image are extracted according to k anchors, which are rectangular boxes of different shapes and sizes. Moreover, for each proposal, two probabilities for the classification and four parameters for the regression will be obtained, composing the final 6k outputs of the classification layer and the regression layer. The sliding window, classification layer and regression layer are all implemented using convolutional neural networks. In practice, we chose k=9 with 3 scales of 128², 256², and 512² pixels and 3 aspect ratios of 1:1, 1:2, and 2:1, as the default in [17]. Non-maximum suppression (NMS) was adopted with the IoU threshold at 0.7, while the maximum number of proposals produced by the RPN was 300.
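The anchor construction is easy to make concrete (a sketch only; the rounding and border-handling details of the reference implementation are omitted):

import numpy as np

def anchors_at(cx, cy, scales=(128, 256, 512), ratios=(1.0, 0.5, 2.0)):
    # Each anchor has area scale**2 and width/height ratio r; boxes are
    # returned as (x1, y1, x2, y2) centered on the sliding-window position.
    boxes = []
    for s in scales:
        for r in ratios:
            w, h = s * np.sqrt(r), s / np.sqrt(r)
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.array(boxes)

print(anchors_at(300, 300).shape)  # (9, 4): k = 3 scales x 3 aspect ratios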
Features of different scales are then integrated into feature maps of the same size (7×7 in our experiment) via RoI pooling layer, so that the features can be used in final fully connected classification and regression layers. For a region proposal of any size, like h×w, it will be divided into a fixed number, like H×W, of windows of size h/H×w/W. Then max pooling will be performed and a fixed-size (H×W) feature map will be obtained with the maximum of each window.
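A naive single-channel version of this pooling step illustrates the idea (real implementations operate on multi-channel feature maps and handle bin-boundary quantization more carefully):

import numpy as np

def roi_max_pool(patch, H=7, W=7):
    # Split an h x w feature patch into an H x W grid of windows and keep the
    # maximum of each window; assumes h >= H and w >= W.
    rows = np.array_split(np.arange(patch.shape[0]), H)
    cols = np.array_split(np.arange(patch.shape[1]), W)
    return np.array([[patch[np.ix_(r, c)].max() for c in cols] for r in rows])

patch = np.random.rand(23, 37)    # a proposal of arbitrary size
print(roi_max_pool(patch).shape)  # (7, 7)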
To train the whole model end-to-end, a multi-task loss function is proposed as follows [17].
$$ L\left(p,u,t^{u},v\right)=L_{cls}(p,u)+\lambda[u\geq 1 ]L_{loc}\left(t^{u},v\right) $$
where u is the ground truth label of the proposal, and \(v=\left(v_{x},v_{y},v_{w},v_{h}\right)\) represents the regression offset between the proposal and the ground truth. The output of the classification layer, \(p=\left(p_{0},p_{1},...,p_{K}\right)\), represents the probabilities of the proposal belonging to each one of the K+1 classes and \(t^{u}=\left (t_{x}^{u},t_{y}^{u},t_{w}^{u},t_{h}^{u}\right)\) represents the predicted regression offset for a proposal with label u. The loss function of the classification task is defined as:
$$ L_{cls}(p,u)=-\log p_{u}. $$
And the loss function of the regression is a robust L1 loss as follows:
$$ L_{loc}\left(t^{u},v\right)=\sum_{i\in {x,y,w,h}}smooth_{L1}\left(t_{i}^{u}-v_{i}\right). $$
$$ smooth_{L_{1}}\left(x\right)=\left\{ \begin{array}{ll} 0.5x^{2}, & \mathrm{if}\ |x|<1 \\ |x|-0.5, & \mathrm{otherwise} \end{array} \right. $$
The hyperparameter λ is used to control the balance between the two losses and is set to λ=1 in our experiment. Similarly, the loss function of the RPN during training is also defined in this form. In the training process, the RPN with the shared layers is trained first and then the classifier is trained using proposals generated by the RPN, with the initial weights for both networks given by a pretrained model on ImageNet [17, 23].
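In code, the two loss terms defined above are straightforward (a NumPy sketch of the per-proposal loss for illustration, not the batched Keras implementation used for training):

import numpy as np

def smooth_l1(x):
    x = np.abs(x)
    return np.where(x < 1.0, 0.5 * x ** 2, x - 0.5)

def proposal_loss(p, u, t_u, v, lam=1.0):
    # p: class probabilities; u: true label (0 = background); t_u, v: predicted
    # and ground-truth box offsets (x, y, w, h).
    cls = -np.log(p[u])
    loc = smooth_l1(np.asarray(t_u) - np.asarray(v)).sum() if u >= 1 else 0.0
    return cls + lam * loc

p = np.array([0.1, 0.9])  # background vs. mitochondrion
print(proposal_loss(p, 1, [0.10, -0.20, 0.05, 0.00], [0.0, 0.0, 0.0, 0.0]))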
Dataset and evaluation metrics
Data Acquisition: Tissue Culture: Rat INS-1E cells (gift of P. Maechler, Université de Genève) were cultured in RPMI 1640 medium supplemented with 2 mM L-glutamine (Life Technologies, Grand Island, NY), 5% heat-inactivated fetal bovine serum, 10 mM HEPES, 100 units/mL penicillin, 100 μg/mL streptomycin, 1 mM sodium pyruvate, and 50 μM b-Mercaptoethanol as described earlier (PMID: 14592952).
EM Grid Preparation: For cryo-ET imaging, INS-1E cells were plated onto either fibronectin-coated 200 mesh gold R2/1 Quantifoil grids or 200 mesh gold R2/2 London finder Quantifoil grids (Quantifoil Micro Tools GmbH, Jena, Germany) at a density of \(2\times 10^{5}\) cells/mL. Following 48 h incubation under conventional culture conditions in complete RPMI 1640 medium, grids were removed directly from culture medium and immediately plunge frozen in liquid ethane using a Vitrobot Mark IV (Thermo Fisher FEI, Hillsboro, OR).
Cryo-Electron Tomography: Tomographic tilt series for INS-1E cells were recorded on a FEI Polara F30 electron microscope (Thermo Fisher FEI) at 300kV with a tilt range of ±60° in 1.5° increments using the Gatan K2 Summit direct detector (Gatan, Inc.) in super-resolution mode at 2X binned to 2.6 Å/pixel; tilt series were acquired via SerialEM.
Datasets: We collected 9 cryo-ET tomograms (786 2D slices) contains mitochondria. 482 out of the 786 slices were selected and annotated manually via LabelImg [24]. Then, the 2D slices were randomly divided into training and testing set with a ratio of 5:1. Details of our dataset are shown in Table 1.
Table 1 Cryo-ET dataset properties
Metrics: To evaluate the performance of our model, we mainly use two metrics from common object detection and segmentation evaluation: AP (average precision) and F1 score. The definitions are as follows:
$$ AP=\int_{0}^{1} P(R)\,d(R) $$
$$ F_{1} \ score=\frac{2P \times R}{P+R} $$
where P represents precision, which indicates the ratio of the true positives to all predicted positives; R represents recall, which indicates the ratio of the true positives to all true elements. Neither precision nor recall alone is sufficient to fully evaluate the prediction performance. Therefore, the F1 score, defined as the weighted harmonic mean of precision and recall, is commonly used in the case where both of them need to be high enough. And AP, equivalent to the area under the precision-recall curve, may provide an overall evaluation of the model's performance at different precision/recall rates. As an object detection problem, the correctness of each sample prediction is not only related to classification, but also related to localization. The accuracy of localization is evaluated by IoU (Intersection over Union), which is defined as:
$$ IoU=\frac{S_{P} \cap S_{G}}{S_{P} \cup S_{G}} $$
where \(S_{P}\) is the predicted bounding box and \(S_{G}\) represents the ground truth, and IoU measures the degree of coincidence. In our experiments, different IoU thresholds (0.5, 0.6, 0.7, 0.8, and 0.9) are set, and those samples with mitochondria prediction labels and IoUs higher than the specific threshold are considered. The higher the IoU threshold, the higher the accuracy requirements for localization. Thus we can see the difference in the detection accuracy under different localization accuracy requirements, and judge the localization performance of our model. The precision, recall, F1 score and AP in our experiment are calculated.
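The box-level quantities used above reduce to a few lines (illustrative only; in practice AP is computed from the full ranked list of detections rather than single counts):

def iou(a, b):
    # a, b: boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union

def f1(tp, fp, fn):
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ~= 0.143
print(f1(tp=18, fp=2, fn=3))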
Data preprocessing and model training
The 2D projection images we acquired from the original tomograms have low SNR and contrast which interferes with subsequent identification and segmentation of intracellular features. Thus, the images are first denoised via a bilateral filter with σr=1.2 and σd=100, suppressing noise and retaining the original edge features as much as possible. This is followed by enhancement of contrast via histogram equalization which improves in the resolution of previously indistinguishable details. Figure 3 shows an example of two images before and after preprocessing. The preprocessing methods and parameters in our method were finally determined based on the single-image SNR estimated according to [25], gray-scale distribution histograms and visual effect of the image. Figure 4 shows SNR of the same image with different σd and σr and the performance of different preprocessing schemes. We found that performing histogram equalization first will increase the noise in the original image, and the contrast will be reduced again after filtering, failing to achieve the desired effect. Furthermore, we found that Gaussian filtering used for noise reduction cannot preserve the edge as well as Bilateral filtering.
a Original 2D projection images, b Images after noise reduction (Bilateral Filtering with σr=1.2 and σd=100), c Images after noise reduction and contrast adjustment
a Bilateral Filter + Histogram Equalization, b Gaussian Filter + Histogram Equalization, c Histogram Equalization + Bilateral Filter d SNR with different σd and σr
All the models in our experiments were trained and tested using Keras [26] with TensorFlow [27] as the back-end, using the optimizer Adam (Adaptive Moment Estimation) [28] with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\) and a learning rate of \(1\times 10^{-5}\) for both the RPN and the classifier. The 482 annotated slices were randomly split into a training set of 402 slices and a test set of 80 slices according to a ratio of 5:1. The model would be saved only if the loss after one epoch is less than the best loss before.
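The optimizer configuration quoted above corresponds to the following (shown with today's tf.keras API for illustration; the full RPN and classifier model definitions live in the keras-frcnn code base [22] and are not reproduced here):

from tensorflow.keras.optimizers import Adam

rpn_optimizer = Adam(learning_rate=1e-5, beta_1=0.9, beta_2=0.999)
classifier_optimizer = Adam(learning_rate=1e-5, beta_1=0.9, beta_2=0.999)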
Prediction performance
We trained the model on the training set and tested it on the test set. Figures 5 and 6 show the test results visually and quantitatively. In addition to the bounding box, our model also gives the most likely category of the object and the probability of it belonging to that category. In Fig. 5, the red bounding box is the manually annotated ground truth, and the blue box is predicted by the model. We notice that the predicted results and the ground truth are highly coincident, and even the regions that cannot be completely overlapped basically contain the entire mitochondria, which means that our system can achieve the goal of automatic identification and localization of mitochondria quite successfully. The area where the mitochondria is located can be separated from the outside by the bounding box, so as to eliminate the influence of the surrounding environment as much as possible, making it possible to analyze the internal structures in more detail.
Examples of detection results: the red boxes are ground truth, and the blue ones are the predicted bounding boxes. Data source: a Tomogram: Unstim_20k_mito1 (projection image 63), b Tomogram: Unstim_20k_mito2 (projection image 49), c Tomogram: HighGluc_Mito2 (projection image 47), d Tomogram: CTL_Fibro_mito1 (projection image 44), e Tomogram: HighGluc_Mito1 (projection image 48), f Tomogram: CHX + Glucose Stimulation A2 (projection image 13)
Prediction performance: a AP with different IoU threshold, b Precision-Recall curve with IoU threshold=0.7
In Fig. 6, we plotted the precision-recall curve and calculated the APs at different IoU thresholds to measure the detection performance. We noticed that when the IoU threshold is set to 0.7 and below, the AP is close to 1, which means that almost all samples were correctly predicted, indicating that our system can successfully identify the mitochondria in the picture. However, when the IoU threshold is increased to 0.9, the AP drops sharply to around 0.4, which indicates that our system still has some deficiencies in the accuracy of localization. The overlap between the predicted area and the ground truth area can be further improved, which can be an important aspect of our future work. The precision-recall curve for IoU thresholds of 0.7 is also given in Fig. 6. When the IoU threshold is 0.7, all positive samples can be correctly predicted while the precision requirement is not higher than 0.9, that is, all mitochondria can be found in that condition; even with a precision of 1, which means all samples predicted to be positive must be correct, 70% of the mitochondria can still be detected.
In addition, we compared the effect of preprocessing on the prediction results. It is noted that no matter how the IoU threshold is set, the AP value of the model without preprocessing is significantly lower than that of the model containing the preprocessing, which again shows that preprocessing is a necessary step for the overall system. Especially when the IoU threshold is 0.8, the system with or without preprocessing shows a great difference in the average precision of prediction, which indicates that the main contribution of preprocessing to the system is to further improve the accuracy of localization. For the model that does not include preprocessing, the predicted bounding box that has an IoU no less than 0.8 with ground truth is quite rare, and the average precision calculated in this situation is only 0.3. After the preprocessing step, it becomes common that IoU of the predicted bounding box and the ground truth reaches 0.8, resulting in an increase of the average precision to 0.95 and higher.
Source of error
In order to further analyze the performance of our method, we separately analyzed the prediction results of the system on 9 different in situ cryo-ET tomograms (Table 2), and studied the impact of different factors including the quality of the original image, the intactness of the mitochondria etc. The F1 score and AP remain calculated at an IoU threshold of 0.7. In most tomograms, our systems show high accuracy, consistent with the overall results. However, we also found that in INS_21_g3_t10, our system could not accurately detect mitochondria. Therefore, we analyzed the projected image from INS_21_g3_t10 (Fig. 7). We noticed that in all the 2D projection images from that tomogram, the mitochondria included are too small and the structure appeared incomplete, especially the internal structure, which is basically submerged in noise and hard to identify. Even after noise reduction and contrast adjustment, the details of the mitochondria in the image are still too blurred, causing strong interference in the extraction of features. We also calculated the SNR of the two-dimensional projection images in INS_21_g3_t10, which is approximately 0.06 on average. For reference, the SNR of the original projection image from Unstim_20k_mito1 we analyzed in Fig. 4 is 0.12, which is significantly higher than the images in INS_21_g3_t10. It is also worth noting that in Unstim_20k_mito1, the subject of the projection images is the mitochondria we need to detect, while in INS_21_g3_t10, the mitochondria only occupy a very small part of the image. As a result, other components of the image are calculated as signal which may be not that useful for our detection task, making the ratio of effective information to noise even lower than 0.06. This may explain why the detection performance of it is particularly unsatisfactory.
An example of projection images from tomogram INS_21_g3_t10 (in which the mitochondria is hard to detect): a Original image, b Image after noise reduction and contrast adjustment, c Projection image from M2236_Fibro_mito1
Table 2 Prediction results on different tomograms
In order to better study the influence of different tomograms on the accuracy of localization, mean Intersection over Union (mIoU) is calculated for each tomogram. It can be noted that, on average, mIoU is higher in the tomograms that contain complete mitochondria, that is, the localization accuracy is higher, although the highest mIoU comes from a tomogram containing incomplete mitochondria. We analyzed the characteristics of this tomogram and found that it is the only one where mitochondria do not appear circular or nearly circular, but instead possess a slanted strip shape (also shown in Fig. 7). Therefore, when the mitochondrion is marked with a rectangular box, the box occupies a larger area and contains more non-mitochondrial regions, which may make the prediction results coincide more easily with the ground truth. Therefore, in general, we can still conclude that complete mitochondria are more easily localized accurately. This is also consistent with our intuition that complete mitochondria have a complete outline of a bilayer membrane that approximates a circular shape, which provides a powerful reference for determining their specific boundaries. In fact, the tomogram with the best results on the F1 score and AP also contains intact mitochondria. Therefore, the integrity of mitochondria has a certain impact on the detection results of the system.
Prediction on tomogram slices
The ultimate goal is to detect mitochondria in 3D tomograms. The model trained on 2D projection images can be directly applied to tomogram slices to generate the output. Like the projection images, the slices were first preprocessed through bilateral filtering and histogram equalization with the same parameters, and then tested by the Faster-RCNN model. The whole model is applied to the tomogram slice by slice, and the output includes all the bounding boxes of mitochondria in the slice with a classification score for each box. It only takes a few seconds for each slice when tested on CPUs.
As shown in Fig. 8, the mitochondria in tomogram slices can be successfully identified and localized, while the accuracy of localization may be slightly reduced due to higher noise, as compared to 2D projection images. Therefore, it is only necessary to perform annotation and training on the 2D projection images, which can greatly reduce the computational costs, and we can detect mitochondria in 3D tomograms with a tolerable error. And the probability of expanding to different organelles is still retained even in the case of 3D.
Detection results on slices of reconstructed tomograms. Data source: a Tomogram: Unstim_20k_mito_1 (slice 26), b Tomogram: M2236_truemito3 (slice 97), c Tomogram: HighGluc_Mito1 (slice 58)
In this paper, we proposed an automatic cryo-ET image analysis algorithm for localization and identification of different structures of interest in cells. To the best of our knowledge, this is the first work to apply a Faster-RCNN model to cryo-ET data, and it demonstrated the high accuracy (AP > 0.95 and IoU > 0.7) and robustness of detection and classification tasks of intracellular mitochondria. Furthermore, our algorithm can be generalized to detect multiple cellular components using the same Faster-RCNN model, if annotations of multiple classes of cellular components are provided. For future work, we will further improve the accuracy of localization by collecting more data, and we will explore the effects of different network structures to enhance the model.
Adam:
Adaptive moment estimation
AP:
Average precision
cryo-ET:
Cryo-electron tomography
ILK:
Integrin linked kinase
IoU:
Intersection over union
mIoU:
Mean intersection over union
NMS:
Non-maximum suppression
NPC:
Nuclear pore complex
SNR:
Signal-to-noise ratio
RCNN:
Region-based convolutional neural network
RPN:
Region proposal network
Irobalieva RN, Martins B, Medalia O. Cellular structural biology as revealed by cryo-electron tomography. J Cell Sci. 2016; 129(3):469–76.
Woodward CL, Mendonċa LM, Jensen GJ. Direct visualization of vaults within intact cells by electron cryo-tomography. Cell Mol Life Sci. 2015; 72(17):3401–9.
Elad N, Volberg T, Patla I, Hirschfeld-Warneken V, Grashoff C, Spatz JP, et al.The role of integrin-linked kinase in the molecular architecture of focal adhesions. J Cell Sci. 2013; 126(18):4099–107.
Grossman E, Medalia O, Zwerger M. Functional Architecture of the Nuclear Pore Complex. Annu Rev Biophys. 2012; 41(1):557–584. PMID:22577827.
Berdanier CD. Mitochondria in health and disease.Boca Raton: CRC Press; 2005.
Asano S, Engel BD, Baumeister W. In Situ Cryo-Electron Tomography: A Post-Reductionist Approach to Structural Biology. J Mol Biol. 2016; 428(2, Part A):332–343. Study of biomolecules and biological systems: Proteins.
Volkmann N. A novel three-dimensional variant of the watershed transform for segmentation of electron density maps. J Struct Biol. 2002; 138(1):123–9.
Cyrklaff M, Risco C, Fernández JJ, Jiménez MV, Estéban M, Baumeister W, et al.Cryo-electron tomography of vaccinia virus. Proc Natl Acad Sci. 2005; 102(8):2772–7.
Martinez-Sanchez A, Garcia I, Fernandez JJ. A differential structure approach to membrane segmentation in electron tomography. J Struct Biol. 2011; 175(3):372–83.
Sandberg K, Brega M. Segmentation of thin structures in electron micrographs using orientation fields. J Struct Biol. 2007; 157(2):403–15.
Loss LA, Bebis G, Chang H, Auer M, Sarkar P, Parvin B. Automatic Segmentation and Quantification of Filamentous Structures in Electron Tomography. In: Proceedings of the ACM Conference on Bioinformatics, Computational Biology and Biomedicine. BCB '12. New York: ACM: 2012. p. 170–177.
Xu M, Alber F. Automated target segmentation and real space fast alignment methods for high-throughput classification and averaging of crowded cryo-electron subtomograms. Bioinformatics. 2013; 29(13):i274–82.
Zeng X, Leung MR, Zeev-Ben-Mordehai T, Xu M. A convolutional autoencoder approach for mining features in cellular electron cryo-tomograms and weakly supervised coarse segmentation. J Struct Biol. 2018; 202(2):150–60.
Luengo I, Darrow MC, Spink MC, Sun Y, Dai W, He CY, et al.SuRVoS: Super-Region Volume Segmentation workbench. J Struct Biol. 2017; 198(1):43–53.
Chen M, Dai W, Sun SY, et al.Convolutional neural Networks for automated annotation of cellular cryo-electron tomograms. Nat Methods. 2017; 14(10):983–985.
Girshick R, Donahue J, Darrell T, Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition. Columbus: IEEE: 2013. p. 580–587.
Ren S, He K, Girshick R, Sun J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks In: Cortes C, Lawrence ND, Lee DD, Sugiyama M, Garnett R, editors. Advances in Neural Information Processing Systems 28. Red Hook: Curran Associates, Inc.: 2015. p. 91–99.
Xu M, Papageorgiou DP, Abidi SZ, Dao M, Zhao H, Karniadakis GE. A deep convolutional neural network for classification of red blood cells in sickle cell anemia. PLoS Comput Biol. 2017; 13(10):e1005746.
Wang W, Taft DA, Chen YJ, Zhang J, Wallace CT, Xu M, et al.Learn to segment single cells with deep distance estimator and deep cell detector. arXiv preprint arXiv:180310829. 2018.
Tomasi C, Manduchi R. Bilateral filtering for gray and color images. In: Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271).Bombay: IEEE: 1998. p. 839–846.
He K, Zhang X, Ren S, Sun J. Deep Residual Learning for Image Recognition. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas: IEEE: 2016. p. 770–778.
Henon Y. Keras-frcnn. GitHub. 2017. https://github.com/yhenon/keras-frcnn. Accessed 25 July 2018.
Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L. ImageNet: A Large-Scale Hierarchical Image Database. In: CVPR09.Miami: IEEE: 2009.
Tzutalin. LabelImg. GitHub. 2015. https://github.com/tzutalin/labelImg. Accessed 05 Apr 2018.
Thong JT, Sim KS, Phang JC. Single-image signal-to-noise ratio estimation. Scanning; 23(5):328–336.
Chollet F, et al.Keras. GitHub. 2015. https://github.com/fchollet/keras. Accessed 25 July 2018.
Abadi M, Barham P, Chen J, Chen Z, Davis A, Dean J, et al.TensorFlow: A system for large-scale machine learning. In: 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16). Berkeley: USENIX Association: 2016. p. 265–283.
Kingma DP, Ba J. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:14126980. 2014.
This work was supported in part by U.S. National Institutes of Health (NIH) grant P41 GM103712. MX acknowledges support of the Samuel and Emma Winters Foundation. ZF acknowledges support from the U.S. Department of Defense (PR141292) and the John F. and Nancy A. Emmerling Fund of The Pittsburgh Foundation. This work and its publication charge were partially supported by the National Key Research and Development Program of China (No. 2018YFC0910404), the National Natural Science Foundation of China (Nos. 61873141, 61721003, 61573207, U1736210, 71871019 and 71471016), and the Tsinghua-Fuzhou Institute for Data Technology. RJ is a RONG professor at the Institute for Data Science, Tsinghua University.
About this supplement
This article has been published as part of BMC Bioinformatics Volume 20 Supplement 3, 2019: Selected articles from the 17th Asia Pacific Bioinformatics Conference (APBC 2019): bioinformatics. The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-20-supplement-3.
Ran Li and Xiangrui Zeng contributed equally to this work.
Department of Automation, Tsinghua University, Beijing, China
Ran Li, Rui Jiang & Hairong Lv
Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA, USA
Xiangrui Zeng, Ruogu Lin & Min Xu
Department of Cellular, Molecular and Biophysical Studies, Columbia University Medical Center, New York, NY, USA
Stephanie E. Sigmund
Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
Bo Zhou
Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
Chang Liu & Kaiwen Wang
Departments of Psychiatry and Cell Biology, University of Pittsburgh, Pittsburgh, PA, USA
Zachary Freyberg
MX, HL and RJ provided guidance and planning for this project. ZF provided the data used in the current study and offered guidance on the data. Ran Li and XZ proposed and implemented the methods, analysed the results and wrote the manuscript. SS, Ruogu Lin, BZ, CL and KW helped with writing and revising the manuscript. All authors read and approved the final manuscript.
Correspondence to Zachary Freyberg, Hairong Lv or Min Xu.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Li, R., Zeng, X., Sigmund, S. et al. Automatic localization and identification of mitochondria in cellular electron cryo-tomography using faster-RCNN. BMC Bioinformatics 20, 132 (2019). https://doi.org/10.1186/s12859-019-2650-7
Cryo-ET
Faster-RCNN
Cellular structure detection
Biomedical image analysis
Microfluidics and Nanofluidics
March 2017, 21:50
Formation of inverse Chladni patterns in liquids at microscale: roles of acoustic radiation and streaming-induced drag forces
Junjun Lei
First Online: 03 March 2017
While Chladni patterns in air over vibrating plates at macroscale have been well studied, inverse Chladni patterns in water at microscale have only recently been reported. The underlying physics of the focusing of microparticles on the vibrating interface, however, is still unclear. In this paper, we present a quantitative three-dimensional study of the acoustophoretic motion of microparticles on a clamped vibrating circular plate in contact with water, with emphasis on the roles of acoustic radiation and streaming-induced drag forces. The numerical simulations show good agreement with experimental observations and basic theory. While we provide clear demonstrations of three-dimensional, particle size-dependent microparticle trajectories in vibrating plate systems, we show that acoustic radiation forces are crucial for the formation of inverse Chladni patterns in liquids for both out-of-plane and in-plane microparticle movements. For out-of-plane microparticle acoustophoresis, out-of-plane acoustic radiation forces are the main driving force in the near-field, preventing out-of-plane acoustic streaming vortices from dragging particles away from the vibrating interface. For in-plane acoustophoresis on the vibrating interface, acoustic streaming is not the only mechanism that carries microparticles to the vibrating antinodes to form inverse Chladni patterns: in-plane acoustic radiation forces can have a greater contribution. To facilitate the design of lab-on-a-chip devices for a wide range of applications, the effects of several key parameters, including the plate radius R and thickness h and the fluid viscosity μ, on microparticle acoustophoresis are discussed. The threshold in-plane and out-of-plane particle sizes set by the balance of the acoustic radiation and streaming-induced drag forces scale linearly with R and \(\sqrt \mu\), but inversely with \(\sqrt h\).
Chladni patterns · Acoustic streaming · Acoustic radiation force · Acoustofluidics · Vibrating plates
The online version of this article (doi: 10.1007/s10404-017-1888-5) contains supplementary material, which is available to authorized users.
1 Introduction
Arranging particles and cells into desired patterns for lab-on-a-chip biological applications using ultrasonic fields, i.e. acoustophoresis, by means of bulk and surface acoustic wave techniques has attracted increasing interest in recent years (Bruus et al. 2011; Friend and Yeo 2011). When an ultrasonic standing/travelling wave is established in a micro-channel containing an aqueous suspension of particles, two main forces act on the particles: the acoustic radiation force and the streaming-induced drag force. In most bulk and surface micro-acoustofluidic manipulation devices, the latter is generally considered to be a disturbance because it places a practical lower limit on the particle size that can be manipulated by the former (Wiklund et al. 2012; Drinkwater 2016). Nevertheless, acoustic streaming flows have also been applied to play an active role in the functioning of such systems (Hammarstrom et al. 2012, 2014; Yazdi and Ardekani 2012; Antfolk et al. 2014; Devendran et al. 2014; Ohlin et al. 2015; Cheung et al. 2014; Huang et al. 2014; Patel et al. 2014; Destgeer et al. 2016; Rogers and Neild 2011; Tang and Hu 2015; Leibacher et al. 2015; Agrawal et al. 2013, 2015).
The ability to use ultrasonic fields for the manipulation of particles and fluids has a long history, dating back to many eminent scientists including Chladni (1787), Faraday (1831), Kundt and Lehmann (1874), Rayleigh (1883), King (1934) and Gorkov (1962). As early as 1787, the German physicist Chladni (1787) observed that randomly distributed sand particles on a vibrating metal plate group along the nodal lines, forming a wide variety of symmetrical patterns. The various patterns formed at different modes of resonance were called Chladni figures. Chladni also reported that fine particles move in the opposite direction, to the antinodes, which was further studied by Faraday (1831), who found that this is due to air currents in the vicinity of the plate, i.e. acoustic streaming. The latter phenomenon was revisited by Van Gerner et al. (2010, 2011), who showed that it will always occur when the acceleration of the resonating plate is lower than the gravitational acceleration. Zhou et al. (2016) recently proposed an approach that is able to control the motion of multiple objects simultaneously and independently on a Chladni plate.
Recently, Vuillermet et al. (2016) demonstrated that it is possible to form two-dimensional inverse Chladni patterns on a vibrating circular plate in water at microscale, which extended earlier work by Dorrestijn et al. (2007), who showed the formation of one-dimensional (1D) Chladni patterns on a vibrating cantilever submerged in water, where microparticles and nanoparticles were found to move to the antinodes and nodes of the vibrating interface, respectively. Both works depicted the two-dimensional streaming field in the near-field and emphasized the effects of the in-plane streaming flow on the collection of particles at vibrating antinodes or nodes. Practical manipulation on vibrating plates, however, is three-dimensional (3D), comprising both out-of-plane and in-plane motion, and interestingly, in such systems little work has been done on the impact of acoustic radiation forces, the main engine for particle and cell manipulation in other acoustofluidic devices. Unlike microparticle acoustophoresis in bulk and surface standing wave devices, which has been well studied (Barnkob et al. 2012; Muller et al. 2012, 2013; Lei et al. 2014; Hahn et al. 2015; Nama et al. 2015; Oberti et al. 2009), the literature lacks a quantitative analysis of microparticle acoustophoresis over vibrating plate systems.
In this paper, we present a detailed 3D study of the main forces responsible for the formation of inverse Chladni patterns on a clamped vibrating circular plate in contact with water (see Fig. 1 for the configuration). Both out-of-plane and in-plane microparticle acoustophoresis are discussed and the contributions of the main driving forces are compared, which enables a clear presentation of the underlying physics of microparticle manipulation in such systems. The effects of key parameters, including the plate thickness and radius, the vibration amplitude and the fluid viscosity, on microparticle acoustophoresis are discussed. We believe that this work provides an excellent tool for analysing microparticle acoustophoresis in vibrating plate systems and for guiding device designs for better control of the patterning of microparticles of various sizes as well as for single particle and cell manipulation.
Sketch of a clamped vibrating circular plate in contact with water, where \(R\) and \(h\) are the radius and thickness of the circular plate, respectively
2 Numerical method
We use bold and normal-emphasis fonts to represent vector and scalar quantities, respectively. Here, we assume a homogeneous isotropic fluid, in which the continuity and momentum equations for the fluid motion are.
$$\frac{\partial \rho }{\partial t} + \nabla \cdot \left( {\rho \varvec{u}} \right) = 0,$$
$$\rho \left( {\frac{{\partial \varvec{u}}}{\partial t} + \varvec{u} \cdot \nabla \varvec{u}} \right) = - \nabla p + \mu \nabla^{2} \varvec{u} + \left( {\mu_{b} + \frac{1}{3}\mu } \right)\nabla \nabla \cdot \varvec{u},$$
(1b)
where \(\rho\) is the fluid density, t is time, \(\varvec{u}\) is the fluid velocity, p is the pressure and \(\mu\) and \(\mu_{b}\) are, respectively, the dynamic and bulk viscosity coefficients of the fluid.
Taking the first and second order into account, we write the perturbation series of fluid density, pressure and velocity: (Bruus 2012)
$$\rho = \rho_{0} + \rho_{1} + \rho_{2} ,$$
$$p = p_{0} + p_{1} + p_{2} ,$$
$$\varvec{u} = \varvec{u}_{1} + \varvec{u}_{2} ,$$
(2c)
where the subscripts 0, 1 and 2 represent the static (absence of sound), first-order and second-order quantities, respectively. Substituting Eq. (2) into Eq. (1) and considering the equations to the first order, Eq. (1) for solving the first-order acoustic velocity take the form,
$$\frac{{\partial \rho_{1} }}{\partial t} + \rho_{0} \nabla \cdot \varvec{u}_{1} = 0,$$
$$\rho_{0} \frac{{\partial \varvec{u}_{{\mathbf{1}}} }}{\partial t} = - \nabla p_{1} + \mu \nabla^{2} \varvec{u}_{{\mathbf{1}}} + \left( {\mu_{b} + \frac{1}{3}\mu } \right)\nabla \nabla \cdot \varvec{u}_{{\mathbf{1}}} .$$
Repeating the above procedure, considering the equations to the second order and taking the time average of Eq. (1) using Eq. (2), the continuity and momentum equations for solving the second-order time-averaged acoustic streaming velocity can be turned into
$$\nabla \cdot \overline{{\rho_{1} \varvec{u}_{{\mathbf{1}}} }} + \rho_{0} \nabla \cdot \overline{{\varvec{u}_{{\mathbf{2}}} }} = 0,$$
$$- \nabla \overline{{p_{2} }} + \mu \nabla^{2} \overline{{\varvec{u}_{{\mathbf{2}}} }} + \left( {\mu_{b} + \frac{1}{3}\mu } \right)\nabla \nabla \cdot \overline{{\varvec{u}_{{\mathbf{2}}} }} + \varvec{F} = 0,$$
$$\varvec{F} = - \rho_{0} \overline{{\varvec{u}_{{\mathbf{1}}} \nabla \cdot \varvec{u}_{{\mathbf{1}}} + \varvec{u}_{{\mathbf{1}}} \cdot \nabla \varvec{u}_{{\mathbf{1}}} }} ,$$
where the upper bar denotes a time-averaged value and \(\varvec{F}\) is the Reynolds stress force (Lighthill 1978). When modelling the steady-state streaming flows in most practical acoustofluidic manipulation devices, the inertial force \(\overline{{\varvec{u}_{{\mathbf{2}}} }} \cdot \nabla \overline{{\varvec{u}_{{\mathbf{2}}} }}\) is generally negligible compared to the viscosity force in such systems, which results in the creeping motion. The divergence-free velocity \(\overline{{\varvec{u}_{{\mathbf{2}}}^{\varvec{M}} }} = \overline{{\varvec{u}_{{\mathbf{2}}} }} + \overline{{\rho_{1} \varvec{u}_{{\mathbf{1}}} }} /\rho_{0}\), derived from Eq. (4a), is the mass transport velocity of the acoustic streaming, which is generally closer to the velocity of tracer particles in a streaming flow than \(\overline{{\varvec{u}_{2} }}\) (Nyborg 1998).
In this work, only the boundary-driven streaming field was solved because an evanescent wave field is established (see below) such that the overall streaming field is dominated by the boundary-driven streaming. Moreover, as the inner streaming vortices are confined only at the thin viscous boundary layer [thickness of \(\delta_{v} \approx 0.6\) µm at 1 MHz in water (Bruus 2012)], for numerical efficiency, we solved only the 3D outer streaming fields using Nyborg's limiting velocity method (Nyborg 1958; Lee and Wang 1989) as those published previously (Lei et al. 2013, 2014, 2016). Although the inner streaming fields were not computed in this work, they can, of course, be known from the limiting velocity field.
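For orientation, the viscous boundary-layer thicknesses quoted above can be reproduced from the standard expression \(\delta_{v} = \sqrt{2\mu/(\rho_{0}\omega)}\) (see, e.g., Bruus 2012). The short Python sketch below is an illustration only, not part of the COMSOL model; the water properties are assumed textbook values.

import numpy as np

mu, rho_0 = 1.0e-3, 998.0                  # assumed water viscosity (Pa s) and density (kg/m^3)

def delta_v(f):
    """Viscous boundary-layer thickness sqrt(2*mu/(rho_0*omega)) at frequency f (Hz)."""
    return np.sqrt(2.0 * mu / (rho_0 * 2.0 * np.pi * f))

print(delta_v(1.0e6))                      # ~0.6e-6 m at 1 MHz, as quoted in the text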
3 Numerical model, results and discussion
To validate the numerical results, a clamped circular plate of radius R = 800 µm and thickness h = 5.9 µm was considered first, which has the same size as the one used in Vuillermet et al.'s (2016) experiments. Our model differs slightly from the device in Vuillermet et al.'s experiments: as can be seen from Fig. 1, our model considers a vibrating clamped plate in free space, while the side boundaries of Vuillermet et al.'s device have sound reflections, which may result in acoustic pressure antinodes at the plate boundaries. Further model parameters are given in Table 1. The model configuration is shown in Fig. 3a, where a cylindrical fluid-channel-only model was considered. Cartesian (\(x, y, z\)) and cylindrical (\(r, \theta , z\)) coordinates were used for convenience of calculation. The finite element package COMSOL 5.2 (COMSOL Multiphysics 2015) was used to solve all equations. The modelled final positions of particles (radius of 30 µm), driven by the main forces including acoustic radiation forces, streaming-induced drag forces and buoyancy forces, at two vibrating modes are shown in Fig. 2a. It can be seen that the inverse Chladni patterns formed by the microparticles compare well with Vuillermet et al.'s (2016) experimental observations. In the following, we show step by step why microparticles are gathered at the vibrating antinodes forming inverse Chladni patterns, and the contributions of the various driving forces to the acoustophoretic motion of microparticles of various sizes.
Table 1 Model parameters (parameter, symbol, unit):
Model domain, \(\pi R^{2} \times h\): \(\pi \times 0.8^{2} \times 0.725\)
Density of plate (kg m−3)
Plate Poisson's ratio
Plate Young's modulus, \(E\)
Sound speed in plate, \(u\) (m s−1)
Particle density, \(\rho_{p}\)
Sound speed in particle, \(c_{p}\)
Density of water, \(\rho_{f}\)
Sound speed in water, \(c_{f}\)
(Colour online) Top views of the final positions of microparticles (radius of 30 µm) on a plate at various vibrating modes: a modelled, where spheres are the microparticles and colours show the vibrating displacements (white for maximum and black for zero); and b measured, adapted with permission from Vuillermet et al. (2016) Copyrighted by the American Physical Society. The particle properties used in simulations are included in Table 1
It is noteworthy that we have previously applied a fluid-channel-only model to study the 3D transducer-plane streaming fields in bulk acoustofluidic manipulation devices (Lei 2015), where the excitation of the transducer was approximated by a Gaussian distribution of boundary vibration. The fluid-channel-only model applied in this work has further merit because the displacement equation can be written down directly when the circular plate vibrates at a resonant mode (see Eq. (6) below), and thus there is no need to approximate the boundary vibrations as we did in the previous models (Lei et al. 2013, 2016).
3.1 Resonant frequencies
Resonant frequencies at various modes were modelled first and are shown in Table 2. For comparison, the modelled eigenfrequencies of the first eight modes for another two cases, namely no load and loading with air, are also presented. It can be seen that the resonant frequencies for vibrations in air and those in vacuum are very close; the differences are small enough to be considered numerical errors, suggesting that omitting the influence of air does not introduce any significant error in the resonant frequencies. The resonant frequencies of vibration in contact with water, however, are reduced by at least a factor of 3 for all the modes presented, which means that the external load introduced by the surrounding water has to be considered. All the results shown in this paper are for the (4, 1) mode (\(\delta_{v} \approx 1.84\) µm) unless otherwise stated.
The modelled resonant frequencies (Hz) of first eight modes for various loads
The computations were performed on a Lenovo Y50 running Windows 8 (64-bit) equipped with 16 GB RAM and an Intel(R) Core(TM) i7-4710HQ processor with a clock frequency of 2.5 GHz. The mesh constitution was chosen based on the method described in a previous work (Lei et al. 2013), which selects the mesh size to obtain steady solutions, i.e. ensuring that further mesh refinement does not change the solution significantly. This model resulted in 131,521 mesh elements, a peak RAM usage of 4.96 GB (at the acoustic step), and a running time of about 4 h for solving the steps described between Sects. 3.2 and 3.6 below.
3.2 First-order acoustic fields
The first-order acoustic fields were modelled using the COMSOL 'Pressure Acoustics, Frequency Domain' interface, which solves the harmonic, linearized acoustic problem, taking the form,
$$\nabla^{2} p_{1} + \frac{{\omega^{2} }}{{c^{2} }}p_{1} = 0,$$
where ω is the angular frequency and c is the speed of sound in the fluid. The acoustic fields in the model regime were created by a harmonic vibration of the bottom edge (i.e. the plate) coupled with radiation boundary conditions on all other edges. For comparison, we also tried adding perfectly matched layers around the cylindrical domain to absorb all outgoing waves and found that the differences in all the modelled quantities between these two methods are within 3%. To give a clear presentation of results, we show here the results modelled from radiation boundary conditions.
For a \(\left( {m, n} \right)\) vibrating mode, the plate displacement amplitude can be written as
$$w = J_{m} \left( {\frac{{\alpha_{mn} }}{R}r} \right)\cos \left( {m\theta } \right),$$
where \(J_{m} \left( \cdot \right)\) is the Bessel function of the first kind of order m and \(\alpha_{mn}\) is the nth zero of \(J_{m} \left( \cdot \right)\). The results presented in this paper were obtained at a vibration amplitude of 0.4 µm unless otherwise stated. The vibration amplitude has a limited effect on the shape of microparticle trajectories as both the acoustic radiation force and streaming-induced drag force scale with the square of the vibration amplitude (more discussions can be found below).
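For illustration, the mode shape of Eq. (6) can be evaluated directly with standard Bessel-function routines. The Python sketch below (not part of the COMSOL model) assumes the (4, 1) mode and the plate radius used above; the prefactor w0 simply scales the pattern to the vibration amplitude quoted in the text.

import numpy as np
from scipy.special import jv, jn_zeros

def mode_shape(r, theta, m=4, n=1, R=0.8e-3, w0=0.4e-6):
    """Plate displacement w(r, theta) of Eq. (6) with an amplitude prefactor w0."""
    alpha_mn = jn_zeros(m, n)[-1]          # nth zero of the Bessel function J_m
    return w0 * jv(m, alpha_mn * r / R) * np.cos(m * theta)

# evaluate on a polar grid covering the plate
r = np.linspace(0.0, 0.8e-3, 201)
theta = np.linspace(0.0, 2.0 * np.pi, 361)
RR, TT = np.meshgrid(r, theta)
W = mode_shape(RR, TT)
print("displacement vanishes at the clamped edge:", np.allclose(W[:, -1], 0.0, atol=1e-12))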
As shown in Fig. 3b, a standing wave field was established on the vibrating interface, with acoustic pressure nodes and antinodes located at plate displacement nodes and antinodes, respectively. The standing wave field is shown more clearly in Fig. 3d, where the in-plane circumferential acoustic pressure magnitudes are plotted. The out-of-plane acoustic pressure magnitudes over a vibrating antinode are plotted in Fig. 3c, which shows that the acoustic pressure magnitudes decay exponentially with increasing distance from the vibrating interface. The reason is that the plate wave travels along the vibrating interface in the subsonic regime, leading to an evanescent wave field: the plate wave velocity at the substrate surface is \(u = \lambda f_{r} \approx 55\) m/s \(\ll u_{l}\), where λ is the acoustic wavelength, \(f_{r}\) is the resonant frequency and \(u_{l}\) is the speed of sound in the liquid.
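The exponential decay seen in Fig. 3c is consistent with the evanescent field radiated by a subsonic surface wave, whose decay constant is \(\sqrt{k_{p}^{2} - k_{0}^{2}}\) with \(k_{p} = \omega/u\) and \(k_{0} = \omega/u_{l}\). The estimate below is a rough cross-check only: the resonant frequency is not quoted explicitly here, so it is inferred from the boundary-layer thickness \(\delta_{v} \approx 1.84\) µm given above (an assumption), and the sound speed in water is a textbook value.

import numpy as np

u_plate, c_water = 55.0, 1481.0            # plate wave speed (from the text) and assumed sound speed in water (m/s)
mu, rho_0 = 1.0e-3, 998.0                  # assumed water properties
delta_v = 1.84e-6                          # boundary-layer thickness quoted for the (4, 1) mode (m)

omega = 2.0 * mu / (rho_0 * delta_v**2)    # inferred from delta_v = sqrt(2*mu/(rho_0*omega))
k_p, k_0 = omega / u_plate, omega / c_water
decay_length = 1.0 / np.sqrt(k_p**2 - k_0**2)
print("f ~ %.0f kHz, evanescent decay length ~ %.0f um" % (omega / 2.0 / np.pi / 1e3, decay_length * 1e6))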
(Colour online) a Geometry of the considered problem, where the bottom edge (\(z = 0\)) vibrates at a (4, 1) mode; b 3D acoustic pressure magnitudes (\(\left| {p_{1} } \right|\), Pa); c out-of-plane \(\left| {p_{1} } \right|\) [arrow in (b)]; and d in-plane \(\left| {p_{1} } \right|\) on \(r = 0.56\) mm at the bottom edge. \(r = \sqrt {x^{2} + y^{2} }\) and \(\theta = { \arctan }\left( {y/x} \right)\). The dashed line and the equation in (c) show the exponential fitting of the modelled acoustic pressure magnitudes
3.3 Acoustic radiation forces
The corresponding 3D acoustic radiation forces were solved from the Gorkov equation (Gorkov 1962),
$$\varvec{F}_{{\varvec{ac}}} = \nabla \left\{ {V_{0} \left[ {\frac{{3\left( {\rho_{p} - \rho_{f} } \right)}}{{2\rho_{p} + \rho_{f} }}\overline{{E_{kin} }} - \left( {1 - \frac{{\beta_{p} }}{{\beta_{f} }}} \right)\overline{{E_{pot} }} } \right]} \right\},$$
where \(\overline{{E_{kin} }}\) and \(\overline{{E_{pot} }}\) are the time-averaged kinetic and potential energy, \(\rho_{p}\) and \(\rho_{f}\) are, respectively, the density of the particle and the fluid, \(\beta_{p} = 1/\left( {\rho_{p} c_{p}^{2} } \right)\) and \(\beta_{f} = 1/\left( {\rho_{f} c_{f}^{2} } \right)\) are the compressibilities of the particle and the fluid, and \(V_{0}\) is the particle volume (see Table 1 for the model properties). Equation (7) is valid for particles that are small compared to the acoustic wavelength λ, in the limit \(r_{0} /\lambda \ll 1\) (where \(r_{0}\) is the radius of the particle), in an inviscid fluid in an arbitrary sound field (Gorkov 1962). When a particle moves close to the vibrating plate, the acoustic radiation forces may oscillate weakly with decreasing distance to the plate due to multiple-scattering interactions and wall interference, while the force magnitudes will not be significantly affected (Wang and Dual 2012).
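As a numerical illustration of Eq. (7), the two weighting factors multiplying \(\overline{{E_{kin} }}\) and \(\overline{{E_{pot} }}\) can be evaluated for a polystyrene bead in water. The sketch below uses typical textbook material values as assumptions, since the entries of Table 1 are not reproduced here; the radiation force then follows by taking the gradient of the bracketed potential evaluated on the modelled first-order field.

import numpy as np

rho_p, c_p = 1050.0, 2350.0                # assumed polystyrene density (kg/m^3) and sound speed (m/s)
rho_f, c_f = 998.0, 1481.0                 # assumed water density and sound speed
r0 = 30e-6                                 # particle radius (m)

beta_p = 1.0 / (rho_p * c_p**2)            # particle compressibility
beta_f = 1.0 / (rho_f * c_f**2)            # fluid compressibility
V0 = 4.0 / 3.0 * np.pi * r0**3             # particle volume

w_kin = 3.0 * (rho_p - rho_f) / (2.0 * rho_p + rho_f)     # weight of E_kin in Eq. (7)
w_pot = 1.0 - beta_p / beta_f                              # weight of E_pot in Eq. (7)
print("V0 = %.2e m^3, kinetic-energy weight = %.3f, potential-energy weight = %.3f" % (V0, w_kin, w_pot))

Both weights come out positive for these assumed values; as discussed below, it is the kinetic-energy term that dominates the force potential near the plate and pulls the beads towards the vibrating antinodes.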
The modelled acoustic radiation force fields are shown in Fig. 4. As shown in Fig. 4c, the out-of-plane acoustic radiation forces also decrease exponentially with increasing distance from the vibrating interface. In the near-field, at this vibration amplitude, the out-of-plane acoustic radiation forces contribute more to the sedimentation of microparticles than the buoyancy forces. With an increase in vibration amplitude, we can expect the out-of-plane acoustic radiation forces to dominate over the buoyancy forces. Interestingly, as shown in Fig. 4b, the in-plane acoustic radiation forces carry microparticles away from the acoustic pressure nodes and converge at the antinodes from all directions, in contrast with the conditions usually found in bulk and surface standing wave manipulation devices, where the acoustic radiation forces move most particles and cells of interest to the acoustic pressure nodes (Glynne-Jones et al. 2012). Examining Eq. (7), it can be seen that the acoustic radiation force is the gradient of a force potential, which contains a positive contribution from the kinetic energy (weighted by a function of the fluid and particle densities) and a negative contribution from the potential energy (weighted by a function of the fluid and particle compressibilities). Comparing the contributions of these two terms in this model, it was found that the kinetic energy term dominates the force potential, which drives microparticles to the vibrating antinodes.
(Colour online) a 3D acoustic radiation force magnitudes (\(\left| {F_{ac} } \right|\), N) on a particle with a radius of 30 µm; b in-plane \(\left| {F_{ac} } \right|\); and c out-of-plane \(\left| {F_{ac} } \right|\) [red arrow in (a)], where the inset shows the directions of the plotted forces above a vibrating antinode. \(F_{B}\) and \(F_{G}\) are the buoyancy and gravity, respectively. The dashed line and the equation in (c) show the exponential fitting of the modelled acoustic radiation force
3.4 Acoustic streaming fields
The 3D acoustic streaming field was modelled using Nyborg's limiting velocity method (Nyborg 1958; Lee and Wang 1989). It was shown that if the boundary has a radius of curvature that is much larger than the acoustic boundary layer, then the time-averaged velocity at the extremity of the inner streaming (the 'limiting velocity') can be approximated as a function of the local, first-order linear acoustic field. The outer streaming in the bulk of the fluid can then be predicted by a fluidic model that takes the limiting velocity as a boundary condition. The applicability and viability of the limiting velocity method have been further discussed recently (Lei et al. 2017). In Cartesian coordinates, the limiting velocity field at the driving boundaries (\(z = 0\)) can be written as
$$u_{L} = - \frac{1}{4\omega }\text{Re} \left\{ {q_{x} + u_{1}^{*} \left[ {\left( {2 + i} \right)\nabla \cdot \varvec{u}_{{\mathbf{1}}} - \left( {2 + 3i} \right)\frac{{dw_{1} }}{dz}} \right]} \right\},$$
$$v_{L} = - \frac{1}{4\omega }\text{Re} \left\{ {q_{y} + v_{1}^{*} \left[ {\left( {2 + i} \right)\nabla \cdot \varvec{u}_{{\mathbf{1}}} - \left( {2 + 3i} \right)\frac{{dw_{1} }}{dz}} \right]} \right\},$$
$$q_{x} = u_{1} \frac{{du_{1}^{*} }}{dx} + v_{1} \frac{{du_{1}^{*} }}{dy},$$
$$q_{y} = u_{1} \frac{{dv_{1}^{*} }}{dx} + v_{1} \frac{{dv_{1}^{*} }}{dy},$$
(8d)
where \(u_{L}\) and \(v_{L}\) are the x- and y-components of the limiting velocity field, \(u_{1}\), \(v_{1}\) and \(w_{1}\) are the x-, y- and z-components of the acoustic velocity vector \(\varvec{u}_{{\mathbf{1}}}\), \(\text{Re} \left\{ \cdot \right\}\) denotes the real part of a complex value and \(*\) denotes the complex conjugate.
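The limiting-velocity expressions (8a)-(8d) are straightforward to evaluate once the first-order field on the driving boundary is known. The NumPy sketch below is a schematic post-processing routine, not the COMSOL implementation: the complex arrays u1, v1, the divergence div_u1 and the normal derivative dw1_dz are placeholders for fields exported on a regular x-y grid at z = 0 (indexed as [ix, iy]).

import numpy as np

def limiting_velocity(u1, v1, div_u1, dw1_dz, dx, dy, omega):
    """Evaluate Eqs. (8a)-(8d) on a regular grid; all field arrays are complex and indexed [ix, iy]."""
    du1c_dx, du1c_dy = np.gradient(np.conj(u1), dx, dy, edge_order=2)
    dv1c_dx, dv1c_dy = np.gradient(np.conj(v1), dx, dy, edge_order=2)
    qx = u1 * du1c_dx + v1 * du1c_dy                              # Eq. (8c)
    qy = u1 * dv1c_dx + v1 * dv1c_dy                              # Eq. (8d)
    common = (2.0 + 1.0j) * div_u1 - (2.0 + 3.0j) * dw1_dz
    uL = -np.real(qx + np.conj(u1) * common) / (4.0 * omega)      # Eq. (8a)
    vL = -np.real(qy + np.conj(v1) * common) / (4.0 * omega)      # Eq. (8b)
    return uL, vL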
A COMSOL 'Creeping Flow' interface was used to model the acoustic streaming field, which solves
$$\nabla \cdot \overline{{\varvec{u}_{{\mathbf{2}}} }} = 0,$$
$$\nabla p_{2} = \mu \nabla^{2} \overline{{\varvec{u}_{{\mathbf{2}}} }} .$$
As only outer streaming fields are solved in this method, with the assumption of low velocity and incompressible flow, the first term on the left-hand side of Eq. (4a) is zero and thus \(\overline{{\varvec{u}_{{\mathbf{2}}} }} = \overline{{\varvec{u}_{{\mathbf{2}}}^{\varvec{M}} }}\) (Hamilton et al. 2003). Then, as discussed by Lighthill (1978), the Reynolds stress in the bulk of the fluid can set up hydrostatic stresses, but in the absence of attenuation these will not create vortices; hence these terms are not included in Eq. (9b). The 3D outer acoustic streaming fields in the considered model regime were generated by the limiting velocity field on the vibrating interface (see Fig. 5a) along with no-slip boundary conditions (\(\overline{{\varvec{u}_{{\mathbf{2}}} }} = 0\)) on all other edges.
(Colour online) a The limiting velocity field (m/s) on the bottom edge (\(z = 0\)); and b, c front and left views of the 3D acoustic streaming fields, where the colours at the bottom edge in (b, c) show the acoustic pressure magnitudes (red for maximum and blue for zero). To give a clear presentation of the 3D acoustic streaming flows, only those above one acoustic pressure antinode are shown. Arrows in (b, c) show the streaming directions
The limiting velocity field and the 3D acoustic streaming fields are shown in Fig. 5. It can be seen that, similar to the distribution of the in-plane acoustic radiation forces, the limiting velocities (i.e. the in-plane acoustic streaming velocity field) converge at the acoustic pressure antinodes from all directions, leading to acoustic streaming vortices in out-of-plane planes perpendicular to the vibrating interface, as plotted in Fig. 5b, c, where, in order to give a clear demonstration of the 3D acoustic streaming fields, only the acoustic streaming vortices above one acoustic pressure antinode are plotted.
3.5 Acoustic streaming-induced drag forces
Based on the acoustic streaming velocity field, we can calculate the acoustic streaming-induced drag forces on microparticles from the Stokes drag,
$$\varvec{F}_{\varvec{d}} = 6\mu \pi r_{0} \left( {\overline{{\varvec{u}_{{\mathbf{2}}} }} - \varvec{v}} \right),$$
where \(\varvec{v}\) is the particle velocity. Equation (10) is valid for particles sufficiently far from the channel walls (Happel and Brenner 1965). Since the microparticle acoustophoresis discussed in this work is closely associated with the vibrating plate, it is necessary to take into account the wall effect on the streaming-induced drag forces when a particle moves close to the bottom wall. When a spherical particle moves perpendicularly towards, or parallel to, the vibrating plate, the streaming-induced drag force should be corrected by multiplying by a wall-effect correction factor χ or γ, respectively, which can be expressed as (Happel and Brenner 1965)
$$\chi = \frac{4}{3}\sinh \alpha \mathop \sum \limits_{i = 1}^{\infty } \frac{{i\left( {i + 1} \right)}}{{\left( {2i - 1} \right)\left( {2i + 3} \right)}} \times \left[ {\frac{{2\sinh \left( {2i + 1} \right)\alpha + \left( {2i + 1} \right)\sinh 2\alpha }}{{4\sinh^{2} \left( {i + 1/2} \right)\alpha - \left( {2i + 1} \right)^{2} \sinh^{2} \alpha }} - 1} \right],$$
$$\gamma = \frac{1}{{1 - A\left( {r_{0} /H} \right) + B\left( {r_{0} /H} \right)^{3} - C\left( {r_{0} /H} \right)^{4} - D\left( {r_{0} /H} \right)^{5} }},$$
(11b)
$$\alpha = \cosh^{ - 1} \left( {H/r_{0} } \right),$$
(11c)
where \(H\) is the distance from the centre of the particle to the plate and A = 9/16, B = 1/8, C = 45/256 and D = 1/16.
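The two correction factors of Eqs. (11a)-(11c) are simple to evaluate numerically. The sketch below truncates the series for χ after a finite number of terms (sufficient for particle-plate gaps of the order of the particle radius) and is an illustration under these assumptions rather than the exact routine used in the model.

import numpy as np

def chi_perpendicular(r0, H, n_terms=60):
    """Wall correction for motion perpendicular to the plate, Eq. (11a), truncated series."""
    a = np.arccosh(H / r0)                                        # Eq. (11c)
    i = np.arange(1, n_terms + 1, dtype=float)
    num = 2.0 * np.sinh((2 * i + 1) * a) + (2 * i + 1) * np.sinh(2 * a)
    den = 4.0 * np.sinh((i + 0.5) * a) ** 2 - (2 * i + 1) ** 2 * np.sinh(a) ** 2
    terms = i * (i + 1) / ((2 * i - 1) * (2 * i + 3)) * (num / den - 1.0)
    return 4.0 / 3.0 * np.sinh(a) * np.sum(terms)

def gamma_parallel(r0, H):
    """Wall correction for motion parallel to the plate, Eq. (11b)."""
    A, B, C, D = 9.0 / 16.0, 1.0 / 8.0, 45.0 / 256.0, 1.0 / 16.0
    x = r0 / H
    return 1.0 / (1.0 - A * x + B * x**3 - C * x**4 - D * x**5)

# example: a 30 um bead whose centre sits 1.5 radii above the plate
r0, H = 30e-6, 45e-6
print("chi = %.2f, gamma = %.2f" % (chi_perpendicular(r0, H), gamma_parallel(r0, H)))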
The 3D acoustic streaming-induced drag forces are shown in Fig. 6, where, for comparison, the buoyancy forces are also plotted. As shown in Fig. 6c, with increasing distance from the vibrating interface, the out-of-plane streaming-induced drag forces rise rapidly to a maximum in the near-field and then fall gradually to zero in the far-field. The wall effect can increase the maximum out-of-plane streaming-induced drag force by approximately a factor of 2 in this model. Also, it can be seen that, for a small vibration amplitude of w = 0.4 µm, the maximum out-of-plane streaming-induced drag force is larger than the buoyancy force on a particle with a radius of 30 µm. With an increase in vibration amplitude, we can expect even larger acoustic streaming-induced drag forces, while the buoyancy forces remain the same. Therefore, it might be reasonable to say that introducing only the streaming effects is not enough to explain the sedimentation of microparticles, especially for those with \(r_{0} < 30\) µm, where the differences between the out-of-plane streaming-induced drag forces and the buoyancy forces are even larger, as plotted in Fig. 6d, because the former and the latter scale with the particle radius and particle volume, respectively.
(Colour online) a 3D streaming-induced drag forces on a particle with a radius of 30 µm (\(\left| {F_{d} } \right|\), N); b in-plane \(\left| {F_{d} } \right|\); c out-of-plane \(\left| {F_{d} } \right|\) [red arrow in (a)]; and d comparisons of maximum out-of-plane \(\left| {F_{d} } \right|\) [peak in (c)] with the buoyancy forces for various particle sizes (radius of \(r_{0}\)). The inset in (c) shows the directions of the plotted forces above a vibrating antinode. \(F_{B}\) and \(F_{G}\) are the buoyancy and gravity, respectively
3.6 Microparticle trajectories
From the acoustic radiation forces and streaming-induced drag forces that have been calculated, together with the buoyancy forces, microparticle (polystyrene beads) trajectories were modelled, following
$$\frac{d}{dt}\left( {m_{p} \varvec{v}} \right) = \varvec{F}_{\varvec{d}} + \varvec{F}_{{\varvec{ac}}} + \varvec{F}_{\varvec{B}} + \varvec{F}_{\varvec{G}} ,$$
$$\varvec{F}_{\varvec{B}} + \varvec{F}_{\varvec{G}} = \frac{4}{3}\pi r_{0}^{3} g\left( {\rho_{f} - \rho_{p} } \right),$$
where \(m_{p}\) is the particle mass, \(\varvec{F}_{\varvec{B}}\) is the buoyancy, \(\varvec{F}_{\varvec{G}}\) is the gravity and g is the gravitational acceleration. In this work, it is assumed that all the forces, including the acoustic radiation, streaming-induced drag and buoyancy forces, act on the centre of the spherical particles (otherwise, integration of the forces over the particle surface would be needed when the particles are close to the boundaries). It is noteworthy that, in addition to these main driving forces, a particle–particle interaction force was used in this model. The particle–particle interaction force can be expressed as
$$\varvec{F} = - k_{s} \mathop \sum \limits_{i = 1}^{N} \left( {\left| {\varvec{r} - \varvec{r}_{\varvec{i}} } \right| - r_{e} } \right)\frac{{\varvec{r} - \varvec{r}_{\varvec{i}} }}{{\left| {\varvec{r} - \varvec{r}_{\varvec{i}} } \right|}},$$
where \(k_{s}\) is the spring constant, \(\varvec{r}_{\varvec{i}}\) is the position vector of the ith particle, and \(r_{e}\) is the equilibrium separation between particles. In this model, \(k_{s} = 2.5 \times 10^{-4}\) N/m for polystyrene beads (Jensenius and Zocchi 1997) and \(r_{e}\) was set to \(2r_{0}\) to avoid all particles being concentrated at a single point.
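The time stepping behind Eq. (12) can be sketched with a simple explicit integrator. The snippet below is schematic only: streaming_velocity(x) and radiation_force(x) stand for interpolants of the modelled fields (not provided here), the pairwise spring force of Eq. (13) is omitted, the material values are the assumed polystyrene/water numbers used earlier, and the time step must resolve the viscous relaxation time \(m_{p}/(6\pi\mu r_{0})\).

import numpy as np

rho_p, rho_f, mu, r0, g = 1050.0, 998.0, 1.0e-3, 30e-6, 9.81       # assumed values
m_p = rho_p * 4.0 / 3.0 * np.pi * r0**3
F_bg = 4.0 / 3.0 * np.pi * r0**3 * g * (rho_f - rho_p) * np.array([0.0, 0.0, 1.0])   # buoyancy plus gravity, z up

def step(x, v, dt, streaming_velocity, radiation_force):
    """One explicit Euler step of the equation of motion (12) for a single bead at position x, velocity v."""
    F_d = 6.0 * np.pi * mu * r0 * (streaming_velocity(x) - v)      # Stokes drag, Eq. (10)
    a = (F_d + radiation_force(x) + F_bg) / m_p
    return x + v * dt, v + a * dt

# sanity check with zero streaming and zero radiation force: the bead reaches
# its Stokes settling velocity, 2*r0**2*g*(rho_f - rho_p)/(9*mu)
x, v = np.zeros(3), np.zeros(3)
for _ in range(20000):
    x, v = step(x, v, 1.0e-5, lambda x: np.zeros(3), lambda x: np.zeros(3))
print(v[2])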
Here, a COMSOL 'Particle Tracing for Fluid Flow' interface was used to solve Eq. (12) to model the particle trajectories. The shape of the trajectories is independent of the pressure amplitude since both the acoustic radiation forces and the streaming-induced drag forces scale with the square of the pressure; results are presented here for an excitation amplitude of w = 0.4 µm. An array of tracer particles (given the properties of polystyrene beads of radius 30 µm) is seeded at \(t = 0\). Acoustic radiation forces, streaming-induced drag forces and buoyancy forces act on the particles, resulting in the motion shown in Fig. 7. It can be seen that, in the considered model regime, particles with a radius of 30 µm first move towards the vibrating interface driven by the predominant out-of-plane forces and are then carried to their closest acoustic pressure antinodes by the in-plane forces, resulting in spider-like trajectories and inverse Chladni patterns on the vibrating interface within seconds. Generally, particles closer to the vibrating interface take less time to settle because of the stronger driving forces. Smaller particles take longer to settle at the acoustic pressure antinodes because of the smaller driving forces, and for \(r_{0} < 6.9\) µm they follow the out-of-plane streaming vortices, leading to acoustic streaming-dominated trajectories close to those shown in Fig. 5b, c (see explanations below and videos in the Supplemental material).
(Colour online) Trajectories of microparticles (radius of 30 µm) at: a \(t = 0\); and b \(t = 3\) s. Spheres are the microparticles, black solid lines show particle trajectories and colours at the bottom edge show the vibrating displacements (white for maximum and black for zero). See video in the Supplemental material
Out-of-plane acoustophoresis. A single particle undergoing out-of-plane acoustophoresis is directly acted upon by the acoustic radiation force, the buoyancy force and the acoustic streaming-induced drag force. The equation of motion for a spherical particle of out-of-plane velocity \(\varvec{v}^{{\varvec{out}}}\) above an acoustic pressure antinode is then
$$\varvec{v}^{{\varvec{out}}} = \frac{{\varvec{F}_{\varvec{d}}^{{\varvec{out}}} + \varvec{F}_{{\varvec{ac}}}^{{\varvec{out}}} + \varvec{F}_{\varvec{B}} + \varvec{F}_{\varvec{G}} }}{{6\pi \mu r_{0} }}.$$
As we have seen above, particles are concentrated at the acoustic pressure antinodes, so we consider here a particle located above an acoustic pressure antinode to analyse the contributions of the various forces to out-of-plane microparticle acoustophoresis. As shown in the inset of Fig. 8a, the streaming-induced drag force, \(\varvec{F}_{\varvec{d}}^{{\varvec{out}}}\), competes with the other forces above an acoustic pressure antinode, as the acoustic streaming flow drives particles away from the pressure antinode while the other forces bring particles towards it. Based on the fact that
$$\varvec{F}_{\varvec{d}}^{{\varvec{out}}} \propto r_{0}\quad {\text{and }}\quad \varvec{F}_{{\varvec{ac}}}^{{\varvec{out}}} + \varvec{F}_{\varvec{B}} + \varvec{F}_{\varvec{G}} \propto r_{0}^{3} ,$$
there should be a threshold out-of-plane particle size, \(r_{0}^{out}\): for \(r_{0} > r_{0}^{out}\), particles can easily be concentrated at the acoustic pressure antinodes, while for \(r_{0} < r_{0}^{out}\), particles will follow the out-of-plane acoustic streaming vortices. We define the threshold particle radius \(r_{0}^{out}\) from the crossover of these out-of-plane forces. The out-of-plane forces on particles of various sizes are plotted in Fig. 8a, which shows that, at a small vibration amplitude of w = 0.4 µm, the threshold particle size is \(r_{0}^{out} \approx 6.9\) µm. Considering the wall-effect correction for the streaming-induced drag forces, \(r_{0}^{out} \approx 9.1\) µm. This threshold out-of-plane particle size may vary slightly with the vibration amplitude, as \(\varvec{F}_{\varvec{B}} + \varvec{F}_{\varvec{G}}\) are independent of the vibration amplitude while \(\varvec{F}_{\varvec{d}}^{{\varvec{out}}}\) and \(\varvec{F}_{{\varvec{ac}}}^{{\varvec{out}}}\) scale with its square. As shown in Fig. 4c, the buoyancy force is approximately 1/20 of \(\varvec{F}_{{\varvec{ac}}}^{{\varvec{out}}}\) at \(w = 0.4\) µm on the vibrating interface. With an increase in vibration amplitude, the contribution of the buoyancy force to microparticle acoustophoresis in the near-field will be even smaller. To calculate the limit value of \(r_{0}^{out}\), we can set
$$\varvec{F}_{{\varvec{ac}}}^{{\varvec{out}}} + \varvec{F}_{\varvec{d}}^{{\varvec{out}}} = 0$$
by ignoring the buoyancy forces, which gives
$$r_{0}^{out} = \sqrt {\frac{{\left| {F_{d}^{out} } \right|}}{{\left| {F_{ac}^{out} } \right|}}} r_{0} \approx 7.1 \mu m.$$
Considering the wall-effect-correction for the streaming-induced drag forces, the limit value of \(r_{0}^{out} \approx 9.4\) µm.
(Colour online) Comparisons of magnitudes of a out-of-plane forces and b in-plane forces on particles with various sizes (radius of \(r_{0}\)). The insets show the directions of the plotted forces above a vibrating antinode. \(F_{ac}\), \(F_{d}\), \(F_{B}\) and \(F_{G}\) are the acoustic radiation force, streaming-induced drag force, buoyancy and gravity, respectively. The in-plane forces are the average values over the bottom edge
In-plane microparticle acoustophoresis. In-plane microparticle acoustophoresis is driven by the acoustic radiation force and the streaming-induced drag force. Similar to the analyses above, the equation of motion for a spherical particle of in-plane velocity \(\varvec{v}^{{\varvec{in}}}\) is then
$$\varvec{v}^{{\varvec{in}}} = \frac{{\varvec{F}_{\varvec{d}}^{{\varvec{in}}} + \varvec{F}_{{\varvec{ac}}}^{{\varvec{in}}} }}{{6\pi \mu r_{0} }}.$$
As shown in Figs. 4b and 6b, both the in-plane acoustic radiation force, \(\varvec{F}_{{\varvec{ac}}}^{{\varvec{in}}}\), and the streaming-induced drag force, \(\varvec{F}_{\varvec{d}}^{{\varvec{in}}}\), move microparticles to the acoustic pressure antinodes (see also the inset in Fig. 8b). To evaluate the contributions of these two forces to in-plane microparticle acoustophoresis, we compare their average values over the plate interface, because considering only the maximum force may not be accurate. Since both of these in-plane forces point to the acoustic pressure antinodes, they jointly contribute to the focusing of microparticles at the acoustic pressure antinodes, provided that the particles are large enough to avoid being driven away from the vibrating interface by the out-of-plane acoustic streaming vortices (as discussed in the previous step). This could explain the much larger particle velocities measured in experiments compared with the predicted streaming velocities, as shown in Vuillermet et al.'s (2016) work.
Although there is no threshold in-plane particle size, because both the in-plane acoustic radiation force and the streaming-induced drag force drive microparticles to the acoustic pressure antinodes, we can still quantify the contribution of each force to in-plane microparticle acoustophoresis. Again, based on the fact that
$$\varvec{F}_{\varvec{d}}^{{\varvec{in}}} \propto r_{0}\quad {\text{and}}\quad \varvec{F}_{{\varvec{ac}}}^{{\varvec{in}}} \propto r_{0}^{3} ,$$
we can expect a critical in-plane particle size, \(r_{0}^{in}\): for \(r_{0} > r_{0}^{in}\), \(\varvec{F}_{{\varvec{ac}}}^{{\varvec{in}}}\) contributes more to the in-plane acoustophoresis, while for \(r_{0} < r_{0}^{in}\), \(\varvec{F}_{\varvec{d}}^{{\varvec{in}}}\) has the higher contribution. The value of \(r_{0}^{in}\) can be found by setting \(\varvec{F}_{{\varvec{ac}}}^{{\varvec{in}}} = \varvec{F}_{\varvec{d}}^{{\varvec{in}}}\), which gives
$$r_{0}^{in} = \sqrt {\frac{{\left| {\varvec{F}_{\varvec{d}}^{{\varvec{in}}} } \right|}}{{\left| {\varvec{F}_{{\varvec{ac}}}^{{\varvec{in}}} } \right|}}} r_{0} \approx 15.7\,\upmu{\text{m}}.$$
Considering the wall-effect correction for the streaming-induced drag forces, \(r_{0}^{in} \approx 27.6\) µm. The in-plane forces on particles of various sizes are plotted in Fig. 8b. It is noteworthy that, unlike the situation for \(r_{0}^{out}\), \(r_{0}^{in}\) is independent of the vibration amplitude \(w\) because both \(\varvec{F}_{{\varvec{ac}}}^{{\varvec{in}}}\) and \(\varvec{F}_{\varvec{d}}^{{\varvec{in}}}\) scale with the square of \(w\).
Actually, it can be seen from Eqs. (17) and (20) that, ignoring the small effect of buoyancy forces in the near-field, the relationships between the in-plane and out-of-plane threshold particle sizes and the ratios of the corresponding streaming-induced drag force and acoustic radiation force are
$$r_{0}^{in} , r_{0}^{out} = \sqrt {\frac{{\left| {\varvec{F}_{\varvec{d}} } \right|}}{{\left| {\varvec{F}_{{\varvec{ac}}} } \right|}}} r_{0} .$$
4 Effects of key parameters on microparticle acoustophoresis
Having demonstrated the acoustophoresis of microparticles at various sizes for a particular plate (thickness of 5.9 µm and radius of 0.8 mm), in this section, we investigate the effects of many key parameters, including the plate radius and thickness and the fluid viscosity, on the performance of microparticle acoustophoresis in order to facilitate device design for a wide range of applications.
Effects of fluid viscosity. It can be seen from Eq. (8) that the magnitudes of the limiting velocities (i.e. the strength of the outer streaming velocities) are independent of the fluid viscosity, even though viscosity is the underlying cause of acoustic streaming flows. Thus, with a change in fluid viscosity, the streaming-induced drag force, \(\varvec{F}_{\varvec{d}}\), scales linearly with \(\mu\), while \(\varvec{F}_{{\varvec{ac}}}\) remains the same. From Eq. (21), the following relationships are established,
$$r_{0}^{in} , r_{0}^{out} \propto \sqrt \mu .$$
Therefore, to suppress the 'side effect' of streaming flows on microparticle manipulation, lowering the fluid viscosity is a viable way to increase the relative weight of the acoustic radiation force in microparticle acoustophoresis.
Effects of plate thickness and radius. To investigate the effects of the plate thickness (\(h\)) and radius (\(R\)) on microparticle acoustophoresis, we considered a series of h and R ranging from 2 to 14 µm and from 0.3 to 1.4 mm, respectively. When one parameter was varied, the other was kept constant. For each case, following the full numerical procedure described in the sections above, we calculated the threshold in-plane and out-of-plane particle sizes, which are shown in Fig. 9. It can be seen that these two threshold particle sizes show similar trends: they grow with increasing R and fall with increasing h.
Effects of plate radius on the threshold a in-plane particle sizes, \(r_{0}^{in}\), and b out-of-plane particle sizes, \(r_{0}^{out}\) (with wall effect). For (a, b), the plate thickness is the same, \(h = 5.9\) µm. For (c, d), the plate radius is the same, \(R = 0.8\) mm
Comparison with basic theory. Turning to the theoretical aspect, as seen from Eq. (21), to determine how these two threshold particle sizes change with the key parameters, we only need to determine how the force ratio on the right-hand side varies with these parameters. If we define \(\varvec{v}^{{\varvec{rad}}}\) as the contribution of the acoustic radiation force to the particle velocity, then considering Eqs. (15) and (19), we have
$$\frac{{\left| {\varvec{F}_{\varvec{d}} } \right|}}{{\left| {\varvec{F}_{{\varvec{ac}}} } \right|}} = \frac{{\left| {\overline{{\varvec{u}_{{\mathbf{2}}} }} } \right|}}{{\left| {\varvec{v}^{{\varvec{rad}}} } \right|}}.$$
Examining the acoustic field in the near-field, it can be seen from Fig. 3b that, if expanded in the radial direction, the acoustic pressure field (as plotted in Fig. 3d) can be approximated as a 1D standing wave along all circumferences for \(0 < r \ll R\), in which case the right-hand side of Eq. (23) obeys the relation (Barnkob et al. 2012)
$$\frac{{\left| {\overline{{\varvec{u}_{{\mathbf{2}}} }} } \right|}}{{\left| {\varvec{v}^{{\varvec{rad}}} } \right|}} = \frac{6\mu }{{\varPhi \rho_{f} \omega r_{0}^{2} }},$$
where \(\varPhi \approx 0.1685\) in this work is the acoustic contrast factor and the thermoviscous effects are not included.
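Equation (24) immediately gives the particle radius at which the two velocity contributions balance, \(r_{0} = \sqrt{6\mu/(\varPhi\rho_{f}\omega)}\). The snippet below evaluates this for the present case; since the loaded resonant frequency is not quoted explicitly here, it is inferred from the boundary-layer thickness \(\delta_{v} \approx 1.84\) µm given earlier (an assumption), and the water properties are textbook values.

import numpy as np

mu, rho_f, Phi = 1.0e-3, 998.0, 0.1685     # assumed water viscosity/density and the quoted contrast factor
delta_v = 1.84e-6                          # boundary-layer thickness for the water-loaded (4, 1) mode (m)
omega = 2.0 * mu / (rho_f * delta_v**2)    # inferred from delta_v = sqrt(2*mu/(rho_f*omega))

r0_balance = np.sqrt(6.0 * mu / (Phi * rho_f * omega))
print("f ~ %.0f kHz, balance radius ~ %.1f um" % (omega / 2.0 / np.pi / 1e3, r0_balance * 1e6))

Under these assumptions the balance radius comes out at a few micrometres, i.e. of the same order as the modelled thresholds discussed above.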
For a clamped circular plate with radius \(R\) and thickness h, the angular frequency in the unloaded case for each \(\left( {m, n} \right)\) mode follows (Leissa 1993)
$$\omega = \frac{{\alpha_{mn}^{2} }}{{R^{2} }}\sqrt {\frac{{Eh^{2} }}{{12\rho \left( {1 - \upsilon^{2} } \right)}}} ,$$
where \(E\) is the plate Young's modulus, ρ is the plate density and υ is the plate Poisson's ratio. Considering the surrounding water, for a given \(\left( {m, n} \right)\) mode, the angular frequency is reduced to
$$\omega = \frac{{\alpha_{mn}^{2} }}{{R^{2} }}\sqrt {\frac{{Eh^{2} }}{{12\rho \left( {1 - \upsilon^{2} } \right)}}} \frac{1}{C},$$
$$C = \sqrt {1 + \varGamma_{mn} \frac{{\rho_{f} }}{\rho }\frac{R}{h}} ,$$
where \(\varGamma_{mn}\) is the non-dimensional added virtual mass incremental (NAVMI) factor, values of which for a clamped plate can be found in Table 5 of Amabili and Kwak (1996).
Combining Eqs. (21), (23), (24) and (26), the relationships between the threshold in-plane particle sizes and the many key parameters in a 1D standing wave field can be expressed as
$$r_{0}^{in} = Rh^{ - 0.5} \left( {\frac{6\mu C}{{\varPhi \rho_{f} \alpha_{mn}^{2} }}} \right)^{0.5} \left[ {\frac{E}{{12\rho \left( {1 - \upsilon^{2} } \right)}}} \right]^{ - 0.25} .$$
The values of \(r_{0}^{in}\) calculated using Eq. (27) and those obtained from our model for various \(R\) and \(h\) are shown in Fig. 10. It can be seen that the modelled \(r_{0}^{in}\) compare reasonably well with the values calculated under the 1D standing wave approximation. The differences between the calculated and modelled values may be attributed to the fact that the acoustic field in the near-field has a more complex pattern than an idealised 1D standing wave. Nevertheless, despite the complexity of the problem, the good comparison between our model and the calculated values indicates that the approximated 1D standing wave captures the main features of the (4, 1) mode, and that our model can be applied to study the basic physics of microparticle acoustophoresis in vibrating plate systems for even more complex vibrating modes.
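The scaling law of Eq. (27) is easy to evaluate once the plate properties are fixed. The sketch below is a hedged illustration: the plate material values (generic silicon-like numbers) and the NAVMI factor Gamma_mn are placeholders, since Table 1 of this paper and Table 5 of Amabili and Kwak (1996) are not reproduced here, and α_mn is taken as the Bessel-function zero used in Eq. (6).

import numpy as np
from scipy.special import jn_zeros

mu, rho_f, Phi = 1.0e-3, 998.0, 0.1685       # assumed water properties and quoted contrast factor
E, rho, nu_p = 170e9, 2329.0, 0.28           # placeholder plate (silicon-like) properties
Gamma_mn = 0.05                              # placeholder NAVMI factor for the (4, 1) mode
alpha_mn = jn_zeros(4, 1)[-1]                # ~9.76 for the (4, 1) mode

def r0_in(R, h):
    """Threshold in-plane particle radius of Eq. (27) for plate radius R and thickness h."""
    C = np.sqrt(1.0 + Gamma_mn * (rho_f / rho) * R / h)                      # Eq. (26)
    return (R / np.sqrt(h)
            * np.sqrt(6.0 * mu * C / (Phi * rho_f * alpha_mn**2))
            * (E / (12.0 * rho * (1.0 - nu_p**2))) ** -0.25)

for R in (0.4e-3, 0.8e-3, 1.4e-3):
    print("R = %.1f mm, h = 5.9 um -> r0_in ~ %.1f um" % (R * 1e3, r0_in(R, 5.9e-6) * 1e6))

With the actual material values substituted, the same function makes the approximately linear growth with R and the fall-off with \(\sqrt h\) explicit.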
(Colour online) Comparisons on the threshold in-plane particle sizes between the modelling and theory, where the diamonds and squares show the modelled values calculated from the averaged and maximum forces over the bottom surface (with wall effect), respectively, and triangles show the calculated values using Eq. (27). For (a), the plate radius is the same, \(R = 0.8\) mm, and the plate thickness is the same for (b), \(h = 5.9\) µm
5 Mode switching
Eigenfrequency studies show that two orthogonal vibrating patterns for each (\(m,n\)) vibrating mode can be excited at two adjacent frequencies (typically differing by hundreds of Hz) provided that the mode order is high enough (\(m \ge 1\)). As shown in Fig. 11, the phase angle between two adjacent acoustic pressure antinodes of these two orthogonal patterns is
$$\theta = \frac{\pi }{2m}.$$
(Colour online) A schematic representation of the underlying mechanism for the circular manipulation of a single particle by continuous mode switching between two \(\left( {m, n} \right)\) orthogonal modes. To complete a full circle of movement (i.e. \(\theta = \pi /2m\)), 4\(m\) mode switches are required
For this specific model, both the in-plane acoustic radiation force and the streaming-induced drag force diverge from the vibrating nodes and converge at the vibrating antinodes, so when switching from one mode (e.g. mode 1 in Fig. 11) to the other orthogonal mode (e.g. mode 2 in Fig. 11), a particle tends to move from a vibrating antinode of the former to its closest antinode of the latter, either clockwise or anticlockwise depending on the initial position of the particle (assuming the initial position of the particle is slightly shifted from the vibrating antinode). The potential underlying mechanism for the circular manipulation of a single particle is shown schematically in Fig. 11. It can be seen that, for each mode switch, the particle moves by an angle \(\theta = \pi /2m\), while its distance to the centre of the circular membrane remains the same. To complete a full circle of manipulation, 4\(m\) mode switches are required. This method is different from the mode switching proposed by Glynne-Jones et al. (2010), who showed that beads can be brought to any arbitrary point between the half- and quarter-wave nodes by rapidly switching back and forth between half and quarter wavelength frequencies in bulk acoustofluidic devices.
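The bookkeeping of the switching protocol is elementary and can be summarised in a few lines; the sketch below simply encodes the angular step per switch and the number of switches needed for one full revolution, as described above.

import numpy as np

def switching_schedule(m):
    """Angular advance per mode switch and number of switches per full revolution for an (m, n) mode."""
    step = np.pi / (2.0 * m)       # phase angle between antinodes of the two orthogonal patterns
    return step, 4 * m

step, n_switch = switching_schedule(4)
print("(4, 1) mode: %.1f deg per switch, %d switches per revolution" % (np.degrees(step), n_switch))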
6 Conclusions
We have investigated the 3D acoustophoretic motion of microparticles due to acoustic radiation, acoustic streaming, gravity and buoyancy over a clamped vibrating circular plate in contact with water. The underlying physics of microparticle acoustophoresis over vibrating plates has been studied in detail. Previous analyses have predominantly emphasized the role of in-plane acoustic streaming flows in the formation of inverse Chladni patterns, which, according to this study, may not be the complete picture. For in-plane microparticle acoustophoresis, both the in-plane acoustic radiation forces and the in-plane streaming-induced drag forces were shown to drive microparticles to their closest vibrating antinodes. For out-of-plane microparticle acoustophoresis above the vibrating antinodes, in addition to the buoyancy forces, one has to consider the acoustic radiation forces in the near-field, which prevent the out-of-plane streaming vortices from dragging microparticles away from the vibrating interface.
Owing to the high efficiency of this numerical model, the threshold in-plane and out-of-plane particle sizes set by the balance of the acoustic radiation and streaming-induced drag forces can be readily obtained for all vibrating modes. An important next step is to achieve a direct experimental verification of the numerical modelling. Given a successful experimental verification, this 3D model could be extended to include thermoviscous effects (Muller and Bruus 2014) to obtain more accurate results, but this would be computationally very expensive. According to a study by Rednikov and Sadhal (2011), the thermoviscous effects can increase the streaming velocities by 18% for water at 20 °C, which would thus shift the threshold particle sizes.
The good agreement between our modelling, experiments and basic theory indicates that our numerical model could be used together with high-precision experiments as a research tool to study many as yet unsolved problems. For example, the modelling suggests that mode switching between two adjacent frequencies may be used for the circumferential manipulation of a single particle or a pair of particles, which might provide routes for the study of particle–particle and particle–wall interactions in acoustofluidics.
While we have shown here 3D particle size-dependent acoustophoresis over an ultrathin circular plate in water, we believe that this strategy could be applied to analyse the 3D acoustophoretic motion of microparticles in other vibrating plate systems regardless of the fluid medium and the thickness, shape and material of the plate. One particular application would be the acoustophoretic handling of sub-micrometre particles, such as small cells, bacteria and viruses, whose movements are usually dominated by acoustic streaming flows. From the modelled results and the general scaling law given in Eq. (27), we can conclude that increasing the plate thickness, decreasing the plate diameter and lowering the viscosity of the liquid are probably the most viable ways to conduct such manipulation.
The above-mentioned applications demonstrate that our numerical model is timely and has great potential for studies of the basic physical aspects of microparticle acoustophoresis in vibrating plate systems and for the design of lab-on-a-chip devices.
This work is supported by the EPSRC/University of Southampton Doctoral Prize Fellowship (EP/N509747/1). The authors gratefully acknowledge helpful discussions with Prof M. Hill and Dr P. Glynne-Jones. Models used to generate the simulation data supporting this study are openly available from the University of Southampton repository at http://dx.doi.org/10.5258/SOTON/404258.
10404_2017_1888_MOESM1_ESM.avi (3.9 mb)
Supplementary material 1 (AVI 3956 kb)
Lei, J. Microfluid Nanofluid (2017) 21: 50. https://doi.org/10.1007/s10404-017-1888-5
Received 08 November 2016; Accepted 22 February 2017; First Online 03 March 2017. Publisher: Springer Berlin Heidelberg.
March 2012, 17(2): 473-485. doi: 10.3934/dcdsb.2012.17.473
Compactness versus regularity in the calculus of variations
Daniel Faraco 1 and Jan Kristensen 2
Department of Mathematics, Universidad Autónoma de Madrid, and Instituto de Ciencias Matemáticas CSIC-UAM-UC3M-UCM, Campus de Cantoblanco, Madrid, 28049, Spain
Mathematical Institute, 24–29 St Giles', University of Oxford, OX1 3LB Oxford, United Kingdom
Received September 2010; Revised February 2011; Published December 2011
In this note we take the view that compactness in $L^p$ can be seen quantitatively on a scale of fractional Sobolev type spaces. To accommodate this viewpoint one must work on a scale of spaces, where the degree of differentiability is measured, not by a power function, but by an arbitrary function that decays to zero with its argument. In this context we provide new $L^p$ compactness criteria that were motivated by recent regularity results for minimizers of quasiconvex integrals. We also show how rigidity results for approximate solutions to certain differential inclusions follow from the Riesz-Kolmogorov compactness criteria.
Keywords: Weak convergence methods, differential inclusion, compactness criteria.
Mathematics Subject Classification: Primary: 49J45; Secondary: 46E35, 46E3.
Citation: Daniel Faraco, Jan Kristensen. Compactness versus regularity in the calculus of variations. Discrete & Continuous Dynamical Systems - B, 2012, 17 (2) : 473-485. doi: 10.3934/dcdsb.2012.17.473
Chapter 4 Diving into the anthill
Mathematical regularities arise in the human world as soon as one shifts attention from the individual to the collective [258]. In human societies there are transitions from disorder to order, like the spontaneous formation of a common language or the emergence of consensus about a specific issue. There are further examples of scaling, self-organization and universality. These macroscopic phenomena naturally call for a statistical approach to social behavior in which the basic constituents of the system are not particles but humans [259]. In fact, in 1842, Auguste Comte, credited as the father of sociology, already envisaged this possibility: "Now that the human mind has grasped celestial and terrestrial physics, mechanical and chemical, organic physics, both vegetable and animal, there remains one science, to fill up the series of sciences of observation - social physics. This is what men have now most need of […]" [18].
Despite these early observations in the XIX century about the possible existence of social physics, this area of research is still in its infancy. To understand why this is so, it might be enlightening to look at how research progressed in other areas of physics. For instance, in the XVI century Tycho Brahe recorded the positions of celestial objects with unprecedented accuracy and quantity. After his death, his assistant Johannes Kepler analyzed his data and extracted the three basic laws describing planetary motion that bear his name. These, in turn, inspired Newton in the formulation, by the end of the XVII century, of the laws of motion and universal gravitation. It could be argued, then, that despite the great advances in sociology of the last century, we have only just arrived at the first step of the process. That is, we are finally gathering data with unprecedented accuracy and quantity. To illustrate this point, and to fully comprehend the paradigm shift that it represents, we can take the example of rumors.
During the Second World War, the attention of social scientists was forcibly drawn to the subject of rumors. Not only did it become apparent that wartime rumors impaired public morale and confidence, but rumors could also be used as a weapon of enemy propaganda. Initially, research was focused on understanding the psychology of rumors, interpreting them as something belonging to the individual. For instance, in 1944 Knapp defined a rumor as "a proposition for belief of topical reference disseminated without official verification". He proposed that, to control rumors, people had to be well informed and have confidence in their leaders, and even that authorities should prevent idleness, monotony and personal disorganization, since rumors - he said - do not thrive among purposeful, industrious and busy people [260]. Soon after, in 1948, Allport and Postman slightly modified Knapp's definition and said that a rumor was "a proposition for belief, passed along from person to person, usually by word of mouth, without secure standards of evidence being present".
To study this spreading process, they performed several experiments. However, they were already aware of the limitations that in-lab experiments have in the particular context of rumors. For instance, they had to oversimplify rumors in order to track them. Further, the intrinsic motivation for spreading a rumor is lost if you are inside a lab and a scientist is telling you to do it, with the willingness to spread replaced by the willingness to cooperate with the experimenter. They also noted that outside the laboratory the narrator tends to add color to her story, whereas inside the laboratory the teller feels that her reputation is at stake and does her best to transmit the rumor as precisely as possible. Moreover, they usually worked with groups of only six or seven individuals.
In contrast, in 2018 Vosoughi et al. were able to investigate the diffusion of more than 126,000 true and false news stories within a population of 3 million people using Twitter data [262]. They found that false news spread farther, faster, deeper and more broadly than the truth, something that clearly could not have been studied 50 years before. Furthermore, they also investigated the role that bots devoted to systematically spreading false news could play, and found that even though bots slightly accelerated the spreading, they did not affect its total reach. Yet, a few months later, Shao et al. analyzed a much broader set of news, comprising almost 400,000 articles, and found evidence that bots do play a key role in spreading low-credibility content [263]. This contradiction is a sign that new data sources not only provide information that was not accessible before, with unprecedented accuracy and quantity, but that they also represent a subject worthy of study in its own right.
In this context, in section 4.1 we will study the dynamics of a Spanish online discussion board, Forocoches, with the objective of disentangling how its microscopic properties lead to our macroscopic observations. This will be based on the work
O'Brien, J., Aleta, A., Gleeson, J. and Moreno, Y., Quantifying uncertainty in a predictive model for popularity dynamics, Phys. Rev. E 101:062311, 2020
As McFarland et al. noted, not only might these new platforms be interesting in their own right, but they also give rise to new social phenomena that could not take place without digital intermediation. For example, some years ago people were simply technically unable to share photos at the scale and frequency they do today. These technologically enabled social transactions are a specific category of behaviors, some of which may, in turn, affect offline social dynamics and structures. Hence, data generated on digitally mediated platforms represent new categories of social action, no different from other phenomena of sociological interest [65].
Along these lines, we will conclude this section by analyzing crowd dynamics in a digital setting. In particular, we will analyze the dynamics that emerged in an event that took place in February 2014, in which nearly a million players joined together and played a crowd-controlled game, i.e., a game in which the character of the videogame was controlled simultaneously by all players. Clearly, this type of event would have been completely unattainable, at least at such a scale, without the Internet. Yet, we will see that patterns that are common in the offline world were reflected in this event. This section will be based on the work
Aleta, A., and Moreno, Y., The dynamics of collective social behavior in a crowd controlled game, EPJ Data Sci., 8:1-16, 2019
Besides, as we shall see, not only are the two systems that we will explore interesting in their own right, but they will also allow us to discuss the dynamics that emerges when humans come together in groups. Since the late XIX century the concept of the group has received a lot of attention from psychologists and sociologists, as it was observed that a group is not just the sum of the individuals that compose it. The appearance of the Internet, rather than breaking the boundaries that lead naturally to groups, has allowed the formation of new and larger groups, as some sort of virtual ant colonies.
A discussion board, or Internet forum, is an online discussion site where people can hold conversations in the form of posted messages [264]. These platforms are hierarchically organized in a tree-like structure. Each forum can contain a set of sub-forums dedicated to specific topics. Then, inside each sub-forum users can begin a new conversation by opening a thread. In turn, other users can participate in the conversation by sending posts to the thread.
In the last decade, social networks have revolutionized the way we interact with each other. Yet, Internet forums precede modern online social networks by several decades. The precursors of forums date from the late 1970s, although the first proper Internet forum as we know it today was the World-Wide Web Interactive Talk, created in 1994 [265]. As Rheingold noted in 1993, these platforms gave rise to virtual communities that exceeded the limits of the offline world [266]. He stated that the main characteristics of these communities were that they belonged to cyberspace, that they were based on public discussion and that personal relationships could be developed among the participants. Of these aspects, probably the most characteristic one is the absence of physical boundaries in these communities, allowing people from all over the world to come together. This already raised the interest of several researchers during the late 1990s and early 2000s [267], although we find the thoughts of the jurist Cass R. Sunstein particularly interesting [268]. In 1999, he published a work on group polarization and stated that:
"Many people have expressed concern about processes of social influence on the Internet. The general problem is said to be one of fragmentation, with certain people hearing more and louder versions of their own preexisting commitments, thus reducing the benefits that come from exposure to competing views and unnoticed problems. But an understanding of group polarization heightens these concerns and raises new ones. A `plausible hypothesis is that the Internet-like setting is most likely to create a strong tendency toward group polarization when the members of the group feel some sense of group identity'. If certain people are deliberating with many like-minded others, views will not be reinforced but instead shifted to more extreme points. This cannot be said to be bad by itself - perhaps the increased extremism is good - but it is certainly troublesome if diverse social groups are led, through predictable mechanisms, toward increasingly opposing and ever more extreme views. It is likely that processes of this general sort have threatened both peace and stability in some nations; while dire consequences are unlikely in the United States, both fragmentation and violence are predictable results. As we have seen, group polarization is intensified if people are speaking anonymously and if attention is drawn, though one or another means, to group membership. Many Internet discussion groups have precisely this feature. It is therefore plausible to speculate that the Internet may be serving, for many, as a breeding group for extremism."
These words predicted, for instance, the problem of echo chambers (people only viewing information in social networks coming from those who think like them [269]), the role of the Internet in the Arab Spring [270] and the appearance of extremist groups, such as incels [271], roughly 20 years in advance, at a time when the Internet was, in comparison with today, still in its infancy. Admittedly, his views were probably based on the large amount of research on group polarization that was carried out during the XX century. Psychologists, sociologists, economists, politicians… the fact that groups are not just the sum of individuals had already attracted the interest of scientists from very diverse fields.
The previous examples show that these discussion platforms are worth studying in their own right. But bear in mind that these systems also provide a wealth of valuable data about how people interact, which can, in turn, be used to test hypotheses about social behavior that were put forward in other contexts. For instance, in 2005 Berger and Heath proposed the concept of idea habitats [272]. They argued that ideas have a set of environmental cues that prime people to think about them and to believe they may be relevant to pass along. Although their definition of habitat is quite broad (for instance, the current season is one of the cues building the habitat), we can clearly see that human groups in general, and online groups in particular, can be examples of habitats. Moreover, they said that to really test their ideas they would need a "perfect but unobtainable database" such as a "searchable database of all conversations". Even though discussion platforms do not possess all the information, as discussions might be influenced by external factors, it might be possible to find examples of conversations that only make sense within a particular online system. In such a case, the system would surely represent a database of all conversations. In fact, in section 4.1.1 we will see one example along these lines.
This data can also help us understand how culture disseminates and evolves. In 1985, Sperber proposed that culture could be studied through the lens of epidemiology - as something that propagates. Yet, he doubted that mathematical models would ever be needed to model cultural transmission [273]. A few years later, in 1997, Axelrod presented his seminal work on cultural dissemination. With a very simple mathematical model, he demonstrated that social influence, contrary to expectations, could naturally lead to cultural polarization rather than homogenization [274]. The accelerated rate at which online platforms evolve, in comparison with their offline counterparts, can be used to test these assumptions in the light of data. Furthermore, the boundary between online and offline culture is getting thinner now that all cultural expressions and personal experiences are shared across the Internet. Thus, this data can be used to study the evolution of the new culture that is being formed, the culture of real virtuality in Castells' terms [71].
To conclude this introduction, we can give yet another example of the opportunities that having such large amounts of data represents. In 2010 Onnela and Reed-Tsochas studied the popularity of all Facebook applications that were available when the data was collected [275]. This, they claimed, removed the sampling bias that is usually present in studies of social influence, in which only successful products are taken into account. By doing so, they discovered that the popularity of those applications was governed by two very different regimes: an individual one, in which social influence plays no role, and a collective one, in which installations were clearly influenced by the behavior of others. They proposed that this type of study could be extrapolated to other online systems. For instance, they gave the example of the (back then) online book retailer Amazon and the online DVD rental service Netflix, which allowed their users to rate their products. This would lead to endogenously generated social influence, at a rate unprecedented in the offline world, with important economic consequences. Actually, the fact that consumers were influenced by opinions found on the Internet had already attracted the attention of researchers in the early 2000s in, precisely, the context of Internet forums [276].
The wide range of possibilities that the analysis of discussion boards provides should by now be clear. Yet, the following sections will have much more modest goals. Our objective is to understand the dynamics of the board, which, in turn, should help us to study much more complex phenomena, such as social influence, in the future.
+++ Divide By Cucumber Error. Please Reinstall Universe And Reboot +++
"Hogfather", Terry Pratchett
Forocoches is a Spanish discussion board created in 2003 to talk about cars. Back in those days it was common to have forums on very diverse topics, unlike modern social networks in which all the information is gathered in the same place. This fact can easily be verified by looking at the names of the subsections that compose the board, figure 4.1. However, the discussions in the forum evolved throughout the years, with more and more people gathering in the General subsection. Nowadays, this subsection contains over 80% of all the posts in the board, and the discussions cover many topics that have nothing to do with cars, as can be seen in figure 4.2.
Figure 4.1: Subsections of the board (translated from Spanish). Most of the terms are related to cars, but the distribution of posts across the board is very heterogeneous with 80% of all the messages posted in the General subsection.
Figure 4.2: Topic evolution in Forocoches. Wordclouds of the words used in thread titles. In 2003 the most common words were related to cars. Some of them refer to particular car models: alfa (alfa romeo), golf (Volkswagen golf), leon (seat leon), etc. Others represent car parts, technologies or accessories: tdi (turbocharged direct injection), cv (horsepower), llantas (rims), aceite (oil), cd, dvd, mp3… On the other hand, in 2016 the most common words refer to a broader set of topics. There are terms related to politics (pp, psoe, podemos and ciudadanos, which were the main political parties in Spain in that year), technology (amazon, xiaomi, pc, iphone…) and games (ps4, juego, pokemon…), to name a few.
A remarkable aspect of the forum is that since 2009 people cannot register freely as in most social networks. Instead, to be able to create an account an invitation from a previous member is needed, and invitations were quite limited for a few years. Currently, there are some commercial campaigns that grant invitations, making it slightly easier to obtain one, but in any case it is a much more closed community than common social networks, in which anyone can create a new account. Note also that despite this fact the board has grown continuously since its creation, figure 4.3A.
Before going any further we should briefly describe the functioning of the board. As in other discussion boards, the information is organized in a tree-like structure. Each section is composed of a set of subsections. In each subsection, a new discussion can be started by opening a thread. Then, users can send posts to continue said discussion. From now on, we will restrict ourselves to the study of the General subsection, as it is the one with the broadest set of topics and it is also the most active one, as stated previously.
In the General subsection all threads that have received a new post within the last 24 hours are visible, although they are organized in a set of pages containing 40 threads each (very much like Google results). The threads appear in reverse chronological order, that is, the first thread is the one which received a new post most recently. Note that this is completely different from other social platforms, in which the information is organized according to the user's preferences or according to her followers/friends. Thus, this should remove the problem of echo chambers that we mentioned previously, as people are shown all the information that is on the board regardless of whether it suits their tastes or not. Although, admittedly, there could still be a bias due to the forum only containing a certain type of information, at least it is much easier to analyze, since it is not necessary to have precise data about the behavior of each single user.
Inside each thread, posts are organized in chronological order, with the first post being the one that initiated the conversation and the last one the most recent. Posts can contain text, images or videos. Besides, it is possible to cite a previous post in the thread (or in another thread). This does not modify the ordering of the posts, nor does it add any points or likes to them. Indeed, unlike other platforms, there are no measures of the popularity of posts, such as retweets or favorites. It should be noted that each thread can only contain up to 2,000 posts. Once the limit is reached the thread is automatically closed, and if users want to continue with the conversation they need to start a new thread. Nevertheless, the great majority of threads never reach that limit. This fact is shown in figure 4.3B, where the distribution of thread sizes is plotted.
Figure 4.3: Statistics of Forocoches. A) Number of new posts per month as a function of time. The activity in the forum has increased continuously since its creation in 2003. B) Distribution of thread popularity measured as number of posts per thread. The distribution can be fitted by a lognormal distribution (which is commonly found in online social media [279], [280]) with parameters \(\mu=2.79\) and \(\sigma=1.25\).
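As a side note, a fit like the one quoted in the caption of figure 4.3B can be reproduced in a few lines. The sketch below is only illustrative and is not the code used for the figure: the file name is a placeholder, and the only point being made is how scipy's parametrization of the lognormal relates to the \(\mu\) and \(\sigma\) reported above.

```python
import numpy as np
from scipy import stats

# Hypothetical input: one thread size (number of posts) per line.
sizes = np.loadtxt("thread_sizes.txt")

# Fix the location at zero so that log(sizes) ~ Normal(mu, sigma);
# scipy parametrizes the lognormal with shape s = sigma and scale = exp(mu).
s, loc, scale = stats.lognorm.fit(sizes, floc=0)
mu, sigma = np.log(scale), s
print(mu, sigma)  # values around 2.8 and 1.2 would be consistent with figure 4.3B
```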
Posts can only be sent by people who have registered an account in the forum. An account has an associated nickname, as well as possibly a profile picture and some more information about the user. Unlike social networks, it is not possible to automatically track the activity of other users by following them or being friends (although it is always possible to go to their profile and check their latest posts). This system therefore does not possess an explicit social network, and the interactions between individuals should be based more on the topic than on social factors. Yet, we should emphasize that even if there is no explicit underlying network like the ones we can find in social networks, it would be possible to construct networks that provide insights into the characteristics of the system. For instance, it would be possible to consider that users are nodes and that two users should be linked if they participate in the same thread. Further, these links could be weighted by the number of times this event occurs. Then, it would be possible to study how information flows in the system or whether there are underlying structures that might be hidden, such as groups of users that tend to always discuss the same ideas together.
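As an illustration of this idea, a minimal sketch of such a construction (assuming the networkx library and a hypothetical list in which each thread has been reduced to the set of users who posted in it) could read:

```python
import itertools
import networkx as nx

def coparticipation_network(threads):
    """Weighted user-user network: nodes are users and the weight of an edge
    counts in how many threads the two users have posted together."""
    g = nx.Graph()
    for users in threads:
        for u, v in itertools.combinations(sorted(set(users)), 2):
            if g.has_edge(u, v):
                g[u][v]["weight"] += 1
            else:
                g.add_edge(u, v, weight=1)
    return g

# Hypothetical usage, with each inner list holding the participants of one thread:
g = coparticipation_network([["ana", "luis", "marta"], ["luis", "marta"], ["ana", "luis"]])
print(g["luis"]["marta"])  # {'weight': 2}
```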
At this point, we should give some more details about the size of the board. Figure 4.3A shows that as of 2016 the forum received more than 1.5 million posts per month. According to the most recent statistics provided by the board, that number is now over 4 million. There are over 5 million threads, 340 million posts and 800 thousand registered users [281]. Although these numbers pale in comparison with the large social networks that are spread all over the world, note that in this case 90% of the traffic comes from Spain. This has some interesting consequences. On the one hand, the board is much smaller than other social networks, making it easier to analyze but, at the same time, large enough to yield robust statistics. Moreover, the fact that the traffic comes mostly from Spain also facilitates the study of Spanish events without the sampling biases that arise when one uses the geo-location of users to determine where they come from in social media such as Twitter [282], [283].
Interestingly, it is possible to find remarkable similarities between Forocoches and other Internet platforms like Twitter. For instance, in figure 4.4, we show the daily activity patterns in both systems. In the case of Forocoches the data refer to the year 2015, while for Twitter they represent the tweets sent from within the United Kingdom during a week of October 2015 by people who had their geo-location activated. As expected, both systems reflect the offline activity patterns of the population, with lower activity during the night. Yet, even though both systems exhibit a pattern that we could call double peaked, with one peak at lunch time and another at the beginning of the night, there are clear differences that might be related to the sociological characteristics of both countries. Again, this highlights that it is possible to extract much more information from these datasets than it might seem at first glance.
Figure 4.4: Daily activity of users in online social networks. A) Average number of posts sent as a function of time during 2015 in Forocoches. B) Average number of tweets sent as a function of time in October 2015 by users who had their geo-location activated and sent them from within the United Kingdom.
Another example of the possibilities that the study of these systems brings is shown in figure 4.5. The emergence of new social contexts has enabled slang to abound on the web [284]. Although it is not possible to track the whole evolution of terms that are used all over the Internet, it is possible to find words that only have meaning in a certain context. In this particular case, we show two examples of terms that have meaning only within the board. The interesting thing about them is not their meaning but the fact that it is possible to track their whole evolution, something that obviously cannot be done in the offline world [285]. This information can then be used to study the dynamics of the cultural evolution of language [286]. In other words, we have the database of all conversations that Berger and Heath needed to test their hypothesis of idea habitats.
Figure 4.5: Evolution of slang in Forocoches. Usage of two memes in posts as a function of time. A) The term "288" originated when a user started a thread on the 8th of April of 2011 with the title "\(48 ÷ 2(9+3)=????\)", prompting people to give their answer. A debate on whether the division or the multiplication had to be performed first arose, with 288 being the solution if the division is performed first. After that thread the term became a meme that is used as a joke to answer questions related to numbers. B) The term "din" originated in a thread started on the 30th of May of 2010. The first person to answer the thread (after the person that created it) wrote "DIN del POST" (din of the post) probably due to a mistake (the letter \(d\) is next to \(f\), which would be used to write fin, end). From that point on, the term gained popularity as a way of saying that someone posted an argument that answered the question being discussed.
Our goals are, however, much more modest for this part of the thesis. Our objective is to understand the mechanics behind the macroscopic behavior of the forum, which in turn should help us in the future to study more specific characteristics of the system, such as the ones described so far. The starting point will be the following observation. Threads that have been inactive for over 24 hours are not removed, as they are in other boards. Even though they are no longer present in the list that can be directly accessed from the front page, they can still be accessed either by having their link or by finding them using Google or the forum's own search engine. Nevertheless, it has been observed that, in Google, over 90% of users do not go beyond the first page of results [287]. Hence, it seems reasonable to assume that users will tend to focus on the 40 threads that are on the front page. Thus, given that the more recently a thread has received a post, the more likely it is to be found in the first positions, we hypothesize that the dynamics of the forum should follow some sort of self-exciting process. In particular, we will focus on non-homogeneous Poisson processes, which have yielded satisfactory results when used to study other online social platforms, such as Twitter [288] and Reddit [289] (see [290] for a recent review on other applications of these processes).
In general, point processes are used to describe the random distribution of points in a given mathematical space. In our case, this mathematical space will be the positive real line, so that events will be distributed across time. Moreover, we are not interested in the specific distribution of each event but rather in their cumulative count, as our objective is to elucidate the mechanisms leading to thread growth. In this case, point processes can be described as counting processes [291].
A counting process is a stochastic process defined by the number of events that have been observed (arrived) until time \(t\), \(N(t)\) with \(t \geq 0\). Thus, \(N(t) \in \mathbb{N}_0\), \(N(0)=0\) and it is a right-continuous step function with increments of size \(+1\). Further, we denote by \(\mathcal{H}_u\) with \(u\geq 0\) the history of the arrivals up to time \(u\). It is completely equivalent to refer to this process as a point process defined by a sequence of ordered random variables \(T=\{t_1,t_2,\ldots\}\).
These processes are characterized by the conditional intensity function, which reflects the expected rate of arrivals conditioned on \(\mathcal{H}_t\):
\[\begin{equation} \lambda(t|\mathcal{H}_t) = \lim_{h\rightarrow 0} \frac{P\{N(t,t+h]>0|\mathcal{H}_t\}}{h}\,. \tag{4.1} \end{equation}\]
The most common example of these processes is the homogeneous Poisson process, in which the conditional intensity is constant. Using equation (4.1) this can be properly defined as
\[\begin{equation} \begin{split} P\{N(t,t+h&] = 1 | \mathcal{H}_t\} = \lambda h + o(h) \\ P\{N(t,t+h&] > 1 | \mathcal{H}_t\} = o(h) \\ P\{N(t,t+h&] = 0 | \mathcal{H}_t\} = 1 - \lambda h + o(h) \\ &\Rightarrow \lambda(t|\mathcal{H}_t) = \lambda \,, \end{split} \tag{4.2} \end{equation}\]
with \(\lambda>0\). An interesting consequence of this definition is that the distance between two consecutive points in time is an exponential random variable with parameter \(\lambda\). This, in turn, implies that the distribution is memoryless, i.e., the waiting time (or interarrival time) until the next event does not depend on how much time has elapsed.
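To make this property concrete, the following minimal sketch (in Python, assuming only numpy) samples a homogeneous Poisson process by drawing independent exponential interarrival times; the rate and time horizon are arbitrary illustrative values rather than quantities estimated from the forum.

```python
import numpy as np

def simulate_homogeneous_poisson(rate, t_max, rng=None):
    """Event times of a homogeneous Poisson process on [0, t_max], built from
    i.i.d. Exponential(rate) waiting times (the memoryless property)."""
    rng = np.random.default_rng() if rng is None else rng
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate)  # waiting time does not depend on the past
        if t > t_max:
            return np.array(times)
        times.append(t)

events = simulate_homogeneous_poisson(rate=2.0, t_max=100.0)
print(len(events), np.diff(events).mean())  # ~200 events, mean interarrival ~ 1/rate
```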
Conversely, a Poisson process is said to be inhomogeneous when the conditional intensity depends on time:
\[\begin{equation} \begin{split} P\{N(t,t+h&] = 1 | \mathcal{H}_t\} = \lambda(t) h + o(h) \\ P\{N(t,t+h&] > 1 | \mathcal{H}_t\} = o(h) \\ P\{N(t,t+h&] = 0 | \mathcal{H}_t\} = 1 - \lambda(t) h + o(h) \\ &\Rightarrow \lambda(t|\mathcal{H}_t) = \lambda(t) \,. \end{split} \tag{4.3} \end{equation}\]
In this section, we are interested in a specific type of inhomogeneous Poisson processes known as self-exciting or Hawkes processes, as introduced by Alan G. Hawkes in 1971 [292]. In these processes the conditional intensity not only depends on time, but also on the whole history of the event. Hence, it is given by
\[\begin{equation} \lambda(t) = \lambda_0(t) + \int_0^t \phi(t-s) \text{d}N_s\,. \tag{4.4} \end{equation}\]
The first term of this equation is the background intensity of the process while \(\phi(t-s)\) is the excitation function. This way, the conditional intensity depends on all previous events in a way that is determined by the excitation function. Henceforth, we may refer to the function \(\phi(t-s)\) as the kernel of the process.
Although the function \(\phi(t-s)\) can take almost any form, to gain some intuition about these processes a convenient choice is the exponential function. In fact, that was the function that Hawkes used to illustrate his paper. Hence, if \(\phi(t-s) = \alpha \exp(-\beta(t-s))\), we can rewrite equation (4.4) as
\[\begin{equation} \lambda(t) = \lambda_0(t) + \int_0^t \alpha e^{-\beta (t-s)} \text{d}N_s = \lambda_0(t) + \sum_{t_i<t} \alpha e^{-\beta(t-t_i)}\,, \tag{4.5} \end{equation}\]
where the constant \(\alpha\) can be interpreted as the instantaneous excitation of the system when a new event arrives and \(\beta\) as the rate at which said arrival's influence decays.
In figure 4.6 we show an example of the intensity obtained using an exponential kernel. As it can be seen, every time a new event arrives, the intensity is incremented by a factor \(\alpha\) leading to new, clustered, arrivals. Then the intensity decays at rate \(\beta\) until it reaches the value of the background intensity. It is worth remarking that events in Hawkes processes tend to be clustered, i.e., the interarrival time is not independent as in homogeneous processes.
Figure 4.6: Conditional intensity function of a self-exciting process. Simulation of a Hawkes process with exponential kernel, \(\lambda_0 = 1\), \(\alpha = 1\) and \(\beta = 2\). The curve shows the value of the conditional intensity, while dots mark the moments at which a new event arrived.
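A realization like the one shown in figure 4.6 can be generated with Ogata's thinning algorithm. The sketch below is a minimal (and deliberately unoptimized) Python implementation for the exponential kernel of equation (4.5); it is not the code used in this thesis, and the parameter values are simply those quoted in the caption of figure 4.6.

```python
import numpy as np

def simulate_hawkes_exp(lambda0, alpha, beta, t_max, rng=None):
    """Ogata's thinning for lambda(t) = lambda0 + sum_{t_i<t} alpha*exp(-beta(t-t_i))."""
    rng = np.random.default_rng() if rng is None else rng
    events, t = [], 0.0
    while t < t_max:
        # The intensity just after t bounds lambda(s) for s > t, since the kernel only decays
        lam_bar = lambda0 + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)          # candidate arrival time
        if t >= t_max:
            break
        lam_t = lambda0 + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        if rng.uniform() <= lam_t / lam_bar:         # accept with probability lambda(t)/lam_bar
            events.append(t)
    return np.array(events)

# Parameters as in figure 4.6: lambda_0 = 1, alpha = 1, beta = 2
ts = simulate_hawkes_exp(1.0, 1.0, 2.0, t_max=10.0)
```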
This figure can also be used to introduce a different interpretation of the process. Suppose that the stream of immigrants arriving in a country forms a homogeneous Poisson process with rate \(\lambda_0\). Then, each individual can produce zero or more children, independently of one another, following a simple inhomogeneous Poisson process (without excitation). The global arrival of new people to the country would then follow a Hawkes process. In the terminology of the forum, we could say that new posts arrive at the thread at a rate \(\lambda_0(t)\), which might depend on time because the activity of the users changes during the day (as we saw in figure 4.4), and that each of those posts itself sprouts a sequence of new posts until the thread disappears from the front page (its intensity falls back to the background intensity).
In branching terminology, this immigration-birth representation describes the Galton-Watson process that we briefly discussed in the introduction, albeit with a modified time dimension [293]. In this context, it is possible to define the branching ratio of the process as
\[\begin{equation} n = \int_0^\infty \phi(t) \text{d}t = \int_0^\infty \alpha e^{-\beta s} \text{d}s = \frac{\alpha}{\beta}\,, \tag{4.6} \end{equation}\]
which is the average number of offspring generated by each point event [294]. Both the definition of this parameter and its shape should ring some bells. Indeed, this expression is equivalent to the definition of the basic reproduction number that we saw in section 3.2. In fact, the SIR model can be studied as a Hawkes process [295]. Actually, the study of point processes partially has its origin in the demographic problems studied at the beginning of the XX century by mathematicians such as Lotka, who also introduced the concept of the basic reproductive number in demography, as discussed in section 3.2 [291].
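As a quick sanity check of this interpretation, we can evaluate equation (4.6) for the parameters used in the simulation of figure 4.6, \(\alpha = 1\) and \(\beta = 2\):
\[ n = \frac{\alpha}{\beta} = \frac{1}{2} < 1\,, \]
so each event generates on average half a direct offspring, the process is subcritical, and every burst of activity eventually dies out, with the intensity relaxing back to the background value \(\lambda_0\).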
A particularly successful application of Hawkes processes was introduced by Ogata in 1988 in the context of earthquakes [296]. Specifically, he used Hawkes processes to describe the occurrence of major earthquakes and the aftershocks that follow them, although he chose a different kernel. He proposed that the intensity should decay following a power law so that
\[\begin{equation} \lambda(t) = \lambda_0(t) + \sum_{t_i<t} \frac{\alpha}{(t-t_i+c)^{1+\beta}}\,. \tag{4.7} \end{equation}\]
Interestingly, he named his model for seismology the Epidemic-Type Aftershock Sequence model (ETAS).
The contribution of Ogata was not simply the introduction of the model to seismology. What really made his work outstanding was that, at a time when most researchers on point processes were mainly focused on their theoretical properties, he established a road map for how to apply point process models to real data using a formal likelihood-based inference framework [297]. The next section will be devoted to this issue.
If our intuition is correct, the arrival of posts to threads in Forocoches should be well described by a self-exciting process. In order to test this hypothesis we need two ingredients. First, we have to estimate the parameters that would yield the observed time sequence of a given thread. Then, we need to measure the quality of the model.
To estimate the set of parameters describing a thread we will use maximum likelihood estimation [298]. Suppose that \(\{t_1,t_2,\ldots,t_n\}\) is a realization over time \([0,T]\) from a point process with conditional intensity function \(\lambda(t)\). The likelihood of the process as a function of the set of parameters \(\theta\) can be expressed as
\[\begin{equation} \mathcal{L}(\theta) = \left[ \prod_{i=1}^n \lambda(t_i|\theta)\right] \exp\left(-\int_0^T \lambda(u|\theta) \text{d}u\right)\,, \tag{4.8} \end{equation}\]
and the log-likelihood is thus given by
\[\begin{equation} l(\theta) = \ln \mathcal{L}(\theta) = \sum_{i=1}^n \ln[\lambda(t_i|\theta)] - \int_0^T \lambda(u|\theta) \text{d} u \,. \tag{4.9} \end{equation}\]
For simplicity, we will assume that the background intensity is either zero or constant, so that \(\lambda_0(t) \equiv \lambda_0\). Hence, in the particular case of an exponential kernel, equation (4.5), the log-likelihood reads
\[\begin{equation} l = -\lambda_0 t_n + \frac{\alpha}{\beta} \sum_{i=1}^n \left[e^{-\beta(t_n-t_i)}-1\right] + \sum_{i=1}^n \ln[\lambda_0 + \alpha A(i)]\,, \tag{4.10} \end{equation}\]
where \(A(i) = e^{-\beta(t_i-t_{i-1})}(1+A(i-1))\) for \(i\geq 2\), with \(A(1)=0\). As there is no closed-form solution, it is necessary to obtain the maximum of this function numerically. Fortunately, this recursive relation greatly reduces the computational complexity of the problem. For this reason exponential kernels, or power law kernels with exponential cut-off, are the preferred choice in the analysis of high-frequency trading [299]. Nevertheless, to speed up the computation, it is convenient to also calculate the derivatives of the log-likelihood:
\[\begin{equation} \begin{split} &\frac{\partial l}{\partial \lambda_0} = -t_n + \sum_{i=1}^n \frac{1}{\lambda_0 + \alpha A(i)} \\ &\frac{\partial l}{\partial \alpha} = \sum_{i=1}^n \frac{A(i)}{\lambda_0+\alpha A(i)} + \frac{1}{\beta} \sum_{i=1}^n\left[ e^{-\beta(t_n-t_i)}-1\right] \\ &\frac{\partial l}{\partial \beta} = \sum_{i=1}^n \frac{\alpha A'(i)}{\lambda_0 + \alpha A(i)} - \frac{\alpha}{\beta^2} \sum_{i=1}^n\left[e^{-\beta(t_n-t_i)}-1\right] + \frac{\alpha}{\beta} \sum_{i=1}^n \left[-(t_n - t_i) e^{-\beta(t_n-t_i)}\right]\,, \end{split} \tag{4.11} \end{equation}\]
where \(A'(i) = e^{-\beta(t_i-t_{i-1})}\left[-(t_i-t_{i-1})(1+A(i-1))+A'(i-1)\right]\) for \(i\geq 2\), with \(A'(1) = 0\).
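For concreteness, a possible Python implementation of equation (4.10), using the recursion for \(A(i)\), is sketched below. It is not the code used for the analysis in this thesis: it takes the observation window to end at the last post, ignores the analytical gradient of equation (4.11) and simply hands the negative log-likelihood to a derivative-free optimizer; `post_times` is a hypothetical, sorted array of post arrival times.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood_exp(params, t):
    """Negative log-likelihood of an exponential-kernel Hawkes process, Eq. (4.10),
    with the observation window taken to end at the last event time t[-1]."""
    lambda0, alpha, beta = params
    if lambda0 <= 0 or alpha <= 0 or beta <= 0:
        return np.inf                      # keep the optimizer in the valid region
    n = len(t)
    A = np.zeros(n)                        # A[0] corresponds to A(1) = 0
    for i in range(1, n):
        A[i] = np.exp(-beta * (t[i] - t[i - 1])) * (1.0 + A[i - 1])
    ll = (-lambda0 * t[-1]
          + (alpha / beta) * np.sum(np.exp(-beta * (t[-1] - t)) - 1.0)
          + np.sum(np.log(lambda0 + alpha * A)))
    return -ll

# Hypothetical usage on the posting times (e.g. in hours) of a single thread:
# res = minimize(neg_log_likelihood_exp, x0=[0.1, 1.0, 2.0], args=(post_times,),
#                method="Nelder-Mead")
# lambda0_hat, alpha_hat, beta_hat = res.x
```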
Similarly, the log-likelihood for the power law kernel defined in equation (4.7) can be expressed as
\[\begin{equation} l = - \lambda_0 t_n - \frac{\alpha}{\beta} \sum_{i=1}^n\left(\frac{1}{c^\beta}-\frac{1}{(t_n-t_i+c)^\beta}\right) + \sum_{i=1}^n \ln \left[\lambda_0 + \sum_{j=1}^{i-1} \frac{\alpha}{(t_i-t_j+c)^{1+\beta}}\right]\,. \tag{4.12} \end{equation}\]
In this case the computation of the kernel for long time sequences is more costly. The gradient for this expression reads
\[\begin{equation} \begin{split} &\frac{\partial l}{\partial \lambda_0} = -t_n + \sum_{i=1}^n \frac{1}{\lambda_0 + \alpha A(i)}\\ &\frac{\partial l}{\partial \alpha} = \sum_{i=1}^n \frac{A(i)}{\lambda_0 + \alpha A(i)} - \frac{1}{\beta} \sum_{i=1}^n \left(\frac{1}{c^\beta} - \frac{1}{(t_n-t_i+c)^\beta}\right)\\ &\frac{\partial l}{\partial \beta} = \sum_{i=1}^n \frac{-\alpha LA(i)}{\lambda_0 + \alpha A(i)} + \frac{\alpha}{\beta^2} \sum_{i=1}^n \left(\frac{1}{c^\beta} - \frac{1}{(t_n-t_i+c)^\beta}\right) + \frac{\alpha}{\beta} \sum_{i=1}^n \left(\frac{\ln(c)}{c^\beta} - \frac{\ln(t_n-t_i+c)}{(t_n-t_i+c)^\beta}\right) \\ &\frac{\partial l}{\partial c} = - \sum_{i=1}^n \frac{\alpha (1+ \beta) A'(i)}{\lambda_0 + \alpha A(i)} +\alpha \sum_{i=1}^n \left(\frac{1}{c^{\beta+1}} - \frac{1}{(t_n-t_i+c)^{\beta + 1}}\right) \end{split} \tag{4.13} \end{equation}\]
with \(A(i) = \sum_{j=1}^i (t_i-t_j+c)^{-1-\beta}\), \(LA(i) = \sum_{j=1}^i \ln(t_i-t_j+c) (t_i-t_j+c)^{-1-\beta}\) and \(A'(i) = \sum_{j=1}^i (t_i-t_j+c)^{-2-\beta}\).
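In practice the maximization itself can be delegated to a standard numerical optimizer. A minimal sketch for the exponential case, reusing exp_hawkes_loglik (and the NumPy import) from above; the bounds and starting point are illustrative choices of ours, and the analytic gradients of equation (4.11) could be passed through the jac argument to speed up convergence:

```python
from scipy.optimize import minimize

def fit_exp_hawkes(t, x0=(0.1, 0.5, 1.0)):
    """Estimate (lambda0, alpha, beta) by maximizing eq. (4.10)."""
    neg_ll = lambda p: -exp_hawkes_loglik(p, t)
    res = minimize(neg_ll, x0=np.asarray(x0), method="L-BFGS-B",
                   bounds=[(1e-8, None)] * 3)   # keep all parameters positive
    return res.x, -res.fun                       # estimates and maximized log-likelihood
```

For a stationary process one would additionally check that the estimated branching ratio \(\alpha/\beta\) stays below one.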
With these expressions we can easily estimate the set of parameters that would fit each thread in our dataset. To assess the quality of the fit, a common approach is to use tools such as the Akaike information criterion (AIC) [300]. However, as already noticed by Ogata, AIC and related methods can indicate which is the best model among the ones being considered, but they say nothing about whether there is a better model outside that set. Fortunately, there is a better option.
Suppose that the point process data \(\{t_i\}\) are generated by the conditional intensity \(\lambda(t)\). We define the compensator of the counting process as
\[\begin{equation} \Lambda(t) = \int_0^t \lambda(s) \text{d}s\,, \tag{4.14} \end{equation}\]
which in the case of the exponential kernel is equal to
\[\begin{equation} \Lambda(t_k) = \lambda_0 t_k - \frac{\alpha}{\beta} \sum_{i=1}^{k-1} \left[e^{-\beta(t_k-t_i)}-1\right]\,, \tag{4.15} \end{equation}\]
and for the power law kernel is
\[\begin{equation} \Lambda(t_k) = \lambda_0t_k + \frac{\alpha}{\beta} \sum_{i=1}^{k-1} \left(\frac{1}{c^\beta} - \frac{1}{(t_k-t_i+c)^\beta}\right)\,. \tag{4.16} \end{equation}\]
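In practice, the compensator is evaluated at every event time to obtain the rescaled points used in the goodness-of-fit procedure described below. A minimal sketch for the exponential kernel, equation (4.15), reusing the NumPy import from above; the function name is ours and the direct \(O(n^2)\) evaluation is kept for clarity:

```python
def rescaled_times_exp(params, t):
    """Apply the time change t_i* = Lambda(t_i) for the exponential kernel, eq. (4.15)."""
    lam0, alpha, beta = params
    t = np.asarray(t, dtype=float)
    t_star = np.empty_like(t)
    for k in range(len(t)):
        past = t[:k]   # events strictly before t_k
        t_star[k] = lam0 * t[k] - (alpha / beta) * np.sum(np.exp(-beta * (t[k] - past)) - 1.0)
    return t_star
```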
Figure 4.7: Fitting Hawkes processes to Forocoches threads. Each panel shows the fraction of threads that successfully pass all the tests described in section 4.1.3 with different kernel choices. A) Exponential kernel with constant background intensity. B) Power law kernel with constant background intensity. C) Homogeneous Poisson process. D) Power law kernel without background intensity.
With this definition we can now enunciate the random time change theorem [301]. If \(\{t_1,t_2,\ldots,t_k\}\) is a realization over time \([0,T]\) from a point process with conditional intensity function \(\lambda(t)\), then the transformed points \(\{t_1^\ast,t_2^\ast,\ldots,t_k^\ast\}\) given by \(t_i^\ast = \Lambda(t_i)\) form a Poisson process with unit rate.
Therefore, if the estimated conditional intensity \(\lambda(t|\theta)\) is a good approximation to the true \(\lambda(t)\), then the transformed points \(\{t_i^\ast\}\) should behave according to a Poisson process with unit rate. To test whether the series forms a Poisson process we will check two of its main properties:
Independence: the interarrival times of the transformed points, \(\tau^\ast_i = t^\ast_i - t_{i-1}^\ast\), should be independent. This can be tested using the Ljung-Box test. The null hypothesis of this test is that the data present no auto-correlations (in other words, that they are independent). If the \(p\)-value is higher than 0.05, the hypothesis cannot be rejected and thus the data are compatible with independence.
Unit rate: if the values of \(\{\tau_i^\ast\}\) are drawn from an exponential distribution with unit rate, then the quantity \(x_k = 1- e^{-\tau_k^\ast}\) is uniformly distributed in the interval \([0,1]\). We can test this hypothesis using the Kolmogorov-Smirnov (KS) and Anderson-Darling (AD) tests.
Only if the estimated \(\lambda(t|\theta)\) passes all these tests will we accept that it correctly describes the evolution of a thread as a Hawkes process with the kernel under consideration. With these tools we are finally ready to assess whether the dynamics of the board can be captured by these processes or not.
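Putting the two checks together, the following is a minimal sketch of this residual analysis applied to the rescaled times returned by rescaled_times_exp above. The function name, lag choice and significance levels are ours; the Anderson-Darling test is applied to the exponentiality of the \(\tau_i^\ast\) directly, which is a slightly weaker check than unit rate since SciPy estimates the scale.

```python
from scipy import stats
from statsmodels.stats.diagnostic import acorr_ljungbox

def passes_residual_tests(t_star, level=0.05):
    """Check that the rescaled times behave as a unit-rate Poisson process."""
    tau = np.diff(t_star)                                   # interarrival times tau_i*
    # Independence: Ljung-Box test on the autocorrelations of tau_i*
    lags = [min(10, len(tau) - 1)]
    p_lb = float(acorr_ljungbox(tau, lags=lags, return_df=True)["lb_pvalue"].iloc[0])
    # Unit rate: x_k = 1 - exp(-tau_k*) should be Uniform(0, 1)
    x = 1.0 - np.exp(-tau)
    p_ks = stats.kstest(x, "uniform").pvalue
    # Anderson-Darling test for exponentiality of tau_i*
    ad = stats.anderson(tau, dist="expon")
    ad_ok = ad.statistic < ad.critical_values[2]            # 5% critical value
    return (p_lb > level) and (p_ks > level) and ad_ok
```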
We consider all threads that started between 01-01-2011 and 01-01-2012 with 10 or more posts, which represent nearly 230,000 different conversations. To each thread we fit: a homogeneous Poisson process; a Hawkes process with an exponential kernel and constant background intensity; one with a power law kernel and constant background intensity; and one with a power law kernel and no background intensity.
In figure 4.7 we show the fraction of threads that successfully pass all the tests for each kernel choice. For the moment we are not asking which model is better, only which one can fit the largest number of threads. As we can see, both the exponential kernel and the power law kernel with constant background are able to model 75% of the threads. In contrast, a homogeneous Poisson model can only explain 25% of the threads, and a power law kernel without background intensity only a tiny fraction of roughly 5%.
Figure 4.8: Best model as a function of external factors. For each thread that is successfully described by any of the processes that we are considering, we select the model that best fits the data using BIC. In panel A we show the distribution of those threads as a function of their popularity, i.e., their number of posts. In panel B we show the distribution as a function of the time length of the thread instead, i.e., the difference in minutes between the last and first posts.
These results partially confirm our hypothesis, as the dynamics of most threads can be well described with Hawkes processes. However, to fully understand the mechanisms underlying this system we need to address the question of what distinguishes those threads that are correctly described from those that are not. In order to do so, we first determine the best model for each thread. We choose to assess this using the Bayesian information criterion (BIC), as it penalizes models with more parameters more strongly than AIC does. This is quite important given that each choice of kernel yields a different number of parameters.
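For reference, the criterion used here is the standard one: for a model with \(k\) parameters fitted to \(n\) events with maximized log-likelihood \(l(\hat{\theta})\),

\[ \mathrm{BIC} = k \ln n - 2\, l(\hat{\theta})\,, \]

and the model with the lowest BIC is selected.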
In figure 4.8A we plot the distribution of thread size (total number of posts), distinguishing which model is better fitted to each thread. The results are quite interesting. First, the power law kernel without background intensity can only fit a tiny fraction of very short threads, signaling that the background activity of the forum is very important. Then, we find that the threads that can be fitted by a homogeneous model also tend to be rather small. In order to explain larger threads, either the exponential or the power law kernel with background intensity is needed. Lastly, the longest threads cannot be described using these models.
These observations seem to point in a similar direction to the Facebook work that we discussed in section 4.1. Indeed, in that setting, it was observed that there was a transition between a regime in which popularity was completely independent of the collective action of users and a regime in which social influence was important. In a similar fashion, we find that small threads can be studied as homogeneous Poisson processes, i.e., the arrival of new posts is independent of the ones that are already there. Conversely, once social influence comes into play, threads can reach much greater popularity.
The only thing left is to disentangle why the most popular threads cannot be captured by these models. To do so, in figure 4.8B we show the distribution of thread duration, measured as the time elapsed between the very first post and the last one, as a function of which model better fits the thread. In this case we can see that those threads better fitted by a homogeneous Poisson model are those that last only a few minutes. Once their duration exceeds a few hours, the exponential kernel is needed. For even longer threads, a slower decay rate is needed, hence the power law fits better. Lastly, threads that are exceptionally long cannot be fitted by any of these models. This is, however, not surprising.
Recall that in figure 4.4 we saw that the daily patterns of activity depend strongly on the time of day. Hence, it is to be expected that when a thread lasts for over a few hours, the effects that this activity can have on the background intensity start to be noticeable. Yet, we have assumed that the background intensity is constant, something that clearly goes against this observation. Hence, to be able to explain the behavior of longer threads, a background intensity that is somehow proportional to the activity of the forum would be needed.
In conclusion, we have seen that data from discussion boards offer a wide array of opportunities for research. We have focused on disentangling the underlying dynamics of the system, for which we have proposed that a self-exciting process would be adequate. The results presented in this section signal that this hypothesis is correct, showing that there are two regimes in the forum: one in which activity is essentially random and one in which social influence plays a key role. However, in order to completely characterize all types of threads, more complex models, such as background intensities that depend on the hour of the day, would be needed.
The intelligence of that creature known as a crowd is the square root of the number of people in it
"Jingo", Terry Pratchett
Collective phenomena have been the subject of intense research in psychology and sociology since the nineteenth century. There are several ways in which humans gather to perform collective actions, although observations suggest that most of them require some sort of diminution of self-identity [302]. One of the first attempts to address this subject was Le Bon's theory on the psychology of crowds, in which he argued that when people are part of a crowd they lose their individual consciousness and become more primitive and emotional thanks to the anonymity provided by the group [303]. In the following decades, theories of crowd behavior such as the convergence theory, the emergent norm theory or the social identity theory emerged. These theories shifted away from Le Bon's original ideas, introducing rationality, collective norms and social identities as building blocks of the crowd [304], [305].
The classical view of crowds as an irrational horde led researchers to focus on the study of crowds as something inherently violent, and thus to seek a better understanding and prediction of violence eruption, or at least to develop some strategies to handle them [306]. However, the information era has created a new kind of crowd, as it is no longer necessary to be in the same place to communicate and take part in collective actions. Indeed, open source and "wiki" initiatives, as well as crowdsourcing and crowdworking, are some examples of how crowds can collaborate online in order to achieve a particular objective [307], [308]. Although this offers a plethora of opportunities, caution has to be taken because, as research on the psychology of crowds has shown, the group is not just the simple addition of individuals [309]. For example, it has been observed that group performance can be less efficient than the sum of the individual performances had they acted separately [310]. What are the conditions for this to happen and whether the group is more than the individuals composing it are two current challenges of utmost importance if, for instance, one wants to use online crowds as a working force.
To be able to unlock the potential of collective intelligence, a deeper understanding of the functioning of these systems is needed [311]. Examples of scenarios that can benefit from further insights into crowd behavior include new ways to reach group decisions, such as voting, consensus making or opinion averaging, as well as finding the best strategies to motivate the crowd to perform some task [312]. Regarding the latter, as arbitrary tasks usually are not intrinsically enjoyable, some sort of financial compensation is used to be able to systematically execute crowdsourcing jobs [313]. This, however, implies dealing with new challenges, since many experiments have demonstrated that financial incentives might undermine the intrinsic motivation of workers or encourage them to seek only the results that are being measured, either by focusing only on them or by free-riding [314]–[316]. A relevant case is given by platforms such as Amazon's Mechanical Turk, which allow organizations to pay workers who perform micro-tasks for them, and which have already given rise to interesting questions about the future of crowd work [317]. In particular, its validity for crowdsourcing behavioral research has recently been called into question [318].
Notwithstanding the previous observations, it is possible to find tasks that are intrinsically enjoyable for the crowd due to their motivational nature, which is ultimately independent of the reward [316]. This is one of the bases of online citizen science. In these projects, volunteers help analyze and interpret large datasets which are later used to solve scientific problems [319]. To increase the motivation of the volunteers, some of these projects are shaped as computer games [320]. Examples range from the study of protein folding [321] to annotating people within social networks [322] or identifying the presence of cropland [323].
It is thus clear that to harness the full potential of crowds in the new era, we need a deeper understanding of the mechanisms that drive and govern the dynamics of these complex systems. To this aim, here we study an event that took place in February 2014 known as Twitch Plays Pokémon (TPP). During this event, players were allowed to simultaneously control the same character of a Pokémon game without any kind of central authority. This constituted an unprecedented event because in crowd games each user usually has their own avatar, and it is the combined action of all of them that produces a given result [324]. Due to its novelty, in the following years it spawned similar crowd-controlled events such as The Button in 2015 [325] or Reddit r/place in 2017 [326], [327]. Similarly to those which came after it, TPP was a completely crowd-controlled process in which thousands of users played simultaneously for 17 days, with more than a million different players [328]. TPP is especially interesting because it represents an out-of-the-lab social experiment that became extremely successful based only on its intrinsic enjoyment and, given that it was run without any scientific purpose in mind, it represents a natural, unbiased (i.e., not artificially driven) opportunity to study the evolution and organization of crowds. Furthermore, the whole event was recorded in video, the messages sent in the chat window were collected, and both are available online17 [329]. Hence, in contrast to the offline crowd events that were studied during the last century, in this case we possess a huge amount of information about both the outcome of the event and, even more importantly, the evolution of the crowd during the process.
On February 12, 2014, an anonymous developer started to broadcast a game of Pokémon Red on the streaming platform Twitch. Pokémon Red was the first installment of the Pokémon series, which is the most successful role playing game (RPG) franchise of all time [330]. The purpose of the game was to capture and train creatures known as Pokémons in order to win increasingly difficult battles based on classical turn-based combats. However, as Pokémon Go showed in the summer of 2016, the power of the Pokémon franchise goes beyond the classical RPG games and is still able to attract millions of players [331].
On the other hand, Twitch is an online service for watching and streaming digital video broadcasts. Its content is mainly related to video games: from e-sports competitions to professional players' games or simply popular individuals who tend to gather large audiences to watch them play, commonly known as streamers. Due to the live nature of the streaming and the presence of a chat window where viewers can interact with each other and with the streamer, on this type of platform the relationship between the media creator and the consumer is much more direct than in traditional media [332]. Back in February 2014, Twitch was the 4th largest source of peak internet traffic in the US [333] and nowadays, with over 100 million unique users, it has become the home of the largest gaming community in history [334].
The element that distinguished this stream from the rest was that the streamer did not play the game. Instead, he set up a bot in the chat window that accepted some predefined commands and forwarded them to the input system of the video game. Thus, anyone could join the stream and control the character by just writing one of those actions in the chat. Although all actions were sent to the video game sequentially, it could only perform one at a time. As a consequence, all commands that arrived while the character was performing a given action (which takes less than a second) had no effect. Thus, it was a completely crowd-controlled game without any central authority or coordination system in place. This was not a multiplayer game, this was something different, something new [335].
Due to its novelty, during the first day the game was mainly unknown, with only a few tens of viewers/players, and as a consequence little is known about the game events of that day [336]. However, on the second day it started to gain viewers and quickly went viral, see figure 4.9. Indeed, it ramped up from 25,000 new players on day 1 (note that the time was recorded starting from day 0 and thus day 1 in game time actually refers to the second day in real time) to almost 75,000 on day 2 and an already stable base of nearly 10,000 continuous players. Even though there was a clear decay in the number of new users after day 5, the event was able to retain a large user base for over two weeks. This huge number of users imposed a challenge on the technical capabilities of the system, which translated into a delay of between 20 and 30 seconds between the stream and the chat window. That is, users had to send their commands based on where the character was up to 30 seconds earlier.
Figure 4.9: Popularity of the stream. Number of new users that arrived each day. The histogram is fitted to a gamma distribution with parameters \(\alpha=2.66\) and \(\beta=0.41\). Note that this reflects those users who inputted at least one command, not the number of viewers. In the inset we show the total number of users who sent at least 1 message each hour, regardless of whether they were new players or not.
Although simple in comparison to modern video games, Pokémon Red is a complex game in which effective progress cannot be made at random. In fact, a single player needs, on average, 26 hours to finish the game [337]. Nevertheless, only 7 commands are needed to complete the game. There are 4 movement commands (up, right, down and left), 2 action commands (a and b, accept and back/cancel) and 1 system button (start, which opens the game's menu). As a consequence, the gameplay is simple. The character is moved around the map using the four movement commands. If you encounter a wild Pokémon you will have to fight it, with the possibility of capturing it. Then, you will have to face the Pokémons of trainers controlled by the machine in order to obtain the 8 medals needed to finish the game. The combats are all turn-based, so that time is not an important factor. In each turn of a combat the player has to decide which action to take, for which the movement buttons along with a and b are used. Once the 8 medals have been collected there is a final encounter, after which the game is finished. This gameplay, however, was much more complex during TPP due to the huge number of players sending commands at the same time and the lag present in the system.
A remarkable aspect of the event is that actions that would usually go unnoticed, such as selecting an object or nicknaming a Pokémon, yielded unexpected outcomes due to the messy nature of the gameplay. The community embraced these outcomes and created a whole narrative around them in the form of jokes, fan art and even a religion-like movement based on the Judeo-Christian tradition [338], both in the chat window and in related media such as Reddit. Although these characteristics of the game are outside the scope of this thesis, they are another example of the new possibilities that digital systems bring in relation to the study of naming conventions and narrative consensus [339]. As we saw in section 4.1.1, language can evolve in digital platforms, with users developing new words that do not have any meaning outside the habitat where they were created. Not only is this a sign of the sociological richness of these systems, but it might also provide clues about the origin and evolution of slang in the offline world.
Returning to the discussion about the gameplay, even if it was at a slower pace, progress was made. Probably the first thing that comes to one's mind when thinking about how progress was possible is the famous experiment by Francis Galton in which he asked a crowd to guess the weight of an ox. He found that the average of all estimates of the crowd was just 0.8% higher than the real weight [340]. Indeed, if lots of users were playing, the extreme answers should cancel each other out and the character would tend to move towards the most common command sent by the crowd. Note, however, that as they were not voting, actions deviating from the mean could also be performed by pure chance. In general, this did not have great effects, but as we will see in section 4.2.2 there were certain parts of the game where this was extremely relevant.
It is worth stressing that, to form a classical wise crowd, some important elements are needed, such as independence [341]. That is, the answer of each individual should not be influenced by other people in the crowd. In our case, this was not true, as the character was continuously moving. Indeed, the big difference between this crowd event and others is that opinions had an effect in real time, and hence people could see the tendency of the crowd and change their behavior accordingly. Theoretical [342] and empirical studies [343] have shown that a minority of informed individuals can lead a naïve group of animals or humans to a given target in the absence of direct communication. Even in the case of conflict in the group, the time taken to reach the target is not increased significantly [343], which would explain why it only took the crowd 10 times longer than the average person to finish the game. Although this amount may seem high, as we shall see later, the crowd got stuck in some parts of the game for over a day, increasing the time to finish. However, if those parts were excluded, the game progress can be considered remarkably fast, despite the messy nature of the gameplay.
As a matter of fact, the movement of the character on the map can probably be better described as a swarm rather than as a crowd. Classical collective intelligence, such as the opinions of a crowd obtained via polls or surveys, has the previously stated property of independence and, in addition, asynchrony. It has been shown that when there is no independence, that is, when users can influence each other, as long as the process is asynchronous the group's decisions will be distorted by social biasing effects [344]. Conversely, when the process is synchronous, mimicking natural swarms, these problems can be corrected [345]. Indeed, by allowing users to participate in decision making processes in real time with feedback about what the rest are doing, in some sort of human swarm, it is possible to explore the decision space more efficiently and reach more accurate predictions than with simple majority voting [346]. Admittedly, the interaction in the online world is so different that maybe the term crowd cannot be straightforwardly applied to online gatherings. In fact, it has recently been suggested that online crowds might be better described as swarms, something in-between crowds and networks [347].
Figure 4.10: Introduction of the voting system. Command distribution after the first introduction of the voting system. Once the system was back online, votes would tally up over a period of 10 seconds. After 15 minutes the system was brought down to reduce this time to 5 seconds. This, however, did not please the crowd and it started to protest. The first \(start9\) was sent at 5d9h8m but went almost unnoticed. A few minutes later it was sent again, but this time it got the attention of the crowd. In barely 3 minutes it went from 4 \(start9\) per minute to over 300, which stalled the game for over 8 minutes. The developer brought down the system again and removed the voting system, introducing the anarchy/democracy system a few hours later.
Even though the characteristics described so far already make this event very interesting from the research point of view, on the sixth day the rules were slightly changed, which made the dynamics even richer. After the swarm had been stuck in a movement-based puzzle for almost 24 hours, the developer took down the stream to change the code. Fifteen minutes later the stream was back online, but this time commands were not executed right away. Instead, they were added up and every 10 seconds the most voted command was executed. In addition, it was possible to use compound commands made of up to 9 simple commands, such as \(a2\) or \(aleftright\), which would correspond to executing \(a\) twice or \(a\), \(left\) and \(right\) respectively. Thus, the swarm became a crowd with a majority rule to decide which action to take. As it waited 10 seconds between each command, progress was slow and, twenty minutes later, that time was reduced to 5 seconds. However, the crowd did not like this system and started to protest by sending \(start9\), which would open and close the menu repeatedly, impeding any movement. This riot, as it was called, lasted for 8 minutes (figure 4.10), at which point the developer removed the voting system. However, two hours later the system was modified again. Two new commands were added: democracy and anarchy, which controlled some sort of tug of war voting system over which rules to use. If the fraction of people voting for democracy went over a given threshold, the game would start to tally up votes about which action to take next. If not, the game would be played using the old rules. This system split the community into "democrats" and "anarchists" who would fight to take control of the game. Therefore, the system would change between a crowd-like movement and a swarm-like movement purely based on its own group interactions. We will analyze this situation in section 4.2.3.
Figure 4.11: Network representation of the ledge area. It is possible to go from each node to the ones surrounding it using the commands \(up\), \(right\), \(down\) and \(left\). The only exceptions are the yellow nodes labeled \(L\), which correspond to ledges. If the character tries to step on one of those nodes it will be automatically sent to the node right below it, a characteristic that is represented by the curved links connecting nodes above and below ledges. Light blue nodes mark the entrance and exit of the area and red nodes highlight the most difficult part of the path. Note that as the original map was composed of discrete square tiles, this network representation is not an approximation but the exact shape of the area.
On the third day of the game, the character arrived at the area depicted in figure 4.11 (note that the democracy/anarchy system we just described had not been introduced yet). Each node of the graph represents a tile of the game. The character starts on the light blue node on the left part of the network and has to exit through the right part, an event that we will define as getting to one of the light blue nodes on the right. The path is simple for an average player, but it represented a challenge for the crowd due to the presence of the yellow \(L\)-nodes. These nodes represent ledges which can only be traversed going downwards, effectively working as a filter that allows flux only downwards. Thus, one good step will not cancel a bad step, as the character would be trapped below the ledge and would have to find a different path to go up again. For this reason, this particular region is highly vulnerable to actions deviating from the norm, either caused by mistake or performed intentionally by griefers, i.e., individuals whose only purpose is to annoy other players and who do so by using the mechanisms provided by the game itself [348], [349] (note that in social contexts these individuals are usually called trolls [350]). Indeed, there are paths (see red nodes in figure 4.11) where only the command right is needed and which are next to a ledge, so that the command down, which is not needed at all, would force the crowd to go back and start the path again. Additionally, the lag described in section 4.2.1 made this task even more difficult.
In figure 4.12A we show the time evolution of the number of messages containing each command (the values have been normalized to the total number of commands sent each minute) from the beginning of this part until they finally exited. First, we notice that it took the crowd over 15 hours to finish an area that can be completed by an optimal walk in less than 2 minutes. Then, we can clearly see a pattern from 2d18h30m to the first time they were able to reach the nodes located right after the red ones, approximately 3d01h10m: when the number of rights is high the number of lefts is low. This is a signature of the character trying to go through the red nodes by going right, falling down the ledge, and going left to start over. Once they finally reached the nodes after the red path (first arrival) they had to fight a trainer controlled by the game, a combat which they lost; as a consequence, the character was transported outside the area and they had to re-enter and start again from the beginning. Again, we can see a similar left-right pattern until they got over that red path for the second time, which in this case was definitive.
Figure 4.12: Study of the ledge event. A) Time evolution of the fraction of commands sent each minute. Note that a single player should be able to finish this area in a few minutes, but the crowd needed 15 hours. The time series has been smoothed using moving averages. B) Hierarchical clustering of the time series of each group of users. C) Left: Mean time needed to exit the area according to our model as a function of the fraction of griefers and noise in the system. Right: 1st percentile of the time needed to exit the area; note that the \(y\) axis is given in minutes instead of hours.
The ledge is a great case study of the behavior of the crowd because the mechanics needed to complete it are very simple (just moving from one point to another), which facilitates the analysis. At the same time, it took the players much longer to finish this area than what is expected for a single player. To address all these features, we propose a model aimed at mimicking the behavior of the crowd. Specifically, we consider an \(n\)-th order Markov chain so that the probability of going from state \(x_{m}\) to \(x_{m+1}\) depends only on the state \(x_{m-n}\), thus accounting for the effect of the lag on the dynamics. Furthermore, the probabilities of going from one state to another will be set according to the behavior of the players in the crowd.
To define these probabilities, we first classify the players into groups according to the total number of commands they sent in this period: G1, users with 1 or 2 commands (46% of the users); G2, 3 or 4 commands (18%); G3, between 5 and 7 commands (13%); G4, between 8 and 14 commands (12%); G5, between 15 and 25 commands (6%); and G6, more than 25 commands (5%). These groups were defined so that the total number of messages sent by the first three is close to 50,000 and 100,000 for the other three (if we had selected the same value for all of them, either we would have lost resolution in the small ones or we would have obtained too many groups for the most active players). Interestingly, the time series of the inputs of each of these groups are very similar. In fact, if we remove the labels of the 42 time series and cluster them using the Euclidean distance, we obtain 7 clusters, one for each command. Moreover, the time series of each of the commands are clustered together, figure 4.12B. In other words, the behavior of users with medium and large activities is not only similar across groups, but is also equivalent to the one coming from the aggregation of the users who only sent 1 or 2 commands.
In this context we could argue that users with few messages tend to act intuitively, as they soon lose interest. According to the social heuristics hypothesis [351], fast decisions tend to increase cooperation, which in this case would mean trying to get out of the area as fast as possible. Similarly, experiments have shown that people with prosocial predispositions tend to act that way when they have to make decisions quickly [352]. Thus, users that send few commands might tend to send the ones that get the character closer to the exit, which would explain why, without being aware of it, they behave as those users that tried to progress for longer. However, coordination might not be so desirable on this occasion. The problem with players conforming to the majority direction or mimicking each other is that they will be subject to herding effects [353], [354], which in this particular setting can be catastrophic due to the lag present in the system. Indeed, if we set the probabilities in our model so that the next state in the transition is always the one that gets the character closer to the exit but with 25 seconds of delay (that is, the probability of going from state \(x_m\) to \(x_{m+1}\) is the probability of going from \(x_{m-n}\) to the state which follows the optimal path), the system gets stuck in a loop and is never able to reach the exit.
Nevertheless, the chat analysis shows that players were not perfectly coordinated. Thus, to make our model more realistic we consider that at each time step there are 100 users with different behaviors introducing commands. In particular, we consider variable quantities of noisy users who play completely at random, griefers who only press down to annoy the rest of the crowd, and the herd who always send the optimal command to get to the exit. The results, figure 4.12C, show that the addition of noise to the herd breaks the loops and allows the swarm to get to the exit. In particular, for the case with no griefers we find that with 1 percent of users adding noise to the input, the mean time needed to finish this part is almost 3,000 hours. However, as we increase the noise, the time is quickly reduced, with an optimal noise level of around 40% of the swarm. Conversely, the introduction of griefers in the model, as expected, increases the time needed to finish this part in most cases. Interestingly though, for low values of the noise, the addition of griefers can actually be beneficial for the swarm, allowing the completion of this area in times compatible with the observed ones. Indeed, by breaking the herding effect, griefers are unintentionally helping the swarm to reach their goal.
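As an illustration, the following is a minimal sketch of this kind of delayed-input simulation. It assumes the area of figure 4.11 is encoded as a dictionary mapping each node and command to the resulting node (with ledges sending \(down\) to the tile below), together with a precomputed optimal command per node; all names and the exact way commands are sampled are ours and merely mimic the description above.

```python
import random

def simulate_ledge(graph, start, exit_nodes, optimal_step, lag=25,
                   n_users=100, f_noise=0.4, f_griefers=0.05, max_steps=10**6):
    """Delayed-input crowd walk on the ledge network.
    graph[node][command] -> next node (ledges map 'down' to the tile below).
    optimal_step[node]   -> command of the shortest path to the exit.
    The herd votes for the optimal command of the position seen `lag` steps ago."""
    history = [start] * (lag + 1)          # positions visible to the players (delayed)
    pos = start
    for step in range(max_steps):
        delayed_pos = history[-(lag + 1)]  # what the crowd sees on the stream
        votes = []
        for _ in range(n_users):
            r = random.random()
            if r < f_griefers:
                votes.append("down")                      # griefers always press down
            elif r < f_griefers + f_noise:
                votes.append(random.choice(["up", "down", "left", "right"]))  # noise
            else:
                votes.append(optimal_step[delayed_pos])   # herd plays optimally, but lagged
        command = random.choice(votes)     # commands are executed as they arrive, not tallied
        pos = graph[pos].get(command, pos) # invalid moves (walls) leave the position unchanged
        history.append(pos)
        if pos in exit_nodes:
            return step                    # number of executed commands needed to exit
    return None                            # never exited within max_steps
```

Sweeping f_noise and f_griefers over a grid and averaging the exit times over many runs would produce a map analogous to figure 4.12C.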
Whether the individuals categorized as noise were producing it unintentionally or doing it on purpose to disentangle the crowd (an unknown fraction of users were aware of the effects of the lag and tried to disentangle the system [355]) is something we cannot analyze because, unfortunately, the resolution of the chat log in this area is in minutes and not in seconds. We can, however, approximate the fraction of griefers in the system thanks to the special characteristics of this area. Indeed, as most of the time the command \(down\) is not needed (on the contrary, it would destroy all progress), we can categorize those players with an abnormal number of \(downs\) as griefers. To do so, we take the users that belong to \(G6\) (the most active ones) and compare the fraction of their inputs that corresponds to \(down\) between each other. We find that 7% have a behavior that could be categorized as outlier (the fraction of their input corresponding to \(down\) is higher than 1.5 times the interquartile range). More restrictively, for 1% of the players the command \(down\) represents more than half of their inputs. Both these values are compatible with the observed time according to our model, even more so if we take into account that the model is more restrictive, as we consider that griefers continuously press down (not only near the red nodes). Thus, we conclude that users deviating from the norm, regardless of being griefers, noise or even very smart individuals, were the ones that made finishing this part possible.
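A minimal sketch of this outlier screening, reusing the NumPy import from above. We read the criterion as the usual Tukey rule, i.e., flagging players whose share of \(down\) exceeds the third quartile by more than 1.5 interquartile ranges; variable names are ours.

```python
def flag_griefers(down_fraction):
    """down_fraction: share of 'down' among each G6 player's commands."""
    down_fraction = np.asarray(down_fraction, dtype=float)
    q1, q3 = np.percentile(down_fraction, [25, 75])
    return down_fraction > q3 + 1.5 * (q3 - q1)   # Tukey outlier rule
```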
As we already mentioned, on the sixth day of the game the input system was modified. This resulted in the \(start9\) riot that led to the introduction of the anarchy/democracy system. From this time on, if the fraction of users sending democracy, out of the total number of players sending the commands anarchy or democracy, went over \(0.75\) (later modified to \(0.80\)), the game would enter democracy mode and commands would be tallied up for 5 seconds. Then, the meter needed to go below \(0.25\) (later modified to \(0.50\)) to enter anarchy mode again. Note that these thresholds were set by the creator of the experiment.
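This is a simple hysteresis rule; a minimal sketch with the original thresholds (function and argument names are ours):

```python
def update_mode(mode, frac_democracy, up=0.75, down=0.25):
    """Tug-of-war rule: frac_democracy is the share of democracy votes
    among all anarchy/democracy votes at a given moment."""
    if mode == "anarchy" and frac_democracy >= up:
        return "democracy"
    if mode == "democracy" and frac_democracy <= down:
        return "anarchy"
    return mode
```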
Figure 4.13: Overview of \(start9\) protests throughout the game. A) Fraction of input corresponding to the \(start9\) command. B) Fraction of users who were in the original \(start9\) riot (inset, total number of protesters each day). There were \(start9\) protests 10 days after the first one, even though less than 10% of the protesters had been part of the first one.
The introduction of the voting system was mainly motivated by a puzzle where the crowd had been stuck for over 20 hours with no progress. Nonetheless, even in democracy mode, progress was complex, as it was necessary to retain control of the game mode while also taking the lag into account when deciding which action to take. Actually, the tug of war system was introduced in the middle of day 5, yet the puzzle was not fully completed until the beginning of day 6, over 40 hours after the crowd had originally arrived at the puzzle. One of the reasons why it took so long to finish it even after the introduction of the voting system is that it was very difficult to enter democracy mode. Democracy was only "allowed" by the crowd when they were right in front of the puzzle, and they would go into anarchy mode quickly after finishing it. Similarly, the rest of the game was mainly played under anarchy mode. Interestingly, though, we find that there were more "democrats" in the crowd (players who only voted for democracy) than "anarchists" (players who only voted for anarchy). Out of nearly 400,000 players who participated in the tug of war throughout the game, 54% were democrats, 28% anarchists and 18% voted at least once for both of them. Therefore, the introduction of this new system not only split the crowd into two polarized groups with, as we shall see, their own norms and behaviors, but also created non-trivial dynamics between them.
Figure 4.14: Politics of the crowd. Days 6 (top) and 8 (bottom). In every plot the gray color denotes when the game was played under anarchy rules and the green color when it was played under democracy rules. The polar plots represent the evolution of the fraction of votes corresponding to anarchy/democracy while distinguishing if the user previously voted for anarchy or democracy: first quadrant, votes for anarchy coming from users who previously voted for anarchy (\(A\rightarrow A\)); second quadrant, votes for democracy coming from anarchy (\(A\rightarrow D\)); third quadrant, votes for democracy coming from democracy (\(D\rightarrow D\)); fourth quadrant, votes for anarchy coming from democracy (\(D\rightarrow A\)). In the other plots we show the evolution of the total number of votes for anarchy or democracy as a function of time normalized by its maximum value (orange), as well as the position of the tug of war meter (blue). When the meter goes above 0.75 the system enters democracy mode (green) until it reaches 0.25 (these thresholds were later changed to 0.80 and 0.50 respectively), when it enters anarchy mode (gray) again. The gap in the orange curve in panel D is due to the lack of data in that period.
The first question that arises is what might have motivated players to join one group or the other. From a broad perspective, it has been proposed that one of the key ingredients behind video game enjoyment is the continuous perception of one's causal effects on the environment, also known as effectance [356], thanks to the immediate response to player inputs. In contrast, a reduction of control, defined as being able to influence the dynamics according to one's goals, does not automatically lower enjoyment [357]. This might explain why some people preferred anarchy. Under its rules, players saw that the game was continuously responding to inputs, even if they were not exactly the ones they sent. On the other hand, with democracy, control was higher at the expense of effectance, as the game would only advance once every 5 seconds. The fact that some people might have preferred anarchy while others democracy is not surprising, as it is well known that different people might enjoy different aspects of a game [358]. In the classical player classification proposed by Bartle [359] in the context of MUDs (multi-user dungeons, which later evolved into what we know today as MMORPGs - massively multiplayer online role-playing games), he already distinguished four types of players: achievers, who focus on finishing the game (who in our context could be related to democrats); explorers, who focus on interacting with the world (anarchists); socializers, who focus on interacting with other players (those players who focused on making fan art and developing narratives); and killers, whose main motivation is to kill other players (griefers). Similarly, it has been seen in the context of FPSs (first person shooters) that player-death events, i.e., losing a battle, can be pleasurable for some players (anarchists) while not for others (democrats) [360].
However, when addressing the subject of video game entertainment, it is always assumed that the player has complete control over the character, regardless of whether it is a single player game or a competitive/cooperative game. TPP differs from those cases in the fact that everyone controlled the same character. As a consequence, enjoyment is no longer related to what a player, as a single individual, has done, but rather to what they, as a group, have achieved. From the social identity approach perspective this can be described as a shift from the personal identity to the group identity. This shift would increase conformity to the norms associated with each group, but as the groups were unstructured their norms would be inferred from the actions taken by the rest of the group [304]. New group members would then perform the actions they saw as appropriate for them as members of the group, even if they might be seen as antinormative from an outside perspective [361]. This key component of the theory is clearly visible in the behavior of the anarchists. Indeed, every time the game entered democracy mode, anarchists started to send \(start9\) as a form of protest, hijacking the democracy. Interestingly, this kept happening even though most of the players who were in the original protest did not play anymore (see figure 4.13). Thus, newcomers adopted the identity of the group even if they had not participated in its conception. Furthermore, stalling the game might have been regarded as antisocial behavior from the anarchists' own point of view when they were playing under anarchy rules, but when the game entered democracy mode it suddenly turned into an acceptable behavior, something that is predicted by the theory.
To further explore the dynamics of these two groups, we next compare two different days: day 6 and day 8. Day 6 was the second day after the introduction of the anarchy/democracy dynamics and there were no extremely difficult puzzles or similar areas where democracy might have been needed. On the other hand, day 8 was the day when the crowd arrived at the safari zone, which certainly needed democracy mode since the available number of steps in this area is limited (i.e., once the number of steps taken inside the area exceeds 500, the player is teleported to the entrance of the zone). We must note that, contrary to what we observed in section 4.2.2, in this case commands coming from low activity users are not equivalent to the ones coming from high activity users. In particular, low activity users tend to vote much more for democracy (see figure 4.15). As a consequence, although the position of the meter is unaffected if we only take out the users with just 1 vote, if we remove users with fewer than 10 votes the differences start to be noticeable. As such, it would not be adequate to remove low activity users in general from the analysis. Our results are summarized in figure 4.14.
Figure 4.15: Tug of war commitment. Hypothetical meter position of the political tug of war if only votes from committed players - those who sent at least 2 votes (top) or 10 votes (bottom) throughout the whole game - are taken into account (blue), and if only votes from visitors - only one vote (top) or fewer than 10 (bottom) - are taken into account (red). In contrast to the ledge event, the behavior of users who sent few commands clearly differs from the ones with several commands. Visitors had a clear tendency towards democracy, while committed players preferred anarchy.
One of the most characteristic features of groups is their polarization [268], [362]. The problem in the case we are studying is that, as players were leaving the game while others were constantly coming in, it is not straightforward to measure polarization. The fact that the number of votes for democracy could increase at a given moment did not mean that anarchists changed their opinion; it could be that new users were voting for democracy or simply that players who voted for anarchy stopped voting. Thus, to properly measure polarization we consider 4 possible states for each user, defined by both the current vote of the player and the immediately previous one (note that we have removed players who only voted once, but this does not affect the measure of the position of the meter, see figure 4.15A): \(A\rightarrow A\), first anarchy then anarchy; \(A \rightarrow D\), first anarchy then democracy; \(D \rightarrow D\), first democracy then democracy; \(D \rightarrow A\), first democracy then anarchy. As we can see in figures 4.14A and 4.14C, the communities are very polarized, with very few individuals changing their votes. The fraction of users changing from anarchy to democracy is always lower than 5%, which indicates that anarchists form a very closed group. Similarly, the fraction of users changing from democracy to anarchy is also very low, although there are clear bursts when the crowd exits democracy mode. This reflects that those who changed their vote from anarchy to democracy do so to achieve a particular goal, such as going through a maze, and once they achieve the target they instantly lose interest in democracy.
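These four states are straightforward to extract from the vote log; a minimal sketch, assuming a table with one row per vote and columns user, time and vote ('A' or 'D'), all names being ours:

```python
import pandas as pd

def transition_fractions(votes):
    """Fraction of votes in each state A->A, A->D, D->D, D->A.
    Each user's first vote is dropped, as it has no previous vote."""
    votes = votes.sort_values(["user", "time"])
    prev = votes.groupby("user")["vote"].shift()        # previous vote of the same user
    states = (prev + "->" + votes["vote"]).dropna()
    return states.value_counts(normalize=True)
```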
With such a degree of polarization, the next question is how it was possible for the crowd to change from one mode to the other. To answer it, we shift our attention to the number of votes. In figure 4.14B we can see that every time the meter gets above the democracy threshold it is preceded by an increase in the total number of votes. Then, once under democracy mode, the total number of votes decays very fast. Finally, there is another increase before entering anarchy mode again. Thus, it seems that every time democrats were able to enter their mode they stopped voting and started playing. This let anarchists regain control even though they were fewer users, leading to a sharp decay of the tug of war meter. Once they exited democracy mode, democrats started to vote again to try to set the game back into democracy mode. In figure 4.14D we can initially see a similar behavior in the short periods when democracy was installed. However, there is a wider area where the crowd accepted democracy; this marks the safari zone mentioned previously. Interestingly, we can see how democrats learned how to keep their mode active. Initially there was the same drop in users voting and in the position of the meter seen in the other attempts. This forced democrats to keep voting instead of playing, which allowed them to retain control for longer. A few minutes later the number of votes decays again, but in this case the position of the meter is barely modified, probably due to anarchists finally accepting that they needed democracy mode to finish this part. Even though they might have implicitly accepted democracy, it is worth noting that the transitions \(A \rightarrow D\) are minimal (figure 4.14C). Finally, once the mission for which democracy mode was needed finished, there is a sharp increase in the fraction of transitions \(D \rightarrow A\).
In this section we have analyzed a crowd-based event in which nearly 1 million users played a game with the exact same character. Remarkably, the event was not only highly successful in terms of participants but also in length, lasting for over two weeks. As we discussed in the introduction of section 4.2, motivating a crowd to complete a project is not an easy task. Yet, this event is an example that this can happen even in the absence of any material reward, signaling once again that online crowds have their own rules, which might depart from what has been studied in the offline world.
Although the overall success of the event is probably due to a mixture of many factors, there is one that we can extract from the chat logs which is quite interesting. The game was disordered, progress was slower than if played individually, and often really bad actions were taken (such as mistakenly releasing some of the strongest Pokémons), which might have led to frustration. Indeed, by looking at the stretchable words sent by the users [363] it is possible to measure the frustration the players felt during the event, figure 4.16. Although frustration usually has a negative connotation, in the context of games it has been observed that frustration and stress can be pleasurable as they motivate players to overcome new challenges [364]. Actually, there is a whole game genre known as "masocore" (a portmanteau of masochism and hardcore) which consists of games with extremely challenging gameplay built with the only purpose of generating frustration in the players [365]. Similarly, there are games which might be simpler but have really difficult controls and strange physics, such as QWOP, Surgeon Simulator or Octodad, which are also built with the sole aim of generating frustration [366]. Thus, the mistakes performed by the crowd might not have been something dissatisfactory but completely the opposite: they might have been the reason why this event was so successful.
Figure 4.16: Measures of frustration. A) Players expressed their frustration by repeating the letter o when they wanted to say no. Even though frustration was present throughout the event, it increased after the events of what is known as Bloody Sunday. B) Distribution of the number of o. Interestingly, the relationship is not linear, as the word noo tends to appear less than nooo or noooo, which indicates that when players were frustrated they overexpressed it. C) Number of messages containing the word why per hour. This indicates that many players did not understand the actions of the crowd, which probably made them feel frustrated.
One of the particularly frustrating areas was the ledge, a part of the game that can be completed in a few minutes but that took over 15 hours to complete. We have seen that in this area the behavior of low and high activity users is quite similar, even though they might have been unaware of it. In addition, we have built a model to explain how the crowd was able to finally exit this part and shown how a minority - either in the form of griefers, smart users or simply naïve individuals - can lead the crowd to a successful outcome, even in the absence of consensus. Note also that the fact that they only needed roughly 1/3 of the time to traverse the area on their second attempt compared to their first one might be a signal of the crowd learning how to break the herding effect. Unfortunately, with just two observations we cannot test this hypothesis. It would be interesting, though, to design experiments inspired by this event with the purpose of measuring whether the crowd is able to learn and, if it does so, how long it takes, what would happen if a fraction of the crowd is replaced by new players, etc.
To conclude this section, we have also analyzed the effects that the introduction of a voting system had on the crowd. We have seen how the crowd was split into two groups, and we have been able to explain the behavior of these groups using the social identity approach. We saw how norms could last within groups longer than their own members, as predicted by the theory. Note that this theory was introduced during the 1980s, well before the Internet was as widespread as it is today, and it can still be applied, at least in this case, to online groups. Hence, despite the many differences that exist between the online and offline worlds, maybe they are not that far apart after all.
We have emphasized the crucial change in the definition. A rumor was no longer just *something*; it was something that *spread* from person to person. The obvious similarities of this definition with disease dynamics led Daley and Kendall to propose in 1964 that the spread of a rumor in a closed community should resemble the spread of an epidemic. Furthermore, they adapted the SIR model presented in chapter 3 to this context using ignorants, spreaders and stiflers [261].↩︎
To put this date into perspective, Facebook was created in 2004, although its Spanish version was not released until 2008 [277]. Similarly, Twitter was created in 2006 and its Spanish version was released in late 2009 [278].↩︎
The chat logs can have either seconds (YYYY-MM-DD HH:MM:SS) or minute (YYYY-MM-DD HH:MM) resolution. The game started on February 12, 2014 at 23:16:01 UTC, but the first log recorded corresponds to February 14, 2014 at 08:16:19 GMT+1. Besides, the log data between February 21, 2014 at 04:25:54 GMT+1 and 07:59:22 GMT+1 is missing. We extracted the position of the tug of war meter that will be described in section 4.2.3 as well as the game mode active at each time from the videos using optical character recognition techniques.↩︎ | CommonCrawl |
EJNMMI Physics
The physics of radioembolization
Remco Bastiaannet ORCID: orcid.org/0000-0001-7056-32291,
S. Cheenu Kappadath2,
Britt Kunnen1,
Arthur J. A. T. Braat1,
Marnix G. E. H. Lam1 &
Hugo W. A. M. de Jong1
EJNMMI Physics volume 5, Article number: 22 (2018) Cite this article
Radioembolization is an established treatment for chemoresistant and unresectable liver cancers. Currently, treatment planning is often based on semi-empirical methods, which yield acceptable toxicity profiles and have enabled the large-scale application in a palliative setting. However, recently, five large randomized controlled trials using resin microspheres failed to demonstrate a significant improvement in either progression-free survival or overall survival in both hepatocellular carcinoma and metastatic colorectal cancer. One reason for this might be that the activity prescription methods used in these studies are suboptimal for many patients.
In this review, the current dosimetric methods and their caveats are evaluated. Furthermore, the current state-of-the-art of image-guided dosimetry and advanced radiobiological modeling is reviewed from a physics' perspective. The current literature is explored for the observation of robust dose-response relationships followed by an overview of recent advancements in quantitative image reconstruction in relation to image-guided dosimetry.
This review is concluded with a discussion on areas where further research is necessary in order to arrive at a personalized treatment method that provides optimal tumor control and is clinically feasible.
Radioembolization is an established treatment for chemoresistant and unresectable liver cancers. The treatment consists of the administration of microspheres that are loaded with a beta-emitter into the arterial hepatic vasculature. As a result of a differential vasculature of the healthy liver and tumor tissue, the microspheres preferentially accumulate in the tumor tissue, resulting in a local radiation dose to the tumor whilst sparing healthy liver tissue.
Currently, two types of microspheres are approved for clinical use by the FDA and are CE-marked: resin microspheres (SIR-spheres; SirTex Medical) and glass microspheres (TheraSphere; BTG International Ltd.), both of which are loaded with 90Y. A third type consists of 166Ho-loaded poly-lactate spheres, called QuiremSpheres, which is yet to receive FDA approval but has been CE-marked.
Radioembolization treatment planning is currently based on semi-empirical methods, which are designed to yield acceptable toxicity profiles and have enabled the large-scale application in a palliative setting. The addition of radioembolization with SIR-spheres to first-line treatments for metastatic colorectal cancer was investigated in three large randomized controlled trials, SIRFLOX [1], FOXFIRE [2], and FOXFIRE-global. The combined analyses of these three trials did not show a significant improvement in either progression-free survival [3] or overall survival [4]. Similarly, the SARAH and SIRveNIB Phase III studies failed to show an improvement in overall or progression-free survival after the treatment of advanced hepatocellular carcinoma with SIR-spheres vs. sorafenib [5, 6].
One reason for this might be that the current activity planning methods often result in underdosing (and in some cases overdosing) in patients [1, 2, 7,8,9]. Fortunately, a recent survey amongst European institutes has shown that some form of absorbed dose-based prescription was used by 64 and 96% of the respondents for the use of resin and glass microspheres, respectively [10]. The lack of biological clearance of the microspheres simplifies dosimetry compared to most other molecular radiotherapies. In order to further increase the adoption of absorbed dose-based prescription, the package inserts of both manufacturers could be improved by placing more emphasis on this type of activity prescription. Furthermore, there is mounting evidence for clear dose-effect relationships (see Table 1). However, the estimated absorbed dose needed to elicit a reliable tumor response or complication varies between studies. As such, reliable absorbed dose targets and limits are yet to be established.
Table 1 Non-exhaustive overview of recent dose-response studies showing a large variety in all relevant parameters. This variety in outcome measures and reported dose thresholds complicates data pooling and the extraction of reliable clinical dose limits
This review aims to investigate the current state-of-the-art of dosimetry in relation to this discussion from a physics perspective, elaborating on technical difficulties and providing an overview of the relevant hiatuses in the current knowledge.
Current activity planning methods
Pre-treatment safety procedure
Before the infusion of the therapeutic dose, an angiographic work-up is performed in which the hepatic vessel anatomy is explored and an infusion site is selected. As per EANM guidelines, this is followed by the administration of 75–150 MBq of the surrogate particle 99mTc macroaggregated albumin (99mTc-MAA) [11]. These imageable protein aggregates are intended to simulate the expected distribution of the subsequent therapeutic microspheres. There are three main reasons for the use of a simulation procedure with 99mTc-MAA [12].
First, extrahepatic depositions can be detected. This used to be done using planar scintigraphy; however, SPECT/CT has been shown to be superior for this goal [13].
Second, the lung shunt fraction (LSF) is estimated. This fraction is used as a proxy for the absorbed lung dose and is subsequently used to adjust the prescribed activity, as described in the next sections. The microsphere manufacturers specify that this estimation should be performed on planar scintigraphic imaging [14], according to the formula
$$ \mathrm{LSF}=\frac{C_{\mathrm{lungs}}}{C_{\mathrm{lungs}}+{C}_{\mathrm{liver}}}, $$
where Clungs indicates the total counts in the lungs, and Cliver the total counts in the liver. Usually, the number of counts in these regions-of-interest (ROIs) is determined on the geometric mean of the anterior and posterior views. However, the validity of this method has been questioned, as it does not include proper compensation for differences in attenuation between the liver and the lungs, resulting in a systematic overestimation of LSF estimated on planar images relative to LSF estimated on SPECT/CT images [15,16,17].
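As a concrete illustration of this planar estimate, the following minimal Python sketch computes the LSF from anterior/posterior ROI counts using the geometric mean of both views; the function names and count values are hypothetical.

```python
import numpy as np

def geometric_mean_counts(counts_ant, counts_post):
    """Geometric mean of anterior and posterior ROI counts from the planar views."""
    return np.sqrt(counts_ant * counts_post)

def lung_shunt_fraction(lung_ant, lung_post, liver_ant, liver_post):
    """LSF = C_lungs / (C_lungs + C_liver), with each C taken as the geometric mean of both views."""
    c_lungs = geometric_mean_counts(lung_ant, lung_post)
    c_liver = geometric_mean_counts(liver_ant, liver_post)
    return c_lungs / (c_lungs + c_liver)

# Hypothetical ROI counts from a 99mTc-MAA planar acquisition
lsf = lung_shunt_fraction(lung_ant=4.1e4, lung_post=3.6e4,
                          liver_ant=5.0e5, liver_post=4.4e5)
print(f"Estimated LSF: {lsf:.1%}")  # ~7.6%
```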
Third, by using the 99mTc-MAA distribution as a predictor for the subsequent 90Y distribution, it may be used for multi-compartment dosimetry (see the section "Multi-compartment dosimetry") [18].
The type of planning method used in clinical practice depends on the type of microsphere. For resin microspheres, the most commonly used method is the body surface area-based (BSA) method [14]. For glass microspheres and holmium-loaded microspheres, a commonly used method is the MIRD mono-compartment method [19,20,21]. Collectively, these methods are referred to as semi-empirical methods.
BSA-based method for resin microspheres
The BSA-based method was developed to overcome the clinically observed high toxicity of a previous method used in early clinical studies [22]. The prescribed activity using this previous method ranged between 2 and 3 GBq, depending on tumor load only and not on the liver size [14]. Conversely, the BSA-based method is based on the observation that BSA correlates with liver volume in the healthy population [23]. As such, the planned activity is adjusted to an individual patient's liver volume. The activity is calculated according to the following relationship [14]:
$$ A\left[\mathrm{GBq}\right]=\left(\mathrm{BSA}\left[{m}^2\right]-0.2\right)+\frac{V_{\mathrm{tumor}}}{V_{\mathrm{tumor}}+{V}_{\mathrm{normal}\ \mathrm{liver}}}, $$
where Vtumor and Vnormal liver indicate the volumes of the tumor and the healthy parenchyma, respectively. For lobar or superselective treatment, the activity is reduced in proportion to the size of the liver volume being treated.
The prescribed activity is reduced by 20 or 40% if there is an LSF between 10 and 15% or 15 and 20%, respectively. An LSF higher than 20% is a contraindication for the treatment [14].
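The BSA prescription and the stated LSF reduction rules can be summarized in a short sketch. This is only an illustration of the formulas quoted above, not a vendor implementation, and the patient values are hypothetical.

```python
def bsa_activity_gbq(bsa_m2, v_tumor_ml, v_normal_liver_ml, lsf):
    """Resin-microsphere BSA method:
    A [GBq] = (BSA [m^2] - 0.2) + V_tumor / (V_tumor + V_normal_liver),
    reduced by 20% for an LSF of 10-15% and by 40% for an LSF of 15-20%;
    an LSF above 20% contraindicates treatment."""
    activity = (bsa_m2 - 0.2) + v_tumor_ml / (v_tumor_ml + v_normal_liver_ml)
    if lsf > 0.20:
        raise ValueError("LSF > 20%: radioembolization is contraindicated")
    if lsf >= 0.15:
        activity *= 0.6
    elif lsf >= 0.10:
        activity *= 0.8
    return activity

# Hypothetical patient: BSA 1.8 m^2, 150 mL tumor, 1350 mL normal liver, LSF 12%
print(bsa_activity_gbq(1.8, 150.0, 1350.0, 0.12))  # 1.7 GBq reduced by 20% -> ~1.36 GBq
```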
A modified BSA method was used for the SIRFLOX, FOXFIRE, and FOXFIRE-global studies, where activity was reduced relative to the BSA method, based on LSF and tumor involvement [1].
MIRD mono-compartment for glass microspheres
For glass microspheres, the activity calculation is based on the desired mean absorbed dose to the target liver mass (independent of tumor burden), following:
$$ A\left[\mathrm{GBq}\right]=\frac{\mathrm{Desired}\ \mathrm{dose}\ \left[\mathrm{Gy}\right]\times {M}_{\mathrm{target}}\left[\mathrm{kg}\right]}{50\ \left[\mathrm{J}/\mathrm{GBq}\right]}. $$
The desired absorbed dose is set assuming a completely homogeneous distribution of the microspheres over the target volume. The target mass may be determined using either CT, MRI, PET, or 99mTc-MAA SPECT [21].
The recommended absorbed dose ranges from 80 to 150 Gy, depending on the judgment of the treating physician. The estimated total activity shunting to the lungs should not exceed 610 MBq, which equates to approximately 30 Gy in 1 kg lung tissue [21].
MIRD mono-compartment method for holmium microspheres
For the administration of holmium microspheres, a methodology akin to the MIRD mono-compartment method for glass microspheres was used in a phase I absorbed dose-escalation study [24]. The administered activity was calculated according to
$$ A\left[\mathrm{GBq}\right]=\frac{\mathrm{Liver}\ \mathrm{dose}\left[\mathrm{Gy}\right]\times {M}_{\mathrm{liver}}\left[\mathrm{kg}\right]}{15.9\ \left[\mathrm{J}/\mathrm{GBq}\right]}, $$
where the liver mass was determined on contrast-enhanced CT. The absorbed dose was escalated from 20 to 80 Gy in four steps. The maximum tolerated absorbed dose was established to be 60 Gy.
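Both mono-compartment prescriptions above reduce to the same relation with an isotope-specific conversion constant (50 J/GBq for 90Y, 15.9 J/GBq for 166Ho, as quoted in the text). A minimal sketch with hypothetical inputs:

```python
# Energy released per unit administered activity, integrated over all decays (values from the text)
J_PER_GBQ = {"90Y": 50.0, "166Ho": 15.9}

def mono_compartment_activity_gbq(desired_dose_gy, target_mass_kg, isotope="90Y"):
    """MIRD mono-compartment prescription: A [GBq] = D [Gy] * M [kg] / (J/GBq)."""
    return desired_dose_gy * target_mass_kg / J_PER_GBQ[isotope]

# Hypothetical examples
print(mono_compartment_activity_gbq(120.0, 1.8, "90Y"))    # ~4.3 GBq of glass microspheres
print(mono_compartment_activity_gbq(60.0, 1.8, "166Ho"))   # ~6.8 GBq of holmium microspheres
# Consistency check of the stated lung limit: 0.61 GBq shunted to 1 kg of lung tissue -> ~30 Gy
print(0.61 * J_PER_GBQ["90Y"] / 1.0)
```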
Limitations of current methods
An obvious limitation of these methods is that the actual spatial dose distribution of an individual patient is neglected. In general, these methods seek to prevent overdosing to the parenchyma (and lungs), minimizing the occurrence of radioembolization-induced liver disease [25,26,27]. As a consequence, the resultant prescribed activities are likely curbed by toxicity limitations of the most vulnerable patients and the occurrence of patients with a highly unfavorable absorbed dose distribution. This is thought to result in under-dosing in some patients [28,29,30].
For the BSA method, an added limitation is that the estimated liver volume is based on a healthy population. As such, this relation might not hold for patients with liver tumors. Indeed, it has been shown that the absorbed liver dose does not correlate with the prescribed activity using the BSA method [31]. As a result, patients with relatively small livers are more likely to be overdosed, and patients with larger livers are more likely to be under-dosed (see Fig. 1) [29, 31, 32]. An illustration of this is given in [31] where, based on the BSA method, a patient received 1.82 GBq (BSA 1.78 m2, tumor involvement 15%), resulting in a high liver absorbed dose of 74.7 Gy, due to the liver's relatively low mass of 1.22 kg. In the same study, another patient received a similar activity of 1.85 GBq (BSA 1.50 m2, tumor involvement 45%), but that patient had a larger liver of 2.33 kg, resulting in a much lower average liver absorbed dose of 39.7 Gy. Furthermore, there are currently no guidelines regarding activity prescription after prior resection [33].
Adapted from [27]. Absorbed dose to the whole liver was not correlated with the administered activity (a). However, liver weight was negatively correlated with whole-liver absorbed dose (r = − 0.723, P < 0.001), leading to patients with small livers being relatively over-dosed and patients with larger livers under-dosed (b)
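For a homogeneous whole-liver distribution, the mono-compartment relation can be inverted to estimate the mean liver dose from the administered activity. The sketch below, assuming negligible lung shunt, reproduces the two patients quoted above to within rounding; it is an illustration of the relation, not a substitute for patient-specific dosimetry.

```python
def mean_liver_dose_gy(activity_gbq, liver_mass_kg, lsf=0.0):
    """Mean whole-liver dose for a homogeneous 90Y distribution: D = 50 [J/GBq] * A * (1 - LSF) / M."""
    return 50.0 * activity_gbq * (1.0 - lsf) / liver_mass_kg

print(mean_liver_dose_gy(1.82, 1.22))  # ~74.6 Gy: the small-liver patient quoted above
print(mean_liver_dose_gy(1.85, 2.33))  # ~39.7 Gy: the large-liver patient quoted above
```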
Multi-compartment dosimetry
A different approach to activity prescription from the homogenous, single compartment models of the BSA and MIRD mono-compartment methods is the partition model (PM). It postulates three compartments with potentially different activity uptakes: tumor, normal liver, and lung tissue [18]. As such, it allows for the selection of a prescribed activity that maximizes the absorbed dose to the tumor tissue, while not exceeding toxicity thresholds for the other two compartments. The expected activities in each compartment are usually based on the distribution of 99mTc-MAA on the safety scan. However, there is some discussion in the literature about the predictive value of these particles for the subsequent 90Y microsphere distribution [34,35,36,37].
The respective compartments are usually segmented on an anatomical imaging modality (e.g., contrast-enhanced CT) or a functional modality (e.g., SPECT thresholding) and registered to the reconstructed 99mTc-MAA distribution. The activity distribution over the compartments is described by the tumor-to-normal tissue ratio (TN ratio), expressed as
$$ \mathrm{TN}=\frac{A_T\left[\mathrm{MBq}\right]/{M}_T\left[\mathrm{kg}\right]}{A_{\mathrm{NL}}\left[\mathrm{MBq}\right]/{M}_{\mathrm{NL}}\left[\mathrm{kg}\right]}, $$
where A and M indicate the activity in and the mass of the tumor (T) and normal liver tissue (NL) compartments.
Using some algebra, the following relation can be derived for the prescribed activity, given a certain TN ratio, LSF and compartment masses [38]:
$$ A\left[\mathrm{GBq}\right]={D}_{\mathrm{NL}}\left[\mathrm{Gy}\right]\frac{\mathrm{TN}\times {M}_T\left[\mathrm{kg}\right]+{M}_{\mathrm{NL}}\left[\mathrm{kg}\right]}{50\left[\mathrm{J}/\mathrm{GBq}\right]\times \left(1-\mathrm{LSF}\right)}, $$
where DNL indicates the absorbed dose to the parenchyma. Implicit in this equation is the assumption that dose is deposited locally in the compartment that contains the activity, which is a simplification. This is discussed in further detail in the section "Dosimetric models."
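A minimal sketch of the partition-model prescription and of the mean tumor dose implied by the same compartmental, local-deposition assumptions; the case values are hypothetical.

```python
def partition_model_activity_gbq(d_nl_gy, tn_ratio, m_tumor_kg, m_nl_kg, lsf):
    """Partition model: A [GBq] = D_NL * (TN * M_T + M_NL) / (50 [J/GBq] * (1 - LSF))."""
    return d_nl_gy * (tn_ratio * m_tumor_kg + m_nl_kg) / (50.0 * (1.0 - lsf))

def tumor_dose_gy(activity_gbq, tn_ratio, m_tumor_kg, m_nl_kg, lsf):
    """Mean tumor dose implied by the same compartmental assumptions (local dose deposition)."""
    a_liver = activity_gbq * (1.0 - lsf)
    a_tumor = a_liver * tn_ratio * m_tumor_kg / (tn_ratio * m_tumor_kg + m_nl_kg)
    return 50.0 * a_tumor / m_tumor_kg

# Hypothetical case: 40 Gy parenchymal limit, TN = 4, 0.3 kg tumor, 1.5 kg normal liver, LSF 5%
a = partition_model_activity_gbq(40.0, 4.0, 0.3, 1.5, 0.05)
print(a)                                      # ~2.27 GBq
print(tumor_dose_gy(a, 4.0, 0.3, 1.5, 0.05))  # ~160 Gy mean tumor dose (= TN * 40 Gy)
```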
Multi-compartment dosimetry is claimed to be more 'scientifically sound' than the BSA-based or MIRD mono-compartment method [29]. However, besides being more labor intensive to work with in clinical practice, there are several technical caveats to using the PM.
Different methods to calculate TN ratio
When multiple lesions are present, each may have a different microsphere uptake, leading to errors in the subsequent individual tumor absorbed dose estimates, due to averaging of the TN ratio. Mikell et al. have shown this effect by comparing the silver standard Monte Carlo-based dose estimates with MIRD mono-compartment-based dosimetry and the PM model in realistic patient data [39]. In the case of multiple tumors, there can be large discrepancies between the methods for the estimated tumor absorbed dose. For example, the variability between PM-based and Monte Carlo-based tumor absorbed dose estimates was higher by a factor of five in cases with multiple tumors, compared to cases with a single tumor.
However, there is currently no consensus on how to calculate the TN ratios for individual tumors for use in the PM model. Some authors use the entire normal liver volume for this calculation [40], whilst others opt for a smaller sample volume, placed near the tumor-of-interest [36]. Although this simplification makes the use of the PM model more feasible in clinical practice, it also inevitably leads to a larger uncertainty (≈ 2.5×) in the TN ratio estimations when the microspheres are not strictly homogeneously distributed in the healthy liver tissue [39].
Definition of compartments on anatomical imaging
When diagnostic (contrast-enhanced) CT or MRI is used for the delineation of the tumor compartments, these delineations subsequently need to be transferred to the SPECT/CT reconstructions. This can be achieved by copying the volume-of-interest (VOI) delineation to the SPECT/CT data or by using (non-rigid) coregistration. However, mismatches are likely to occur, causing a misalignment between the anatomical delineations and the SPECT reconstruction. A common cause is a difference in patient positioning between the two anatomical scans, for instance different arm positioning (above the head vs. alongside the body) or a different placement on the table.
Another issue for coregistration is breathing during the CT acquisition. The acquisition of the liver volume is usually much faster than an entire respiratory period, resulting in a 'snapshot' of a random respiratory phase. As the SPECT or PET activity reconstruction is a superposition of all respiratory phases, this can result in mismatches of > 1 cm between the anatomical delineations and the reconstructed activity [41]. Using CTs acquired during breath-hold for coregistration might mitigate this effect, but breath-holds have been shown to have a limited reproducibility between acquisitions, resulting in different relative respiratory states between scans [42]. A viable solution might be the use of so-called 'time-averaged 3D mid-position CT scans' in this context, often used in radiotherapy [43].
Besides leading to mismatches in coregistration, respiratory motion also results in activity reconstructions that are 'smeared out.' This leads to an underestimation of the local activity concentration, especially in tumor tissue, whose volume is small relative to the motion amplitude compared with the background compartment. The effect of motion blurring is well known in general [44, 45], but the impact of respiratory motion in the context of radioembolization has recently been shown for both PET [46] and SPECT [47].
Furthermore, defining the boundaries of the tumor compartment on anatomical modalities may be non-trivial in the case of morphologically diffuse or infiltrative tumors [29]. Tumors with substantial necrosis pose a similar problem. A possible solution to this might be the use of FDG PET for the demarcation of vital tumor tissue in the case of FDG-avid tumors.
Definition of compartments using physiological information
Similarly, the uptake information in SPECT reconstructions (e.g., when using MAA for absorbed dose prediction) could be used to indicate vital tumors. However, as delineations drawn directly on SPECT will generally result in errors in the estimated volume [48,49,50] (Fig. 2), Garin et al. have developed a hybrid method in which the SPECT reconstruction and CT information are presented in conjunction, integrating functional and anatomical information and aiding manual delineation [51, 52]. This has been shown to work well for both phantom and patient studies, in which anatomical borders are readily discernable. However, this type of method does not have a well-defined contouring guideline, which reduces reproducibility.
Exemplar case where a VOI delineation based on SPECT thresholding only (blue contour) does not match the CT-based anatomical tumor definition (teal contour). The mismatch results in a difference in tumor volume and mean tumor uptake
A more fundamental approach to this segmentation problem was proposed in a study by Lam et al. [53], in which directly after the normal 99mTc-MAA SPECT scan, the participating patients were injected with 99mTc-sulfur colloid (SC) and another SPECT was acquired after 5 min. This compound specifically accumulates in functional (non-tumor) liver tissue and as such will act as a negative template for the tumor compartments. By taking the difference between the MAA and SC SPECT reconstructions, voxel maps for healthy parenchyma and tumor tissue are automatically obtained, providing a 'physiology-based segmentation.'
Voxel-based dosimetry
In voxel-based dosimetry, the reconstructed voxel is taken as the smallest independent spatial unit for activity. This allows for the expression of (estimated) absorbed dose gradients and non-homogeneities on a small spatial scale, somewhat similar to external beam radiotherapy (EBRT). This contrasts with multi-compartment models, where absorbed dose estimates are averaged over each compartment. By including this spatial dimension, voxel-based dosimetry potentially provides a link to the rich EBRT literature on dose-effect relationships, which could potentially be used for both therapy planning and post-therapy outcome assessment. However, in contrast to image-guided absorbed dose planning for EBRT, voxel-based dosimetry for radioembolization is based on nuclear medicine images, which are generally noisy and of low resolution, prohibiting a direct translation of EBRT concepts to the radioembolization paradigm.
Using spatial dose information
To aid assessment and comparison between individual cases, the spatial dose information can be combined into a (cumulative) dose-volume histogram (cDVH). These graphs express the fraction of the total VOI (be it a tumor, normal tissue, or the entire liver) receiving a certain minimum absorbed dose. This expresses in a single graph how the absorbed dose is distributed over the volume (Fig. 4). The concept of cDVHs also enables the introduction of spatially dependent measures of absorbed dose such as D70 (minimum absorbed dose to 70% of the VOI) and V100 (percentage of the VOI receiving at least 100 Gy) that might be expected to be good predictors of treatment effect [54, 55]. For example, it is clear from Fig. 4a that the blue absorbed dose distribution delivers a higher absorbed dose to a larger fraction of the tumor volume (or conversely, red is less toxic when this is a cDVH of normal tissue). These metrics are widely used for the comparison between EBRT plans and are gaining some traction within the radioembolization dosimetry community to help better explain clinical outcomes [54, 56,57,58,59] (see "Dose-effect relationships" section).
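A short sketch of how a cDVH and the derived D70/V100 metrics can be computed from a voxelized absorbed dose map; the voxel doses below are synthetic and purely illustrative.

```python
import numpy as np

def cumulative_dvh(dose_voxels, bin_width_gy=1.0):
    """Cumulative DVH: fraction of the VOI receiving at least a given dose level."""
    edges = np.arange(0.0, dose_voxels.max() + bin_width_gy, bin_width_gy)
    fractions = np.array([(dose_voxels >= d).mean() for d in edges])
    return edges, fractions

def d_x(dose_voxels, x_percent):
    """D_x: minimum absorbed dose received by the hottest x% of the VOI (e.g., D70)."""
    return np.percentile(dose_voxels, 100.0 - x_percent)

def v_x(dose_voxels, x_gy):
    """V_x: percentage of the VOI receiving at least x Gy (e.g., V100)."""
    return 100.0 * (dose_voxels >= x_gy).mean()

# Synthetic, heterogeneous tumor dose map (Gy per voxel)
tumor_dose = np.random.default_rng(0).lognormal(mean=np.log(120), sigma=0.6, size=10_000)
print(d_x(tumor_dose, 70), v_x(tumor_dose, 100.0))
```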
Due to the typically heterogeneous distribution of the microspheres, comparing cDVHs of, for instance, two patients is not always trivial (as is the case in Fig. 4a). Ambiguity can occur, and an example of such a case is shown in Fig. 4b, where the cDVH curves cross. In such cases, which cDVH would lead to the highest tumor kill or the least toxicity depends entirely on the specific organ (e.g., whether or not it is a parallel organ).
This ambiguity is a well-known phenomenon in EBRT, and efforts have been undertaken to create radiobiological models that aim to quantify the biological effect of any treatment plan and enable the comparison of plans based on expected outcome [60]. The premise of most of these models is that an irradiated tumor exhibits a binary response (control or survival), which is determined by the surviving fraction (SF) of a population of cells after irradiation. This SF is modeled as a function of absorbed dose and may include any additional clinically relevant parameters, such as repopulation between treatments, clonogen radioresistance, and dose rate effects. Subsequently, the parameters of these models are retrospectively fitted on clinical data and can then be used to predict treatment outcome. As such, these radiobiological models provide a link between physical quantities such as the spatial dose distribution and expected clinical outcome. The potential importance of radiobiological modeling is illustrated with a clinical example in Fig. 3.
Example of a large neuroendocrine tumor, which was treated with glass microspheres. Activity was prescribed according to the MIRD mono-compartment method to reach 120 Gy. According to the PM model, the average absorbed dose to the tumor was 150 Gy. The patient has shown no response after treatment (RECIST, mRECIST, and EASL). The contrast-enhanced CT shows the tumor as a large enhanced area (orange solid line) and necrosis (yellow dotted line) (a). A strong absorbed dose inhomogeneity can be observed (b). Voxel-based dosimetry and radiobiological models may account for such absorbed dose inhomogeneities
Importantly, two such models have been adapted from EBRT for the context of radioembolization [56, 61]. First, the effect of dose rate and cell repair mechanisms can be modeled with the biologically effective dose (BED), such that ln(SF) = − α BED. BED can be calculated for a unit volume i (e.g. a voxel) according to
$$ \mathrm{BED}_i={D}_i\left(1+\frac{D_i\cdot {T}_{\mathrm{rep}}}{\left({T}_{\mathrm{rep}}+{T}_{\mathrm{phys}}\right)\cdot \alpha /\beta}\right), $$
with Di the local absorbed dose, and Trep and Tphys the half-times for cell repair after damage and for the physical decay of 90Y, respectively. α and β denote the so-called intrinsic radiosensitivity and potential sparing capacity [62].
Furthermore, spatial non-uniformities can be normalized to a single number, called equivalent uniform biologically effective dose (EUBED), see also Fig. 4c. This number is the same for different absorbed dose distributions that have the same biological effect [63]. EUBED can be defined as
$$ \mathrm{EUBED}=-\frac{1}{\alpha}\ln \left(\frac{\sum_i{e}^{-\alpha\,\mathrm{BED}_i}}{n_{\mathrm{voxel}}}\right), $$
where α is the radiosensitivity (1/Gy) of the local tissue, nvoxel is the number of voxels of the current VOI, and i denotes the voxel index [61]. A reasonable simplification for radioembolization is to neglect quadratic effects (i.e., β = 0), in which case BEDi reduces to Di; substituting Di in Eq. 8 then yields the equivalent uniform dose (EUD).
Hypothetical cDVHs illustrating key concepts in voxel-based dosimetry which may be used for outcome prediction. In panel (a), the red absorbed dose distribution may be expected to have a smaller impact on the tissue under consideration (less toxicity or less tumor kill). This is also reflected in the D70 and V100 being lower for the red than for the blue curve. Due to the highly heterogeneous absorbed dose distributions typical for radioembolization, two different cases with cDVHs as depicted in panel (b) might occur. Which of these cDVHs may be expected to have a larger effect on the tissue is ambiguous (same D70 and V100) and might depend on the tissue type. Panel (c) depicts the hypothetical differences in equivalent uniform doses (EUD), derived from the situation in panel (b), potentially resolving the ambiguity
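To make the BED and EUBED definitions concrete, the following sketch evaluates them on a synthetic dose map. The parameter values (repair half-time, α, α/β) are illustrative assumptions rather than established radioembolization constants.

```python
import numpy as np
from scipy.special import logsumexp

def voxel_bed(dose_gy, t_rep_h, t_phys_h, alpha_beta_gy):
    """BED_i = D_i * (1 + D_i * T_rep / ((T_rep + T_phys) * alpha/beta)), evaluated per voxel."""
    return dose_gy * (1.0 + dose_gy * t_rep_h / ((t_rep_h + t_phys_h) * alpha_beta_gy))

def eubed_gy(bed_gy, alpha_per_gy):
    """EUBED = -(1/alpha) * ln( (1/n) * sum_i exp(-alpha * BED_i) ), via log-sum-exp for stability."""
    bed_gy = np.ravel(bed_gy)
    return -(logsumexp(-alpha_per_gy * bed_gy) - np.log(bed_gy.size)) / alpha_per_gy

# Illustrative assumptions: 90Y physical half-life 64.1 h, repair half-time 2.5 h,
# alpha/beta = 10 Gy, alpha = 0.005 /Gy; synthetic heterogeneous tumor dose map
dose = np.random.default_rng(1).gamma(shape=2.0, scale=60.0, size=50_000)
bed = voxel_bed(dose, t_rep_h=2.5, t_phys_h=64.1, alpha_beta_gy=10.0)
print(dose.mean(), eubed_gy(bed, alpha_per_gy=0.005))
# With beta = 0, BED_i reduces to D_i and the same formula yields the EUD:
print(eubed_gy(dose, alpha_per_gy=0.005))
```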
In theory, this approach will aid physicians to optimally weigh risks and benefits of an individual absorbed dose distribution, as clinical outcomes can be linked to a single number such as BED and EUBED. However, the existence and robustness of such dose-effect relationships in the context of radioembolization are currently still under investigation.
Dose-effect relationships
There is an increasing literature on dose-effect relationships in radioembolization that utilizes advanced dosimetry. An overview of recent papers that (implicitly) estimate tumor control probability (TCP) and/or non-tumor complication probability (NTCP) is given in Table 1. The search for the combination of tumor type, outcome measure, dosimetric model, and imaging modality that yields the best predictive power is still at a very early stage. Consequently, there is a wide variety in each of these properties amongst these studies, resulting in diffuse optimal absorbed dose limits for both liver complications (~ 50–97 Gy) and tumor control (~ 50–560 Gy). Several major factors are hypothesized to contribute to this: differences in response measures, absorbed dose calculations, microsphere type, scan modality (including acquisition and reconstruction settings), and tumor type.
Response measures
In these studies, tumor response is assessed according to RECIST, mRECIST, vRECIST, EASL, densitometric change [64], change in total lesion glycolysis (TLG), or standardized uptake value (SUV). What is considered a complete response, partial response, stable disease, or progressive disease differs significantly between these measures [65]. For example, the RECIST criteria are sensitive to changes in tumor size, whereas TLG expresses (changes in) total glycolysis (tumor volume times mean SUV over the VOI). Consequently, minimum absorbed dose estimates that lead to tumor response are different between criteria. Although some attempts have been made to directly compare some of these methods [66], the use of such a variety of methods makes comparing these data non-trivial, if not impossible. As the most relevant clinical outcomes are overall survival and progression-free survival, it is important to establish which of the reported proxies is the most predictive of survival [67]. This may result in disease-specific outcome measures (e.g., EASL or mRECIST for hepatocellular carcinoma and RECIST or TLG for metastatic colorectal cancer).
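As an illustration of one of these measures, TLG can be computed directly from an SUV map and a tumor mask; a minimal sketch with hypothetical array names:

```python
import numpy as np

def total_lesion_glycolysis(suv_map, tumor_mask, voxel_volume_ml):
    """TLG = metabolic tumor volume [mL] * mean SUV over the tumor VOI."""
    mtv_ml = float(tumor_mask.sum()) * voxel_volume_ml
    return mtv_ml * float(suv_map[tumor_mask].mean())

def delta_tlg_percent(tlg_baseline, tlg_followup):
    """Relative TLG change between baseline and follow-up, a common metabolic response measure."""
    return 100.0 * (tlg_followup - tlg_baseline) / tlg_baseline
```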
Some studies that are reported in Table 1 incorporate either metabolism-based (functional) masks from a previous FDG PET [55] or, for example, the D70 measure [54]. However, most studies calculate the average absorbed dose to the tumor. This may disregard the existence of necrotic volumes and, more generally, absorbed dose heterogeneity.
Absorbed dose calculations
Which absorbed dose calculation method best reflects the underlying radiobiological processes in the entire patient population remains an open question. Theoretically, applying EUD- and/or BED-based models should be best suited to naturally incorporate differences in specific activity and absorbed dose heterogeneity in a tissue, as described above. But clinically, a clear advantage over the average absorbed dose to the tumor is yet to be found [56, 61, 68, 69]. The authors suggest this might be linked to the outcome measure being too crudely categorized [61]. Another central finding in these studies, however, is that the apparent radiosensitivities (α and β in Eq. 7) of both the tumor and hepatic tissues in radioembolization are an order of magnitude lower than what is found in the EBRT setting, even when correcting for absorbed dose inhomogeneities [61]. Moreover, a significant difference between the absorbed dose needed to reach TCP(50%) for glass and for resin microspheres has been found [61, 68]. In conclusion, the values of the relevant parameters in the radiobiological models have not been well established for radioembolization. They may be specific to the type of microsphere and can even be expected to differ between 90Y- and 166Ho-based microspheres. These uncertainties have a direct impact on the determination of BED and EUD.
Micro-distribution
A possible explanation for the differences found between glass and resin microsphere dose-effect relationships is the potential difference in micro-distribution.
One of the first papers on in vivo microsphere distribution was by Fox et al. [70]. Using a beta-probe, they showed that the activity pattern on a sub-centimeter scale was highly heterogeneous. Later, Yorke et al. [71] used a combination of computer simulations and biopsy samples to find an explanation for the clinically observed lack of normal liver complications using glass microspheres at absorbed dose levels that are known to cause complications in EBRT, and found that absorbed dose heterogeneity is sufficient to explain this incongruity.
More recently, Walrand et al. performed a simulation study of normal liver tissue, finding that the relatively low number of injected glass microspheres results in non-uniform trapping in the terminal portal artery, resulting in tissue volumes receiving sub-lethal absorbed doses. This would explain both the relatively low toxicity per gray of glass relative to resin microspheres and the granularity observed in post-treatment 90Y PET (see Fig. 5) [72].
Simulated arterial tree (a) and subsequently simulated microsphere distribution after flow through the arterial tree (b, c), which explains the 'mottled' look often found in patient PET scans (d) but not in phantom scans (e). This research was originally published in JNM [72]. Copyright by the Society of Nuclear Medicine and Molecular Imaging, Inc.
This conclusion was seemingly contradicted in an elaborate histological study by Högberg et al., who found that a higher concentration of microspheres (i.e., in the case of resin microspheres) leads to a higher tendency to form clusters, especially in the larger (upstream) arterioles, resulting in a more non-uniform absorbed dose distribution in the liver parenchyma [73]. According to these authors, this apparent contradiction stems from the fact that Walrand et al. only assumed microsphere trapping in the terminal branches of the infused artery. In a subsequent simulation study, Högberg et al. were able to replicate their histological findings in a mathematical model. This places further emphasis on the importance of the geometry of the arterial tree and (local) microsphere concentration as drivers for microsphere distribution inhomogeneity [74]. These models, however, predict cluster propensity as a function of arteriole generation (branch number) and lack further spatial information. Consequently, the authors conclude that the micro-scale clustering they observed might not by itself (fully) explain the observed macroscopic inhomogeneities, as measured by non-invasive imaging [73].
Pasciak et al. tried to bridge the gap between micro- and macro-scale tumor dosimetry by using Monte Carlo-based estimations of microsphere micro-distributions, given a 90Y PET reconstruction of a patient [75]. These microsphere micro-distributions are simulated by drawing properties such as cluster propensity and distance from probability density functions that were constructed from histological data [76]. This resulted in realistic structures (Fig. 6) such as clusters and strings of microspheres. Crucially, it provides a plausible link between the observations in macro- and microdosimetry.
a Small clusters (white arrow) and large clusters (black arrow) are apparent in the Monte Carlo simulations by Pasciak. These simulated distributions seem to be consistent with the histological findings of (amongst others) Högberg (b, c, d). Panel a was originally published in JNM [75]. Copyright by the Society of Nuclear Medicine and Molecular Imaging, Inc. Other panels are adapted from [74]
Quantitative image reconstruction
Besides the abovementioned factors concerning dosimetry, differences in outcome between these studies may in part also be explained by the wide range in technical scan parameters used and the measurement variance inherent to nuclear medicine images. An overview of the current topics in nuclear image acquisition and reconstruction is therefore desired.
In quantitative image reconstruction, all relevant interactions between the radionuclide, the patient, and the imaging system need to be accounted for during reconstruction. The current state-of-the-art consists of iterative reconstruction algorithms that incorporate models for all such image-degrading effects (e.g., attenuation, scatter, nuclide decay, detector uniformity) [77,78,79].
From its inception, PET was considered a quantitative modality, in contrast to SPECT. This is due to the high signal-to-noise ratio of PET and the relative simplicity of the physics of coincidences, which enables a straight-forward method for attenuation correction. This was available in early generations of PET scanners. However, with the advent of inherently coregistered CT in SPECT/CT, attenuation and scatter correction are now common practice and some authors have claimed that modern clinical SPECT systems can now be considered quantitative as well [80, 81]. Furthermore, vendors are currently implementing calibration routines and inherently quantitative reconstruction software in their machines [82, 83], which enables the dissemination of absolute activity quantification into clinical practice for both modalities.
Post-therapy imaging
In radioembolization, treatment success can be assessed with a post-therapy scan, either with bremsstrahlung SPECT/CT (bSPECT) or PET/CT (90Y PET).
Bremsstrahlung SPECT/CT—the relevance of physics modeling
Post-therapy assessment with 90Y may be performed by bremsstrahlung imaging. This differs from imaging mono-energetic emitters (such as 99mTc) in two ways. First, 90Y produces a broad and continuous energy spectrum without a photopeak. Second, the high flux of bremsstrahlung photons on a gamma camera will result in significant dead time, if not managed correctly (count rate linearity was estimated up to 7.5 GBq for 90Y and 1.5 GBq for 166Ho [84]). Therefore, 90Y image quantitation using a gamma camera was recognized early on as being non-trivial [85]. With the advent of advanced iterative reconstruction techniques that enable advanced physics modeling, several quantitative reconstruction methods have been proposed in the literature. Most of these methods utilize some kind of Monte Carlo modeling of the imaging process. Rong et al. achieved quantitation errors between − 1.6 and 11.9% for a phantom experiment by modeling all relevant energy-dependent image-degrading effects [86]. Elschot et al. incorporated Monte Carlo simulations of photon-tissue interactions directly within the iterative reconstruction loop, increasing image contrast and activity recovery significantly (over 80% for non-small spheres), relative to a reference clinical reconstruction algorithm [87]. Minarik et al. performed a similar study, using the SIMIND code, and achieved a quantification error around + 8.5% [88].
However, these methods rely on advanced Monte Carlo techniques, which are currently not easily accessible for institutions without a medical physics team that has extensive experience with these methods. Consequently, the reported accuracies will be significantly worse in normal clinical practice.
90Y PET—the impact of machine and reconstruction parameters
90Y can also be imaged using PET. However, as 90Y only has a minute positron branching ratio (~ 32 ppm) and the detectors were expected to be saturated from the bremsstrahlung photons (which was later demonstrated to be false), for a long time, PET was not considered a feasible modality for post-therapy imaging. The earliest in vivo demonstration of the feasibility of 90Y PET imaging was delivered by Lhommel et al. in 2009 [89] using a time-of-flight (TOF) PET scanner and an additional copper ring inserted in the gantry to prevent detector saturation. Later, the feasibility of 90Y PET with Lutetium oxyorthosilicate (LSO) crystals in a scanner without TOF capability was demonstrated [90].
These initial proofs-of-concept were followed by studies that corroborated the quantitative reconstruction capabilities of 90Y PET, using clinically available methods [91,92,93,94] which were applied to clinical data [54, 95,96,97]. However, the very high contribution of randoms (> 90%) due to bremsstrahlung in combination with the very low coincidence count statistics was expected to impact random coincidence estimation, scatter correction, and consequently, image quality. An elaborate study by Carlier et al. has shown that the effect of these phenomena on bias, variability, and detectability of hotspots is minor. The use of correct point spread function (PSF) modeling and TOF reconstruction kept background variability and noise at acceptable levels [98]. This was further corroborated in a fully Monte Carlo-based simulation in which 90Y quantitation is compared to that of 18F [99]. It was found that, relative to 18F, the image quality was only slightly poorer in 90Y for a similar positron emission rate. Furthermore, image quality was not strongly linked to any particular physical effect or reconstruction step. This led to the conclusion that adding 90Y-specific models to the PET imaging process is not needed. Furthermore, Van Elmbt et al. have shown that systems based on modern crystals (post-BGO) can be used for 90Y dosimetry [100]. Since then, 90Y image quantification has also been shown to be possible in PET/MRI [101] and solid-state digital PET/CT [94].
For clinical 18FDG PET, the importance of homogenization of acquisition and reconstruction settings over centers to allow pooling and comparison of data sets is well-recognized and has resulted in the EARL guidelines and accreditation program [102]. A similar initial attempt for 90Y PET has been made in the form of the QUEST study, showing that 90Y PET-based dosimetry should be reproducible across scanners and centers, as long as TOF-capabilities are available [103].
Together, these studies show that PET-based 90Y quantitative imaging is feasible, robust, and straight-forward to implement in clinical practice when a reasonably modern PET system with TOF-capabilities is available. This is in contrast to bSPECT/CT, for which no sufficiently accurate reconstruction methods are currently available for general clinical use.
90Y PET vs. bSPECT
In a direct comparison between bSPECT/CT and 90Y PET/CT, the latter is found to have a higher resolution and less scatter in patient studies and several case series [104, 105]. In a quantitative direct comparison between a state-of-the-art clinical bSPECT/CT reconstruction algorithm and clinical PET/CT reconstruction protocols, the superior contrast, detectability, and absorbed dose estimates of PET were demonstrated [106]. However, this comes at the price of a relatively long scan duration in the case of 90Y PET, which is 15 to 20 min per bed position. When advanced photon-tissue and photon-detector interactions were modeled with a Monte Carlo-based SPECT/CT reconstructor, image contrast improved substantially and was in some cases (in larger hot spots) higher than in PET/CT [87].
In general, it should be noted that currently there is no standardized approach for post-therapy imaging in terms of acquisition and reconstruction settings and there may exist some systematic biases between the various approaches of different groups, even within the same modality. As a consequence, interpreting and comparing dosimetric results between different groups should be done with caution. However, in general, 90Y PET is currently superior to clinical bSPECT/CT in terms of resolution, accuracy, and the clinical availability of accurate reconstruction methods for dosimetry.
MR and CT for 166Ho
In contrast to 90Y-based microspheres, 166Ho does emit photons with discrete energies that are directly detectable with a gamma camera. Furthermore, it is a paramagnetic element, enabling the visualization with MRI, and it has a very high X-ray attenuation, resulting in good contrast on CT [107, 108]. A quantitative SPECT/CT reconstruction using advanced Monte Carlo-based techniques has been developed [109] (achieving contrast recovery of over 80% in non-small NEMA spheres), as is a hybrid method to correct for photon down-scatter from bremsstrahlung and higher energy photons [110]. In a direct comparison between SPECT- and MR-based quantification [108], both modalities are found to be suited for peri-therapy dosimetry [111].
Dosimetric models
With quantitative imaging, the physical quantity activity (i.e., Becquerel or Curie) of the isotope distributed in space is estimated. However, especially in the case of radionuclide therapies, the process of interest is not the activity per se, but rather the subsequent dose absorption by the surrounding tissue (in Gray), as a result of high energy particles (betas and photons) emitted in the process of decay. This process of dose absorption is what causes the tumor kill and constitutes a rather complex interaction, which depends both on the tissue and the specific emissions from the isotope.
If the isotope distribution is known exactly, the most comprehensive and precise estimations of absorbed dose are achieved through Monte Carlo simulations of all relevant interactions between the high energy particles and the healthy or tumor tissue. Popular codes include the EGSnrc code [112], MCNP [113], FLUKA [114], and the GATE extension of GEANT [115].
However, these types of simulations are rather complex and time-consuming. Furthermore, the liver is a rather homogenous medium in terms of dose absorption at energies typical for radioembolization. Therefore, a frequently used method to speed up these calculations is by pre-calculating a dose point-kernel (DPK) or dose voxel-kernel (DVK), which is energy absorption in a homogeneous medium around a point source or a voxel source, respectively. Then, a convolution of the true activity distribution with the DPK/DVK will result in an accurate absorbed dose estimation for a homogeneous medium. This kernel can also be scaled to different local tissue densities [116].
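A minimal sketch of the DVK approach: the cumulated-activity map is convolved with a precomputed kernel. The kernel below is a placeholder blob with arbitrary values, not tabulated Monte Carlo data, and the array names are hypothetical.

```python
import numpy as np
from scipy.signal import fftconvolve

def dose_from_dvk(cumulated_activity, dvk_gy_per_decay):
    """Absorbed dose map obtained by convolving the cumulated-activity map (total decays per voxel)
    with a precomputed dose voxel kernel (Gy deposited around a unit-source voxel in soft tissue)."""
    return fftconvolve(cumulated_activity, dvk_gy_per_decay, mode="same")

# Hypothetical toy kernel: an isotropic blob normalized to a placeholder total dose per decay
grid = np.indices((9, 9, 9)) - 4
kernel = np.exp(-np.sum(grid**2, axis=0) / 4.0)
kernel *= 8e-8 / kernel.sum()

decays = np.zeros((41, 41, 41))
decays[20, 20, 20] = 1e9   # cumulated decays concentrated in a single voxel
print(dose_from_dvk(decays, kernel).max())
```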
The largest contribution to the total absorbed dose comes from the emitted beta particles. The maximal range for 90Y betas in tissue is 1.2 cm (0.9 cm for 166Ho), which is in the same order of magnitude as the resolution of both SPECT and PET. This implies that most of the energy is deposited within the voxel of origin. Consequently, a further simplification is to assume that all emitted energy is absorbed locally, which is usually called the local deposition model (LDM). In practice, this method constitutes applying an appropriate scaling factor to the voxel values of a quantitative reconstruction.
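Under the LDM, the 'scaling factor' is simply the emitted energy per unit activity divided by the voxel mass. A minimal sketch, assuming a soft-tissue density of 1.06 g/mL and hypothetical voxel dimensions:

```python
import numpy as np

def ldm_dose_map_gy(activity_gbq_map, voxel_volume_ml, tissue_density_g_per_ml=1.06, j_per_gbq=50.0):
    """Local deposition model: all emitted energy is absorbed in the voxel of origin,
    so D_voxel [Gy] = (J/GBq) * A_voxel [GBq] / m_voxel [kg]."""
    voxel_mass_kg = tissue_density_g_per_ml * voxel_volume_ml * 1e-3
    return j_per_gbq * np.asarray(activity_gbq_map) / voxel_mass_kg

# Hypothetical 90Y PET reconstruction: 4 x 4 x 4 mm voxels, 0.5 MBq recovered in one voxel
activity = np.zeros((64, 64, 64))
activity[32, 32, 32] = 5e-4
print(ldm_dose_map_gy(activity, voxel_volume_ml=0.064).max())  # ~370 Gy in that voxel
```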
In a direct comparison of SPECT-based 90Y dosimetry, Monte Carlo-, DVK-, and LDM-based dose estimates were found to be nearly identical for liver activity that is not close to tissue inhomogeneities (e.g., the liver-lung border) [117]. For lung tissue, it was necessary to scale the DVK method to the lower local tissue density of the lung to reach adequate results.
This lack of difference between the LDM and the other models is likely explained by the fact that a SPECT-based reconstruction has a resolution of the same order as the average beta range. The reconstruction can be understood as a convolution of a 'blurring' kernel with a putative perfect activity distribution, obviating the need for a simulation of the beta transport (i.e., further blurring), a step the LDM does not unnecessarily repeat. Indeed, Pasciak and Erwin found for 99mTc-MAA SPECT reconstructions that the LDM outperformed a Monte Carlo-based absorbed dose estimation, due to this effect [118]. Later, this finding was repeated in 90Y PET [119], although in most cases for PET much of the theoretical benefit is obscured by image noise, causing both techniques to have a similar absorbed dose uncertainty. Still, the authors recommend using the LDM in post-radioembolization 90Y PET dosimetry due to its accuracy and ease-of-use [119].
Timing of dosimetry-based treatment planning
Pre- and post-treatment are not the only time points for dosimetry, as Bourgeois et al. report on intra-procedural PET/CT in a case study [120]. They used a 3-step protocol wherein, in the first step, 90Y microspheres with a total activity determined by the BSA model were administered to the patient. For the second step, the patient was transferred for PET/CT imaging. The maximum absorbed dose by normal hepatic parenchyma and the average absorbed dose by the tumor were determined. For one out of the six patients in this study, the absorbed dose by the tumor was below the assumed tumoricidal absorbed dose of 100–120 Gy for HCC (the other five patients reached this threshold with the first infusion, which was based on the BSA model). For this single, undertreated patient, the third step of the protocol was performed, which was a repeated infusion of 90Y microspheres with an optimized activity determined from the quantitative PET/CT data to reach the target tumor absorbed dose. Although the initial treatment planning was based on the suboptimal BSA method, the dosimetry based on the intra-procedural PET/CT scan allowed for activity delivery based on patient-specific physiology at the time of the procedure. The downsides of this 3-step protocol are the increased time and costs and the required access to equipment and personnel.
This last disadvantage might be partly solved by imaging in the intervention room. Walrand et al. describe a camera dedicated to bremsstrahlung SPECT of 90Y [121]. They suggested mounting the gamma camera on a robotic arm to allow SPECT acquisition within a few minutes in the intervention room during the catheterization procedure to optimize the 90Y activity to inject.
Another option for imaging in the intervention room is proposed by Beijst et al. [122]. The authors propose a hybrid imaging system, consisting of an X-ray c-arm combined with gamma imaging capabilities for simultaneous real-time fluoroscopic and nuclear imaging. A slightly modified version of this prototype [123] was shown to be able to accurately estimate LSF of a 99mTc-MAA scout dose in an interventional setting [124]. When this hybrid imaging modality becomes available in the angiography room, it may be possible to move towards 1-day procedures by combining scout and therapy dose in one session.
Using microspheres labeled with a paramagnetic element, like 166Ho, will provide contrast on MRI. It has been shown that the absorbed dose by the tumor and healthy liver can be accurately quantified using a post-treatment MRI scan [108, 112]. Since MRI provides excellent soft-tissue contrast, it would be a well-suited modality for radioembolization guidance as well as evaluation of therapy. The feasibility of fully MR-guided real-time navigation of hepatic catheterization was demonstrated in an animal model [125]. Drawbacks of MR-guided radioembolization are the potentially limited availability of MR scanners, MR-compatible catheters, and guide wires, and the relatively high costs.
Discussion
Recently, several phase III trials failed to show an improvement in progression-free survival and overall survival when radioembolization with SIR-Spheres was combined with first-line treatments. A reason for this might be that the methods for activity prescription which were used in these studies (BSA and MIRD mono-compartment) are barely personalized and are geared towards safety rather than efficacy. More personalized methods (e.g., the partition model, cDVH-based methods) are available. However, there is no consensus as to what absorbed dose thresholds should be prescribed. In this manuscript, we have therefore reviewed the specific shortcomings of the current activity prescription methods and the current state-of-the-art of newer dosimetric methods and understanding of the underlying radiobiology.
Currently, there is a large range in the literature regarding dosimetric limits, both for the TCP and the NTCP (Table 1). We believe that one of the biggest drivers of these diffuse limits is the corresponding wide range in modalities, technical settings, analysis, clinical outcome measures, and relatively small sample sizes. It is therefore nearly impossible to compare data from different studies and distill a common absorbed dose limit, regardless of dosimetric method. This also highlights the importance of investigators providing clear and detailed information on the dosimetric method and analysis used in their publication. This will facilitate reproducibility and may allow for the pooling of clinical data.
With the advent of advanced iterative reconstruction techniques, image quality has improved dramatically in both PET and SPECT [80]. This is mainly due to the incorporation of models for the physics of image formation. In contrast to PET and probably owing to the more complex (underdetermined) nature of the physics in SPECT imaging, dissemination of quantitative reconstruction algorithms started only recently for SPECT [84, 126]. For more complex isotopes (e.g., 90Y bremsstrahlung SPECT), this is still in the research phase and vendor-supported solutions are currently not available. The same holds for more complex image-degrading effects (for both SPECT and PET) such as respiratory motion and compensation for partial volume effects. These developments are beneficial to the goal of personalized dosimetry but are currently not widely available.
However, we believe that using the currently available reconstruction techniques, reliable estimates of dosimetric limits may be established. But in order to achieve comparable results, acquisition and reconstruction settings should be standardized. An initiative similar to the EARL accreditation program for 18FDG PET should lead to reconstructions that are perhaps less than optimal but are at least comparable between patients and institutions, allowing for the pooling of derived dosimetric data. To illustrate this point, it was recently shown that it is feasible to get reliable absorbed dose estimates from 99mTc-MAA, even if attenuation correction is lacking, using only a simple calibration [62]. This lowers the technical demands for more personalized dosimetry. For technical parameters in 90Y PET, the QUEST initiative may be regarded as a step in the right direction [103].
Ideally, this standardization should not be limited to technical parameters of a specific imaging modality, but should also include methods for the segmentation of compartments (or voxels of any VOI), the transferring of volumes (e.g., pre-therapy image delineations transferred to post-therapy images), the selection of relevant clinical outcome measures, and stratification by relevant clinical factors (e.g. tumor size, tumor type, baseline liver function). We believe that this standardized acquisition and analysis pipeline may solve the biggest sources of error in current comparative studies (especially multicenter ones). Furthermore, this standardized protocol can be used for prospective studies in dosimetric limits that are relevant to clinicians.
We expect that the formulation of and adherence to such a standardized protocol would be greatly aided if it is based on guidelines formulated by a panel of experts, ideally sponsored by an authoritative entity such as the EANM, SNMMI, or similar.
In order to further refine personalized dosimetry in clinical practice, new methods need to be fast in terms of scan time, require little manual labor, and be robust and standardized. Some examples of developments in this direction are fully data-driven respiratory motion compensation [127, 128] and fast lung absorbed dose estimation [129]. These methods have in common that they are faster, often more reliable and robust, and more usable in clinical practice than existing methods. As there is a wide range of clinical parameters that have an influence on response to therapy, we believe that currently the biggest challenge for the medical physics community involved in radioembolization is not the improvement of quantitative imaging in itself, but rather the translation and dissemination of the current state-of-the-art into a usable form for practical dosimetric efforts. The potential increase in clinical workload and costs associated with further refined personalized dosimetry should be weighed against the potential gains and should not a priori be considered a barrier to implementation.
Currently, many aspects of fundamental radiobiology in radioembolization are unknown. For example, the group of Chiesa et al. was unexpectedly unable to show a clear increase in predictive power for outcome when using equivalent uniform dose-based measures, as opposed to the average absorbed dose to the tumor or healthy liver tissue [61]. This illustrates that radiobiological model parameters are not well established for this modality and that the precise relation between absorbed dose non-homogeneity at both the macro (voxel) level and sub-millimeter level and tissue response is not well understood. We expect that a better understanding of radiobiology on this level will aid the establishment of a coherent account of the efficacy of radioembolization and enable further refinements in patient selection and/or personalized dose optimization. In that sense, the combination of statistical histological data with models that bridge between micro-distribution and clinically observable macroscopic features in reconstructed data (e.g., the 'mottled look' in 90Y PET) might provide additional insight into (deviations from) dose-response relationships in a wide population of patients and may result in a micro-scale equivalent uniform dose metric.
An improved understanding of radiobiology may also facilitate other concepts from EBRT to be translated to the context of radioembolization. An example is fractionation, which uses tumor repopulation and oxygenation between fractions to increase the tumoricidal effect of subsequent irradiations. For radioembolization, this would mean improved tumor control for multiple vs. the current single treatment (e.g., two times 60 Gy vs. 120 Gy at once). Whether or not this effect can be exploited using radioembolization is an area of future research.
A better understanding of the dose-response relationships will lead to an improved selection of patients for which dose may be increased safely. Currently, the only available particle for treatment planning is 99mTc-MAA, which might be a suboptimal predictor of the subsequent microsphere biodistribution, both in terms of LSF [15, 16] and intrahepatic distribution [35,36,37]. Consequently, a particle that better matches the rheology of the therapeutic microspheres is needed, if radioembolization is to become a true theranostic modality [130]. Several efforts in this direction have been undertaken [131,132,133]. In this context, 166Ho is a promising alternative in that exactly the same particle can be used for both planning and therapy. This was illustrated for the estimation of LSF [15].
Another approach is to apply dosimetry in an interventional setting [120]. For instance, following an AHASA (as high as safely attainable [8]) paradigm during infusion of the microspheres until thresholds for hepatic toxicity are reached.
Furthermore, if dosimetry is found to be sufficiently reliable, EBRT could be used after radioembolization on specific target areas that might have received a sub-optimal absorbed dose from radioembolization.
Together, these developments in homogenization, accessibility, and improved methods will ideally lead to the most personalized and optimal treatment, which we expect to result in improved overall survival and progression-free survival.
Conclusion
A better understanding of dose-response relationships is needed for improved patient selection and dose optimization in radioembolization. To this end, standardization of acquisition, reconstruction, and analysis protocols is needed. Such an effort would greatly benefit from centrally formulated guidelines. This might enable comparison and pooling of clinical data. Disseminating advanced methods from research groups to clinical practice could prove to be useful in this respect.
Abbreviations
99mTc-MAA: 99mTc macroaggregated albumin
BED: Biologically effective dose
BSA: Body surface area
bSPECT: Bremsstrahlung SPECT
cDVH: Cumulative dose-volume histogram
Cholangio: Cholangiocarcinoma
CR: Complete response
CRC: Colorectal cancer
DPK: Dose-point kernel
DVK: Dose-voxel kernel
EBRT: External beam radiotherapy
EUD: Equivalent uniform dose
LDM: Local deposition model
LMER: Linear mixed-effects regression model
LSF: Lung shunt fraction
NET: Neuroendocrine tumor
NTCP: Non-tumor complication probability
PM: Partition model
PR: Partial response
PSF: Point spread function
ROI: Region-of-interest
SC: Sulfur colloid
SD: Stable disease
SF: Surviving fraction
SUV: Standardized uptake value
TCP: Tumor control probability
TLG: Total lesion glycolysis
TN ratio: Tumor-to-normal tissue ratio
TOF: Time of flight
VOI: Volume-of-interest
Gibbs P, Gebski V, Van Buskirk M, Thurston K, Cade DN, Van Hazel GA. Selective internal radiation therapy (SIRT) with yttrium-90 resin microspheres plus standard systemic chemotherapy regimen of FOLFOX versus FOLFOX alone as first-line treatment of non-resectable liver metastases from colorectal cancer: the SIRFLOX study. BMC Cancer. 2014;14:897.
Dutton SJ, Kenealy N, Love SB, Wasan HS, Sharma RA, FOXFIRE Protocol Development Group and the NCRI Colorectal Clinical Study Group. FOXFIRE protocol: an open-label, randomised, phase III trial of 5-fluorouracil, oxaliplatin and folinic acid (OxMdG) with or without interventional selective internal radiation therapy (SIRT) as first-line treatment for patients with unresectable liver-on. BMC Cancer. 2014;14:497.
Van Hazel GA, Heinemann V, Sharma NK, Findlay MPN, Ricke J, Peeters M, et al. SIRFLOX: randomized phase III trial comparing first-line mFOLFOX6 (plus or minus bevacizumab) versus mFOLFOX6 (plus or minus bevacizumab) plus selective internal radiation therapy in patients with metastatic colorectal cancer. J Clin Oncol. 2016;34:1723–31.
Wasan HS, Gibbs P, Sharma NK, Taieb J, Heinemann V, Ricke J, et al. First-line selective internal radiotherapy plus chemotherapy versus chemotherapy alone in patients with liver metastases from colorectal cancer (FOXFIRE, SIRFLOX, and FOXFIRE-global): a combined analysis of three multicentre, randomised, phase 3 trials. Lancet Oncol. 2017;18:1159–71.
Vilgrain V, Pereira H, Assenat E, Guiu B, Ilonca AD, Pageaux GP, et al. Efficacy and safety of selective internal radiotherapy with yttrium-90 resin microspheres compared with sorafenib in locally advanced and inoperable hepatocellular carcinoma (SARAH): an open-label randomised controlled phase 3 trial. Lancet Oncol. 2017;18:1624–36.
Chow PKH, Gandhi M, Tan S-B, Khin MW, Khasbazar A, Ong J, et al. SIRveNIB: selective internal radiation therapy versus sorafenib in Asia-Pacific patients with hepatocellular carcinoma. J Clin Oncol. 2018:JCO201776089. https://doi.org/10.1200/JCO.2017.76.0892.
Tong AKT, Kao YH, Too C, Chin KFW, Ng DCE, Chow PKH. Yttrium-90 hepatic radioembolization: clinical review and current techniques in interventional radiology and personalized dosimetry. Br J Radiol. 2016;89:20150943.
Chiesa C, Sjogreen Gleisner K, Flux G, Gear J, Walrand S, Bacher K, et al. The conflict between treatment optimization and registration of radiopharmaceuticals with fixed activity posology in oncological nuclear medicine therapy. Eur J Nucl Med Mol Imaging. 2017;44:1783–6.
Braat AJAT, Kappadath SC, Bruijnen RCG, van den Hoven AF, Mahvash A, de Jong HWAM, et al. Adequate SIRT activity dose is as important as adequate chemotherapy dose. Lancet Oncol Elsevier Ltd. 2017;18:e636.
Sjögreen Gleisner K, Spezi E, Solny P, Gabina PM, Cicone F, Stokke C, et al. Variations in the practice of molecular radiotherapy and implementation of dosimetry: results from a European survey. EJNMMI Physics. 2017;4:28.
Giammarile F, Bodei L, Chiesa C, Flux G, Forrer F, Kraeber-Bodere F, et al. EANM procedure guideline for the treatment of liver cancer and liver metastases with intra-arterial radioactive compounds. Eur J Nucl Med Mol Imaging. 2011;38:1393–406.
Smits MLJ, Elschot M, Sze DY, Kao YH, Nijsen JFW, Iagaru AH, et al. Radioembolization dosimetry: the road ahead. Cardiovasc Intervent Radiol. 2014;38:261–9.
Ahmadzadehfar H, Sabet A, Biermann K, Muckle M, Brockmann H, Kuhl C, et al. The significance of 99mTc-MAA SPECT/CT liver perfusion imaging in treatment planning for 90Y-microsphere selective internal radiation treatment. J Nucl Med. 2010;51:1206–12.
Sirtex Medical Limited. Sirtex package insert [internet]; 2017. p. 3–5. [cited 2017 Oct 31]. Available from: https://www.sirtex.com/eu/clinicians/package-insert/
Elschot M, Nijsen JFW, Lam MGEH, Smits MLJ, Prince JF, Viergever MA, et al. 99mTc-MAA overestimates the absorbed dose to the lungs in radioembolization: a quantitative evaluation in patients treated with 166Ho-microspheres. Eur J Nucl Med Mol Imaging. 2014;41:1965–75.
Yu N, Srinivas SM, DiFilippo FP, Shrikanthan S, Levitin A, McLennan G, et al. Lung dose calculation with SPECT/CT for 90Yittrium Radioembolization of liver Cancer. Int. J. Radiat. Oncol. Elsevier Inc. 2013;85:834–9.
Kao YH, Magsombol BM, Toh Y, Tay KH, Chow PK, Goh AS, et al. Personalized predictive lung dosimetry by technetium-99m macroaggregated albumin SPECT/CT for yttrium-90 radioembolization. EJNMMI Res. 2014;4:33.
Ho S, Lau WY, Leung TWT, Chan M, Ngar YK, Johnson PJ, et al. Partition model for estimating radiation doses from yttrium-90 microspheres in treating hepatic tumours. Eur J Nucl Med. 1996;23:947–52.
Gulec SA, Mesoloras G, Stabin M. Dosimetric techniques in 90Y-microsphere therapy of liver cancer: the MIRD equations for dose calculations; 2016. p. 1209–12.
Smits MLJ, Nijsen JFW, van den Bosch MAAJ, Lam MGEH, Vente MAD, Huijbregts JE, et al. Holmium-166 radioembolization for the treatment of patients with liver metastases: design of the phase I HEPAR trial. J Exp Clin Cancer Res. 2010;29:70.
Biocompatibles UK Ltd. Package Insert–TheraSphere® Yttrium-90 Glass Microspheres–Rev. 14 [Internet]. 2014. p. 1–21. Available from: https://www.btg-im.com/BTG/media/TheraSphere-Documents/PDF/TheraSphere-Package-Insert_USA_Rev-14.pdf.
Gray B, Van Hazel G, Hope M, Burton M, Moroz P, Anderson J, et al. Randomised trial of SIR-spheres® plus chemotherapy vs. chemotherapy alone for treating patients with liver metastases from primary large bowel cancer. Ann Oncol. 2001;12:1711–20.
Vauthey JN, Abdalla EK, Doherty DA, Gertsch P, Fenstermacher MJ, Loyer EM, et al. Body surface area and body weight predict total liver volume in western adults. Liver Transplant. 2002;8:233–40.
Smits MLJ, Nijsen JFW, van den Bosch MAAJ, Lam MGEH, Vente MAD, Mali WPTM, et al. Holmium-166 radioembolisation in patients with unresectable, chemorefractory liver metastases (HEPAR trial): a phase 1, dose-escalation study. Lancet Oncol. 2012;13:1025–34.
Coldwell D, Sangro B, Salem R, Wasan H, Kennedy A. Radioembolization in the treatment of unresectable liver tumors: experience across a range of primary cancers. Am J Clin Oncol. 2012;35:167–77.
Kennedy AS, McNeillie P, Dezarn WA, Nutting C, Sangro B, Wertman D, et al. Treatment parameters and outcome in 680 treatments of internal radiation with resin 90Y-microspheres for unresectable hepatic tumors. Int J Radiat Oncol Biol Phys. 2009;74:1494–500.
Braat MNGJA, van Erpecum KJ, Zonnenberg BA, van den Bosch MAJ, Lam MGEH. Radioembolization-induced liver disease. Eur J Gastroenterol Hepatol. 2017;29:144–52.
Sze DY, Lam MGEH. Reply to "the limitations of theoretical dose modeling for yttrium-90 radioembolization". J Vasc Interv Radiol. 2014;25:1147–8.
Kao YH, Tan EH, Ng CE, Goh SW. Clinical implications of the body surface area method versus partition model dosimetry for yttrium-90 radioembolization using resin microspheres: a technical review. Ann Nucl Med. 2011;25:455–61.
Flux G, Bardies M, Chiesa C, Monsieurs M, Savolainen S, Strand S-E, et al. Clinical radionuclide therapy dosimetry: the quest for the "holy gray". Eur J Nucl Med Mol Imaging. 2007;34:1699–700.
Lam MGEH, Louie JD, Abdelmaksoud MHK, Fisher GA, Cho-Phan CD, Sze DY. Limitations of body surface area-based activity calculation for radioembolization of hepatic metastases in colorectal cancer. J Vasc Interv Radiol Elsevier. 2014;25:1085–93.
Bernardini M, Smadja C, Faraggi M, Orio S, Petitguillaume A, Desbrée A, et al. Liver selective internal radiation therapy with 90Y resin microspheres: comparison between pre-treatment activity calculation methods. Phys Medica. 2014;30:752–64.
Samim M, van Veenendaal LM, Braat MNGJA, van den Hoven AF, Van Hillegersberg R, Sangro B, et al. Recommendations for radioembolisation after liver surgery using yttrium-90 resin microspheres based on a survey of an international expert panel. Eur Radiol. 2017;27:4923–30.
Gnesin S, Canetti L, Adib S, Cherbuin N, Silva Monteiro M, Bize P, et al. Partition model-based 99mTc-MAA SPECT/CT predictive dosimetry compared with 90Y TOF PET/CT posttreatment dosimetry in radioembolization of hepatocellular carcinoma: a quantitative agreement comparison. J Nucl Med. 2016;57:1672–8.
Wondergem M, Smits MLJ, Elschot M, de Jong HWAM, Verkooijen HM, van den Bosch MAAJ, et al. 99mTc-macroaggregated albumin poorly predicts the intrahepatic distribution of 90Y resin microspheres in hepatic Radioembolization. J Nucl Med. 2013;54:1294–301.
Ilhan H, Goritschan A, Paprottka P, Jakobs TF, Fendler WP, Todica A, et al. Predictive value of 99mTc-MAA SPECT for 90Y-labeled resin microsphere distribution in Radioembolization of primary and secondary hepatic tumors. J Nucl Med. 2015;56:1654–60.
Haste P, Tann M, Persohn S, LaRoche T, Aaron V, Mauxion T, et al. Correlation of technetium-99m macroaggregated albumin and Yttrium-90 glass microsphere biodistribution in hepatocellular carcinoma: a retrospective review of pretreatment single photon emission CT and posttreatment positron emission tomography/CT. J Vasc Interv Radiol Elsevier Inc. 2017;28(5):722–30.
Gulec SA, Mesoloras G, Stabin M. Dosimetric techniques in 90Y-microsphere therapy of liver cancer: the MIRD equations for dose calculations. J Nucl Med. 2006;47:1209–11.
Mikell JK, Mahvash A, Siman W, Baladandayuthapani V, Mourtada F, Kappadath SC. Selective internal radiation therapy with Yttrium-90 glass microspheres: biases and uncertainties in absorbed dose calculations between clinical dosimetry models. Int J Radiat Oncol Biol Phys Elsevier Inc. 2016;96:888–96.
Kao YH, Hock Tan AE, Burgmans MC, Irani FG, Khoo LS, Gong Lo RH, et al. Image-guided personalized predictive dosimetry by artery-specific SPECT/CT partition modeling for safe and effective 90Y radioembolization. J Nucl Med. 2012;53:559–66.
Vogel WV, van Dalen JA, Wiering B, Huisman H, Corstens FHM, Ruers TJM, et al. Evaluation of image registration in PET/CT of the liver and recommendations for optimized imaging. J Nucl Med. 2007;48:910–9.
Boda-Heggemann J, Knopf AC, Simeonova-Chergou A, Wertz H, Stieler F, Jahnke A, et al. Deep inspiration breath hold-based radiation therapy: a clinical review. Int J Radiat Oncol Biol Phys. 2016;94:478–92.
Kruis MF, van de Kamer JB, Belderbos JSA, Sonke J-J, van Herk M. 4D CT amplitude binning for the generation of a time-averaged 3D mid-position CT scan. Phys Med Biol. 2014;59:5517.
Liu C, Pierce IILA, Alessio AM, Kinahan PE. The impact of respiratory motion on tumor quantification and delineation in static PET/CT imaging. Phys Med Biol. 2009;54:7345–62.
McClelland JR, Hawkes DJ, Schaeffter T, King AP. Respiratory motion models: a review. Med Image Anal Elsevier BV. 2013;17:19–42.
Siman W, Mawlawi OR, Mikell JK, Mourtada F, Kappadath SC. Effects of image noise, respiratory motion, and motion compensation on 3D activity quantification in count-limited PET images. Phys Med Biol IOP Publishing. 2017;62:448–64.
Bastiaannet R, Viergever MA, de Jong HWAM. Impact of respiratory motion and acquisition settings on SPECT liver dosimetry for radioembolization. Med Phys. 2017;44:5270–9.
Kessler RM, Ellis JR, Eden M. Analysis of emission tomographic scan data: limitations imposed by resolution and background. J Comput Assist Tomogr. 1984;8:514–22.
King MA, Long DT, Brill AB. SPECT volume quantitation: influence of spatial resolution, source size and shape, and voxel size. Med Phys. 1991;18:1016–24.
Lee JA. Segmentation of positron emission tomography images: some recommendations for target delineation in radiation oncology. Radiother Oncol Elsevier Ireland Ltd. 2010;96:302–7.
Garin E, Lenoir L, Rolland Y, Laffont S, Pracht M, Mesbah H, et al. Effectiveness of quantitative MAA SPECT/CT for the definition of vascularized hepatic volume and dosimetric approach: phantom validation and clinical preliminary results in patients with complex hepatic vascularization treated with yttrium-90-labeled micr. Nucl Med Commun. 2011;32:1245–55.
Garin E, Rolland Y, Lenoir L, Pracht M, Mesbah H, Porée P, et al. Utility of quantitative 99m Tc-MAA SPECT/CT for 90 yttrium-labelled microsphere treatment planning: calculating vascularized hepatic volume and Dosimetric approach. Int J Mol Imaging. 2011;2011:1–8.
Lam MGEH, Goris ML, Iagaru AH, Mittra ES, Louie JD, Sze DY. Prognostic utility of 90Y radioembolization dosimetry based on fusion 99mTc-macroaggregated albumin-99mTc-sulfur colloid SPECT. J Nucl Med. 2013;54:2055–61.
Kao Y-H, Steinberg JD, Tay Y-S, Lim GK, Yan J, Townsend DW, et al. Post-radioembolization yttrium-90 PET/CT - part 2: dose-response and tumor predictive dosimetry for resin microspheres. EJNMMI Res. 2013;3:57.
Eaton BR, Kim HS, Schreibmann E, Schuster DM, Galt JR, Barron B, et al. Quantitative dosimetry for yttrium-90 radionuclide therapy: tumor dose predicts fluorodeoxyglucose positron emission tomography response in hepatic metastatic melanoma. J Vasc Interv Radiol. 2014;25:288–95.
Cremonesi M, Chiesa C, Strigari L, Ferrari M, Botta F, Guerriero F, et al. Radioembolization of hepatic lesions from a radiobiology and dosimetric perspective. Front Oncol. 2014;4:1–20.
Flamen P, Vanderlinden B, Delatte P, Ghanem G, Ameye L, Van Den Eynde M, et al. Multimodality imaging can predict the metabolic response of unresectable colorectal liver metastases to radioembolization therapy with Yttrium-90 labeled resin microspheres. Phys Med Biol. 2008;53:6591–603.
Srinivas SM, Natarajan N, Kuroiwa J, Gallagher S, Nasr E, Shah SN, et al. Determination of radiation absorbed dose to primary liver tumors and normal liver tissue using post-Radioembolization 90Y PET. Front Oncol. 2014;4:255.
Willowson KP, Hayes AR, Chan DLH, Tapner M, Bernard EJ, Maher R, et al. Clinical and imaging-based prognostic factors in radioembolisation of liver metastases from colorectal cancer: a retrospective exploratory analysis. EJNMMI Res. 2017;7:46.
Fowler JF. 21 years of biologically effective dose. Br J Radiol. 2010;83:554–68.
Chiesa C, Mira M, Maccauro M, Spreafico C, Romito R, Morosi C, et al. Radioembolization of hepatocarcinoma with 90Y glass microspheres: development of an individualized treatment planning strategy based on dosimetry and radiobiology. Eur J Nucl Med Mol Imaging. 2015;42:1718–38.
Botta F, Ferrari M, Chiesa C, Vitali S, Guerriero F, De Nile MC, et al. Impact of missing attenuation and scatter corrections on 99mTc-MAA SPECT 3D dosimetry for liver radioembolization using the patient relative calibration methodology: a retrospective investigation on clinical images. Med Phys. 2018;45:1684–98.
Jones LC, Hoban PW. Treatment plan comparison using equivalent uniform biologically effective dose (EUBED). Phys Med Biol. 2000;45:159–70.
Choi H, Charnsangavej C, Faria SC, Macapinlac HA, Burgess MA, Patel SR, et al. Correlation of computed tomography and positron emission tomography in patients with metastatic gastrointestinal stromal tumor treated at a single institution with imatinib mesylate: proposal of new computed tomography response criteria. J Clin Oncol. 2007;25:1753–9.
Kim MN, Kim BK, Han K-H, Kim SU. Evolution from WHO to EASL and mRECIST for hepatocellular carcinoma: considerations for tumor response assessment. Expert Rev Gastroenterol Hepatol. 2015;9:335–48.
Riaz A, Memon K, Miller FH, Nikolaidis P, Kulik LM, Lewandowski RJ, et al. Role of the EASL, RECIST, and WHO response guidelines alone or in combination for hepatocellular carcinoma: radiologic–pathologic correlation. J Hepatol. 2011;54:695–704.
Hipps D, Ausania F, Manas DM, Rose JDG, French JJ. Selective Interarterial radiation therapy (SIRT) in colorectal liver metastases: how do we monitor response? HPB Surg. 2013;2013:570808.
Strigari L, Sciuto R, Rea S, Carpanese L, Pizzi G, Soriani A, et al. Efficacy and toxicity related to treatment of hepatocellular carcinoma with 90Y-SIR spheres: radiobiologic considerations. J Nucl Med. 2010;51:1377–85.
Chiesa C, Mira M, Maccauro M, Romito R, Spreafico C, Sposito C, et al. A dosimetric treatment planning strategy in radioembolization of hepatocarcinoma with 90Y glass microspheres. Q J Nucl Med Mol Imaging. 2012;56:503–9.
Fox RA, Klemp PF, Egan G, Mina LL, Burton MA, Gray BN. Dose distribution following selective internal radiation therapy. Int J Radiat Oncol Biol Phys. 1991;21:463–7.
Yorke ED, Jackson A, Fox RA, Wessels BW, Gray BN. Can current models explain the lack of liver complications in Y-90 microsphere therapy? Clin Cancer Res. 1999;5:3024s–30s.
Walrand S, Hesse M, Chiesa C, Lhommel R, Jamar F. The low hepatic toxicity per Gray of 90Y glass microspheres is linked to their transport in the arterial tree favoring a nonuniform trapping as observed in posttherapy PET imaging. J Nucl Med. 2014;55:135–40.
Högberg J, Rizell M, Hultborn R, Svensson J, Henrikson O, Mölne J, et al. Increased absorbed liver dose in selective internal radiation therapy (SIRT) correlates with increased sphere-cluster frequency and absorbed dose inhomogeneity. EJNMMI Phys. 2015;2:10.
Högberg J, Rizell M, Hultborn R, Svensson J, Henrikson O, Mölne J, et al. Simulation model of microsphere distribution for selective internal radiation therapy agrees with observations. Int J Radiat Oncol. 2016;96:414–21.
Pasciak AS, Bourgeois AC, Bradley YCA. Microdosimetric analysis of absorbed dose to tumor as a function of number of microspheres per unit volume in 90Y Radioembolization. J Nucl Med. 2016;57:1020–6.
Campbell AM, Bailey IH, Burton MA. Analysis of the distribution of intra-arterial microspheres in human liver following hepatic yttrium-90 microsphere therapy. Phys Med Biol. 2000;45:1023–33.
Dewaraja YK, Frey EC, Sgouros G, Brill AB, Roberson P, Zanzonico PB, et al. MIRD pamphlet no. 23: quantitative SPECT for patient-specific 3-dimensional dosimetry in internal radionuclide therapy. J Nucl Med. 2012;53:1310–25.
Pacilio M, Ferrari M, Chiesa C, Lorenzon L, Mira M, Botta F, et al. Impact of SPECT corrections on 3D-dosimetry for liver transarterial radioembolization using the patient relative calibration methodology. Med Phys. 2016;43:4053–64.
Frey EC, Humm JL, Ljungberg M. Accuracy and precision of radioactivity quantification in nuclear medicine images. Semin Nucl Med. 2013;42:208–18.
Bailey DL, Willowson KP. Quantitative SPECT/CT: SPECT joins PET as a quantitative imaging modality. Eur J Nucl Med Mol Imaging. 2014;41:17–25.
Bailey DL, Willowson KP. An evidence-based review of quantitative SPECT imaging and potential clinical applications. J Nucl Med. 2013;54:83–9.
Zeintl J, Vija AH, Yahil A, Hornegger J, Kuwert T. Quantitative accuracy of clinical 99mTc SPECT/CT using ordered-subset expectation maximization with 3-dimensional resolution recovery, attenuation, and scatter correction. J Nucl Med. 2010;51:921–8.
Vija H. Introduction to xSPECT* technology: evolving multi-modal SPECT to become context-based and quantitative [internet]. 2013.
Elschot M, Nijsen JFW, Dam AJ, de Jong HWAM. Quantitative evaluation of scintillation camera imaging characteristics of isotopes used in liver radioembolization. PLoS One. 2011;6:e26174.
Shen S, DeNardo GL, Yuan A, DeNardo DA, DeNardo SJ. Planar gamma camera imaging and quantitation of yttrium-90 bremsstrahlung. J Nucl Med. 1994;35:1381–9.
Rong X, Du Y, Ljungberg M, Rault E, Vandenberghe S, Frey EC. Development and evaluation of an improved quantitative 90Y bremsstrahlung SPECT method. Med Phys. 2012;39:2346.
Elschot M, Lam MGEH, van den Bosch MAAJ, Viergever MA, de Jong HWAM. Quantitative Monte Carlo-based 90Y SPECT reconstruction. J Nucl Med. 2013;54:1557–63.
Minarik D, Sjögreen Gleisner K, Ljungberg M. Evaluation of quantitative (90)Y SPECT based on experimental phantom studies. Phys Med Biol. 2008;53:5689–703.
Lhommel R, Goffette P, Van Den Eynde M, Jamar F, Pauwels S, Bilbao JI, et al. Yttrium-90 TOF PET scan demonstrates high-resolution biodistribution after liver SIRT. Eur J Nucl Med Mol Imaging. 2009;36:1696.
Gates VL, Esmail AAH, Marshall K, Spies S, Salem R. Internal pair production of 90Y permits hepatic localization of microspheres using routine PET: proof of concept. J Nucl Med. 2011;52:72–6.
Carlier T, Eugène T, Bodet-Milin C, Garin E, Ansquer C, Rousseau C, et al. Assessment of acquisition protocols for routine imaging of Y-90 using PET/CT. EJNMMI Res. 2013;3:11.
Gates VL, Salem R, Lewandowski RJ. Positron emission tomography/CT after yttrium-90 radioembolization: current and future applications. J. Vasc. Interv. Radiol. Elsevier. 2013;24:1153–5.
Willowson K, Forwood N, Jakoby BW, Smith AM, Bailey DL. Quantitative 90 Y image reconstruction in PET. Med Phys. 2012;39:7153–9.
Wright CL, Binzel K, Zhang J, Wuthrick EJ, Knopp MV. Clinical feasibility of 90Y digital PET/CT for imaging microsphere biodistribution following radioembolization. Eur J Nucl Med Mol Imaging. 2017;44(7):1194–7.
Fourkal E, Veltchev I, Lin M, Koren S, Meyer J, Doss M, et al. 3D inpatient dose reconstruction from the PET-CT imaging of 90Y microspheres for metastatic cancer to the liver: feasibility study. Med Phys. 2013;40:081702.
Song YS, Paeng JC, Kim H-C, Chung JW, Cheon GJ, Chung J-K, et al. PET/CT-based dosimetry in 90Y-microsphere selective internal radiation therapy. Medicine (Baltimore). 2015;94:e945.
Ng SC, Lee VH, Law MW, Liu RK, Ma VW, Tso WK, et al. Patient dosimetry for 90Y selective internal radiation treatment based on 90Y PET imaging. J Appl Clin Med Phys. 2013;14:212–21.
Carlier T, Willowson KP, Fourkal E, Bailey DL, Doss M, Conti M. 90 Y -PET imaging: exploring limitations and accuracy under conditions of low counts and high random fraction. Med Phys. 2015;42:4295–309.
Strydhorst J, Carlier T, Dieudonne A, Conti M, Buvat I. A gate evaluation of the sources of error in quantitative 90Y PET. Med Phys. 2016;43:5320–9.
van Elmbt L, Vandenberghe S, Walrand S, Pauwels S, Jamar F. Comparison of yttrium-90 quantitative imaging by TOF and non-TOF PET in a phantom of liver selective internal radiotherapy. Phys Med Biol. 2011;56:6759–77.
Eldib M, Oesingmann N, Faul DD, Kostakoglu L, Knešaurek K, Fayad ZA. Optimization of yttrium-90 PET for simultaneous PET/MR imaging: a phantom study. Med Phys. 2016;43:4768–74.
Boellaard R, Delgado-Bolton R, Oyen WJG, Giammarile F, Tatsch K, Eschner W, et al. FDG PET/CT: EANM procedure guidelines for tumour imaging: version 2.0. Eur J Nucl Med Mol Imaging. 2014;42:328–54.
Willowson KP, Tapner M, The QUEST Investigator Team, Bailey DL, Willowson KP, Tapner MJ, et al. A multicentre comparison of quantitative 90Y PET/CT for dosimetric purposes after radioembolization with resin microspheres: the QUEST phantom study. Eur J Nucl Med Mol Imaging. 2015;42:1202–22.
Padia SA, Alessio A, Kwan SW, Lewis DH, Vaidya S, Minoshima S. Comparison of positron emission tomography and bremsstrahlung imaging to detect particle distribution in patients undergoing yttrium-90 radioembolization for large hepatocellular carcinomas or associated portal vein thrombosis. J Vasc Interv Radiol. 2013;24:1147–53.
Kao YH, Tan EH, Ng CE, Goh SW. Yttrium-90 time-of-flight PET/CT is superior to bremsstrahlung SPECT/CT for postradioembolization imaging of microsphere biodistribution. Clin Nucl Med. 2011;36:e186–7.
Elschot M, Vermolen BJ, Lam MGEH, de Keizer B, van den Bosch MAAJ, de Jong HWAM. Quantitative comparison of PET and bremsstrahlung SPECT for imaging the in vivo Yttrium-90 microsphere distribution after liver Radioembolization. PLoS One. 2013;8:e55742.
Seevinck PR, Seppenwoolde J-H, de Wit TC, Nijsen JFW, Beekman FJ, van Het Schip AD, et al. Factors affecting the sensitivity and detection limits of MRI, CT, and SPECT for multimodal diagnostic and therapeutic agents. Anti Cancer Agents Med Chem. 2007;7:317–34.
Van De Maat GH, Seevinck PR, Elschot M, Smits MLJ, De Leeuw H, Van Het Schip AD, et al. MRI-based biodistribution assessment of holmium-166 poly(L-lactic acid) microspheres after radioembolisation. Eur Radiol. 2013;23:827–35.
Elschot M, Smits MLJ, Nijsen JFW, Lam MGEH, Zonnenberg BA, van den Bosch MAAJ, et al. Quantitative Monte Carlo-based holmium-166 SPECT reconstruction. Med Phys. 2013;40:112502.
de Wit TC, Xiao J, Nijsen JFW, van het Schip AD, Staelens SG, van Rijk PP, et al. Hybrid scatter correction applied to quantitative holmium-166 SPECT. Phys Med Biol. 2006;51:4773–87.
Smits MLJ, Elschot M, Van Den Bosch MAAJ, Van De Maat GH, Van Het Schip AD, Zonnenberg BA, et al. In vivo dosimetry based on SPECT and MR imaging of 166 ho-microspheres for treatment of liver malignancies. J Nucl Med. 2013;54:2093–100.
Kawrakow I. The EGSnrc code system: Monte Carlo simulation of Electron and photon transport. Med Phys. 2007;34:4818–53.
Goorley JT, James MR, Booth TE, Brown FB, Bull JS, Cox LJ, et al. Initial MCNP6 Release Overview - MCNP6 version 1.0. Los Alamos National Lab. 2013. https://doi.org/10.2172/1086758. Accessed 26 June 2018.
Andersen V, Ballarini F, Battistoni G, Cerutti F, Empl A, Fassò A, et al. The application of FLUKA to dosimetry and radiation therapy. Radiat Prot Dosim. 2005;116:113–7.
Sarrut D, Bardiès M, Boussion N, Freud N, Jan S, Létang J-M, et al. A review of the use and potential of the GATE Monte Carlo simulation code for radiation therapy and dosimetry applications. Med Phys. 2014;41:064301.
Dieudonne A, Hobbs RF, Lebtahi R, Maurel F, Baechler S, Wahl RL, et al. Study of the impact of tissue density heterogeneities on 3-dimensional abdominal dosimetry: comparison between dose kernel convolution and direct Monte Carlo methods. J Nucl Med. 2013;54:236–43.
Mikell JK, Mahvash A, Siman W, Mourtada F, Kappadath SC. Comparing voxel-based absorbed dosimetry methods in tumors, liver, lung, and at the liver-lung interface for 90Y microsphere selective internal radiation therapy. EJNMMI Phys. 2015;2:16.
Pasciak AS, Erwin WD. Effect of voxel size and computation method on Tc-99m MAA SPECT/CT-based dose estimation for Y-90 microsphere therapy. IEEE Trans Med Imaging. 2009;28:1754–8.
Pasciak AS, Bourgeois AC, Bradley YC. A comparison of techniques for (90)Y PET/CT image-based dosimetry following Radioembolization with resin microspheres. Front Oncol. 2014;4:121.
Bourgeois AC, Chang TT, Bradley YC, Acuff SN, Pasciak AS. Intraprocedural yttrium-90 positron emission tomography/CT for treatment optimization of yttrium-90 radioembolization. J. Vasc. Interv. Radiol. Elsevier. 2014;25:271–5.
Walrand S, Hesse M, Demonceau G, Pauwels S, Jamar F. Yttrium-90-labeled microsphere tracking during liver selective internal radiotherapy by bremsstrahlung pinhole SPECT: feasibility study and evaluation in an abdominal phantom. EJNMMI Res. 2011;1:32.
Beijst C, Elschot M, Viergever MA, de Jong HWAM. Toward simultaneous real-time fluoroscopic and nuclear imaging in the intervention room. Radiology. 2016;278:232–8.
Van Der Velden S, Beijst C, Viergever MA, De Jong HWAM. Simultaneous fluoroscopic and nuclear imaging: impact of collimator choice on nuclear image quality: impact. Med Phys. 2017;44:249–61.
van der Velden S, Bastiaannet R, Braat AJAT, Lam MGEH, Viergever MA, de Jong HWAM. Estimation of lung shunt fraction from simultaneous fluoroscopic and nuclear images. Phys Med Biol. 2017;62:8210–25.
Seppenwoolde JH, Bartels LW, Van Der Weide R, Nijsen JFW, Van Het Schip AD, Bakker CJG. Fully MR-guided hepatic artery catheterization for selective drug delivery: a feasibility study in pigs. J Magn Reson Imaging. 2006;23:123–9.
Ritt P, Vija H, Hornegger J, Kuwert T. Absolute quantification in SPECT. Eur J Nucl Med Mol Imaging. 2011;38:69–77.
Büther F, Vehren T, Schäfers KP, Schäfers M. Impact of data-driven respiratory gating in clinical PET. Radiology. 2016;281:229–38.
Kesner AL, Chung JH, Lind KE, Kwak JJ, Lynch D, Burckhardt D, et al. Validation of software gating: a practical Technology for Respiratory Motion Correction in PET. Radiology. 2016;281:239–48.
Bastiaannet R, Van Der Velden S, Lam M, Viergever M, de Jong H. Fast quantitative determination of lung shunt fraction using orthogonal planar projections in hepatic radioembolization. J Nucl Med Soc Nuclear Med. 2016;57:537.
Eberlein U, Cremonesi M, Lassmann M. Individualized dosimetry for Theranostics: necessary, nice to have, or counterproductive? J Nucl Med. 2017;58:97S–103S.
Selwyn RG, Avila-Rodriguez MA, Converse AK, Hampel JA, Jaskowiak CJ, McDermott JC, et al. 18F-labeled resin microspheres as surrogates for 90Y resin microspheres used in the treatment of hepatic tumors: a radiolabeling and PET validation study. Phys Med Biol. 2007;52:7397–408.
Schiller E, Bergmann R, Pietzsch J, Noll B, Sterger A, Johannsen B, et al. Yttrium-86-labelled human serum albumin microspheres: relation of surface structure with in vivo stability. Nucl Med Biol. 2008;35:227–32.
Grosser O, Ruf J, Kupitz D, Pethe A, Ulrich G, Genseke P, et al. Pharmacokinetics of 99mTc-MAA- and 99mTc-HSA-Microspheres Used in Preradioembolization Dosimetry: Influence on the Liver-Lung Shunt. J Nucl Med. 2016;57(6):925–7.
Kappadath SC, Mikell J, Balagopal A, Baladandayuthapani V, Kaseb A, Mahvash A. Hepatocellular carcinoma tumor dose response following 90 Y-radioembolization with glass microspheres using 90 Y-SPECT/CT based voxel dosimetry. Int. J. Radiat. Oncol. Elsevier Inc. 2018. https://doi.org/10.1016/j.ijrobp.2018.05.062.
Garin E, Lenoir L, Edeline J, Laffont S, Mesbah H, Porée P, et al. Boosted selective internal radiation therapy with 90Y-loaded glass microspheres (B-SIRT) for hepatocellular carcinoma patients: a new personalized promising concept. Eur J Nucl Med Mol Imaging. 2013;40:1057–68.
Garin E, Lenoir L, Rolland Y, Edeline J, Mesbah H, Laffont S, et al. Dosimetry based on 99mTc-macroaggregated albumin SPECT/CT accurately predicts tumor response and survival in hepatocellular carcinoma patients treated with 90Y-loaded glass microspheres: preliminary results. J Nucl Med. 2012;53:255–63.
Chan KT, Alessio AM, Johnson GE, Vaidya S, Kwan SW, Monsky W, et al. Prospective trial using internal pair-production positron emission tomography to establish the Yttrium-90 radioembolization dose required for response of hepatocellular carcinoma. Int J Radiat Oncol Elsevier Inc. 2018;101:358–65.
Fowler KJ, Maughan NM, Laforest R, Saad NE, Sharma A, Olsen J, et al. PET/MRI of hepatic 90Y microsphere deposition determines individual tumor response. Cardiovasc Intervent Radiol. 2016;39:855–64.
Flamen P, Vanderlinden B, Delatte P, Ghanem G, Ameye L, Van Den Eynde M, et al. Corrigendum: multimodality imaging can predict the metabolic response of unresectable colorectal liver metastases to radioembolization therapy with Yttrium-90 labeled resin microspheres (2008 Phys. Med. Biol. 53 6591–603). Phys Med Biol. 2014;59:2549–51.
van den Hoven AF, Rosenbaum CENM, Elias SG, de Jong HWAM, Koopman M, Verkooijen HM, et al. Insights into the dose-response relationship of Radioembolization with resin 90Y-microspheres: a prospective cohort study in patients with colorectal Cancer liver metastases. J Nucl Med. 2016;57:1014–9.
Chansanti O, Jahangiri Y, Matsui Y, Adachi A, Geeratikun Y, Kaufman JA, et al. Tumor dose response in Yttrium-90 resin microsphere embolization for neuroendocrine liver metastases: a tumor-specific analysis with dose estimation using SPECT-CT. J Vasc Interv Radiol SIR. 2017;28(11):1528–35.
This work was supported in part by a research grant from Siemens Medical Solutions (HWAMdJ). Siemens did not participate in the design of the study, the collection, analysis, and interpretation of the data nor in the writing of the manuscript.
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement no. [646734]).
Department of Radiology and Nuclear Medicine, University Medical Center Utrecht, Room E01.132, P.O. Box 85500, 3508 GA Utrecht, The Netherlands
Remco Bastiaannet, Britt Kunnen, Arthur J. A. T. Braat, Marnix G. E. H. Lam & Hugo W. A. M. de Jong
Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, 1155 Pressler St, Unit 1352, Houston, TX 77030, USA
S. Cheenu Kappadath
HWAMdJ, MGEHL, and SCK designed the initial outline of the manuscript. RB drafted the manuscript with input from all authors. BK has written a section of the manuscript. AJATB has provided written text as well as clinical examples. All authors read and approved the final manuscript.
Correspondence to Remco Bastiaannet.
For this type of study, formal consent is not required and informed consent is not applicable.
MGEHL is a consultant for BTG International and Terumo and has received research support from Quirem Medical. The Department of Radiology and Nuclear Medicine of the UMC Utrecht receives royalties from Quirem Medical.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Bastiaannet, R., Kappadath, S.C., Kunnen, B. et al. The physics of radioembolization. EJNMMI Phys 5, 22 (2018) doi:10.1186/s40658-018-0221-z
Radiobiological model
Dose-effect relationship
Imaging and dosimetry for radionuclide based therapy | CommonCrawl |
Trapping and spreading properties of quantum walk in homological structure
Takuya Machida, Etsuo Segawa
We attempt to extract a homological structure of two kinds of graphs by the Grover walk. The first one consists of a cycle and two semi-infinite lines, and the second one is assembled by a periodic embedding of the cycles in $\mathbb{Z}$. We show that both of them have essentially the same eigenvalues induced by the existence of cycles in the infinite graphs. The eigenspace of the homological structure appears as so-called localization in the Grover walks, in which the walk is partially trapped by the homological structure. On the other hand, the difference of the absolutely continuous part of the spectrum between them provides different behaviors. We characterize the behaviors by the density functions in the weak convergence theorem: the first one is the delta measure at the bottom, while the second one is expressed by two kinds of continuous functions, which have different finite supports $(-1/\sqrt{10},1/\sqrt{10})$ and $(-2/7,2/7)$, respectively.
Quantum Information Processing
Homological structure
Linear spreading
Localization
Quantum walk
Spectral mapping
Modelling and Simulation
Machida, T., & Segawa, E. (2015). Trapping and spreading properties of quantum walk in homological structure. Quantum Information Processing, 14(5), 1539-1558. https://doi.org/10.1007/s11128-014-0819-6
Determine the length and breadth of a rectangle if the length is 3cm less than twice the breadth, and the perimeter is 18
Trigonometry Help please
1.Explain the relationship(s) among angle measure in degrees, angle measure in radians, and arc length.
asked by Ryan on February 20, 2016
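In symbols (θ the central angle, r the radius, s the arc length), the relationship asked about here is
\[ \theta_{\mathrm{rad}} = \theta_{\mathrm{deg}}\cdot\frac{\pi}{180}, \qquad s = r\,\theta_{\mathrm{rad}}, \]
so, for example, 90° equals π/2 rad and subtends an arc of length πr/2.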
A brass rod is 2 m long at a certain temperature. What is its length after a temperature rise of 100 K, if the expansivity of brass is 18×10⁻⁶ K⁻¹?
asked by Kate on December 27, 2018
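A worked sketch, assuming the quoted linear expansivity holds over the whole temperature rise:
\[ \Delta L = \alpha L_0 \Delta T = (18\times10^{-6}\ \mathrm{K^{-1}})(2\ \mathrm{m})(100\ \mathrm{K}) = 3.6\times10^{-3}\ \mathrm{m}, \]
giving a new length of about 2.0036 m.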
If it requires 5.5 J of work to stretch a particular spring by 2.0 cm from its equilibrium length, how much more work will be required to stretch it an additional 4.1 cm?
asked by Bill on November 12, 2011
heart and lungs/cardiorespiratory fitness
what is the ideal length of time for a cool-down following an intense workout a.0 to 3 minutes b.15 to 20 minutes c.3 to 5 minutes d.5 to 15 minutes my answer is d
asked by susue on February 5, 2014
b = 2, c = √13. Trying to find the length of the 3rd side. It is a right triangle with A on top, 2 on the right side and √13 on the left.
asked by Lucy on May 5, 2012
A new unit of length is equal to 10 m. The area of a m square expressed in terms of the new unit has a magnitude of (1) 0.05 (2) 0.50 (3) 5.05 (4) 5.00
asked by Mimi on June 29, 2018
Someone owns a rectangular compound of length 40 meters and width 25 meters. What is the area of this compound in ares?
asked by jack on January 20, 2012
Points P(1, 5) and Q(4, 1) on a coordinate grid represent side PQ of triangle PQR. What is the length of side PQ of the triangle?
asked by Billy on November 11, 2011
A pond is 36 m long. A duck swims 4 lengths of the pond each day. How far does the duck swim in 4 weeks?
asked by Cortney on September 7, 2018
The equation of line l is 3x-4y=24. the line intersects the x -axis at A and the y -axis at B. GIVEN THAT M is the point (4,-3) and O is the origin . find the length of AB .
asked by asif on October 5, 2014
The diagonal of a TV set is 26 inches long. Its length is 14 inches more than the height. Find the dimensions of the TV set.
asked by jake on February 21, 2016
Karyn cuts a length of ribbon into 4 equal pieces, each 1 1/4 feet long. How long was the ribbon?
asked by taniya on January 27, 2015
A right triangle has a base that is 2 more than twice the height. Find the length of the base for the triangle. If the area is 30 square units.
asked by Raj on January 31, 2011
FAST Angle Help! (Math)
Which value could be the length of the missing side of the triangle? (the side of the triangle has 5, the bottom has a 12 and the top has a X) PLZ I NEED HELP FAST!!!!!!!!!!!!
asked by Agala on January 11, 2017
the velocity of sound in air is 332m/s. if the unit of length is km and the unit of time is hour,what is the value of velocity?
asked by sam on July 28, 2011
A rectangular garden covers 16,000 ft2. Its length is 60 ft longer than its width. What is the width of the garden?
asked by Annonymous on November 17, 2015
The base of a triangle is 11. The other two sides are integers and one of the sides is twice as long as the other. What is the shortest possible length of a side of the triangle?
asked by john on August 7, 2014
the lengths of the three sides of a right triangle R have odd integer values and two of the three sides have lengths 3 and 5. What is one possible length of the third side?
asked by max on July 23, 2013
A roll of ribbon is cut into 28 pieces, each 3/4 m long. What was the TOTAL length of ribbon on the roll?
asked by indira on February 28, 2017
right triangle has an acute angle of 24.8 and an adjacent length of 145 what is the side opposite of the acute angle
asked by lo on February 14, 2012
A pendulum has a length of 1.35m. What mass should be placed on a spring with a spring constant of 10.0N/m to have oscillations with the same period as the pendulum.
asked by Joe on November 4, 2012
A wire of nichrome of length 80cm has a resistance of 50 ohms. if the wire is four folded, find new resistance.
asked by vanditha on August 29, 2016
The base of the triangle is 17. The other two sides are integers and one of the sides is twice as long as the other. What is the longest possible length of the side of the triangle.
asked by Anonymous on August 6, 2015
Two chords intersect inside a circle. The lengths of the segment of one chord are 4 and 6. The length of the segment of the other chords are 3 and what?
asked by Rhonda on September 23, 2011
A rhombus has diagonals of length 4 and 10. Find the angles of the rhombus to the nearest degree. would the angles be 136 and 44?
asked by anna on March 10, 2008
A rectangular sandbox has a length of 60 inches, a width of 40 inches, and a depth of 6 inches. What is the volume in cubic inches?
asked by Anne on August 2, 2012
A chord is 2 cm from the centre of a circle. If the radius of the circle is 5 cm, find the length of the chord.
asked by obaji on July 13, 2014
Math plz help it only one question
What is the volume of the rectangular prism? (there is a rectangular prism with a height of 7 mm width of 9.6 mm and a length of 5 mm) Plz help and can you show me how its done? thanks!
asked by I LOVE PHINES AND FERB on April 3, 2016
the diagonals of a rhombus are in the ratio 3:4. if its perimeter is 40cm then find the length of the sides and diagonals of the rhombus
asked by Sneha on June 4, 2017
a brass rod is 2 m long at a certain temperature. what is its length for a temperature rise of 100 K, if the expansivity of brass is 18 *10^-6K-1?
asked by Edward on March 7, 2015
how do you find the area of a triangle? 1/2 x B x H (one half times the base length times the height of the triangle)
asked by jules on February 13, 2007
The perimeter of a rectangular field is 424 feet. The length is 4 feet more than the width. find the width.
asked by Tina on September 27, 2008
The length of a track around a football field is 1/4 mile. How many miles do you walk if you walk 2 3/4 times around the track?
asked by Mia on November 3, 2011
Find the length of segment BC if segment BC is parallel to segment DE and segment DC is a midsegment of triangle ABC. A(-3,4) E(4,3) D(1,1) B and C do not have coordinates
asked by Jillian on January 5, 2010
Find the perimeter of a rectangular area with a length of 13 inches and a width of 7 inches. with distribution and without distribution
asked by You know it on November 11, 2018
In the accompanying diagram of triangle ABC side DE is parallel to AC. If BD is 8 and BA is 18, that is all on side d is the midpoint of BA and BC is 27 with the midpoint E what is the length of BE??
asked by Michelle on June 8, 2010
The thompson family's new deck is 19 2/5 feet long. There is 12 inches in 1 foot. What is the length of the deck in inches?
asked by Anonymous on March 4, 2013
Jose is planning to spread fertilizer on his rectangular yard.His yard has a length of 40 ft and width of 25 ft.How much fertilizer does he need
asked by jay on June 13, 2017
The angle between two tangents from a point to a circle is 82 degree.What is the length of one of these tangents if the radius of the circle is 80mm?
asked by Shane on March 3, 2013
If a triangle has angle measures 15, 120 and 45, and the length of the side between the 15 and 120 angles is 8, what is the area of the triangle?
asked by Linda on February 10, 2011
The base of a triangle is 3 cm greater than the height. The area is 14cm^2. Find the length and height of the base.
asked by Sadie on June 2, 2009
The length of AB is 1 mulliken (10 inches) on ΔABC. Calculate the lengths of AC = _____________ mullikens and BC = _____________ mullikens.
asked by a on September 10, 2015
Two mechanical shafts are joined together at their ends. Together, their total length is 17 2/9 ft. If one shaft is 7 5/9 ft long. How long must the other shaft be?
asked by donny on May 4, 2014
In a diagram of a scalene triangle ABC, D and E are the midpoints of AB and AC, and DE = 7. Find the length of BC.
asked by melinda on December 5, 2010
A brass rod is 2 m long at a certain temperature. What is its length for a temperature rise of 100 K, if the expansivity of brass is 18×10⁻⁶ K⁻¹?
asked by Maro on January 3, 2017
point A is located at (1,5) and point B is located at (3,2). What is the length of line segment AB in simplest radical form
asked by shay on February 15, 2012
the base of an isosceles triangle is 27dm.if the length of a leg is 11 more than one-third of the base,find the perimeter of the triangle?
asked by annalou paradillo on September 19, 2013
A square has a side of length 2 inches. How long is a side on a square that has twice the area? please answer and explain
asked by Thomas on January 5, 2014
If it requires 7.0 of work to stretch a particular spring by 2.1 from its equilibrium length, how much more work will be required to stretch it an additional 3.7?
asked by Stew on November 1, 2010
Two rods of equal length have their ends maintained at temperatures T1 and T2. What is the condition for equal flow of heat through them?
asked by gayathri on November 23, 2016
the side of a regular hexagon is twice the square root of its apothem. find the apothem and side length.
asked by Sofia on January 12, 2011
the base of a triangle is 5cm greater than the height. The area is 33cm^2. What is the height and the length of the base?
asked by Armelia on July 23, 2010
a 50 foot length of rope was cut into two pieces. the first piece is 5 feet more than twice the second. what are the lengths of the two pieces of rope?
asked by alondra on May 6, 2013
A rectangular prism has a volume of 6,090 cm3, a width of 14 cm, and a length of 15 cm. What is the height of the rectangular prism?
asked by SHANNON on November 11, 2010
The perimeter of rectangular playground is 200 feet. If the length is 5 feet less than twice the width, what are dimensions of the playground
asked by lisa on March 8, 2012
Algebra/ Pre Calc
The area of a circle is 78.5 square centimeters, and a subtending arc on the circle has an arc length of 6π. The estimated value of π is 3.14.
asked by Amy on July 7, 2015
A plumber has a 2 meter length of pipe. He needs to cut it into sections that are 10 centimeters long. How many sections will he be able to cut?
asked by kimmy plus on February 19, 2014
A fence is 20 ft long. it has posts at each end and at every four feet along its length. how many fence posts are there? Draw a picture.
asked by Derek on May 20, 2013
A parallelogram has an area of 8x^2 - 2x -45. The height of the parallelogram is 4x + 9. I know the formula for a parallelogram is A=bh. Please work and explain how I get the length of the base of the parallelogram. Thanks
asked by Jane on March 27, 2013
Find a polynomial for the perimeter and for the area. The dimensions are: Width: b+3, Length: b. The perimeter is: The area is: Do not factor. Please help.
asked by Kaui on March 2, 2009
The sides of a triangle are three consecutive odd integers. The perimeter of the triangle is 39 inches. What is the length of each of the three sides?
asked by ANONYMOUS on January 3, 2013
a 41 inch ribbon was cut into five shorter ribbons of equal length how long was each shorter ribbon
asked by george on December 7, 2011
Thirty-three small communities in Connecticut (population near 10,000 each) gave an average of x = 138.5 reported cases of larceny per year. Assume that σ is known to be 42.9 cases per year. (a) Find a 90% confidence interval for the population mean
asked by Amy on December 3, 2016
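A worked sketch for part (a), assuming σ is known so the normal (z) interval applies:
\[ \bar{x} \pm z_{0.95}\,\frac{\sigma}{\sqrt{n}} = 138.5 \pm 1.645\cdot\frac{42.9}{\sqrt{33}} \approx 138.5 \pm 12.3, \]
i.e., roughly (126.2, 150.8) reported cases per year.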
A rectangular piece of tin has an area of 1334 square inches. A square tab of 3 inches is cut from each corner, and the ends and sides are turned up to make an open box. If the volume of the box is 2760 cubic inches, what were the original dimensions of
asked by James on June 13, 2016
An infinitely long thin metal strip of width w=12cm carries a current of I=10A that is uniformly distributed across its cross section. What is the magnetic field at point P a distance a=3cm above the center of the strip? I have tried using the integration
asked by Joel on April 4, 2014
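One way to set up that integration is to treat the strip as parallel wires, each carrying dI = (I/w) dx, and keep only the field component parallel to the strip at the point a height a above its center:
\[ B = \int_{-w/2}^{w/2}\frac{\mu_0 I a\,dx}{2\pi w\,(x^{2}+a^{2})} = \frac{\mu_0 I}{\pi w}\arctan\frac{w}{2a} \approx \frac{(4\pi\times10^{-7})(10)}{\pi\,(0.12)}\arctan(2) \approx 3.7\times10^{-5}\ \mathrm{T}. \]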
At one time, television sets used "rabbit-ears" antennas. Such an antenna consists of a pair of metal rods. The length of each rod can be adjusted to be one-sixth of a wavelength of an electromagnetic wave whose frequency is 65.0 MHz. How long is each rod?
asked by Mary on October 13, 2007
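A worked sketch, taking c = 3.00×10⁸ m/s:
\[ L = \frac{\lambda}{6} = \frac{c}{6f} = \frac{3.00\times10^{8}\ \mathrm{m/s}}{6\,(65.0\times10^{6}\ \mathrm{Hz})} \approx 0.77\ \mathrm{m}. \]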
Essay.Show all work. A gardener wants to create a rectangular garden with the length of 3x-2y ft. and the width of 3x-3y ft. What is an algebraic expression for the area of the garden.Be sure to multiply this out.Express in the simpliest correct
asked by Lueshelle on April 11, 2012
physics - energy question
A simple pendulum, 2.0 m in length, is released with a push when the support string is at an angle of 25° from the vertical. If the initial speed of the suspended mass is 1.2 m/s when at the release point, what is its speed at the bottom of the swing? (g
asked by icy on March 26, 2008
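A worked sketch using energy conservation, assuming g = 9.8 m/s² and no losses:
\[ v = \sqrt{v_0^{2} + 2gL(1-\cos\theta)} = \sqrt{(1.2)^{2} + 2(9.8)(2.0)(1-\cos 25^{\circ})} \approx 2.3\ \mathrm{m/s}. \]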
Math (calc)
An open box with a square base is to have a volume of 12ft^3. Find the box dimensions that minimize the amount of material used. (round to two decimal places). it asks for the side length and the height. Please help asap due in a few hours. Thank you
asked by Kyle on February 8, 2017
Suppose an airline policy states that all the baggage must be boxed shaped with a sum of length, width, and height not exceeding 138 inches. What are the dimensions and volume of a square based box with the greatest volume under these conditions.
asked by Jahaira on May 9, 2017
Science(physics)
A spring 20cm long is stretched to 25cm by a load of 50N. What will be its length when stretched by 100N assuming that the elastic limit is not reached? I want the answer abd solving to this question and also the formula for young modulus of elasticity
asked by Rachel on January 19, 2016
An object is placed 5 cm in front of a concave lens of focal length 7 cm. Find the image location by drawing a ray tracing diagram to scale. Verify your answer using the lens equation. I am confused on what numbers would be negative in my formula of
asked by Frank German on December 5, 2014
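A worked sketch with the thin-lens equation, using the convention that f is negative for a diverging lens:
\[ \frac{1}{d_i} = \frac{1}{f} - \frac{1}{d_o} = -\frac{1}{7} - \frac{1}{5} = -\frac{12}{35} \;\Rightarrow\; d_i \approx -2.9\ \mathrm{cm}, \]
i.e., a virtual, upright, reduced image on the same side of the lens as the object.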
All of the following factors are related to the plate tectonic model of the western coast of South America except the 1) length of the coastline 2) pattern of the earthquake activity 3) location of volcanoes 4) density of crustal plates 5) location of
asked by nathalia on June 21, 2016
Bottom part of the greenhouse has a length of 350cm a width of 220cm and a height of 250cm. The angle at the peak of the roof measures 90 degrees. Sketch the frame and label it with its actual dimensions. How do I calculate the actual demensions?
I have another question (studying for finals, haha). Let r(t) = e^t cos t i + e^t sin t j be parametric equations of a curve C. Find the length of C from t = 0 to t = π. So the equation I am using is the integral from a to b (in this case 0 to π) of the
asked by rebecca on December 7, 2014
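Completing that integral: with r'(t) = e^t(cos t − sin t) i + e^t(sin t + cos t) j, one finds |r'(t)| = √2 e^t, so
\[ L = \int_{0}^{\pi} \sqrt{2}\,e^{t}\,dt = \sqrt{2}\,(e^{\pi}-1). \]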
A Japanese fan can be made by sliding open its 7 small sections (or leaves), which are each in the form of sectors of a circle having central angle of 15. If the radius of this fan is 24 cm, find out the length of the lace that is required to cover its
asked by Niel on March 15, 2014
An electrician charges $50 for the first half-hour of work and $30 each hour for additional time. Stoney Point High School budgeted $200 to repair the refrigeration system. For what length of service call will the budget not be exceeded?
asked by Monica on February 21, 2011
A 240 g mass is attached to a spring of constant k = 5.4 N/m and set into oscillation with amplitude A = 26 cm. (a) Determine the frequency of the system in hertz. (b) Determine the period. (c) Determine the maximum velocity of the mass. (d) Determine the
asked by Jill on November 11, 2012
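A worked sketch (m = 0.240 kg, k = 5.4 N/m, A = 0.26 m):
\[ \omega = \sqrt{k/m} \approx 4.74\ \mathrm{rad/s}, \quad f = \frac{\omega}{2\pi} \approx 0.75\ \mathrm{Hz}, \quad T \approx 1.3\ \mathrm{s}, \quad v_{\max} = \omega A \approx 1.2\ \mathrm{m/s}. \]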
when you use a graph to solve a problem about how far a car traveled during a specified time during which it was accelerating how many area calculations do you have to make? what is/ are the shape/shapes you are calculating? a. one area calculation, a
asked by carrie on May 12, 2010
Complete the following exercise. In December 20X2, the Cardoso Company established its predetermined overhead rate for jobs produced during the year 20X3 by using the following cost predictions: Overhead costs: $750,000 Direct labor costs: $625,000. At
asked by Jane on February 3, 2015
The area of a rectangle is 45 cm2. Two squares are constructed such that two adjacent sides of the rectangle are each also the side of one of the squares. The combined area of the two squares is 106 cm². Find the lengths of the sides of the squares.
I'm confused about a question on a math test. The question is: Which statement is a true statement? The answer choices are: A. All rectangles are squares B. All squares are rectangles C. Every rhombus is a rectangle D. Every rectangle is a rhombus. I
asked by DustyRose<3 on January 13, 2017
The length of a football field is 100 yards. When you compare geologic time (4.6 billion years) to the length of a football field, you will find that each yard on the field will equal 46 million years, and each 10-yard section will equal 460 million years.
asked by Aria on November 4, 2015
A cotton reel has a diameter of 3cm. 91 metres of cotton goes around it. How many times does the cotton go around the reel? And give your answer to the nearest ten
asked by Kevin on September 28, 2016
In a circle of radius 6 cm, a chord is drawn 3 cm from the centre of the circle. (a) Calculate the angle subtended by the chord at the centre of the circle.
asked by kelvin mafara on January 31, 2017
Before continuing we need a little more information about the angle factor given in the formula above. It turns out that the angle factor is equal to the cosine of the angle that the light strikes the surface as measured from perpendicular. In this
asked by Luis on July 15, 2012
mathematical literacy
The scale on a map is given as 1:2 000 000. The distance between two towns on the map is 4.3 cm.
asked by ntiso on March 13, 2016
A paper cone has a base diameter of 8 cm and a height of 3 cm. (a) Calculate the volume of the cone in terms of π. (b) If the cone is cut and opened out into the sector of a circle, what is the angle of the sector?
asked by ulubi on June 15, 2016
Sarah needs to cover the lateral area and the base on top of the cylinder (cylinder: 6 in on the base, 12 in in length). About how many square inches of paper will Sarah need? 282 in.², 254 in.², 679 in.², 565 in.²
asked by Alina on June 4, 2015
Find the exact length of the altitude drawn to the hypotenuse. Do not round. The triangle is not drawn to scale. A triangle with base measures of 9 & 17. Draw a diagram. Using similar triangles, we know that h/9 = 17/√370. I need more help with this.
asked by Anonymous on February 13, 2015
Anna is designing a rectangular garden that has an area of 182 square feet, where the length is longer than the width. The table lists the possible whole-number dimensions for the garden. Which dimensions are missing?
asked by Jessica on April 4, 2016
Suppose a bimetallic strip is constructed of copper and steel strips of thickness 1.33 mm and length 23.5 mm, and the temperature of the strip is reduced by 4.50 K. If the strip is 23.5 mm long, how far is the maximum deviation of the strip from the
asked by Anonymous on November 4, 2014
Frank and Oswalt report a molar absorptivity of 4700 L mol^-1 cm^-1 for the thiocyanatoiron(III) ion. What absorbance would you expect for a solution that is 1.0e-4 M in thiocyanatoiron(III) ion, if the path length is 1.00 cm?
asked by Mariah on March 6, 2015
A rectangular box with given dimensions length=80m, width=64m and height=48m. Has its two largest sides painted green. what percentage of the total area is painted green?
asked by Claudia on May 31, 2016
Two identical rods are placed end to end, separated by a gap of width D. Each rod has charge Q spread uniformly along its length L. Calculate the force between the rods. I need some help. My teacher doesn't give us much help. Thank you!
asked by Lindsey on February 7, 2013
A violin has an open string length (bridge to nut) of L=32.7 cm. At what distance x from the bridge does a violinist have to place his finger on the fingerboard to play a C (523.3 Hz) on the A string (fundamental frequency 440 Hz)?
asked by Nadya on December 8, 2010
physics-5
A pilot begins a race at a speed of 755.0 km/h and accelerates at a constant uniform rate for 63.21 s. The pilot crosses the finish line with a speed of 777.0 km/h. From this data, calculate the length of the course.
asked by NoNo100 on September 16, 2009
Coordinate Geometry. A = (7,4), B = (2,0). Gradient of AB = 4/5. Equation of AB in ax + by + c = 0 form is 4x − 5y − 8 = 0. The length of AB in surd form is √41. The point C has coordinates (2, t), where t > 0 and AC = AB. a) Find the value of t. b) Without plotting the points, find
asked by Anonymous on October 12, 2015
maths-urgently needed
ABCD is a square with length of each side 1cm. An octagon is formed by lines joining the vertices of the square to the mid points of opposite sides. Find the area of the octagon?
asked by Anonymous on February 5, 2013
Resources tagged with: Factors and multiples
Other tags that relate to A Biggy
Powers & roots. Factors and multiples. Creating and manipulating expressions and formulae. Number theory. Prime factors. Divisibility. Inequalities. Mathematical reasoning & proof. Indices. Modular arithmetic.
A Biggy
Find the smallest positive integer N such that N/2 is a perfect cube, N/3 is a perfect fifth power and N/5 is a perfect seventh power.
Prove that if a^2+b^2 is a multiple of 3 then both a and b are multiples of 3.
Data Chunks
Data is sent in chunks of two different sizes - a yellow chunk has 5 characters and a blue chunk has 9 characters. A data slot of size 31 cannot be exactly filled with a combination of yellow and. . . .
Really Mr. Bond
115^2 = (110 × 120) + 25, that is 13225; 895^2 = (890 × 900) + 25, that is 801025. Can you explain what is happening and generalise?
Sixational
The nth term of a sequence is given by the formula n^3 + 11n . Find the first four terms of the sequence given by this formula and the first term of the sequence which is bigger than one million. . . .
Divisibility Tests
This article takes the reader through divisibility tests and how they work. An article to read with pencil and paper to hand.
Squaresearch
Consider numbers of the form u_n = 1! + 2! + 3! + ... + n!. How many such numbers are perfect squares?
Prove that if the integer n is divisible by 4 then it can be written as the difference of two squares.
Transposition Cipher
Can you work out what size grid you need to read our secret message?
Different by One
Can you make lines of Cuisenaire rods that differ by 1?
What is the largest number which, when divided into 1905, 2587, 3951, 7020 and 8725 in turn, leaves the same remainder each time?
LCM Sudoku
Here is a Sudoku with a difference! Use information about lowest common multiples to help you solve it.
Number Rules - OK
Can you convince me of each of the following: If a square number is multiplied by a square number the product is ALWAYS a square number...
Thirty Six Exactly
The number 12 = 2^2 × 3 has 6 factors. What is the smallest natural number with exactly 36 factors?
Each letter represents a different positive digit. AHHAAH / JOKE = HA. What are the values of each of the letters?
Big Powers
Three people chose this as a favourite problem. It is the sort of problem that needs thinking time - but once the connection is made it gives access to many similar ideas.
Diggits
Can you find what the last two digits of the number $4^{1999}$ are?
N000ughty Thoughts
How many noughts are at the end of these giant numbers?
Take Three from Five
Caroline and James pick sets of five numbers. Charlie chooses three of them that add together to make a multiple of three. Can they stop him?
Multiplication Magic
Given any 3 digit number you can use the given digits and name another number which is divisible by 37 (e.g. given 628 you say 628371 is divisible by 37 because you know that 6+3 = 2+7 = 8+1 = 9). . . .
I'm thinking of a number. My number is both a multiple of 5 and a multiple of 6. What could my number be?
Phew I'm Factored
Explore the factors of the numbers which are written as 10101 in different number bases. Prove that the numbers 10201, 11011 and 10101 are composite in any base.
Factors and Multiples - Secondary Resources
A collection of resources to support work on Factors and Multiples at Secondary level.
Factoring Factorials
Find the highest power of 11 that will divide into 1000! exactly.
AB Search
The five digit number A679B, in base ten, is divisible by 72. What are the values of A and B?
Even So
Find some triples of whole numbers a, b and c such that a^2 + b^2 + c^2 is a multiple of 4. Is it necessarily the case that a, b and c must all be even? If so, can you explain why?
Eminit
The number 8888...88M9999...99 is divisible by 7 and it starts with the digit 8 repeated 50 times and ends with the digit 9 repeated 50 times. What is the value of the digit M?
Star Product Sudoku
The puzzle can be solved by finding the values of the unknown digits (all indicated by asterisks) in the squares of the $9\times9$ grid.
LCM Sudoku II
You are given the Lowest Common Multiples of sets of digits. Find the digits and then solve the Sudoku.
What Numbers Can We Make?
Imagine we have four bags containing a large number of 1s, 4s, 7s and 10s. What numbers can we make?
What Numbers Can We Make Now?
Imagine we have four bags containing numbers from a sequence. What numbers can we make now?
Robotic Rotations
How did the rotation robot make these patterns?
What is the remainder when 2^2002 is divided by 7? What happens with different powers of 2?
Counting Factors
Is there an efficient way to work out how many factors a large number has?
Ewa's Eggs
I put eggs into a basket in groups of 7 and noticed that I could easily have divided them into piles of 2, 3, 4, 5 or 6 and always have one left over. How many eggs were in the basket?
Factoring a Million
In how many ways can the number 1 000 000 be expressed as the product of three positive integers?
Digat
What is the value of the digit A in the sum below: [3(230 + A)]^2 = 49280A
Oh! Hidden Inside?
Find the number which has 8 divisors, such that the product of the divisors is 331776.
Helen's Conjecture
Helen made the conjecture that "every multiple of six has more factors than the two numbers either side of it". Is this conjecture true?
One to Eight
Complete the following expressions so that each one gives a four digit number as the product of two two digit numbers and uses the digits 1 to 8 once and only once.
Gaxinta
A number N is divisible by 10, 90, 98 and 882 but it is NOT divisible by 50 or 270 or 686 or 1764. It is also known that N is a factor of 9261000. What is N?
Common Divisor
Find the largest integer which divides every member of the following sequence: 1^5-1, 2^5-2, 3^5-3, ... n^5-n.
Cuisenaire Environment
An environment which simulates working with Cuisenaire rods.
Powerful Factorial
6! = 6 x 5 x 4 x 3 x 2 x 1. The highest power of 2 that divides exactly into 6! is 4 since (6!) / (2^4 ) = 45. What is the highest power of two that divides exactly into 100!?
Power Crazy
What can you say about the values of n that make $7^n + 3^n$ a multiple of 10? Are there other pairs of integers between 1 and 10 which have similar properties?
Have You Got It?
Can you explain the strategy for winning this game with any target?
Special Sums and Products
Find some examples of pairs of numbers such that their sum is a factor of their product, e.g., 4 + 12 = 16 and 4 × 12 = 48, and 16 is a factor of 48.
Choose any 3 digits and make a 6 digit number by repeating the 3 digits in the same order (e.g. 594594). Explain why whatever digits you choose the number will always be divisible by 7, 11 and 13.
Satisfying Statements
Can you find any two-digit numbers that satisfy all of these statements?
Do you know a quick way to check if a number is a multiple of two? How about three, four or six?
Quantitative prediction of grain boundary thermal conductivities from local atomic environments
Susumu Fujii ORCID: orcid.org/0000-0003-4650-57521,2,3,
Tatsuya Yokoi3,4,
Craig A. J. Fisher ORCID: orcid.org/0000-0002-0999-57911,
Hiroki Moriwake1,2 &
Masato Yoshiya ORCID: orcid.org/0000-0003-2029-25251,3,5
Atomistic models
Molecular dynamics
Quantifying the dependence of thermal conductivity on grain boundary (GB) structure is critical for controlling nanoscale thermal transport in many technologically important materials. A major obstacle to determining such a relationship is the lack of a robust and physically intuitive structure descriptor capable of distinguishing between disparate GB structures. We demonstrate that a microscopic structure metric, the local distortion factor, correlates well with atomically decomposed thermal conductivities obtained from perturbed molecular dynamics for a wide variety of MgO GBs. Based on this correlation, a model for accurately predicting thermal conductivity of GBs is constructed using machine learning techniques. The model reveals that small distortions to local atomic environments are sufficient to reduce overall thermal conductivity dramatically. The method developed should enable more precise design of next-generation thermal materials as it allows GB structures exhibiting the desired thermal transport behaviour to be identified with small computational overhead.
Thermal conductivity is a fundamental property of a material and crucial for many technological applications, e.g., thermoelectrics1,2,3, thermal barrier coatings4,5, high-power devices6,7 and microelectronics8,9. Recent studies have shown that nanocrystalline materials, which have large grain boundary (GB) populations, exhibit extremely low lattice thermal conductivities1,5,10,11, even when the bulk form is thermally conductive, e.g., elemental silicon12,13. This dramatic reduction in lattice thermal conductivity is commonly attributed to shortening of the phonon mean free path (MFP), with the assumption that it is of the same order as the average grain size5,12,14,15. Although this first-order approximation has informed most attempts to control thermal conductivity, e.g., by tailoring grain size distributions16, it does not take into account the impact of individual GBs and their different atomistic structures, and recent experimental studies have indicated that the amount of thermal conductivity reduction varies considerably depending on the structure of a particular GB17,18,19. For example, Tai et al18. measured the thermal resistances of three twist Al2O3 GBs and found that they vary by a factor of three. Quantitatively determining the relationship between GB structure and thermal conductivity is thus desirable for designing thermally functional materials at the nano-scale.
Many computational studies have been performed over the past two decades using non-equilibrium molecular dynamics (MD) to examine thermal conductivities of individual GBs20,21,22,23,24. Although the results revealed that thermal conductivity varies with misorientation angle and GB energy, the underlying physical mechanism responsible for this has not been elucidated in terms of the GB structures themselves. To help remedy this, we recently calculated thermal conductivities of 81 MgO symmetric tilt GBs (STGBs), and found that GB excess volume, which stems from reduced atomic coordination and non-optimal bond lengths at the GB core (the characteristic structure pattern centred on the GB plane), is strongly correlated with thermal conductivity25. We identified three different correlations depending on the type of GB, with low thermal conductivities occurring in the vicinity of the most open structures. The results provided further evidence that thermal conductivity can vary significantly depending on the type of GB and its atomic structure.
An analysis based on excess volume alone, however, is insufficient for explaining structure-property relationships over high-dimensional space, e.g., general GBs in polycrystals, because a given excess volume is not necessarily unique to a particular GB structure. This is because excess volume is a measure of the non-optimum packing of atoms at a GB but contains no other information about how the GB structure differs from that in the crystal bulk or to other GBs; consequently two GBs can have the same excess volume but exhibit very different thermal conductivity behaviour because of differences in atom configurations and bonding26,27,28,29,30. General GBs consist of complex mixtures of simpler high-symmetry (planar) GBs, and are even harder to analyse because of the enormous number of degrees of freedom involved. This problem is exacerbated when the effect of intrinsic defects or impurity atoms is included. A brute force method, e.g., MD simulation, can enable a specific thermal conductivity to be assigned to a specific GB core structure so that the dependence of thermal conductivity on GB misorientation and composition can be examined systematically, but even using computationally inexpensive empirical potential models it would take an inordinately long time to generate sufficient data for a wide variety of GB forms. Thus a more efficient and computationally tractable method is needed if meaningful progress is to be made.
A promising method for handling large numbers of different atomic configurations is the use of structure descriptors developed in the burgeoning field of materials informatics31,32,33,34,35. These descriptors contain information sufficient to define uniquely a particular atom arrangement, and act as fingerprints distinguishing different atomistic structures. Recent studies have used such descriptors in the context of machine learning (ML) to enhance our understanding of GB structure-property relationships36,37,38. A prime example is the study of Rosenbrock et al.38; using the smooth overlap of atomic positions (SOAP) descriptor39,40 and a supervised ML technique, they identified a set of building blocks (or representative local atomic environments, LAEs) from which GBs of metallic Ni are constructed, and determined which LAEs strongly influence GB energies and mobilities. In related work41 they reviewed various models used to analyse GB structures (in particular comparing the utility of the local environment representation to that of the structural unit model in the analysis of 126 Ni STGBs), and showed that the former is in many respects superior to the others, most notably because it provides a smoothly varying function.
In this report we describe our search for a suitable SOAP-based microscopic metric that correlates with GB thermal conductivity and can be used to identify relationships between GB structure and thermal conductivity. To ensure the rigour of the relationship identified, a wide range of GBs are included in the analysis, viz., symmetric tilt, twist, twin and asymmetric tilt GBs stable at standard pressure, and symmetric tilt GBs stable at higher pressure. MgO is chosen as a model material because of its simple structure and long history of experimental and theoretical work. The most appropriate microscopic quantity that we identify, which we refer to as the local distortion factor, LDF, measures deviations in the local structural environment of an atom near a GB from that of an identical atom in the crystal bulk, and correlates well with atomically decomposed thermal conductivities perpendicular to the GB extracted from perturbed MD simulations. We then construct a prediction model using multiple linear regression with input variables based on hierarchical clustering of LAEs, and demonstrate that the thermal conductivity of a GB can be predicted with high accuracy using this model. Analysing the results in terms of LDFs reveals that even a small amount of structural distortion at the GB is sufficient to suppress thermal conductivity strongly. We expect that extension of this ML-based technique to other materials should greatly enhance our understanding of GB behaviour, thereby enabling materials to be tailored to exhibit the desired thermal properties, especially once suitable nano-scale engineering techniques have been developed.
Effective thermal conductivities
In addition to low-angle and high-angle STGBs reported previously25, in this study we calculated effective thermal conductivities across GB planes of standard-pressure twist, asymmetric tilt and high-pressure tilt GBs of MgO to obtain a more comprehensive understanding of the relationship between GB structure and thermal conductivity. Detailed lists of all GB models used in this study are provided in Supplementary Tables 1–9, with some relevant properties summarised in Supplementary Figs. 1–3, and explanatory notes included as Supplementary Notes 1 and 2. The combined results are plotted in Fig. 1a against excess volume per unit area of each GB, with representative GB structures shown in Fig. 1b–h. For the STGBs under standard pressure, the thermal conductivities exhibit three different correlations with excess volume depending on the GB type: low-angle GBs with (I) dense and (I′) open dislocation core structures, and (II) high-angle GBs. In Fig. 1a, thermal conductivities of high-angle high-pressure STGBs also fall on correlation line II (solid black line), whereas their excess volumes are smaller than those of standard-pressure STGBs with the same misorientation because of their denser GB core structures. In contrast, thermal conductivities of low-angle high-pressure STGBs deviate from these trends, lying between lines I and I′ because of the intermediate densities of their dislocation structures. Thermal conductivities of asymmetric Σ5 [001] tilt GBs lie only slightly above the correlation II line, probably because their GB core structures are similar to those of high-angle GBs; the asymmetric boundaries in this study are mainly composed of (310) and (210) facets similar to those in the corresponding symmetric boundaries, although different kinds of atomic structures are formed at the facet junctions.
Fig. 1: Overview of GB thermal conductivities and structures.
a Effective thermal conductivities across standard-pressure tilt and twist GBs, twin (and twin-like) boundaries, and high-pressure tilt GBs as a function of excess volume. Data for standard-pressure tilt GBs and the correlations indicated by solid, dashed and dotted lines are from Fujii et al.25 (I) low-angle tilt GBs with dense dislocation core structures; (I′) low-angle tilt GBs with open dislocation core structures; and (II) high-angle tilt GBs. Error bars indicate standard deviations in thermal conductivity calculated using perturbations of different magnitudes. b–h Example structures of different types of GB: b twin, c low-angle GB with dense dislocation cores, d asymmetric tilt GB, e low-angle GB with open dislocation cores, f twist GB, g high-pressure high-angle GB and h standard-pressure high-angle GB. i Method for predicting GB thermal conductivities based on local atomic environments. rc is the cutoff radius of the SOAP descriptor.
The three twist GBs examined also show similar behaviour to the dense low-angle STGBs; the thermal conductivity is high for low excess volumes, and initially decreases rapidly with increasing excess volume, but the rate of decrease diminishes once the dislocations begin to overlap. The most pertinent difference between twist and tilt GBs in this case, however, is that the excess volumes of twist GBs are much smaller than those of STGBs because of their denser structures. GBs with very high symmetry and thus high number density, viz., the Σ3(111) twin boundary and GBs with LAEs similar to it (labelled twin-like in Fig. 1a), appear to fall on a fourth correlation line, one flatter than correlations I or I′ (see Supplementary Fig. 1 for their structures). The results indicate that the thermal conductivities of high-pressure tilt, asymmetric tilt and twist GBs are governed by the same mechanism as for STGBs, and that the macroscopic metric, i.e., GB excess volume, is inadequate as a parameter for accurately predicting thermal conductivities of various types of GB structures. As explained below, we overcome this problem by quantifying LAEs in the vicinity of GBs using the SOAP descriptor to generate input data for ML techniques. A schematic of the method is shown in Fig. 1i.
Quantifying local distortions
The mechanism by which thermal conductivity is reduced at GBs is expected to be related to local structural distortions because long-range thermal transport occurs by phonons, which are the collective motion of atoms in a periodic lattice, and any disturbance to this motion results in enhanced phonon scattering, as evidenced by numerous experimental and theoretical studies1,3,5,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25. To quantify these structural distortions, we defined a (non-normalised) dissimilarity metric that measures the difference in LAE between an atom at a GB and an atom in the crystal bulk, which we refer to as the local distortion factor, LDF, computed using the SOAP descriptor (see Methods for details). We calculated LDFs of all atoms in GB structure models for a wide variety of different GB types, viz., 80 standard-pressure STGBs (about six different rotation axes), a twin25, four high-pressure [001] STGBs, three (001) twist GBs and four asymmetric [001] tilt GBs. Figure 2a shows a plot of the LDFs in each model classified by GB rotation axis in order of increasing tilt or twist angle. The LDFs span a wide range, from 0 to 3000, with atoms at open GBs tending to have high values and those at relatively dense GBs to have low values.
Fig. 2: Local distortion factors, LDFs, obtained using the SOAP descriptor.
a LDFs of all LAEs in tilt and twist GB models in order of increasing tilt or twist angle for each class of rotation axis. One vertical column of LDFs corresponds to one GB model. b LDFs of atoms in an isotropically expanded unit cell as a function of bond elongation and volume expansion.
To quantify how LDFs vary with bond elongation, we also calculated those of atoms in uniformly expanded, defect-free MgO single crystals, and the results are plotted in Fig. 2b. This plot shows that when an MgO crystal is expanded isotropically, LDFs (those of cations and anions are equivalent in this case because of its rock-salt structure) increase smoothly and reach a value equivalent to the maximum LDF in the GB models for a lattice constant elongation of ~9.5% and volume expansion of ~31.2%. Unlike the atoms in the perfect crystal, atoms at GBs are not subjected to as large increases in local volume or bond lengths, but instead experience non-uniform (anisotropic) strain to their bonds and/or changes in coordination environment. Although LDFs by themselves do not indicate whether strain or coordination environment has the stronger effect, separate analysis showed that both of them are important, with contributions of similar magnitude in many cases. For example, the average and standard deviation of LDFs of atoms with first-nearest neighbour coordination deficiencies of 0, 1 and 2 are 456.6 ± 392.4, 1270.0 ± 608.4 and 1483.5 ± 580.0, respectively. The LDF values increase with increasing under-coordination but also have high standard deviations because of large variations in bond strain about atoms with different LAEs.
Clustering analysis of LAEs
To classify the structural environments of atoms at the cores of different GBs into groups suitable for constructing our ML model, we first used the complete-linkage method to identify LAEs in each GB model based on the dissimilarity metric d between each pair of atoms, before applying Ward's hierarchical clustering method42 to the complete set of LAEs generated (see Methods for details). Figure 3 shows a dendrogram of the different classes of LAEs identified, together with representative STGBs to illustrate how they are distributed around GBs. The dendrogram in Fig. 3a shows three supergroups of LAEs (indicated by different colour shading in the figure) that are classified into six groups whose members consist of unstrained (bulk-like) atoms, weakly strained atoms, moderately strained atoms, strongly strained atoms, moderately under-coordinated atoms and highly under-coordinated (bond-ruptured) atoms. The averages and standard deviations of LDFs in these six groups are 70.0 ± 66.1, 138.7 ± 42.9, 316.8 ± 84.8, 609.8 ± 140.7, 1032.3 ± 165.6 and 1786.1 ± 323.8, respectively, reflecting the increasing amount of structural distortion (LDF distributions in each LAE group are reported in Supplementary Fig. 3). For reference, from Fig. 2b the LDFs of weakly strained, moderately strained and strongly strained groups correspond to average bond elongations of roughly 0.4, 0.8 and 1.6%, respectively. The average amounts of first-nearest neighbour under-coordination in these groups are 0.00, 0.01, 0.03, 0.22, 0.37 and 0.94, respectively, suggesting that the effect of strongly strained atoms is of similar magnitude to that of slightly under-coordinated atoms.
Fig. 3: Hierarchical clustering of GB LAEs.
a Hierarchical relationship between LAEs depicted in dendrogram form. The different regions represent three general groups of LAEs: (green) highly under-coordinated (bond-ruptured); (red) moderately under-coordinated or strongly strained; and (grey) moderately strained, weakly strained or bulk-like. b Representative distributions of the LAE groups and LDFs at six STGBs. A log scale is used to make it easier to distinguish changes in LDFs within LAE groups.
Figure 3b shows GB structures coloured according to LAE group and LDF values for two low-angle STGBs with dense structures, a low-angle STGB with open structure, a high-pressure high-angle STGB and two standard-pressure high-angle GBs. These indicate that highly under-coordinated atoms occur at open GB core structures, whereas dense GB core structures consist of atoms in strongly strained environments, either at dense low-angle GBs or adjacent to under-coordinated atoms in high-angle STGBs. In dense low-angle GBs such as \(\Sigma 183(13\,\overline {14} \,1)/[111]\) and Σ113(15 1 0)/[001], atoms between the dislocation cores have LAEs similar to bulk atoms. These results illustrate how hierarchical clustering of LAEs and LDFs captures information regarding the arrangement of atoms and degree of distortion at GBs in a physically interpretable manner.
LDF values quantify the local distortion relative to the ideal crystal bulk, but do not directly measure differences in LAEs between GBs. To better assess the range of LAEs present in different types of GBs, we thus also calculated d values between all atoms in one GB model with those in another. This revealed that similar LAEs frequently occur in other GBs, with greater differences occurring for high-pressure and high-angle STGBs than for others. Specifically, the minimum d values of atoms in the high-pressure STGBs, asymmetric tilt GBs and twist GBs were no greater than 211, 140 and 87, respectively (compared to maximum LDFs close to 3000); these values correspond to about 0.5%, 0.4% and 0.2% bond elongation, respectively, when considered in terms of a uniformly expanded MgO crystal (Fig. 2b). For example, the d value for the two atoms indicated by blue circles in the high-pressure Σ17(410)/[001] GB and standard-pressure \(\Sigma 5(0\bar 21)/[112]\) GB in Fig. 3b is only 58.2. In other words, the range of LAEs provided by a sufficiently large and diverse sample of GB structures (92 in our case) is expected to encompass those encountered in GBs with other misorientations, higher complexity or lower symmetry. This result is consistent with Priedeman et al.'s observation that different GBs consist of similar structural building blocks or motifs41. Consequently, similar to Rosenbrock et al.'s38 findings for GB energies and mobilities, the properties and behaviour of individual GBs of MgO can be expected to depend on the relative numbers of each type of LAE of which they are composed. Identifying correlations between the numbers and distributions of LAEs in a GB and its thermal conductivity, preferably in a physically meaningful way, should thus allow thermal conductivities of MgO GBs of arbitrary structure to be predicted quickly, accurately and reliably. In the following sections we demonstrate how hierarchical clustering can fulfil this purpose in the context of thermal transport and phonon dispersion, with interpretation facilitated by analysing LDFs.
Thermal conduction at tilt grain boundaries
To determine the dependence of microscopic thermal conduction on structural distortion in the vicinity of GB planes, we calculated atomic thermal conductivities perpendicular to GB planes at 300 K using perturbed MD simulations, and LDFs from the relaxed GB structures for each GB model.
Figure 4 compares plots of LDFs and atomic thermal conductivities of standard- and high-pressure Σ25(710)/[001] and Σ5(310)/[001] STGBs, together with the LAE classifications identified by hierarchical clustering. These plots reveal that, overall, there is strong negative correlation between LDF and atomic thermal conductivity in these two cases. One exception to this is the standard-pressure Σ5(310)/[001] STGB, in which LDFs of the innermost atoms (Fig. 4b) are high and their atomic thermal conductivities (Fig. 4d) are the highest of all atoms in the GB structure. This inversion of the correlation is because the SOAP vector, and hence LDF, are non-directional, whereas there is a large anisotropy in the bond distances and hence components of atomic thermal conductivity of the Σ5(310)/[001] GB, with single pairs of atoms across the GB plane acting like thermal conduction bottlenecks. Distances between atoms perpendicular to the GB plane are similar to those in the bulk, but much longer parallel to it in the \([1\bar 30]\) direction, resulting in a large LDF factor (maps of the components of atomic thermal conductivity perpendicular and parallel to the GB plane are compared in Supplementary Fig. 4 and Supplementary Note 3). Such bottlenecks generally only occur in high-angle STGBs, but in low densities dispersed between low-conductivity voids, so their effect on the overall thermal conductivity is small.
Fig. 4: Atomic configurations and atomic thermal conductivities near GB planes of four STGBs.
a, b Local distortion factors, LDFs; c, d Gaussian-smeared atomic thermal conductivities; e, f Distributions of LAE groups classified from hierarchical clustering. A log scale is used to make it easier to distinguish changes in LDFs within LAE groups.
The greatest decrease in atomic thermal conduction occurs at the centres of dislocation cores, whereas thermal conduction is rapid via atoms in less disturbed (low LDF) regions even if on the GB plane (corresponding to light-coloured atoms in Fig. 4a, b). The core structures of the high-pressure GBs are denser than those of the standard-pressure GBs, making them more like low-angle GBs in which dislocations are arrayed in a regular pattern. This results in the wider regions of unruptured bonds on the GB planes seen in the right-hand images of Fig. 4a, c. This explains why the effective thermal conductivity of the low-angle high-pressure GB is higher than those of the standard-pressure GBs, falling between lines I and I′ in Fig. 1a because of the intermediate atomic densities of its dislocations. Overall, the close correspondence between LDF and atomic thermal conductivity suggests that this metric makes a good descriptor for developing a model for predicting thermal conductivities in a wide variety of GB types of MgO.
Thermal conduction at twist grain boundaries
In Fig. 5, we compare LDFs and atomic thermal conductivities of three (001) twist GBs, viz. Σ41, Σ25 and Σ37, in order of increasing twist angle. In this case thermal conductivities are projected onto the GB planes, as opposed to parallel to the GB planes in the case of tilt GBs (Fig. 4). In the twist GBs, the LDFs are smaller than those of STGBs (as seen in Fig. 2a), but the structurally distorted sites are widely distributed about the GB plane, which is very different to the case of tilt GBs. In the case of the Σ41 twist GB, the dislocation lines, identified using the method of Stukowski et al.43, are relatively far apart, and the LDF values are relatively low in the regions between them. These regions serve as thermal conduction highways, evidenced by the close match between regions of low LDF and high atomic thermal conductivity (Fig. 5a, b). In the case of the Σ37 GB, with its relatively high twist angle, all atoms on the GB plane are in distorted environments and the LDFs are uniformly high. The structural distortion thus correlates with low thermal conductivities across the GB plane in contrast to the rapid thermal conduction paths identified in the case of the Σ41 GB.
Fig. 5: Atomic configurations and atomic thermal conductivities near GB planes of three twist GBs.
a Local distortion factors, LDFs; b Gaussian-smeared atomic thermal conductivities; c Distributions of LAE groups classified using hierarchical clustering. Dislocation lines are shown as dashed lines in c. A log scale is used for LDF values in c to make it easier to distinguish differences within LAE groups.
In contrast to the strong correlation between LDF and atomic thermal conductivity in the case of Σ41 and Σ37 twist GBs, the correlation in the case of the Σ25 twist GB is somewhat weaker. Even though its LDFs are lower than those of the Σ37 GB, especially in the inter-dislocation regions, their atomic thermal conductivities (and hence the effective thermal conductivity of the GB) are similar to those of the Σ37 GB. This difference indicates that the relationship between LDF and atomic thermal conductivity is non-linear; relatively small structural distortions to the lattice are sufficient to dampen the local thermal conduction strongly and thus very high LDFs may not be necessary to suppress thermal transport dramatically. This interpretation is consistent with the slow decrease in effective thermal conductivity exhibited by correlation II in Fig. 1a. Figure 5 also suggests that LDFs may be useful for identifying sites which induce strong phonon scattering and thus lower the effective thermal conductivity in the case of twist GBs as well as for tilt GBs. Further discussion on the utility and limitations of the LDF is provided in Supplementary Note 4.
Prediction models for thermal conductivity
Motivated by the good correlation between LDF and atomic thermal conductivity described in the previous sections, we constructed a mathematical model for predicting thermal conductivities of GBs using multiple linear regression with l2-norm (or ridge) regularisation. For this, we classified the LAEs into several groups according to the magnitude of their average LDF values by slicing the hierarchical clustering relationships in Fig. 3a in the manner described in Supplementary Fig. 5. We found that classifying the LAEs into six groups, viz., (1) bulk-like, (2) weakly strained, (3) moderately strained, (4) strongly strained, (5) moderately under-coordinated and (6) highly under-coordinated (as shown in Fig. 3a), is sufficient for accurate prediction of GB thermal conductivities. A summary of predictive performance using alternative numbers of LAE groups is also provided as Supplementary Fig. 5 and Supplementary Note 5.
Numbers of LAEs per unit area of a GB, Nm, for each LAE group (m = 1–6) were used as predictor variables, and fitting carried out using multiple linear regression (see the Methods section for details). As examples, Fig. 6a, b show the structures of Σ5(310)/[001] and \(\Sigma 327(17\,\overline {19} \,2)/[111]\) STGBs, the Gaussian weighting function, G(x), and plots of their Nm values for each LAE group. These show that there are only highly distorted LAEs in the vicinity of the high-angle Σ5(310)/[001] GB whereas there are both bulk-like and moderately distorted LAEs in the vicinity of the low-angle \(\Sigma 327(17\,\overline {19} \,2)/[111]\) GB.
Fig. 6: Regression modelling of thermal conductivity at GBs of MgO.
a, b Example of how predictor (input) variables Nm were generated from GB structures for multiple linear regression. a LDFs in the vicinity of high-angle Σ5(310)/[001] and low-angle \(\Sigma 327(17\,\overline {19} \,2)/[111]\) STGBs, and the Gaussian function G(x) centred on the GB plane used in calculating Nm. A log scale is used for LDF values to make it easier to distinguish differences within LAE groups. b Number of atoms per unit area in each LAE group, Nm, using hierarchical clustering results for the two GBs. c Parity plot of calculated against predicted GB thermal conductivities. Error bars indicate standard deviations in thermal conductivity calculated using perturbations of different magnitudes. d Ridge regression coefficients for Nm of each LAE group. The higher the LAE group number, the larger the LDF values in the group.
The predictor model was trained using data from 70 randomly chosen symmetric GBs, and then validated using data from the remaining 22 GBs, including all four asymmetric tilt GBs. Figure 6c shows a parity plot of overall thermal conductivities calculated using perturbed MD against values predicted by the model. The root mean squared error (RMSE) and R2 value are 1.28 Wm−1K−1 and 0.93, respectively, for the training data, and 1.30 Wm−1K−1 and 0.92, respectively, for the test data. These results demonstrate that GB thermal conductivity can be predicted with high precision from their local atomic structures alone, regardless of whether the GB is under standard or high pressure, a tilt, twist or twin GB. The prediction model also reliably estimated thermal conductivities of the asymmetric tilt GBs, confirming its good transferability as well as the efficacy of including a wide range of GB types in the training dataset. In addition, as seen in Fig. 6d, the regression coefficient is very high in the case of LAE group 1, where LDFs are very small (70.0 on average), i.e., the local environments are very similar to those in the crystal bulk, and very low for the other LAE groups decreasing gradually as LDF increases. These results again suggest that introducing GBs with relatively small structural distortions (e.g., low-angle GBs with dense GB cores) is an effective strategy for reducing thermal conductivity dramatically.
Similar to point defects such as vacancies, impurity atoms and interstitial atoms44, GBs are known to limit phonon MFPs by causing diffuse scattering, and this is consistent with the results of our perturbed MD simulations. GBs can be thought of as extended planar defects or clusters of point defects, typically a few nanometres wide, so that deviations from the ideal lattice in the vicinity of GBs, as reflected in their LAEs and LDFs, are typically much larger than for isolated defects, making them able to scatter long-wavelength phonons much more effectively, resulting in much shorter MFPs in a polycrystal than in single crystal (in which MFPs are on the order of hundreds of nanometres or several micrometres in the case of single crystal MgO45). Constructing an ML model with data from MD simulations of GBs shows that these effects can be predicted accurately from analysis of LAEs calculated with only a short cutoff (~4.5 Å).
The correlation between GB structure and thermal conductivity identified in this study should enable polycrystalline materials to be designed with more precisely controlled thermal conductivities, e.g., by identifying GBs with the desired microscopic behaviour for a given application and facilitating their formation in the material with appropriate synthesis methods and conditions. Although it is still very difficult to engineer GB structures directly at the atomic level, it is possible to increase the probability of their formation by tailoring grain orientation through thermal treatment, mechanical processing, use of substrates, and so on, as grains coming into contact within a narrower range of orientations are more likely to exhibit a particular GB structure with the desired LAEs. It should also be possible to examine the effect of dopants on GB thermal conductivities using this model, assuming suitable potential parameters are available for performing MD simulations, although the number of simulations required may increase substantially as a result of the increased degrees of freedom (dopant concentration, segregation sites and so on). Nevertheless, extending the ML method developed in this study to more complex crystal structures and compounds should enable a more comprehensive understanding of GB structure-property relationships to be obtained, so that the next-generation of thermal materials can be designed more efficiently and effectively.
The method presented here, in which the relationship between thermal conductivity and local atomic distortions is identified through ML with a multidimensional dataset, can be readily applied to other structure-property relationships because of the universality of the SOAP descriptor, whether the cause of the distortion is point defects (isolated or clustered), dislocations, GBs, heterointerfaces or surfaces. When used in conjunction with a large dataset of defective structures such as those generated by atomistic materials modelling46,47,48 using reliable interatomic potentials, quantification of complex structure-property relationships using ML techniques with SOAP-derived metrics has the potential to provide deeper insights into complex interface phenomena and greatly accelerate materials design of a broad range of technologically important materials. In some situations, however, it may be necessary to include directional information in the model so that properties more sensitive to anisotropy or that are highly directional can be predicted accurately. Methods for including directional information are discussed briefly in Supplementary Note 3 as a stimulus for future work.
In summary, we have used ML with data derived from the SOAP descriptor and perturbed MD to quantify the relationship between local atomic structure and overall thermal conductivity in standard- and high-pressure STGBs, twin, twist and asymmetric tilt GBs of MgO. The LDF, a simple metric based on the SOAP descriptor, was found to correlate well with atomic thermal conductivity in a non-linear fashion. The prediction model constructed based on this insight revealed that even small structural distortions at GBs can reduce thermal conductivity dramatically, suggesting that the thermal conductivity of a polycrystalline material may be closely controlled by tailoring the number and distribution of such GBs through GB engineering. Although the importance of structural disorder at GBs has been posited by earlier researchers20,49, to the best of our knowledge this is the first study to demonstrate quantitatively the correlation between structural distortion and suppression of thermal conductivity at the atomic level.
GB model construction
Eighty-one standard-pressure STGBs of MgO constructed previously25 were used together with an additional three (001) twist GBs, four [001] asymmetric tilt GBs, and four high-pressure STGBs generated using the method described previously25,50. Simulated annealing (SA) of initial structures was performed to obtain the stable atomic configurations of the GBs using equilibrium MD methods encoded in the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) programme51. Initial configurations were constructed by tilting or twisting two half-crystals by a specific angle, and sandwiching an amorphous block of MgO between them. The amorphous block was obtained from a separate MD calculation by heating a perfect crystal of MgO to 8000 K. The rigid-ion Buckingham potential for MgO reported by Landuzzi et al.52 was used in all cases.
SA simulations commenced with the GB model heated to 4000 K, and the temperature was decreased gradually to 50 K over 330 ps. This gradual cooling from high temperature allowed the atoms in the amorphous region to diffuse and find energetically favourable positions, so that a low-energy ordered GB structure was obtained for each initial configuration. The final atomic configuration for each GB model was obtained by performing geometry optimisation (at 0 GPa) using the General Utility Lattice Program (GULP)53 on the structures obtained from SA simulations. In several cases, metastable GB structures (GBs with higher energies than the most stable form for that GB orientation at 0 GPa with atoms trapped in higher-energy local minima) were also obtained. These GB structures became lower in energy than the stable GB structures when geometry-optimised at higher pressures using GULP, so these were included as examples of high-pressure STGBs when developing the ML model.
We repeated the SA simulations 10 times for each symmetric GB and 50 times for each asymmetric GB using different initial velocity distributions to confirm that the most energetically stable atomic arrangement had been obtained. Structures of the Σ5(310)/[001] GB were found to be in agreement with that determined using first-principles calculations54, and a few dislocation core structures, which can be seen in low-angle STGBs with [001] and \([1\bar 10]\) rotation axes, e.g., Σ41(540)/[001] and \(\Sigma 51(1\,\,1\,\,10)/[1\bar 10]\) GBs, were found to be in excellent agreement with those observed by scanning transmission electron microscopy25,55. This gives us confidence that we successfully identified the lowest-energy (ground-state) structures. These GB models are available as Supplementary Data 1 in LAMMPS format. Two GB structure models are illustrated in Supplementary Fig. 6 as examples.
The excess volume per unit area of each GB, ΔVGB, was calculated using the following equation:
$${\mathrm{\Delta }}V^{{\mathrm{GB}}} = \frac{{V^{{\mathrm{GB}}} - \frac{{N^{{\mathrm{GB}}}}}{{N^{{\mathrm{SC}}}}}V^{{\mathrm{SC}}}}}{{2A}} = \frac{{V^{{\mathrm{GB}}} - N^{{\mathrm{GB}}}/\rho ^{{\mathrm{SC}}}}}{{2A}}$$
where VGB and VSC are the volume of the GB model and unit cell, respectively, NGB and NSC are the number of atoms in the GB model and unit cell, respectively, and ρSC is the number density of the unit cell.
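As an illustration, the following minimal sketch evaluates Eq. 1 for a relaxed GB model, assuming an orthogonal simulation cell containing two GBs with their normals along one cell axis and an ASE Atoms object for the structure; the function and variable names (and the file name in the comment) are ours, not taken from the original code.

```python
import numpy as np

def excess_volume_per_area(gb_atoms, rho_sc, gb_normal=0):
    """Excess volume per unit GB area (Eq. 1) for an orthogonal cell that
    contains two GBs with their normals along the `gb_normal` axis."""
    lengths = gb_atoms.cell.lengths()              # (Lx, Ly, Lz) in Angstrom
    v_gb = gb_atoms.get_volume()                   # volume of the GB model
    n_gb = len(gb_atoms)                           # number of atoms in the model
    area = np.prod(np.delete(lengths, gb_normal))  # cross-sectional area A
    return (v_gb - n_gb / rho_sc) / (2.0 * area)

# Number density of the ideal rock-salt unit cell (8 atoms per a0^3);
# a0 is an approximate MgO lattice constant, used here only for illustration.
a0 = 4.212
rho_sc = 8.0 / a0**3
# gb = ase.io.read("gb_model.lmp", format="lammps-data")   # hypothetical file name
# print(excess_volume_per_area(gb, rho_sc))
```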
SOAP descriptor
SOAP vectors of all atoms in MgO GBs were calculated using the Python-based software DScribe56. The SOAP descriptor is derived by fitting a set of spherical harmonics and radial basis functions to the 3-dimensional density distribution generated by placing Gaussian-smeared atomic densities on atoms within a specified cutoff radius about a central atom. The coefficients of the fit form a rotationally invariant power spectrum57 which is compiled into a SOAP vector for that atom which contains all the information needed to reconstruct the LAE. Compiling SOAP vectors of atoms in the GB model into a matrix known as the local environment representation allows each particular GB structure to be described quantitatively and uniquely38,41. One of the advantages of the SOAP descriptor is that it also makes it possible to compare LAEs quantitatively, so that a dissimilarity (or, conversely, similarity) metric can be defined between two atoms33 which varies smoothly with a change in neighbouring atom positions38. In this study, we used a non-normalised dissimilarity metric, d, defined as
$$d_{ij} = \sqrt {{\mathbf{p}}_i \cdot {\mathbf{p}}_i + {\mathbf{p}}_j \cdot {\mathbf{p}}_j - 2{\mathbf{p}}_i \cdot {\mathbf{p}}_j}$$
where \({\mathbf{p}}_i\) and \({\mathbf{p}}_j\) are the SOAP vectors of two atoms i and j. If \({\mathbf{p}}_i\) and \({\mathbf{p}}_j\) are the SOAP vectors of a GB atom and its equivalent crystal bulk atom, the dissimilarity metric represents how much the LAE of the GB atom differs from that of the bulk atom. We refer to this as the local distortion factor, LDF, defined as
$${\mathrm{LDF}} = \sqrt {{\mathbf{p}}_{{\mathrm{GB}}} \cdot {\mathbf{p}}_{{\mathrm{GB}}} + {\mathbf{p}}_{{\mathrm{bulk}}} \cdot {\mathbf{p}}_{{\mathrm{bulk}}} - 2{\mathbf{p}}_{{\mathrm{GB}}} \cdot {\mathbf{p}}_{{\mathrm{bulk}}}}$$
where \({\bf{p}}_{{\mathrm{GB}}}\) and \({\bf{p}}_{{\mathrm{bulk}}}\) are the SOAP vectors of a GB atom and an atom in the crystal bulk, respectively. A cutoff of 4.461 Å, corresponding to the average of the fourth and fifth nearest neighbour distances in MgO, was selected after preliminary testing of cutoffs both shorter and longer.
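A minimal sketch of the LDF evaluation with DScribe is given below, assuming ASE Atoms objects for the GB model and an ideal rock-salt reference crystal; keyword names follow recent DScribe releases (older versions spell them rcut/nmax/lmax), and the lattice constant used for the reference is approximate.

```python
import numpy as np
from ase.build import bulk
from dscribe.descriptors import SOAP

# SOAP settings from the text: cutoff 4.461 A, n_max = 12, l_max = 9,
# Gaussian width 0.5 A, GTO radial basis functions.
soap = SOAP(species=["Mg", "O"], r_cut=4.461, n_max=12, l_max=9,
            sigma=0.5, rbf="gto", periodic=True)

# Reference environments: Mg and O in the ideal rock-salt crystal
# (a0 is an approximate lattice constant, for illustration only).
mgo = bulk("MgO", crystalstructure="rocksalt", a=4.212)
p_bulk = {s: p for s, p in zip(mgo.get_chemical_symbols(), soap.create(mgo))}

def local_distortion_factors(gb_atoms):
    """LDF of every atom in a relaxed GB model (Eq. 3): the Euclidean distance
    between its SOAP vector and that of the equivalent crystal-bulk atom."""
    p_gb = soap.create(gb_atoms)
    return np.array([np.linalg.norm(p - p_bulk[s])
                     for p, s in zip(p_gb, gb_atoms.get_chemical_symbols())])
```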
To compare LAEs and GB excess volume quantitatively, we defined the term total distortion factor, TDF, to be the sum of all LDFs at a GB normalised to the GB cross-sectional area, A,
$${\mathrm{TDF}} = \mathop {\sum }\limits_i {\mathrm{LDF}}_i/2A$$
where i is the index of an atom in the GB model. TDF is divided by two because each GB model produces two GBs under periodic boundary conditions. The calculated TDF and GB excess volume exhibited a linear relationship, especially in the case of high-angle tilt GBs formed under standard pressure (see Supplementary Fig. 7 and Supplementary Note 6). We also calculated the LDFs and TDFs using cutoffs of 3.313 and 3.923 Å, and confirmed that the relationship between TDF and excess volume was not overly sensitive to the choice of cutoff. Using large cutoff radii (~10 Å or greater) made it difficult to identify the GB core structure because it resulted in many more atoms being classified as having under-coordinated atoms in their spheres of influence.
The maximum degree of spherical harmonics, lmax, and the number of radial basis functions, nmax, were set to 9 and 12, respectively. In test calculations, it was found that the linear relationship between TDF and excess volume was insensitive to lmax (even 0 produced similar results) but nmax needed to be sufficiently large to achieve a good linear fit. We used spherical Gaussian type orbitals (as defined in Himanen et al.56) as radial basis functions, with a Gaussian width of 0.5 Å. Another implementation of the SOAP descriptor, the QUIP code58, was also tested, and produced essentially the same linear relationship as DScribe (see Supplementary Fig. 8), indicating that the results reported here do not depend strongly on the particular implementation of the SOAP descriptor.
To extract a unique set of LAEs from each GB model in Fig. 2a, we performed complete-linkage clustering as implemented in Scipy59 so that all combinations of atoms in each LAE group had d values below a threshold value of 30.0. The threshold value was carefully chosen to maximise the performance of the prediction model without compromising interpretability of the classification groups. We also tested normalised forms of the SOAP vectors and other dissimilarity metrics such as the SOAP kernel and Gaussian kernel, but found that they make interpretation of the hierarchical clustering results difficult and reduce the predictive performance of the model. Further details are given in Supplementary Methods.
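The per-model complete-linkage step can be sketched as follows, assuming `p_gb` is the (n_atoms × n_features) SOAP matrix of one GB model; because d equals the Euclidean distance between SOAP vectors, the cut at 30.0 guarantees that every pair of atoms within a group has d below the threshold.

```python
from scipy.cluster.hierarchy import fcluster, linkage

def unique_laes(p_gb, threshold=30.0):
    """Group the atoms of one GB model into LAEs such that every pair within
    a group has dissimilarity d below the threshold (complete linkage)."""
    z = linkage(p_gb, method="complete", metric="euclidean")
    return fcluster(z, t=threshold, criterion="distance")  # one LAE label per atom
```

A representative SOAP vector from each label (for example the group mean, which is our assumption here) can then be pooled across all GB models as input to the Ward clustering described in the next subsection.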
Thermal conductivity calculations
Overall thermal conductivities across the GB planes and grain interiors, which we refer to as effective thermal conductivities, were calculated using the perturbed MD method60 for a few high-pressure tilt, twist and asymmetric tilt GB structures at 300 K. Custom-written code was added to LAMMPS for this purpose. In this method, lattice thermal conductivity in the x direction is calculated according to
$$\kappa _{{\mathrm{lattice}}} = \frac{1}{{F_{{\mathrm{ext}}}T}}\mathop {{\lim }}\limits_{t \to \infty } \langle J_x\rangle _t$$
where Fext is the magnitude of the perturbation, T is the absolute temperature and Jx is the heat flux in the x direction. The microscopic heat flux is defined by Irving and Kirkwood61 to be
$${\mathbf{J}} = \mathop {\sum }\limits_i {\mathbf{J}}_i = \mathop {\sum }\limits_i \frac{1}{{2V}}\left[ {\left\{ {m_i{\mathbf{v}}_i^2{\mathbf{I}} + \mathop {\sum }\limits_j \phi _{ij}{\mathbf{I}}} \right\}{\mathbf{v}}_i - \mathop {\sum }\limits_j \left( {{\mathbf{F}}_{ij} \cdot {\mathbf{v}}_i} \right){\mathbf{r}}_{ij}} \right]$$
where Ji is the atomic contribution of atom i to the heat flux, V is the volume of the GB model (supercell), mi and vi are the mass and velocity of atom i, respectively, ϕij is the interatomic potential energy between atoms i and j, I is a unit tensor of second rank and Fij is the force exerted by atom j on atom i. By substituting Eq. 6 into Eq. 5, atomic thermal conductivities κi, which are the atomic contributions to overall lattice thermal conductivity, can be calculated according to
$$\kappa_{\mathrm{lattice}} = \sum_i \kappa_i = \sum_i \frac{1}{F_{\mathrm{ext}}T}\lim_{t \to \infty} \langle J_{i,x}\rangle_t$$
where Ji, x is the contribution of atom i to the heat flux in the x direction. As seen in Eq. 6, atomic thermal conductivities are proportional to the inverse of the supercell volume, and thus must be normalised by multiplying the supercell volume for comparison between GB models. In addition, because the intensities in the thermal conductivity map in Figs. 4 and 5 also depend on the number of atoms in the depth direction, Gaussian-smeared atomic thermal conductivities projected onto the two-dimensional planes were divided by the cell depth. This procedure for calculating thermal conductivity is the same as reported in our previous work on STGBs25: For each GB orientation, models were constructed with three different half-crystal widths (distances between GB planes of as close to 4, 5 and 6 nm as feasible for that particular misorientation) by altering the number of bulk layers. MD simulations were then performed in the NPT ensemble for 100 ps with a timestep of 1 fs for each model to determine its equilibrium cell dimensions at 300 K. Next, an NVT ensemble was applied for 100 ps with temperature scaling, followed by 300 ps using a Nosé-Hoover thermostat, to ensure thermal equilibrium had been reached. Perturbed MD simulations were then performed on the equilibrated GB models for 1.1 ns and the average heat flux of the last 1.0 ns used to calculate the thermal conductivity. The first 0.1 ns of data was discarded because this was the time needed for the system to transition from thermal equilibrium to a steady state under the perturbation. For each model, perturbed MD simulations were performed with at least four different magnitudes of the perturbation (after confirming the response was within the linear regime) and the average thermal conductivity calculated. The effective thermal conductivity for a width of exactly 5 nm was then extracted from a linear regression fit to these averaged thermal conductivities. Atomic thermal conductivities, plotted in Figs. 4 and 5, were extracted from the GB models with half-crystal widths of about 5 nm. Further details on the perturbed MD method are also available elsewhere60,62,63,64.
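The post-processing behind Eq. 5 reduces to a time average of the heat flux under a known perturbation; the sketch below illustrates that step only (the perturbed-MD runs themselves were done with custom code in LAMMPS). Array names and unit handling are assumptions made for illustration.

```python
# Minimal post-processing sketch for the perturbed MD analysis (illustrative only).
import numpy as np

def kappa_from_run(jx_series, f_ext, temperature, dt_fs=1.0, discard_ps=100.0):
    """Average J_x over the run after discarding the first 0.1 ns (Eq. 5)."""
    n_discard = int(discard_ps * 1000.0 / dt_fs)
    return np.mean(jx_series[n_discard:]) / (f_ext * temperature)

# average over several perturbation magnitudes inside the linear-response regime;
# jx_runs and f_ext_values are assumed to be collected from the MD output
kappas = [kappa_from_run(jx, f, 300.0) for jx, f in zip(jx_runs, f_ext_values)]
kappa_eff = float(np.mean(kappas))
```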
LAEs identified for each GB model using the complete-linkage algorithm were grouped and classified using Ward's minimum variance method of hierarchical clustering42 as implemented in SciPy59, again using the dissimilarity metric d in Eq. 2, as it is equivalent to the Euclidean distance. We also tested several other methods, such as the average method, but Ward's method was found to perform the most reliably and consistently. With this method, LAEs in the various GB structures were grouped into six different categories within three supergroups based on their level of lattice distortion.
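A corresponding SciPy sketch for this second, cross-GB clustering step might look as follows, with `all_unique_laes` assumed to stack the representative LAE vectors from every GB model and the cut into six groups mirroring the text.

```python
# Minimal sketch: Ward's hierarchical clustering of the pooled LAEs into six groups.
from scipy.cluster.hierarchy import linkage, fcluster

Zw = linkage(all_unique_laes, method="ward")           # Euclidean distances
lae_groups = fcluster(Zw, t=6, criterion="maxclust")   # six LAE categories
```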
The prediction model for thermal conductivity was constructed using the number of LAEs per unit area of a GB in each LAE group m, Nm, as input variables. Values of Nm were weighted by a Gaussian function, G, of the distance, x, of the LAE's atom from the GB plane according to
$$N_m = \frac{1}{A}\sum_i^n G(x) = \frac{1}{A}\sum_i^n \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{x^2}{2\sigma^2}\right)$$
where A is the GB cross-sectional area, n is the number of atoms in the LAE group, i is the index of an atom in the LAE group, and σ is the standard deviation of the Gaussian (set to 1.5 Å). Nm corresponds to the number density of atoms in the vicinity of the GB plane decomposed into the contribution of each LAE group.
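For one LAE group, this weighting can be written in a few lines; here `x` is assumed to hold the distances of that group's atoms from the GB plane and `area` the cross-sectional area A.

```python
# Minimal sketch of the Gaussian-weighted LAE density N_m for one group.
import numpy as np

def n_m(x, area, sigma=1.5):
    weights = np.exp(-x**2 / (2.0 * sigma**2)) / np.sqrt(2.0 * np.pi * sigma**2)
    return weights.sum() / area
```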
Fitting was performed using regularised multiple linear regression (Ridge regression) as implemented in scikit-learn65. Ridge regression shrinks the regression coefficients, β, to prevent overfitting to the training data, by penalizing their size according to
$$\boldsymbol{\beta} = \mathop{\mathrm{argmin}}\limits_{\boldsymbol{\beta}} \left\{ \sum_i^t \left( y_i - \beta_0 - \sum_j^p x_{ij}\beta_j \right)^2 + \lambda \sum_j^p \beta_j^2 \right\}$$
where t is the number of training data, yi is the ith observed value, p is the number of input variables, xij is the jth component of the input variable for the ith training datum, β0 and βj are the intercept and the jth regression coefficients, respectively, and λ is the regularization parameter66. Because thermal conductivity should be zero when all Nm are zero, i.e., there are no atoms in the vicinity of the GB plane, in this study the intercept β0 was set to zero. For training data, 70 of the symmetric GB models were randomly selected with the proviso that each class of GB (namely, the six types of tilt GBs grouped by rotation axis, low-angle tilt GBs (open or dense), high-angle tilt GBs, twist GBs and high-pressure GBs) was represented at least once. The model was trained using λ = 3 × 10−4, determined through cross-validation. The remaining 18 symmetric GBs and all four asymmetric tilt GBs were used as test data to estimate the predictive performance. Input values Nm were not standardised because this was found to reduce the predictive performance of the model.
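With scikit-learn, the fitting step described above reduces to a Ridge model with the intercept fixed to zero; the train/test matrices below are assumed to have one row per GB model and one column per LAE group.

```python
# Minimal sketch of the regularised regression step (lambda = alpha = 3e-4).
from sklearn.linear_model import Ridge

model = Ridge(alpha=3e-4, fit_intercept=False)   # beta_0 fixed to zero
model.fit(X_train, y_train)                      # X: (n_GBs, n_LAE_groups)
print("R^2 on held-out GBs:", model.score(X_test, y_test))
print("Coefficients per LAE group:", model.coef_)
```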
GB models used in this study are available as Supplementary Data 1. Effective thermal conductivities of all the GB models used in multiple linear regression are summarised in Supplementary Tables 1 to 9. All other data that support the findings of this study are available from the corresponding author S.F. upon request.
Code availability
Details of computer codes used in this study are provided in Supplementary Methods.
Biswas, K. et al. High-performance bulk thermoelectrics with all-scale hierarchical architectures. Nature 489, 414–418 (2012).
He, J. & Tritt, T. M. Advances in thermoelectric materials research: looking back and moving forward. Science 357, eaak9997 (2017).
Kim, S. I. et al. Dense dislocation arrays embedded in grain boundaries for high-performance bulk thermoelectrics. Science 348, 109–114 (2015).
Padture, N. P. Advanced structural ceramics in aerospace propulsion. Nat. Mater. 15, 804–809 (2016).
Yang, H. S., Bai, G. R., Thompson, L. J. & Eastman, J. A. Interfacial thermal resistance in nanocrystalline yttria-stabilized zirconia. Acta Mater. 50, 2309–2317 (2002).
Cahill, D. G. et al. Nanoscale thermal transport. II. 2003-2012. Appl. Phys. Rev. 1, 011305 (2014).
Li, S. et al. High thermal conductivity in cubic boron arsenide crystals. Science 361, 579–581 (2018).
Cahill, D. G. et al. Nanoscale thermal transport. J. Appl. Phys. 93, 793–818 (2003).
Losego, M. D., Grady, M. E., Sottos, N. R., Cahill, D. G. & Braun, P. V. Effects of chemical bonding on heat transport across interfaces. Nat. Mater. 11, 502–506 (2012).
Poudel, B. et al. High-thermoelectric performance of nanostructured bismuth antimony telluride bulk alloys. Science 320, 634–638 (2008).
Ibáñez, M. et al. High-performance thermoelectric nanocomposites from nanocrystal building blocks. Nat. Commun. 7, 1–7 (2016).
Wang, Z., Alaniz, J. E., Jang, W., Garay, J. E. & Dames, C. Thermal conductivity of nanocrystalline silicon: Importance of grain size and frequency-dependent mean free paths. Nano Lett. 11, 2206–2213 (2011).
Nakamura, Y. et al. Anomalous reduction of thermal conductivity in coherent nanocrystal architecture for silicon thermoelectric material. Nano Energy 12, 845–851 (2015).
Ju, S. & Liang, X. Thermal conductivity of nanocrystalline silicon by direct molecular dynamics simulation. J. Appl. Phys. 112, 064305 (2012).
Dong, H., Wen, B. & Melnik, R. Relative importance of grain boundaries and size effects in thermal conductivity of nanocrystalline materials. Sci. Rep. 4, 7037 (2014).
Aketo, D., Shiga, T. & Shiomi, J. Scaling laws of cumulative thermal conductivity for short and long phonon mean free paths. Appl. Phys. Lett. 105, 131901 (2014).
Sood, A. et al. Direct visualization of thermal conductivity suppression due to enhanced phonon scattering near individual grain boundaries. Nano Lett. 18, 3466–3472 (2018).
Tai, K., Lawrence, A., Harmer, M. P. & Dillon, S. J. Misorientation dependence of Al2O3 grain boundary thermal resistance. Appl. Phys. Lett. 102, 034101 (2013).
Xu, D. et al. Thermal boundary resistance correlated with strain energy in individual Si film-wafer twist boundaries. Mater. Today Phys. 6, 53–59 (2018).
Schelling, P. K., Phillpot, S. R. & Keblinski, P. Kapitza conductance and phonon scattering at grain boundaries by simulation. J. Appl. Phys. 95, 6082–6091 (2004).
Watanabe, T., Ni, B., Phillpot, S. R., Schelling, P. K. & Keblinski, P. Thermal conductance across grain boundaries in diamond from molecular dynamics simulation. J. Appl. Phys. 102, 063503 (2007).
Bagri, A., Kim, S. P., Ruoff, R. S. & Shenoy, V. B. Thermal transport across twin grain boundaries in polycrystalline graphene from nonequilibrium molecular dynamics simulations. Nano Lett. 11, 3917–3921 (2011).
Chernatynskiy, A., Bai, X. M. & Gan, J. Systematic investigation of the misorientation- and temperature-dependent Kapitza resistance in CeO2. Int. J. Heat Mass Transf. 99, 461–469 (2016).
Yeandel, S. R., Molinari, M. & Parker, S. C. The impact of tilt grain boundaries on the thermal transport in perovskite SrTiO3 layered nanostructures. A computational study. Nanoscale 10, 15010–15022 (2018).
Fujii, S., Yokoi, T. & Yoshiya, M. Atomistic mechanisms of thermal transport across symmetric tilt grain boundaries in MgO. Acta Mater. 171, 154–162 (2019).
Wolf, D. Structure-energy correlation for grain boundaries in F.C.C. metals–III. Symmetrical tilt boundaries. Acta Metall. Mater. 38, 781–790 (1990).
Wolf, D. Structure-energy correlation for grain boundaries in f.c.c. metals–IV. Asymmetrical twist (general) boundaries. Acta Metall. Mater. 38, 791–798 (1990).
Homer, E. R., Patala, S. & Priedeman, J. L. Grain boundary plane orientation fundamental zones and structure-property relationships. Sci. Rep. 5, 1–13 (2015).
Priester, L. Grain Boundaries: From Theory To Engineering, Springer Series In Materials Science, Vol. 172 (Springer, 2013).
Cantwell, P. R. et al. Grain boundary complexions. Acta Mater. 62, 1–48 (2014).
Schoenholz, S. S., Cubuk, E. D., Sussman, D. M., Kaxiras, E. & Liu, A. J. A structural approach to relaxation in glassy liquids. Nat. Phys. 12, 469–472 (2016).
Ramprasad, R., Batra, R., Pilania, G., Mannodi-Kanakkithodi, A. & Kim, C. Machine learning in materials informatics: recent applications and prospects. npj Comput. Mater. 3, 54 (2017).
Patala, S. Understanding grain boundaries – The role of crystallography, structural descriptors and machine learning. Comput. Mater. Sci. 162, 281–294 (2019).
Konstantinou, K., Mocanu, F. C., Lee, T. H. & Elliott, S. R. Revealing the intrinsic nature of the mid-gap defects in amorphous Ge2Sb2Te5. Nat. Commun. 10, 3065 (2019).
Jäger, M. O. J., Morooka, E. V., Federici Canova, F., Himanen, L. & Foster, A. S. Machine learning hydrogen adsorption on nanoclusters through structural descriptors. npj Comput. Mater. 4, 37 (2018).
Sharp, T. A. et al. Machine learning determination of atomic dynamics at grain boundaries. Proc. Natl Acad. Sci. USA 115, 10943–10947 (2018).
Tomoyuki, T. et al. Fast and scalable prediction of local energy at grain boundaries: machine-learning based modeling of first-principles calculations. Model. Simul. Mater. Sci. Eng. 25, 75003 (2017).
Rosenbrock, C. W., Homer, E. R., Csányi, G. & Hart, G. L. W. Discovering the building blocks of atomic systems using machine learning: application to grain boundaries. npj Comput. Mater. 3, 1–7 (2017).
Bartók, A. P., Kondor, R. & Csányi, G. On representing chemical environments. Phys. Rev. B 87, 1–16 (2013).
Bartók, A. P., Kondor, R. & Csányi, G. Erratum: on representing chemical environments [Phys. Rev. B 87, 184115 (2013)]. Phys. Rev. B 96, 9–10 (2017).
Priedeman, J. L., Rosenbrock, C. W., Johnson, O. K. & Homer, E. R. Quantifying and connecting atomic and crystallographic grain boundary structure using local environment representation and dimensionality reduction techniques. Acta Mater. 161, 431–443 (2018).
Ward, J. H. Hierarchical grouping to optimize an objective function. J. Am. Stat. Assoc. 58, 236–244 (1963).
Stukowski, A., Bulatov, V. V. & Arsenlis, A. Automated identification and indexing of dislocations in crystal interfaces. Model. Simul. Mater. Sci. Eng. 20, 085007 (2012).
Ren, G. K. et al. Contribution of point defects and nano-grains to thermal transport behaviours of oxide-based thermoelectrics. npj Comput. Mater. 2, 1–9 (2016).
Wilson, R. B. & Cahill, D. G. Limits to Fourier theory in high thermal conductivity single crystals. Appl. Phys. Lett. 107, 203112 (2015).
Kiyohara, S., Oda, H., Miyata, T. & Mizoguchi, T. Prediction of interface structures and energies via virtual screening. Sci. Adv. 2, e1600746 (2016).
Yonezu, T., Tamura, T., Takeuchi, I. & Karasuyama, M. Knowledge-transfer-based cost-effective search for interface structures: a case study on fcc-Al [110] tilt grain boundary. Phys. Rev. Mater. 2, 1–9 (2018).
Zhu, Q., Samanta, A., Li, B., Rudd, R. E. & Frolov, T. Predicting phase behavior of grain boundaries with evolutionary search and machine learning. Nat. Commun. 9, 467 (2018).
Spiteri, D., Anaya, J. & Kuball, M. The effects of grain size and grain boundary characteristics on the thermal conductivity of nanocrystalline diamond. J. Appl. Phys. 119, 085102 (2016).
Yokoi, T. & Yoshiya, M. Atomistic simulations of grain boundary transformation under high pressures in MgO. Phys. B 532, 2–8 (2018).
Plimpton, S. Fast Parallel Algorithms for Short-range molecular dynamics. J. Comput. Phys. 117, 1–19 (1995).
Landuzzi, F. et al. Molecular dynamics of ionic self-diffusion at an MgO grain boundary. J. Mater. Sci. 50, 2502–2509 (2015).
Gale, J. D. GULP: a computer program for the symmetry-adapted simulation of solids. J. Chem. Soc. Faraday Trans. 93, 629–637 (1997).
Yan, Y. et al. Impurity-induced structural transformation of a MgO grain boundary. Phys. Rev. Lett. 81, 3675–3678 (1998).
Wang, Z., Saito, M., McKenna, K. P. & Ikuhara, Y. Polymorphism of dislocation core structures at the atomic scale. Nat. Commun. 5, 3239 (2014).
Himanen, L. et al. DScribe: library of descriptors for machine learning in materials science. Comput. Phys. Commun. 247, 106949 (2019).
De, S., Bartók, A. P., Csányi, G. & Ceriotti, M. Comparing molecules and solids across structural and alchemical space. Phys. Chem. Chem. Phys. 18, 13754–13769 (2016).
Bartók, A. P., Payne, M. C., Kondor, R. & Csányi, G. Gaussian approximation potentials: the accuracy of quantum mechanics, without the electrons. Phys. Rev. Lett. 104, 1–4 (2010).
Jones, E., et al. SciPy: Open Source Scientific Tools for Python, http://www.scipy.org/ (2001).
Yoshiya, M., Harada, A., Takeuchi, M., Matsunaga, K. & Matsubara, H. Perturbed molecular dynamics for calculating thermal conductivity of zirconia. Mol. Simul. 30, 953–961 (2004).
Irving, J. H. & Kirkwood, J. G. The statistical mechanical theory of transport processes. IV. The equations of hydrodynamics. J. Chem. Phys. 18, 817 (1950).
Fujii, S., Yoshiya, M. & Fisher, C. A. J. Quantifying Anharmonic Vibrations in Thermoelectric Layered Cobaltites and Their Role in Suppressing Thermal Conductivity. Sci. Rep. 8, 11152 (2018).
Fujii, S. et al. Impact of dynamic interlayer interactions on thermal conductivity of Ca3Co4O9. J. Electron. Mater. 43, 1905–1915 (2014).
Fujii, S. & Yoshiya, M. Manipulating Thermal Conductivity by Interfacial Modification of Misfit-Layered Cobaltites Ca3Co4O9. J. Electron. Mater. 45, 1217–1226 (2016).
Pedregosa, F. et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011).
Hastie, T., Tibshirani, R. & Friedman, J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition (Springer, 2017).
This work was supported by "Materials Research by Information Integration" Initiative (MI2I) project of the Support Program for Starting Up Innovation Hub from the Japan Science and Technology Agency (JST) and Grant-in-Aid for Scientific Research on Innovative Areas 'New Materials Science on Nanoscale Structures and Functions of Crystal Defect Cores' from the Japan Society for the Promotion of Science (JSPS) [grant number 19H05786].
Nanostructures Research Laboratory, Japan Fine Ceramics Center, 2-4-1 Mutsuno, Atsuta, Nagoya, 456-8587, Japan
Susumu Fujii, Craig A. J. Fisher, Hiroki Moriwake & Masato Yoshiya
Center for Materials Research by Information Integration, National Institute for Materials Science, 1-2-1 Sengen, Tsukuba, Ibaraki, 305-0047, Japan
Susumu Fujii & Hiroki Moriwake
Department of Adaptive Machine Systems, Osaka University, 2-1 Yamadaoka, Suita, Osaka, 565-0871, Japan
Susumu Fujii, Tatsuya Yokoi & Masato Yoshiya
Department of Materials Physics, Nagoya University, Furo-chou, Chikusa, Nagoya, 464-8603, Japan
Tatsuya Yokoi
Division of Materials and Manufacturing Science, Osaka University, 2-1 Yamadaoka, Suita, Osaka, 565-0871, Japan
Masato Yoshiya
Susumu Fujii
Craig A. J. Fisher
Hiroki Moriwake
S.F. conceived the research idea, carried out theoretical calculations, performed machine learning and wrote the paper. C.A.J.F and T.Y. contributed to writing of the paper with oversight by M.Y. T.Y. constructed grain boundary models. M.Y., C.A.J.F. and H.M. advised on the machine learning method and interpretation of results. All authors discussed the results, and read and commented on the paper.
Correspondence to Susumu Fujii or Masato Yoshiya.
Peer review information Nature Communications thanks Srikanth Patala and Keith McKenna for their contribution to the peer review of this work. Peer reviewer reports are available.
Description of Additional Supplementary Files
Supplementary Data 1
Fujii, S., Yokoi, T., Fisher, C.A.J. et al. Quantitative prediction of grain boundary thermal conductivities from local atomic environments. Nat Commun 11, 1854 (2020). https://doi.org/10.1038/s41467-020-15619-9
Tagged: row equivalent
If the Augmented Matrix is Row-Equivalent to the Identity Matrix, is the System Consistent?
Consider the following system of linear equations:
\begin{align*}
ax_1+bx_2 &=c\\
dx_1+ex_2 &=f\\
gx_1+hx_2 &=i.
\end{align*}
(a) Write down the augmented matrix.
(b) Suppose that the augmented matrix is row equivalent to the identity matrix. Is the system consistent? Justify your answer.
If Two Matrices Have the Same Rank, Are They Row-Equivalent?
If $A, B$ have the same rank, can we conclude that they are row-equivalent?
If so, then prove it. If not, then provide a counterexample.
Find a Row-Equivalent Matrix which is in Reduced Row Echelon Form and Determine the Rank
For each of the following matrices, find a row-equivalent matrix which is in reduced row echelon form. Then determine the rank of each matrix.
(a) $A = \begin{bmatrix} 1 & 3 \\ -2 & 2 \end{bmatrix}$.
(b) $B = \begin{bmatrix} 2 & 6 & -2 \\ 3 & -2 & 8 \end{bmatrix}$.
(c) $C = \begin{bmatrix} 2 & -2 & 4 \\ 4 & 1 & -2 \\ 6 & -1 & 2 \end{bmatrix}$.
(d) $D = \begin{bmatrix} -2 \\ 3 \\ 1 \end{bmatrix}$.
(e) $E = \begin{bmatrix} -2 & 3 & 1 \end{bmatrix}$.
Row Equivalence of Matrices is Transitive
If $A, B, C$ are three $m \times n$ matrices such that $A$ is row-equivalent to $B$ and $B$ is row-equivalent to $C$, then can we conclude that $A$ is row-equivalent to $C$?
Row Equivalent Matrix, Bases for the Null Space, Range, and Row Space of a Matrix
Let \[A=\begin{bmatrix}
(a) Find a matrix $B$ in reduced row echelon form such that $B$ is row equivalent to the matrix $A$.
(b) Find a basis for the null space of $A$.
(c) Find a basis for the range of $A$ that consists of columns of $A$. For each columns, $A_j$ of $A$ that does not appear in the basis, express $A_j$ as a linear combination of the basis vectors.
(d) Exhibit a basis for the row space of $A$.
Condition that Two Matrices are Row Equivalent
We say that two $m\times n$ matrices are row equivalent if one can be obtained from the other by a sequence of elementary row operations.
Let $A$ and $I$ be $2\times 2$ matrices defined as follows.
\[A=\begin{bmatrix}
1 & b\\
c & d
\end{bmatrix}, \qquad I=\begin{bmatrix}
1 & 0\\
0 & 1
\end{bmatrix}.\] Prove that the matrix $A$ is row equivalent to the matrix $I$ if $d-cb \neq 0$.
The role of competitiveness in the Prisoner's Dilemma
Marco A Javarone1,2 &
Antonio E Atzeni3
Competitiveness is a relevant social behavior that plays a fundamental role in several contexts, from economics to sports. We analyze this social behavior in the domain of evolutionary game theory, using the Prisoner's Dilemma as reference.
In particular, we investigate whether, in an agent population, it is possible to identify a relation between competitiveness and cooperation. The agent population is embedded both in continuous and in discrete spaces, and agents play the Prisoner's Dilemma with their neighbors. In continuous spaces, each agent computes its neighbors by a Euclidean distance-based rule, whereas in discrete spaces agents have as neighbors those directly connected with them. We map competitiveness to the number of opponents each agent wants to face; therefore, this value is used to define the set of neighbors. Notably, in continuous spaces, competitive agents have a large interaction radius used to compute their neighbors. Instead, since discrete spaces are implemented as directed networks, competitiveness corresponds to the out-degree of each agent, i.e., to the number of arrows starting from the considered agent and directed to the agents it wants to face.
Results and conclusions
Then, we study the evolution of the system with the aim of investigating whether, and under which conditions, cooperation among agents emerges. As a result, numerical simulations of the proposed model show that competitiveness strongly increases cooperation. Furthermore, we found other relevant phenomena, such as the emergence of hubs in directed networks.
In recent years, social and economic phenomena have attracted the interest of scientists belonging to the hard sciences, such as mathematics, physics and computer science. As a result, the interdisciplinary fields of social dynamics [1, 2] and econophysics [3] have rapidly emerged. For instance, several analytical and computational approaches have been developed for studying behaviors such as homophily [4], conformity [5–8], and rationality [9, 10]. Furthermore, many social and economic phenomena can be studied in the context of Evolutionary Game Theory [11–13], which represents the attempt to describe the evolution of populations by Game Theory, using famous models like the Prisoner's Dilemma [14, 15] (PD hereinafter). Since the PD allows the phenomenon of cooperation [16–18] to be analyzed, it is possible to study the evolutionary dynamics among agents whose interactions are based on this game. In doing so, we can evaluate whether, and under which conditions, cooperation emerges. It is worth highlighting that simple games like the PD, implemented considering different social behaviors, contexts (see [19]), or topologies for the agents' interactions (e.g., [20–24]), allow a wide variety of topics to be investigated, such as criminality [25], biological systems [26], imitation phenomena [27], and further social psychology aspects such as conformity [28, 29]. Here, we consider an important social character, i.e., competitiveness, which strongly affects dynamics in animal herds and among individuals [4]. In particular, in this study, we aim to investigate whether there is a relation between competitiveness and cooperation. To this end, we implement a population whose agents, provided with a parameter that represents their degree of competitiveness (see [30]), play the PD. The relevance of this work lies in the fact that, both in herds and in human communities, many contexts are defined as competitive, e.g., stock markets, athletic challenges, and job markets. Numerical simulations of the proposed model allowed us to analyze parameters such as the average out-degree over time and to define the TS-diagram; the latter constitutes a relevant tool to assess whether, and to what extent, cooperation emerges among agents. As a result, we found that competitiveness strongly affects these dynamics and, in particular, increases the cooperation among agents. The remainder of the paper is organized as follows: "Model" introduces the model for studying the PD in continuous spaces and in discrete spaces. "Results" shows results of numerical simulations on varying the initial conditions. Finally, "Discussion and conclusion" ends the paper.
In the proposed model [30], we study a population, embedded in a bidimensional continuous space and in a discrete space, whose agents play the PD. The continuous space is represented by a square of side \(L=1\), with agents distributed inside it. Instead, the discrete space is represented by a directed network of agents. In both cases, agents play the PD with their neighbors: (a) in the continuous space, neighbors are computed by a Euclidean distance-based rule [31], whereas (b) in the discrete space, each agent has as neighbors those connected by an arrow (starting from the considered agent). It is worth emphasizing that, since we are dealing with a directed network, for each pair of agents, say A and B, there is a reciprocal interaction only if there are two arrows: one from A to B, and one from B to A. Therefore, if there is only one arrow between A and B, e.g., from A to B, agent B is a neighbor of A, but A is not considered a neighbor of B. These relations become clear by considering that the related adjacency matrix, i.e., the matrix containing all the information about the connections, is not symmetric, unlike for undirected networks (e.g., friendship networks and collaborator networks [7]). In principle, the PD is a very simple game where agents may behave as cooperators or as defectors; then, in accordance with a payoff matrix, they increase or decrease their payoff when they face each other. In particular, depending on their behavior and on that of their opponents, agents compute their gain at each interaction. Moreover, it is worth noting that, in this context, to behave as a cooperator means to adopt a cooperation strategy and, in the same way, to behave as a defector means to play with a defection strategy. The way agents update their payoff, in accordance with their behavior (i.e., strategy), is described by the following payoff matrix:
$$\begin{array}{cc} & \begin{array}{cc} C & D \end{array} \\ \begin{array}{c} C \\ D \end{array} & \left(\begin{array}{cc} 1 & S \\ T & 0 \end{array}\right) \end{array}$$
The set of strategies is \(\Sigma = \left\{ C,D\right\}, \) where C stands for 'Cooperator' and D for 'Defector'. In matrix 1, T represents the Temptation, i.e., the payoff that an agent gains if it defects while its opponent cooperates, while S is the Sucker's payoff, i.e., the gain achieved by a cooperator while its opponent defects. In the PD game, the values of T and S lie in the following ranges: \(1 \le T \le 2\) and \(-1 \le S \le 0.\) As discussed before, the TS-plane is a relevant tool for studying the system because, as we can see in matrix 1, the PD can be played with different values of S and T, which have different meanings. For instance, a low value of T entails that defectors have a small increase of their payoff when they play against cooperators, whereas a high value of S entails small losses for cooperators that play against defectors. Therefore, it is interesting to investigate whether cooperative behavior emerges in the agent population on varying the values of the described parameters (i.e., T and S). In general, the evolution of a population can be simulated in two different ways: synchronous dynamics or asynchronous dynamics. The former entails that at each time step all agents interact (i.e., they play the PD with their neighbors), whereas the latter entails that at each time step only one agent is considered, i.e., it computes its neighbors and faces them playing the PD. Notably, in this work, simulations have been implemented with asynchronous dynamics. To summarize, the main steps of the proposed model are:
A randomly chosen agent, say the jth agent, computes the set of its neighbors in accordance with the interaction radius r (or with the network structure in the discrete space);
The jth agent faces its neighbors (note that each single challenge involves only two agents at a time);
All agents, playing at this step (i.e., the jth agents and its neighbors), compute their new payoff;
The jth agent updates its strategy according to a revision rule.
In doing so, each agent involved in the game obtains a payoff in accordance with its strategy (i.e., cooperation or defection), considering the payoff matrix 1. Now, let \(\sigma _j(t)\) be a vector giving the strategy profile of the jth agent at time t with \(C=(1,0)\) and \(D=(0,1),\) and let M be the payoff matrix discussed above. The payoff collected by the jth agent, at time t, can be computed as
$$\begin{aligned} \Pi _j(t)=\sum _{i\in N_j} \sigma _j(t)M\sigma _i^\top (t) \end{aligned}.$$
In the proposed model, we adopted the strategy revision rule called 'imitation of the best': the jth agent compares its payoff (\(P_j\)) with those of its neighbors, and it adopts the strategy of the neighbor having the highest payoff if that payoff is greater than \(P_j.\) As a consequence, agents can vary their strategy several times during the evolution of the system. Since some parameters of the proposed model depend on the considered domain (i.e., continuous or discrete), we illustrate both cases in more detail.
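Before detailing the two domains, the following minimal sketch (purely illustrative, not the code used for the simulations) shows one asynchronous update step: a randomly chosen agent plays matrix 1 against its neighbors, payoffs are accumulated, and the agent then imitates its best-performing neighbor. The neighbor-selection function is left abstract because it is precisely what differs between the continuous and the discrete domains.

```python
# Minimal sketch of one asynchronous step (illustrative, not the authors' code).
import random

def play_round(strategies, payoffs, neighbors_of, T, S):
    # payoff matrix 1: key = (own strategy, opponent's strategy)
    payoff = {("C", "C"): 1.0, ("C", "D"): S, ("D", "C"): T, ("D", "D"): 0.0}
    j = random.randrange(len(strategies))      # randomly chosen agent
    neigh = list(neighbors_of(j))              # domain-dependent neighbor rule
    for i in neigh:                            # each challenge involves two agents
        payoffs[j] += payoff[(strategies[j], strategies[i])]
        payoffs[i] += payoff[(strategies[i], strategies[j])]
    # 'imitation of the best' strategy revision rule
    if neigh:
        best = max(neigh, key=lambda i: payoffs[i])
        if payoffs[best] > payoffs[j]:
            strategies[j] = strategies[best]
    return j
```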
Continuous space
As shown in [32], using low values of T and high values of S, cooperation among agents emerges only under particular conditions, i.e., when agents randomly move over time. It is worth highlighting that in [32] all agents have the same radius for computing the set of their neighbors. Furthermore, this radius depends on the average number of opponents agents face. Here, we consider the same geometrical framework (i.e., that defined in [32]) to implement the proposed model on continuous spaces, with two main differences: (1) agents are fixed (i.e., they cannot move) and (2) agents can vary their radius. Notably, agents have an interaction radius whose length depends on the gained payoff: as their payoff increases/decreases, their radius increases/decreases. Hence, agents with a high payoff become more competitive and, as a result, they face a higher number of opponents than agents with a small payoff. At time \(t=0,\) all agents have the same radius, computed according to the average number of opponents they can face (if selected). In particular, the radius \(r(t=0)\) is computed as \(r(0) = \sqrt{\bar{k(0)}/(\pi N)}.\) Then, considering that each radius varies in accordance with the agent's payoff, and that agents face a number of opponents in the range \([1,N-1],\) the radius is computed as \(r= \alpha r_0,\) where \(\sqrt{1/\bar{k}} \le \alpha \le \sqrt{N/\bar{k}}.\) Thus, at \(t = 0,\) the value of \(\alpha \) is \(\alpha _0 = 1.\) In general, after n time steps, each agent plays an average number of times equal to \(\bar{n} = n/N.\) Since the best agents (i.e., those with a high payoff) should reach the maximum radius in \(\bar{n}\) steps, every time agents play, their value of \(\alpha \) changes by \(\delta \alpha = (\alpha _{\rm max}-\alpha _0)/\bar{n}.\) Hence, the radius is modified by \(\pm \delta r,\) where \(\delta r = r_0 \delta \alpha,\) depending on whether the considered agent obtains a positive or a negative payoff.
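A compact sketch of these two continuous-space ingredients (again illustrative, with hypothetical variable names) is given below: neighbors are the agents within the interaction radius, and the radius grows or shrinks by δr after a round depending on the sign of the obtained payoff, within the bounds implied by α.

```python
# Continuous-space sketch: distance-based neighbors and payoff-driven radius update.
import numpy as np

def neighbors_within_radius(positions, j, radius):
    """Agents within the interaction radius of agent j (Euclidean distance rule)."""
    d = np.linalg.norm(positions - positions[j], axis=1)
    return np.where((d > 0) & (d <= radius[j]))[0].tolist()

def update_radius(radius, j, round_payoff, delta_r, r_min, r_max):
    if round_payoff > 0:
        radius[j] = min(radius[j] + delta_r, r_max)   # more competitive
    elif round_payoff < 0:
        radius[j] = max(radius[j] - delta_r, r_min)   # less competitive
```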
Discrete space
The discrete space is implemented by a directed network, i.e., a network whose connections can be represented by arrows. In the proposed model, an arrow from one agent to another identifies the challenger agent (i.e., the one that faces someone else) and the faced agent (i.e., the agent identified as a neighbor of the challenger). In directed networks, the definition of neighbors is not as immediate as for undirected networks, where connections can be represented by simple lines. Notably, arrows represent links (or edges) and their direction represents the meaning of the relation. For instance, an arrow starting from node A, and ending at node B, codifies a relation from A to B, and not vice versa. Thus, neighbors of the jth node are those nodes connected to it by arrows starting from the jth node itself. In doing so, an arrow starts from the challenger and ends on the faced agent. To analyze the structure of these networks using the degree distribution, we have to consider both the "in-degree" distribution and the "out-degree" distribution. The former represents the distribution of links ending in nodes, whereas the latter represents that of links starting from nodes. Then, competitiveness can be mapped to the out-degree of each node. As for the continuous space, at \(t=0\) all agents begin to play in the same conditions, i.e., all nodes have the same out-degree and the same in-degree. On the other hand, as the population evolves (i.e., agents play the PD over time), winning agents increase their out-degree (randomly selecting new opponents) and losing agents do the opposite, i.e., they reduce their out-degree (randomly selecting nodes to remove from their neighborhood). As before, the increment/reduction of the out-degree has the constraint that each agent cannot play with more than \(N-1\) agents nor with fewer than 1 agent. Furthermore, the increase and decrease are unitary, i.e., \(k_{\rm out}\) can vary at each time step by \(\pm 1.\) Finally, we recall that in both domains we adopted the 'imitation of the best' strategy revision rule, and in all simulations we consider an equal initial distribution of strategies, i.e., at the beginning \(50\%\) of the population is composed of cooperators and the remaining \(50\%\) of defectors.
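An illustrative sketch of the out-degree update on the directed network follows; keeping each agent's out-neighbors in a Python set is an assumption made for brevity, not a detail of the original implementation.

```python
# Discrete-space sketch: adjusting an agent's out-degree on a directed network.
import random

def update_out_neighbors(out_neighbors, j, round_payoff, n_agents):
    """out_neighbors[j] is the set of agents that agent j challenges."""
    others = set(range(n_agents)) - {j}
    if round_payoff > 0 and len(out_neighbors[j]) < n_agents - 1:
        new = random.choice(list(others - out_neighbors[j]))   # add one opponent
        out_neighbors[j].add(new)
    elif round_payoff < 0 and len(out_neighbors[j]) > 1:
        gone = random.choice(list(out_neighbors[j]))            # drop one opponent
        out_neighbors[j].remove(gone)
```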
We performed many numerical simulations to study the evolution of the system and, moreover, we highlight that each presented result has been obtained by averaging over 50 different simulation runs. In particular, we investigated the following cases:
Mean-field approximation
Continuous spaces
Discrete spaces
The first case represents a classical generalization of the studied system, as we introduce the trivial hypothesis that all agents interact with all the others at each time step. In terms of network theory, this scenario corresponds to a fully connected network; hence, complex interaction patterns are not considered, nor is competitiveness represented. Notably, competitiveness is mapped to the number of opponents each agent faces; therefore, in the event everyone faces everyone, competitiveness vanishes. Anyway, when studying complex systems, before focusing on complex scenarios it is often useful to analyze results coming from simple or trivial configurations. Then, once this first analysis was performed, we proceeded to analyze results related to the continuous space and to the discrete space.
Here, we consider a simple fully connected network structure to arrange agents. We observe that this kind of configuration can also be studied in a continuous domain, under the assumption that every agent is provided with an interaction radius long enough to include all the other agents in its social circle. Both implementations of the mean-field approximation are equivalent, as both produce the same effects on agents. As shown in Figure 1, when agents interact with the whole population at the same time and without considering particular characters such as competitiveness, the population always reaches the same final defection phase, i.e., all agents behave as defectors for every value of T and S. Only for very high values of S and low values of T does a small number of cooperative agents survive.
Mean-field approximation. Cooperation frequencies in the TS-plane achieved by a population arranged on a fully connected network. This result is in full accordance with the expected Nash equilibrium for the PD. Parameter S indicates the payoff obtained by cooperators that face defectors, that in turn gain a payoff equal to T (when facing cooperators)—see matrix 1. Colors indicate the averaged degree of cooperation achieved by the population. We recall that red indicates strong cooperation, while blue defection (i.e., no cooperation).
Anyway, it is possible that, if we observe the evolution for a time longer than \(10^4\) time steps, all agents of the population become defectors. In general, this first result confirms that, in the absence of particular behaviors (e.g., movements and social characters), the defection strategy dominates, in accordance with the expected Nash equilibrium. Hence, we can proceed to study the population by introducing competitiveness.
Simulations on the continuous space
We recall that the continuous space is represented by a bidimensional square of side \(L = 1.\) In this geometrical configuration, we spread \(N=100\) agents in two different ways: a uniform distribution and a regular lattice distribution. The former entails that the distribution is completely random in the space, whereas the latter entails that agents occupy specific positions, forming a bidimensional lattice. We consider two different conditions related to the initial average degree: \(\bar{k(0)} = 4\) and \(\bar{k(0)} = 8.\) Then, we provide agents with a radius \(r_0 = \sqrt{\bar{k(0)}/(\pi N)}.\) In doing so, at the beginning all agents have the same radius. Results related to the uniform distribution are shown in Figure 2, while those related to the population arranged on a regular lattice (embedded in the continuous space) are shown in Figure 3.
Continuous space: uniform distribution. Cooperation frequencies in the TS-plane. On the left, results achieved using agents provided with \(\bar{k(0)} = 4.\) On the right, results achieved using agents provided with \(\bar{k(0)} = 8.\) Parameter S indicates the payoff obtained by cooperators that face defectors, that in turn gain a payoff equal to T (when facing cooperators)—see matrix 1. Colors indicate the averaged degree of cooperation achieved by the population. We recall that red indicates strong cooperation, while blue defection (i.e., no cooperation).
Continuous space: lattice distribution. Cooperation frequencies in the TS-plane. On the left, results achieved using agents provided with \(\bar{k(0)} = 4\). On the right, results achieved using agents provided with \(\bar{k(0)} = 8.\) In both cases, \(\bar{k(0)}\) refers to \(\bar{k(0)_{\rm in}}\) and \(\bar{k(0)_{\rm out}}.\) Parameter S indicates the payoff obtained by cooperators that face defectors that in turn gain a payoff equal to T (when facing cooperators)—see matrix 1. Colors indicate the averaged degree of cooperation achieved by the population. We recall that red indicates strong cooperation, while blue defection (i.e., no cooperation).
Observation of the diagrams in Figures 2 and 3 shows that when agents have a higher initial average degree, the final density of cooperators decreases. Furthermore, it is relevant to emphasize that, by arranging agents in a regular lattice with 4 and 8 neighbors, when they increase/decrease their radius the variation in faced opponents equals their initial average degree, i.e., \(\pm 4\) and \(\pm 8,\) respectively.
Simulations on the discrete space
We recall that the discrete space is represented by a directed network. Notably, we implemented this scenario using a regular lattice as the initial configuration. In this case, we were able to consider a population of \(N = 1,000\) agents, comparing the cases of agents having a fixed out-degree and a variable out-degree. The former constitutes a scenario equivalent to that given by agents with a fixed radius in the continuous domain, whereas the latter corresponds to a variable radius (in the continuous domain). Furthermore, due to the increase of \(k_{\rm out}\) over time for competitive agents (and the decrease of the same parameter for non-competitive agents), we are dealing with adaptive networks (see [33]), i.e., networks whose structure varies over time. Results of the simulations are shown in Figure 4.
Discrete space. Cooperation frequencies in the TS-plane. On the left, results achieved using agents provided with constant out-degree, i.e., a scenario equivalent to 'constant radius' in the continuous domain. On the right, results achieved using agents provided with a variable out-degree (i.e., equivalent to variable radius)—see [30]. Parameter S indicates the payoff obtained by cooperators that face defectors that in turn gain a payoff equal to T (when facing cooperators)—see matrix 1. Colors indicate the averaged degree of cooperation achieved by the population. We recall that red indicates strong cooperation, while blue defection (i.e., no cooperation).
Then, we analyzed the degree distributions (both the in-degree and the out-degree distribution) of resulting networks, choosing representative points of the TS-plane. Figure 5 shows the degree distributions for a cooperation region (of the TS-plane), and Figure 6 shows degree distributions achieved in a defection region.
Degree distributions achieved in networks of cooperative agents (selected according to the TS-plane). In-degree distributions \(P(k_{\rm in})\): a at \(t= 1,000\). b At \(t = 5,000\). c At \(t = 10,000.\) Out-degree distributions \(P(k_{\rm out})\): d at \(t= 1,000\). e At \(t = 5,000\). f At \(t = 10,000.\)
Degree distributions achieved in networks of non-cooperative agents (selected according to the TS-plane). In-degree distributions \(P(k_{\rm in})\): a at \(t= 1,000\). b At \(t = 5,000\). c At \(t = 10,000\). Out-degree distributions \(P(k_{\rm out})\): d At \(t= 1,000\). e At \(t = 5,000\). f At \(t = 10,000.\)
It is worth noting how the in-degree distributions vary much less than the out-degree distributions, although both are involved in the evolution of the system.
In this study, we aimed to investigate whether there is a relation between two social behaviors, i.e., cooperation and competitiveness, when an agent population evolves playing the Prisoner's Dilemma. In particular, we map competitiveness to a parameter embedded in the model, so that competitive agents face many opponents, whereas non-competitive ones do the opposite. In the proposed model, becoming a non-competitive agent entails losing challenges while playing the Prisoner's Dilemma. After performing a brief mean-field analysis of our model, where the population reached the expected Nash equilibrium, agents were arranged in two different domains: a continuous space and a discrete space. The former is represented by a bidimensional square, whereas the latter has been modeled by a directed network. First of all, we highlight the main differences between our work and those performed by previous authors (e.g., [31, 32, 34]): we focus our attention on fixed agents and we provide them with a social character, i.e., competitiveness. Due to the computational cost of our model, we were able to perform simulations up to \(t = 10^4\) time steps, with \(N = 100\) agents in the continuous space and with \(N = 1,000\) agents in the discrete space. In general, the main result of the numerical simulations shows that competitiveness allows the emergence of cooperation areas in the TS-plane, in both domains. Moreover, in the continuous domain, we investigated the outcomes on varying the initial conditions: the spreading of agents in the bidimensional square (i.e., random vs regular lattice) and the average degree (i.e., \(\bar{k(0)} = 4\) and \(\bar{k(0)} = 8\)). Notably, when agents are randomly spread, several intermediate phases are obtained, indicating an equal presence of cooperators and defectors, whereas with an ordered distribution (i.e., a lattice) we found neater areas of cooperation and defection. On the other hand, the initial average degree seems to have a strong influence on these dynamics, as for \(\bar{k(0)} = 4\) the cooperation area in the TS-plane is greater than for \(\bar{k(0)} = 8,\) under both spreading strategies. This difference can be explained by the fact that, as the number of neighbors of each agent increases (at \(t = 0\)), the probability that its social circle is composed of cooperators (i.e., is a cluster of cooperative agents) decreases. In the discrete domain, the scenario is a bit different, as full cooperation emerges only for very low T values and high S values. An analysis of the influence of the initial arrangements of agents in both domains, aimed at understanding why some of them appear more advantageous for obtaining cooperation, is important and will be the subject of future investigations. Finally, we analyzed the degree distributions (i.e., the in-degree and the out-degree distributions) of the resulting directed networks. This analysis is relevant as agents can vary their in-degree distribution and out-degree distribution as a result of their behavior (more competitive or not). It is important to note that the in-degree distribution shows only small variations over time, whereas the opposite happens for the out-degree distribution. Notably, the latter represents the competitive parameter, i.e., the number of opponents that competitive agents face as their payoff increases.
Analyzing networks related to cooperation areas in the TS-plane, we found that the out-degree distribution is characterized by the presence of more hubs (i.e., many competitive agents appear, even if they tend to cooperate among themselves). On the other hand, considering networks related to non-cooperative areas of the TS-plane, we found only small variations in the out-degree distribution. In our view, this difference between the two areas, in terms of out-degree distributions, means that when agents cooperate the network loses its homogeneous structure (recall that at \(t= 0\) all agents have the same values of \(k_{\rm in}\) and \(k_{\rm out}\)), while when agents do not cooperate, the network structure keeps an exponential degree distribution (i.e., the homogeneous structure is conserved over time). In the light of these results, we can state that competitiveness strongly affects cooperation. Therefore, it is important to try to explain the underlying mechanism that leads to this result. Let us consider first the continuous case, where agents are fixed and, according to previous works, should not cooperate. Now, if only a few of them have many cooperative agents in their neighborhood, they increase their interaction radius. Hence, they face more agents during the next time steps, having the opportunity to face other cooperative agents. Now, according to matrix 1, clusters of cooperators strongly increase their payoff, while clusters of defectors do not increase it in the absence of cooperators. Since cooperators are randomly spread in the space, increasing the interaction radius increases the probability of finding cooperators. On the other hand, defector agents, although they never decrease their radius, may increase their payoff (and their radius) only for high values of T; otherwise, they will keep a constant small radius and, as a consequence, a small degree of competitiveness. Similar considerations hold also for the discrete domain, where defectors do not increase their out-degree, while cooperators have this opportunity. To conclude, we highlight that the achieved results clearly indicate the existence of a relation between competitiveness, interpreted as an inclination to face many players, and the emergence of cooperation in the Prisoner's Dilemma.
Galam, S.: Sociophysics: a review of Galam models. Int. J. Mod. Phys. C 19–3, 409–440 (2008)
Castellano, C., Fortunato, S., Loreto, V.: Statistical physics of social dynamics. Rev. Mod. Phys. 81–2, 591–646 (2009)
Mantegna, R.N., Stanley, H.E.: Introduction to Econophysics. Cambridge University Press, Cambridge (1999)
Javarone, M.A.: Models and framework for studying social behaviors. Ph.D. thesis (2013)
Galam, S.: Contrarian deterministic effects on opinion dynamics: "the hung elections scenario". Physica A 333, 453–460 (2004)
Javarone, M.A.: Social influences in opinion dynamics: the role of conformity. Physica A Stat. Mech. Appl. 414, 19–30 (2014)
Javarone, M.A., Armano, G.: Perception of similarity: a model for social network dynamics. J. Phys. A Math. Theor. 46–45, 455102 (2013)
Javarone, M.A., Squartini T.: Conformism-driven phases of opinion formation on heterogeneous networks: the q-voter model case. arXiv:1410.7300 (2015)
Javarone, M.A.: Is poker a skill game? New insights from statistical physics. Europhys. Lett. 110(5), 58003 (2015). doi:10.1209/0295-5075/110/58003
Javarone, M.A.: Poker as a skill game: rational versus irrational behaviors. J. Stat. Mech. Theory Exp. P03018 (2015). doi:10.1088/1742-5468/2015/03/P03018
Perc, M., Grigolini, P.: Collective behavior and evolutionary games—an introduction. Chaos Solitons Fractals 56, 1–5 (2013)
Poncela Casasnovas, J.: Evolutionary Games in Complex Topologies. Interplay Between Structure and Dynamics. Springer (2012)
Tomassini, M.: Introduction to evolutionary game theory. In: Proceedings of Conference on Genetic and Evolutionary Computation Companion (2014)
Colman, A.M.: Game theory and its applications in the social and biological sciences, 2nd edn. Butterworth-Heinemann, Routledge, Oxford, London (1995)
Perc, M., Szolnoki, A.: Social diversity and promotion of cooperation in the spatial Prisoner's Dilemma. Phys. Rev. E 77, 011904 (2008)
Axelrod, R.: The Evolution of Cooperation. Basic Books, Inc., New York (1984)
Aronson, E., Wilson, T.D., Akert, R.M.: Social Psychology. Pearson Education, Prentice Hall (2006)
Antonioni, A., Tomassini, M., Sanchez, A.: Short-range mobility and the evolution of cooperation: an experimental study. Sci. Rep. 5, 10282 (2015). doi:10.1038/srep10282
Perc, M., Szolnoki, A.: Coevolutionary games—a mini review. BioSystems 99, 109–125 (2010)
Wang, Z., Szolnoki, A., Perc, M.: Interdependent network reciprocity in evolutionary games. Sci. Rep. 3, 1183 (2013)
Wang, Z., Wang, L., Szolnoki, A., Perc, M.: Evolutionary games on multilayer networks: a colloquium. Eur. Phys. J. B 88, 124 (2015)
Gracia-Lazaro, C., Ferrer, A., Ruiz, G., Tarancon, A., Cuesta, J.A., Sanchez, A., et al.: Heterogeneous networks do not promote cooperation when humans play a Prisoner's Dilemma. PNAS 109, 12922–12926 (2012)
Gomez-Gardenes, J., Campillo, M., Floria, L.M., Moreno, Y.: Dynamical organization of cooperation in complex topologies. Phys. Rev. Lett. 109, 12922–12926 (2007)
Assenza, S., Gomez-Gardenes, J., Latora, V.: Enhancement of cooperation in highly clustered scale-free networks. Phys. Rev. E 78–1, 017101 (2008)
d'Orsogna, M., Perc, M.: Statistical physics of crime: a review. Phys. Life Rev. 12, 1–21 (2015)
Perc, M., Gomez-Gardenes, J., Szolnoki, A., Floria, L.M., Moreno, Y.: Evolutionary dynamics of group interactions on structured populations: a review. J. R. Soc. Interface 10–80, 20120997 (2013)
Szolnoki, A., Xie, N.-G., Wang, C., Perc, M.: Imitating emotions instead of strategies in spatial games elevates social welfare. Europhys. Lett. 96, 38002 (2011)
Szolnoki, A., Perc, M.: Conformity enhances network reciprocity in evolutionary social Dilemmas. J. R. Soc. Interface 12, 20141299 (2015)
Javarone, M.A., Atzeni, A.E., Galam, S.: Emergence of Cooperation in the Prisoner's Dilemma Driven by Conformity. LNCS. Springer, New York (2015)
Javarone, M.A., Atzeni, A.E.: Emergence of cooperation in competitive environements. In: SITIS 2014—Complex Networks and their Applications—IEEE (2014)
Meloni, S., Buscarino, A., Fortuna, L., Frasca, M., Gomez-Gardenes, J., Latora, V., et al.: Effects of mobility in a population of Prisoner's Dilemma players. Phys. Rev. E 79–6, 067101 (2009)
Antonioni, A., Tomassini, M., Buesser, P.: Random diffusion and cooperation in continuous two-dimensional space. J. Theor. Biol. 344, 40–48 (2014)
Gross, T., Hiroki, S.: Adaptive Networks: Theory, Models and Applications. Springer, Berlin (2009)
Tomassini, M., Antonioni, A.: Levy flights and cooperation among mobile individuals. J. Theor. Biol. 364, 154–161 (2015)
MAJ devised the research work. Both authors performed experiments and analyzed the outcomes. Both authors read and approved the final manuscript.
MAJ would like to thank Fondazione Banco di Sardegna for supporting his work.
Compliance with ethical guidelines
Competing interests The authors declare that they have no competing interests.
Department of Mathematics and Computer Science, Palazzo delle Scienze, Via dell Ospedale, 72, 09124, Cagliari, Italy
Marco A Javarone
Department of Humanities and Social Science, University of Sassari, Via Roma, 120, 07100, Sassari, Italy
Department of Physics, University of Cagliari, Cittadella Universitaria-Sestu, 09124, Cagliari, Italy
Antonio E Atzeni
Correspondence to Marco A Javarone.
Javarone, M.A., Atzeni, A.E. The role of competitiveness in the Prisoner's Dilemma. Compu Social Networks 2, 15 (2015). https://doi.org/10.1186/s40649-015-0024-5
v hat symbol
Update: if you are familiar with statistics you will be familiar with p̂ ("p-hat"). A circumflex placed over a symbol, the "hat", marks an estimated or derived quantity, as in p̂, x̄ (x-bar) or the estimator notation $\hat{\beta}$ (a tilde, as in $\tilde{\beta}$, is sometimes used for an estimator as well). There is no single Unicode character for p̂ or x̄; they are built from a base letter plus a combining character, for example the letter x followed by a combining macron for the line above it.

In mathematics and physics, a unit vector in a normed vector space is a vector (often a spatial vector) of length 1, usually denoted by a lowercase letter with a circumflex, as in $\hat{v}$ (pronounced "v-hat"); the term normalized vector is sometimes used as a synonym. Unit vectors are often chosen to form the basis of a vector space, and every vector in the space may be written as a linear combination of unit vectors. The standard unit vectors along the x, y and z axes of a three-dimensional Cartesian coordinate system are written $\hat{x}, \hat{y}, \hat{z}$ (also $\hat{\imath}, \hat{\jmath}, \hat{k}$ or $\hat{e}_1, \hat{e}_2, \hat{e}_3$), and curvilinear systems have their own sets, such as $\hat{\rho}, \hat{\varphi}, \hat{z}$ in cylindrical coordinates and $\hat{r}, \hat{\theta}, \hat{\varphi}$ in spherical coordinates (following the American "physics" convention). When differentiating or integrating in these coordinates, the unit vectors themselves must also be operated on. In quaternion algebra, the square of a unit vector v in ℝ3 is −1, and a right versor has a zero scalar part and a unit vector part.

The caret itself is a V-shaped grapheme used in proofreading and typography to indicate that additional material needs to be inserted at that point in the text; it is written below the line of text for a line-level punctuation mark, such as a comma, or above the line as an inverted caret. The character ^ was included in typewriter and computer keyboards so that circumflex accents could be overprinted on letters (as in ŵ), and the original 1963 version of the ASCII standard reserved the code point 5Ehex for an up-arrow ↑. The free-standing circumflex has many unrelated uses in programming, where it is typically (if loosely) called a caret: exponentiation (a convention traceable to ALGOL 60, which expressed the exponentiation operator as an upward-pointing arrow), the bitwise exclusive OR (XOR) operator, caret notation for control characters (^A means the control character with value 1, and ^Y can mean Ctrl-Y), the escape character for reserved characters in the Windows command-line interpreter cmd.exe, handles to .NET reference types in C++/CLI (the ClassName^ syntax), blocks and block types in Apple's C extensions for Mac OS X and iOS, and version ranges in Node.js, where a circumflex allows any kind of update unless it is classified as "major" by semver. Compilers such as the Java compiler print a single circumflex on the line below a faulty line of code, padded by spaces, to give a visual indication of the error location. In music theory and musicology, a circumflex above a numeral refers to a particular scale degree, and in Internet forums a circumflex (or a series of them) points the reader to the post above, or is simply used for emphasis.
v hat symbol 2020 | CommonCrawl |
Temperature of individual particles in kinetic theory?
Is it valid to assign a temperature to individual particles within kinetic theory and then claim that the temperature of the gas is simply the average of the temperature of the molecules?
In other words, can we say that the temperature of each molecule is $T=mv^2/3k_b$ where $v$ is the speed of the molecule, and then the temperature of the body is the mean of the temperature of the atoms or molecules comprising that body?
thermodynamics statistical-mechanics temperature kinetic-theory
looksquirrel101
$\begingroup$ Related, if not duplicate: physics.stackexchange.com/questions/65690/… $\endgroup$ – Rococo Feb 18 '20 at 14:06
The first thing that needs to be said in this discussion is the fundamental connection between statistical physics and thermodynamics. Statistical physics describes microstates and thermodynamics describes macrostates. They are connected by the so-called thermodynamic limit, the limit of infinite size and infinite particle number.
Microscopic parameters describing particles, like kinetic energy, are not automatically equivalent to macroscopic parameters like internal energy. They turn out to be equivalent, which is a nontrivial result. The connection between the micro and macro worlds appears only after taking the thermodynamic limit.
It is also important that many quantities are not features of a single particle, but of the system. For instance, there is no entropy assigned to a single particle in the way energy or momentum are, but there is an entropy of the system containing one particle, if you specify the states that can be there.
Temperature is most commonly defined in the macroscopic realm as a quantity that lets you compare the states of two different systems that interact only by heat transfer. It turns out that this quantity is: $$T :=\left(\frac{\partial U}{\partial S}\right)_{V,N}.$$ Stretching this definition to the microscopic realm means treating the above equation as a definition and feeding it the entropy before the thermodynamic limit. This can cause some trouble, like negative temperatures (which are in fact considered for systems that undergo saturation). So you might talk about the temperature of a system containing one particle, but:
it will not be the temperature of this particle, but of the system containing it,
it is a very different temperature than you would expect.
1) Temperature for one particle system
Let us first consider how this is usually done in a two-particle system ($N=2$), before turning to the one-particle case, where the problem trivializes. Consider the number of microstates for two particles in a box, where each particle can occupy discrete energy states separated by a constant energy quantum $\epsilon$. We divide the box into $n$ virtual compartments.
Counting the microstates. When the total energy in the system is $1\epsilon$, this energy can sit either on the first particle or on the second. The particles are identical, so the two situations are exactly the same; this counts as only one possibility. At the same time, the particles can fill the compartments in $n^2$ ways, so there are $W=n^2$ microstates.
For a total energy of $2\epsilon$ there are 2 possibilities: either the energy is distributed equally over both particles, or it is all on one of them. The total number of microstates is $W=2n^2$.
You can see that the number of microstates changes with energy. To calculate the temperature of this system, you need to express the number of microstates in terms of energy. This is simple combinatorics, but it is not the point of the question. The entropy is then: $S=k_b \log(W)$.
Then you have the expression to calculate
$$\frac{1}{T}=\left(\frac{\partial S(U)}{\partial U}\right)_{V,N},$$ and this will be the expression for temperature of this system.
Let us now consider the number of microstates for a single particle in a box. As before, we assume the volume is divided into $n$ compartments.
At energy $1\epsilon$ the number of microstates is $n$. At energy $2\epsilon$ it is still $n$, and likewise at $3\epsilon$.
The number of microstates does not change when adding energy to the system. This means $\frac{1}{T}=\left(\frac{\partial S(U)}{\partial U}\right)_{V,N}$ is zero. Since this is exactly zero rather than a limit, a temperature defined this way is not infinite; it simply does not exist.
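A quick numerical check of the two counting arguments above may help make this concrete. The sketch below is illustrative only: it uses the toy model exactly as described (energy quanta of $\epsilon$, $n$ spatial compartments, units in which $\epsilon = k_B = 1$), and the helper names are my own.

```python
import math

kB, eps, n = 1.0, 1.0, 10                       # units: epsilon = k_B = 1

def W_two(q):
    """Microstates for 2 identical particles sharing q energy quanta,
    each sitting in one of n spatial compartments (as counted above)."""
    return (q // 2 + 1) * n**2

def W_one(q):
    """Microstates for a single particle: n positions, independent of q."""
    return n

def inv_T(W, q, dq=2):
    """Finite-difference 1/T = dS/dU with S = kB * ln W.
    dq=2 smooths the even/odd staircase of the identical-particle count."""
    return kB * (math.log(W(q + dq)) - math.log(W(q))) / (dq * eps)

for q in (2, 4, 8, 16):
    print(f"U = {q:2d} eps:  1/T two-particle = {inv_T(W_two, q):.3f}, "
          f"one-particle = {inv_T(W_one, q):.3f}")
```

For the two-particle system the finite difference is positive and falls as the energy grows (the temperature rises with $U$), while for the single particle it is identically zero, which is the point made above.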
2) As for the dependence of temperature on the observer: the entropy does not depend on a change of observer, so the temperature does not either. If it did, I would start wondering whether the definition is right.
Licho
$\begingroup$ So according to your last point #1, you are stating that it is not valid to assign individual particles a temperature. Correct? $\endgroup$ – looksquirrel101 Feb 17 '20 at 1:38
$\begingroup$ Temperature inherits this property from entropy, and entropy is the measure of the number of possible microstates. It is a feature of system. That correspond well with the main reason we even talk about temperature - to compare systems in different states. Unless the particle has any internal structure and we wish to consider it a system, there is no meaning to the notion of entropy of a particle. $\endgroup$ – Licho Feb 17 '20 at 1:52
$\begingroup$ Does this answer your question? $\endgroup$ – Licho Feb 18 '20 at 8:22
$\begingroup$ Not really. These are concepts that I am aware of. I'm looking for something more quantitative. Perhaps you could elaborate on the temperature of a system containing one particle and how it would be the same for two different inertial observers. $\endgroup$ – looksquirrel101 Feb 18 '20 at 12:01
$\begingroup$ +1. I noticed you committed to Materials Stack Exchange, did you notice we are launched now? materials.stackexchange.com Since you already have a physics account you can get signed in automatically if you click. $\endgroup$ – user1271772 May 16 '20 at 16:22
I don't believe so. Temperature is a macroscopic property of a system. We don't normally talk about the temperature of a single particle.
Thanks for your response. I understand that this is not normally done. But I am asking if there is a logical flaw with such an interpretation.
I wouldn't say there is a logical "flaw" per se. It's just that temperature is defined as a macroscopic property of an object that reflects the collective behavior (in this case average translational kinetic energy) of the multiple microscopic particles that make up an object. But consider the following single particle example at the "macroscopic" level.
I have a ball which I throw and give it translational kinetic energy with respect to the ground. The ball is now my "particle". Assuming I throw it in a vacuum (no air friction), what temperature would I assign to the ball based on the velocity I gave it? The temperature I measure on the ball is only due to the collective microscopic kinetic energies internal to the ball: the ball's "internal" kinetic energy. In the absence of air friction, the external kinetic energy of the ball, which is due to the velocity of its center of mass with respect to an external (to the ball) frame of reference, has no influence on the temperature that I measure on the ball.
Bob D
$\begingroup$ Thanks for your response. I understand that this is not normally done. But I am asking if there is a logical flaw with such an interpretation. $\endgroup$ – looksquirrel101 Feb 15 '20 at 12:51
$\begingroup$ @LOLKlimateKatastropheKooks I have responded in an update to my answer. Hope it helps. $\endgroup$ – Bob D Feb 15 '20 at 14:44
$\begingroup$ I don't see how that modification is an improvement. The ball has its own internal degrees of freedom, so that is adding a complication that was not intended in the original question. The question then applies to the ball itself. Can we claim that the temperature of the ball is just the average of the temperatures of the individual particles that make up the ball? My thought is that the answer is no, as you seem to believe as well, but I am asking if such a description is logically absurd. $\endgroup$ – looksquirrel101 Feb 15 '20 at 15:27
$\begingroup$ @LOLKlimateKatastropheKooks Well, you are free to take it or leave it. But that's all I have to say on the matter. $\endgroup$ – Bob D Feb 15 '20 at 15:46
Temperature is a valid concept for any system in contact with a thermal bath. As such, you can take any subset of your system that is much smaller than the system, and consider it to be in contact with the remainder of the system as its thermal bath. Since the entropy and temperature are related by
$$\frac{1}{T}=\left.\frac{\partial S}{\partial E}\right|_N,$$
Then by doing the "marble and matchstick" calculation (e.g. Callen, Thermodynamics, Ch.15) and using
$$S = \log\Big(\,{\rm number\,of\,microstates}\Big),$$
it is easy to show that the "temperature" of the subsystem is just
$$ T = \frac{E_{\rm subsystem}}{N_{\rm subsystem}}$$
In particular, you are allowed to take a subsystem consisting of only one particle, in which case its temperature is just its energy, as you are suggesting. However, the concept of temperature might not be very useful in this case.
Eric David Kramer
Rather than take a yes/no position on the question. As "food for thought", I would like to frame this question in the context of whether it is valid to assign a temperature to an individual isolated particle, or to assign a temperature to an individual non isolated particle of a collection of particles.
TEMPERATURE OF AN ISOLATED PARTICLE:
Temperature, like pressure, is considered to be an intensive property of a system. By intensive, we mean independent of mass. On the surface, this property would seem to justify that an individual particle can be assigned a temperature representative of the collection. After all, a single particle is simply a subsystem having a mass of 1/M where M is the total mass of all the identical particles of the system. Let's see if this works.
Let's say we partitioned in half a large thermodynamically isolated room containing a monatomic ideal gas (e.g., Helium) at room temperature $T$, by a rigid perfectly insulated partition and measured the temperature in each half. We would be fairly confident that the temperatures in each half would be the same, and equal to the temperature originally measured in the whole room. We could even partition each half in half again, and still be confident that all the temperatures will be the same. But can we continue to do this all the way until we are left with a single particle having a temperature $T$?
No. Because our initial confidence was based on the Maxwell-Boltzmann distribution of speeds and kinetic energies for a large collection of particles. As we continue to decrease the size of the volume being isolated, the average kinetic energy of the volume at the instant it is isolated, and thus its temperature, potentially moves further and further away from that of the original collection. When we reach the last particle, its speed and therefore kinetic energy will be constant. Its "temperature" may bear no resemblance to that of the original collection. Every time we repeat this experiment we wind up with a different "temperature" for the last particle.
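The partitioning thought experiment above is easy to mimic numerically. The following sketch is illustrative only; it assumes a classical monatomic gas in natural units ($m = k_B = 1$) at true temperature $T = 1$, and simply reads off the "temperature" implied by the mean kinetic energy of ever-smaller isolated samples.

```python
import numpy as np

rng = np.random.default_rng(0)
m, kB, T = 1.0, 1.0, 1.0                        # natural units

def inferred_T(n):
    """Temperature inferred from <KE> = (3/2) kB T for a sample of n particles
    drawn from the Maxwell-Boltzmann distribution at true temperature T."""
    v = rng.normal(0.0, np.sqrt(kB * T / m), size=(n, 3))   # velocity components
    ke = 0.5 * m * (v**2).sum(axis=1)
    return 2.0 * ke.mean() / (3.0 * kB)

for n in (100_000, 1_000, 10, 1):
    trials = [inferred_T(n) for _ in range(5)]
    print(f"n = {n:>6}: inferred T over 5 repeats = {np.round(trials, 2)}")
```

For large samples the inferred value sits close to 1, but for a single particle it scatters wildly from repeat to repeat, exactly as described above.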
May we conclude from the above that temperature is not simply an intensive property, but a macroscopic intensive property, and that the assignment of a temperature to an individual particle isolated from a collection of particles, makes no sense?
TEMPERATURE OF A NON-ISOLATED PARTICLE:
If assigning a temperature to an individual isolated particle as representative of the temperature of a collection of particles does not make sense, how about assigning a temperature to a single non-isolated particle within the collection?
Returning to the large isolated room of helium gas, we know that at any instant in time the speeds and thus kinetic energies of the individual particles vary according to the Maxwell Boltzmann distribution. On the other hand, since the individual particles are constantly colliding and exchanging kinetic energy with one another, the speeds of individual particles are also continuously changing in time.
If we were able to follow the speed history of an individual particle over a long period of time, and took the average of its speed over that period, what would the average kinetic energy of that particle be? We know that the average kinetic energy of the collection of particles at any given instant in time is constant with a given value. Would it not also be the case that the kinetic energy of any individual particle, selected at random, averaged over a long period of time will be the same as the average of the collection of particles at any given instant in time? Intuitively it would seem so.
In this example it appears we can assign a temperature to a single particle based on its kinetic energy averaged over a long period of time as being the same as the temperature of a collection of particles having the same average kinetic energy at a given instant in time. But we would be assigning the particle a temperature based on its behavior in a collection of particles.
May we conclude from the above that even if we say the assignment of a temperature to an individual particle is valid, it is also inexorably linked to the macroscopic behavior of a collection of particles, and because of that the temperature assigned to the individual particle has to be in the context of the macroscopic behavior of the collection?
$\begingroup$ So then two different inertial observers moving at different velocities would disagree on the temperature of the particle. Correct? $\endgroup$ – looksquirrel101 Feb 17 '20 at 0:44
$\begingroup$ @LOLKlimateKatastropheKooks would two different inertial observers disagree on the temperature of the book on your table? $\endgroup$ – Bob D Feb 17 '20 at 1:22
$\begingroup$ They shouldn't. Do you believe they should? $\endgroup$ – looksquirrel101 Feb 17 '20 at 1:31
$\begingroup$ @LOLKlimateKatastropheKooks Of course they shouldn't disagree. That was the point of my comment responding to yours. The internal kinetic energy of a container of gas particles is that associated with the random translational motions of those particles within the container, whether its a billion particles or a single particle. $\endgroup$ – Bob D Feb 17 '20 at 14:18
$\begingroup$ The uniform rectilinear motion of the container (inertial motion) has no effect on the internal energy. That is the external kinetic energy of the container of particles, that is, the kinetic energy of the system with respect to an external frame of reference. That kinetic energy is inertial frame dependent. I have updated my answer to clarify it is in reference to random translational motion. $\endgroup$ – Bob D Feb 17 '20 at 14:18
Is it valid to assign a temperature to individual particles within kinetic theory
No, it isn't. According to the kinetic theory of gases, the temperature of a gas is mapped to the average kinetic energy of its molecules (which maps to the average molecular speed): $$ T = \frac{2}{3}\,k_B^{-1}\,\overline{E}_k $$ A single molecule's speed or kinetic energy says nothing about the gas temperature, so it is meaningless to define a "molecule temperature".
If we would nevertheless like to define a molecule's own temperature, it would be related to the vibrational energy of the atoms that compose the molecule. This kind of temperature is called a "vibrational temperature", and in thermodynamics it is defined as: $$ \theta _{vib}={\frac {h\nu_{vib} }{k_{B}}} $$
Typical vibration frequencies of atoms in a molecule range over $[10^{13}, 10^{14}] \,\text{Hz}$.
This gives a typical $\text{O}_2$ molecule a vibrational temperature of $2256\,K$. By the way, the same vibrational-temperature equation can in principle be applied to electromagnetic radiation quanta, finding the "own temperature" of a photon, this time substituting the electromagnetic wave frequency. However, the physical meaning of a photon's "vibrational temperature" would be highly questionable.
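As a quick numerical check of the figures quoted above (a back-of-the-envelope sketch; the O2 frequency of roughly 4.7 × 10^13 Hz is an assumed round value):

```python
h  = 6.626e-34    # Planck constant, J*s
kB = 1.381e-23    # Boltzmann constant, J/K

for nu in (1e13, 4.7e13, 1e14):   # typical molecular vibration frequencies, Hz
    theta_vib = h * nu / kB       # theta_vib = h * nu / k_B
    print(f"nu = {nu:.1e} Hz  ->  theta_vib = {theta_vib:,.0f} K")
```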
Agnius Vasiliauskas
Strictly, temperature is a property of an ensemble, not a single particle, so one can only with qualifiers speak of the temperature of a single particle such as a molecule in a gas.
When assigning a temperature to a single particle, the right way to do it is to say the temperature is a property of the motion after averaging over the trajectory, not a property at each moment. Therefore, whereas the velocity and speed of the particle changes repeatedly by collisions, its temperature does not because temperature always was an average property after averaging over all the collisions etc. Therefore, when we understand the temperature this way, one finds that in thermal equilibrium all the molecules of a gas have the same temperature as one another.
In the case of laser cooling of single atoms, there is only a single particle in the system. It can happen (and usually does happen) that when illuminated by lasers the momentum of the particle undergoes diffusive heating combined with frictional cooling, so it is not a constant. The atom's kinetic energy fluctuates up and down. In this case it can so happen that the probability distribution of the kinetic energy $\epsilon$ takes the form $P(\epsilon) \propto \exp(-\epsilon/A)$ for some constant $A$. By comparing this to the Boltzmann factor, one may then say that the distribution is 'thermal' and the atom has a 'temperature' equal to $A/k_{\rm B}$. Strictly speaking however this is not a case of thermal equilibrium, which is why I put the word temperature in inverted commas. The laser field here is not in a thermal state, but it so happens that the net result of its interactions with the atom puts the atom in a thermal state of motion.
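To make the last point concrete, here is a small illustrative sketch (my own construction, not taken from the laser-cooling literature): if repeated measurements of the atom's kinetic energy follow $P(\epsilon) \propto \exp(-\epsilon/A)$, then $A$ is simply the mean energy, and $A/k_{\rm B}$ is the quoted "temperature".

```python
import numpy as np

rng = np.random.default_rng(1)
kB = 1.0                           # natural units
A_true = 2.5                       # the constant in P(eps) ~ exp(-eps/A)

# kinetic energies of the single atom sampled over many measurement shots
eps = rng.exponential(A_true, size=50_000)

A_est = eps.mean()                 # mean of an exponential distribution is A
print(f"assigned 'temperature' A/kB = {A_est / kB:.3f} (true value {A_true / kB})")
```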
Andrew Steane
$\begingroup$ "whereas the velocity and speed of the particle changes repeatedly by collisions" If the particle collisions are ideal elastic, and the system is isolated (no black body radiation emitted from the system) the speed of the particle should be constant, no? $\endgroup$ – Bob D Feb 18 '20 at 17:59
$\begingroup$ @AndrewSteane - Thank you. This is how I have always understood it. Both with respect to temperature being a property of the ensemble, and that the averaging of the motion over the trajectory of a single particle will yield the same temperature for all particles in equilibrium. $\endgroup$ – looksquirrel101 Feb 18 '20 at 18:44
Well, I did a search engine search with the following words:
magneto-optical trap single atom
By the looks of it: there are multiple publications about experiments that involve bringing down the density of atoms in the trap to single atom observations. I had a quick look at one publication, and I noticed a remark about the 'thermal velocity of the atom'.
Just glancing at the text excerpts in the search results overview I do see the word 'cooling' used. So it looks to me that for people conducting single atom experiments it is common to still use the expression 'cooling' when referring to reducing the velocity of those single atoms.
(Of course, for a proper assessment one should go through a sizable number of publications.)
Cleonis
$\begingroup$ This does not address the question. $\endgroup$ – looksquirrel101 Feb 15 '20 at 12:48
$\begingroup$ Cleonis: Agreed. The use of phrases like "LASER cooling" in ion traps is pretty common, and confusing when the idea is carried over to statistical mechanics. Plus we have terms like "thermal neutrons" that mean slow or low energy. If someone is reading material that is basically overview of a the topics, where will they look to learn the difference? $\endgroup$ – C. Towne Springer Feb 17 '20 at 1:25
$\begingroup$ I have also seen the "temperature" of a single particle referred to by astrophysicists, by which they are indicating the kinetic energy of the particle (relative to Earth) when it arrives in the atmosphere. It may not be thermodynamically "correct" but it is not invalid. $\endgroup$ – Guy Inchbald Feb 20 '20 at 19:43
This is a tricky question because of the phrasing "is it valid to assign a temperature to individual particles". The answer relies on the difference between Thermodynamics and Statistical Mechanics.
Thermodynamics is the study of macroscopic systems. The properties of thermodynamics are therefore macroscopic in nature, starting with fundamental properties such as total internal energy ($U$), entropy ($S$), volume ($V$), and some concept of quantity (i.e., how much "stuff" there is), often denoted by $N$. As a discipline, thermodynamics is a little hard to define; my grad school textbook defined it as "the study of the restrictions on the possible properties of matter that follow from the symmetry properties of the fundamental laws of physics". That may be a little broad, but the author was trying to get at two properties of thermodynamics: (1) it applies generally to all macroscopic systems, regardless of the actual constituents of the system, and (2) unlike other domains (electrodynamics, classical mechanics, etc.) it does not predict specific numerical values for observable quantities; rather, it sets limits (inequalities) on various processes and it establishes relationships among macroscopic properties that may not at first glance seem related. [1]
Statistical mechanics, on the other hand, is the bridge between thermodynamics and the other domains (electrodynamics, classical mechanics, quantum mechanics) that do offer specific predictions about individual particles. It does so by treating a macroscopic system as an ensemble comprised of a very large number of microscopic elements, and uses statistical techniques to derive macroscopic properties.
So given these definitions, the short (but unsatisfying) answer is no: it is not valid to assign a temperature to an individual particle. The reason is that temperature is a thermodynamic concept; it is squarely in the domain of thermodynamics, not statistical mechanics. Temperature is defined by the relationship between total internal energy and entropy: $$ T = \frac{\partial U}{\partial S},$$ with $V$ and $N$ held constant. In other words, temperature is a macroscopic property of bulk matter, and (as is true in all of thermodynamics) there is no concept of individually interacting particles on a microscopic level.
I suspect what you're really looking for, however, is the statistical mechanics treatment. In other words, what you're really interested in is how (thermodynamic) temperature is related to microscopic properties such as the kinetic energy of individual molecules.
The answer to that question starts with how you model the molecules. There are different options with varying levels of complexity. The simplest is to consider a gas and treat the molecules as microscopic sphere-like particles, each of which travels at a constant velocity in a random direction and undergoes random elastic collisions with other particles and with the walls of a container. This is the kinetic theory of gases. Under these assumptions, all the energy in the system ($U$) is in the $\frac{1}{2}mv^2$ kinetic energy of the molecules, which is $\frac{3}{2}kT$ on average. This is what you were referring to when you asked whether we can say that "the temperature of each molecule is $T=mv^2/3k_b$".
I have two comments
That result relates temperature to the average velocity of the particles. Each particle will actually have a different velocity $v$, which you can think of as drawn from a probability distribution given by the Maxwell Boltzman distribution: $$f(v) ~\mathrm{d}^3v = \left(\frac{m}{2 \pi kT}\right)^{3/2}\, e^{-mv^2/2kT} ~\mathrm{d}^3v$$ However, there's still nothing about this that prevents you from assigning a unique "temperature" to an individual molecule based on its particular velocity. But this leads to the second point:
The result holds only for the rather idealized situation of how we're treating molecules. Things change when you start to treat molecules a little more realistically, and also when you start to consider liquids and solids as well as gases. Rotational and vibrational energy modes start to become important, for example, and in some situations they become dominant.
So even in the statistical mechanics sense, you can't universally assign the "temperature" of a molecule to be $T=mv^2/3k_b$. You can do something similar, which is to find the relation between the thermodynamic temperature and the energy of a molecule (or, more generally a "microstate") for any system in thermodynamic equilibrium. This is done in an average way via the equipartition theorem, and on an individual (probabilistic) level using partition functions. However, this will depend on the system, and there's not much value in using the word "temperature" to describe a property of an individual molecule. On the microscopic level, it's better to stick with well-understood terms such as energy, and reserve the word "temperature" for macroscopic systems.
[1] Callen, Herbert (1985). Thermodynamics and an Introduction to Thermostatistics (2nd ed.). New York: John Wiley & Sons. ISBN 0-471-86256-8.
Richter65
$\begingroup$ Comments are not for extended discussion; this conversation has been moved to chat. $\endgroup$ – tpg2114♦ Feb 20 '20 at 18:06
Physics is neither religion nor mathematics. It is also good to see different concepts from different angles and not to dismiss ideas too early. Eventually the results from the ideas have to agree with experiment (and survive Occam's razor).
It really depends where you try to go from that statement. For example:
A) If you then claim that you can take each particle, sort them by their temperature, and then violate the 2nd law of thermodynamics: then no, it is not valid to think about it like this.
B) If you try to model the inelastic collision of two particles as an exchange of temperature, which in that model should lead to equalizing temperatures, a.k.a. velocities: then no, this is also wrong.
C) If you just say it and don't do anything with it: then it is not really wrong, but it won't survive Occam eventually.
D) If you use it as a stepping stone to understand the dynamics of mesoscopic systems better, and eventually the new results agree with experiment: then yes, it's valid, if this assumption is still central to the new theory.
A similar example is the introduction of negative temperature. It is wrong in thermodynamics, but a good way to think about the statistical physics of systems with a bounded Hamiltonian and special initial conditions (e.g., spin chains).
Also I believe there are some publications on single particle thermodynamics using the quantum statistical operator.
It depends on what you want to do with your definition. In science, definitions are shorthand for phenomena/properties that capture some information that can be used to build up explanations of more complicated phenomena/properties.
We define temperature as a macroscopic quantity that labels some numerical value which equalises when two objects are allowed to exchange energy. And using this defined quantity of temperature, among other things, we can build up a formalism to study heat and how it can be converted to work. Namely, thermodynamics.
Consider two particles of equal mass, but one is at rest and the other is moving. They are enclosed in a container that elastically scatters them. Now if we define temperature as the kinetic energy, then the two particles will keep exchanging velocities, which in your case is the same as the temperature. That means the two objects will never have the same temperature.
But you might say: oh, but the temperature is defined only for macroscopic quantities and you took only two particles. Well, consider as many particles as you like: their velocities will never be equal. Which means they won't have the same temperature under your definition. What will be equal, however, is the average kinetic energy of a sub-collection of the particles.
Thus if we want temperature to have the property of being equal between objects in thermal equilibrium, then we can't use your definition of temperature.
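A tiny illustration of the two-particle argument, assuming one spatial dimension and equal masses so that an elastic collision simply exchanges the two velocities (wall bounces only flip the sign of a velocity and leave the kinetic energies unchanged):

```python
import numpy as np

m, kB = 1.0, 1.0
v = np.array([0.0, 2.0])             # one particle at rest, one moving

def particle_T(v):
    return m * v**2 / (3 * kB)       # the questioner's per-particle "temperature"

for collision in range(4):
    print(f"after {collision} collisions: per-particle 'T' = {particle_T(v)}")
    v = v[::-1]                      # equal-mass 1D elastic collision: velocities swap
```

The per-particle "temperatures" just swap back and forth and never equalize, while their average stays fixed.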
Superfast Jellyfish
Kim P. Huynh1,
Yuri Ostrovsky2,
Robert J. Petrunia3 &
Marcel C. Voia4
Firm shutdown creates a turbulent situation for workers, as it leads directly to layoffs for its workers. An additional consideration is whether a firm's shutdown within an industry creates turbulence for workers at other continuing firms. Using data drawn from the Longitudinal Worker File, a Canadian firm-worker matched employment database, we investigate the impact of industry shutdown rates on workers at continuing firms. This paper exploits variation in shutdown rates across industries and within an industry over time to explain the rate of permanent layoffs and the growth of workers' earnings. We find an increase in industry shutdown rates increases the probability of permanent layoffs and decreases earnings growth for workers at continuing firms.
The fortunes of firms and workers are inextricably linked. Firm shutdown results in displacement of workers through layoffs. Firm turnover creates uncertainty for workers by affecting their employment status and wages. These first-round effects have negative consequences for the laid-off workers of the shutting down firms. When examining firm shutdown within an industry and its impact on workers, industry shutdown rates also provide an indication of the state of an industry. If industry shutdown rates capture industry wide shocks and fluctuations, then industry shutdown rates may also tell us something about the fortunes of workers at continuing firms. Negative shocks within an industry cause firm profits to fall, which results in rising shutdown rates. Further, the falling demand also causes layoffs at continuing firms to rise, as these firms must reduce production and shed costs. The issue becomes whether industry shutdown rates capture turbulence and fluctuations within an industry, which spill over to cause second-round layoffs at the continuing firms.
This paper empirically investigates the effect of industry shutdown rates on the probability of worker layoffs at continuing firms and, by extension, the earnings growth of these laid-off workers. A firm's exit or shutdown results in separations, as the firm must lay off its workers. The purpose of this paper is not to consider these direct effects of firm shutdown on worker outcomes. Rather, we look at whether industry shutdown rates contain information indirectly relevant for layoff probabilities and earnings of workers at continuing firms. We focus on industry shutdown rates as their impact on workers receives little attention in the literature. This paper addresses the impact of industry shutdown rates by examining the questions: (i) does the industry shutdown rate affect workers at continuing firms; (ii) how does the industry shutdown rate affect workers at continuing firms; and (iii) what are the future earnings prospects of workers experiencing a permanent layoff from a continuing firm? Understanding the labor market interaction of firms and workers requires access to firm-worker matched datasets.1 Our study utilizes one such Canadian administrative employer-employee dataset called the Longitudinal Worker File (LWF).2
Earnings growth allows us to look at the future prospects of laid-off workers. This analysis captures the intensive margin associated with layoffs. Using administrative data on workers in the USA, von Wachter et al. (2009) find that the annual earnings of workers in relatively stable jobs experiencing a surprise layoff during the 1982 recession are still 20% lower than those of their nondisplaced counterparts after more than 20 years. Using Canadian data, Morissette et al. (2007) find that mass layoffs due to firm closure have a greater impact on more senior workers. Further, Song and von Wachter (2014) show that the long-term nonemployment rate increase is similar across recessions in the past 30 years. However, they find the long-term unemployment rate increase is higher in the 2008 recession than in previous recessions. These studies demonstrate that layoffs, especially mass layoffs typically occurring during recessions, have long-term consequences for the earnings and employment prospects of displaced workers.3
Our study is similar to Quintin and Stevens (2005a); Quintin and Stevens (2005b), who investigate the impact of industry exit rates on firm-worker separation rates using cross-sectional French data. However, three additional aspects of the LWF database allow us to build on these previous studies. First, the LWF database classifies separations as (i) voluntary, when a worker quits, or involuntary, as a result of a layoff, and (ii) permanent or temporary. The data used in Quintin and Stevens (2005a); Quintin and Stevens (2005b) only identify worker separations, with no classification of the type of separation. Due to these data limitations, Quintin and Stevens (2005a); Quintin and Stevens (2005b) focus on explanations for worker separations related to the worker's choice to leave the firm. In contrast, the LWF database allows us to empirically analyze the firm's decision to separate from workers through permanent layoffs.
The second aspect is that the LWF is a longitudinal database, while the third aspect is that the LWF contains worker earnings information. Unlike the data in Quintin and Stevens (2005a); Quintin and Stevens (2005b), the longitudinal aspect of the LWF database allows us to exploit variation in industry shutdown rates both across industries and within an industry over time and also allows us to follow workers over time. As the empirical specifications include industry dummy variables as controls, the analysis focuses on the within-industry variation in shutdown rates. Using the longitudinal worker information, this paper provides further analysis of the growth rate of individual worker earnings following a permanent layoff.
The findings of our study are4:
Industry shutdown rates have a positive and significant effect on the probability of a permanent layoff at continuing firms. The impact of industry shutdown rates on the probability of a permanent layoff captures the extensive margin or the number of affected workers. For men, a 1% increase in industry shutdown rates means approximately a 0.13% increase in the probability of a worker layoff. For women, the marginal effect can be negative or positive and ranges from −0.01% at extra small-sized (less than 5 employees) firms to 0.11% at small-sized firms (5 to 19 employees).
The effect of industry shutdown rates on earnings growth is generally negative for both laid-off men and women. The exceptions include men at medium-sized firms and women at small-sized firms. The impact of industry shutdown rates on individual workers through wage growth captures an intensive margin.
For workers experiencing a permanent layoff, their post-layoff wage prospects vary with the size of firm at which they eventually find employment. Most laid-off workers moving to a larger firm see their wages increase, while most laid-off workers moving to a smaller firm see their wages fall.
The first result extends the finding of Quintin and Stevens (2005b). Quintin and Stevens (2005a, b) are not able to distinguish between layoffs and quits. They focus on workers voluntarily leaving continuing firms to explain the positive relationship between worker separation rates and industry shutdown rates. Our first finding indicates that layoff rates at continuing firms also increase with industry shutdown rates. Therefore, models of worker turnover must capture both workers choosing to quit firms and firms choosing to lay off workers when investigating worker separations in the context of industry fluctuations. The second result also extends the previous work by demonstrating that rising industry shutdown rates also cause deterioration of the earnings prospects for laid-off workers. However, the final result shows that some workers do find "good" jobs after experiencing a layoff, which allows them to increase their earnings. Thus, a layoff need not necessarily result in a "bad" outcome for a displaced worker.
These results demonstrate the necessity of the joint analysis of firm shutdown with either permanent layoff or worker wages. Industry shutdown rates provide a measure of firm turnover or churn within an industry. Exogenous conditions within an industry, such things as cyclical movements or demand decline, cause profits of firms to change. The typical view of the firm in economics is that falling profits for firms within an industry lead to firm shutdown and possible exit. Thus, increasing shutdown rates indicate falling profits within an industry. For continuing firms, direct and indirect effects on employment result when moving to a new equilibrium. With these falling profits, output falls at continuing firms, which leads directly to worker layoffs. Indirect effects occur for continuing firms for two reasons. First, they now face less competition with greater shutdown of competitors. Second, more workers are available to hire with the shutdown of competitors. Continuing firms are now better able to substitute for current workers as new hires are cheaper (see Farber (1999)). Direct effects of falling profits result in increased layoffs at continuing firms and, by extension, lower earnings for laid-off workers. Indirect effects are ambiguous.
Recent research suggests that, in the case of involuntary separations, there are large differences in the income losses associated with differences in human capital. Kambourov and Manovskii (2009) argue that many skills acquired by workers during their working careers are job-specific. Job displacement is especially detrimental to those workers with job-specific skills that are not easily transferable. Davis and Wachter (2011) provide an extensive review of the literature on the effects of large cyclical movements in job displacement and how worker anxieties about job loss, wage cuts, and job opportunities respond to contemporaneous economic conditions. They find that job loss as a result of mass layoffs results in a loss of earnings of roughly 1.4 to 2.8 years of pre-displacement earnings (depending on the current unemployment rate). This macroeffect is of first-order importance. However, there are spillover effects of mass layoffs.5
Gathmann et al. (2017) exploit regional variation and find that spillover effects of mass layoffs are substantial: about 35% of local employment losses stem from spillover effects in plants not directly affected by the mass layoff (55% after a decade). In our analysis, we are not able to use regional variation but rather rely on the industry shutdown rate as a proxy for industry variation. Depending on the firm size class of a worker, we compute that there is an annual earnings loss of between 10 and 60% for laid-off men and 20 to 60% for laid-off women as a result of a 1% increase in the shutdown rate.
The rest of the paper is organized in the following fashion: the LWF (firm-worker matched) dataset is described in Section 2 while Section 3 provides an empirical model of permanent layoffs which discusses the issue of selection due to firm survival. Section 4 discusses the effect of firm shutdown rates on workers' earnings. Finally, Section 5 concludes.
The Longitudinal Worker File
Our data are from the Longitudinal Worker File (LWF). The LWF is an annual administrative dataset from 1983 onwards and contains a 10% random sample of Canadians who either filed a tax return (T1 form) or received a statement of remuneration (T4 form). Appendix A gives a brief description of the LWF data sources and its construction. The LWF has information on individuals' earnings, demographics, and occupation, as well as on the firm of employment. The LWF's matched employer-employee structure allows for examining workers' mobility, turnover, and earnings dynamics.
Our sample consists of individuals living in the 10 Canadian provinces who are between 25 and 64 years of age. The source of firm-level information is the Longitudinal Employment Analysis Program (LEAP) database. Given that the LWF and LEAP databases contain common firm identifiers, firm information from the LEAP database is linkable to workers in the LWF database. LEAP contains annual employment information on firms with at least one dollar in payroll in a given year from 1991 to 2008. The LEAP payroll information allows us to identify, in year $t$, continuing firms with a positive payroll versus temporarily or permanently (exit) shut-down firms with a zero payroll. Industry $j$'s shutdown rate in year $t$, $SR_{jt}$, is
$$\begin{array}{@{}rcl@{}} {SR}_{{jt}}= {SD}_{j,t+1}/N_{{jt}} \end{array} $$
where $SD_{j,t+1}$ gives the total number of firms in industry $j$ with a positive payroll in year $t$ and a zero payroll in period $t+1$, and $N_{jt}$ gives the total number of firms in industry $j$ with a positive payroll in period $t$. The structure of the LEAP database implies that firm shutdown is not due to merger or acquisition activity. Table 1 provides the list of the 39 industries in the data. LEAP assigns a NAICS code to each firm from 1992 onwards6. We restrict our sample of workers to the period from 1992 to 2007 since the analysis uses firm and NAICS information taken from the LEAP database.
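The LWF and LEAP microdata are confidential, so the snippet below is only an illustrative sketch of the shutdown-rate calculation on a made-up firm-year panel; all column names and figures are hypothetical.

```python
import pandas as pd

# Hypothetical firm-year panel: a firm "shuts down" in year t if it has a
# positive payroll in t and a zero payroll in t+1.
panel = pd.DataFrame({
    "firm":     [1, 1, 2, 2, 3, 3, 4, 4],
    "industry": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "year":     [2000, 2001, 2000, 2001, 2000, 2001, 2000, 2001],
    "payroll":  [10.0, 0.0, 5.0, 6.0, 8.0, 9.0, 7.0, 0.0],
})

panel = panel.sort_values(["firm", "year"])
panel["payroll_next"] = panel.groupby("firm")["payroll"].shift(-1)

# Keep firm-years with a positive payroll and an observable following year
active = panel[(panel["payroll"] > 0) & panel["payroll_next"].notna()].copy()
active["shutdown"] = active["payroll_next"] == 0

# SR_jt = SD_{j,t+1} / N_{jt}
shutdown_rate = active.groupby(["industry", "year"])["shutdown"].mean()
print(shutdown_rate)
```

Each industry-year cell of the resulting series is $SR_{jt}$: the share of firms with a positive payroll in year $t$ whose payroll is zero in year $t+1$.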
Table 1 Industry classification by NAICS
A separation occurs in year t, if t is the last year of an individual's tenure in firm j (i.e., the end of a job spell). The LWF database allows for the categorization of employee-employer separations. Quits and layoffs are two such categories. Layoffs are further broken into temporary, worker subject to recall, and permanent, worker not subject to recall subcategories. These categories allow for the creation of dummy variables. The value of a given separation dummy variable is 1 for any type of the given separation, including, but not limited to, quits and layoffs. For example, the value of the layoff variable is 1 if the Record of Employment (ROE) states that the shortage of work is the reason for the separation, i.e., layoff.
Table 2 provides summary statistics across industries. There is industry heterogeneity in terms of (i) workers' characteristics of age, gender, tenure, and earnings and (ii) industry characteristics of shutdown rate, permanent layoff rate, number of firms, and number of workers. The average worker age varies from a low of 37.8 years in the motion picture and recording industry to a high of 44.0 years in the primary metal manufacturing industry. Women dominate clothing manufacturing and leather and allied manufacturing at 76% of workers but constitute only 10% of workers in mining. Tenure ranges from 3.81 years in administrative and support services to 11.45 years in primary metal manufacturing. Average earnings are the highest in oil and gas extraction at $107,090 per year, while earnings in accommodation and food services average $18,800 per year. The shutdown rate is the highest in utilities at 16.1% and the lowest in fabricated metal product manufacturing at 7.4%. Forestry has the highest permanent layoff rate at 12.4%, while oil and gas extraction has the lowest at 1.5%.
Table 2 Summary statistics by industry
Table 3 provides summary statistics on worker characteristics across five firm size classes. We define firm size groupings as (i) extra small (XS)—less than 5 employees; (ii) small (S)—5–19 employees; (iii) medium (M)—20–99 employees; (iv) large (L)—100–500 employees; and (v) extra large (XL)—greater than 500 employees. XS size class firms have workers with the lowest tenure and earnings relative to the other size classes, but these firms experience the highest shutdown rates. The permanent layoff rate is the highest for the firm size classes XS, S, and M at around 5%. L size class firms have a 3.7% layoff rate, while XL firms have a 2% layoff rate.
Table 3 Summary statistics by size of firms
Table 4 provides summary statistics for worker characteristics across five regions: (1) Atlantic provinces—Newfoundland, New Brunswick, Nova Scotia, and Prince Edward Island; (2) Quebec; (3) Ontario; (4) Prairie provinces—Alberta, Saskatchewan, and Manitoba; and (5) British Columbia. Across the regions, average age, the proportion of men versus women, and the exit rate are similar. The eastern Canadian regions of the Atlantic provinces, Quebec, and Ontario tend to have longer tenure than the Prairie provinces and British Columbia. Wages range from an average high of $45,780 in Ontario to a low of $29,710 in the Atlantic provinces. The opposite pattern holds for layoffs: the Atlantic provinces have the highest permanent layoff rate at 6.7%, while Ontario has the lowest at 3%.
Table 4 Summary statistics by region
Comparison of continuing and shutting down firms
One issue to consider when investigating the impact of industry shutdown rates on worker layoff rates is that workers may quit in anticipation of deteriorating industry conditions in order to avoid the negative consequences of being laid off. A worker may quit in anticipation of being laid off or of the firm shutting down. This may create a selection bias when investigating firm layoffs of workers. Given that a random sample of workers forms the basis of the LWF database, we observe separations for workers in the LWF sample but do not observe separation rates at the firm level. Therefore, we are unable to determine quit rates in the years prior to a firm's shutdown. However, the data contain a measure of firm employment, which allows us to look at overall employment activity at firms.
Figure 1 presents the median employment size and growth for firms in their last 3 years prior to shutdown. As a comparison, the figure also presents median employment size and growth for rival continuing firms over a similar 3-year window. Continuing firms tend to be larger and have higher growth than shutting-down firms. Median employment size and growth both tend to be flat for continuing firms. In contrast, shutting-down firms experience a drop in size and increasingly negative growth as shutdown approaches.
Comparison of shutdown and continuing firms. Note: This graph provides a comparison between shutting down firms in year t with continuing firms. For these two groups of firms, the graph provides the median employment size and growth rate in 3 years prior to firm shutdown in the former group. For a full comparison by industry, see Tables 5 and 6
Table 5 Size comparison of shutting down and continuing firms
Table 6 Growth comparison of shutting down and continuing firms
Tables 5 and 6 provide these comparisons between continuing and shutting-down firms across industries. Similar results occur at the industry level. The shedding of workers, whether through layoffs or quits, appears to occur in the years leading up to firm shutdown.
Permanent layoffs—extensive margin
Industry shutdown rates measure the short-run performance of firms within an industry. High shutdown rates indicate that firms within the industry deem shutdown more profitable than continuing operations. The implication is that a firm must become profitable or eventually exit. One method to reduce costs is worker layoffs. These layoffs can be temporary or permanent depending on circumstances. Temporary layoffs may turn into permanent layoffs in the long run if the firm eventually exits or workers are not recalled.
Thus, our analysis focuses on permanent layoffs by firms as a way to analyze the process of shedding workers. We consider the effects of industry shutdown rates along with other controls to assess the qualitative and quantitative impacts of industry conditions on a firm's decision to permanently lay off workers.
We identify shutdowns in year t as those firms transitioning from a positive payroll in year t to a zero payroll in year t+1.
A firm's shutdown does not imply an exit, as the firm may have a positive payroll in some future period. Our focus on anticipated separations motivates the choice of shutdown rates. The absence of a positive annual payroll in year t signals at least a year-long closure. From the worker's point of view, there is little difference whether or not his/her firm reopens in some future year following shutdown. In either case, the firm's workers anticipate prolonged separations and adjust their labor market decisions. Shutdowns are also more easily identified in the data than firm exits since they only require knowledge of the firm's payroll in two consecutive periods. We perform separate analyses for men and women and across firms in different size classes. We analyzed the pooled data but found that the assumption of homogeneous effects across men and women is rejected both statistically and economically.7
Selection issues and identification strategy
A selection issue arises as the permanent layoff decisions are only observable for continuing firms in year t. In the remainder of the paper, we will refer to continuing firms to indicate those firms not experiencing a shutdown at year t. To account for the selection bias, we consider two separate dichotomous variables and allow for correlated disturbances. For worker i at firm k in industry j at time t, we estimate a bivariate probit model. The continuing firm (FS) equation accounts for firm selection and the permanent layoff (PL) equation captures a worker's outcome or the probability of a permanent layoff, which gives the following bivariate probit worker selection (BPWS) model:
$$\begin{array}{@{}rcl@{}} \text{FS}_{ikjt}^{*} &=& \alpha^{\text{FS}} + \beta^{\text{FS}} \text{SR}_{jt} + \gamma^{\text{FS}} B_{it} + \sum_{j=1}^{J} \psi_{j}^{\text{FS}} I_{j} + \sum_{t=1993}^{2002} \delta_{t}^{\text{FS}} D_{t} + \lambda Z_{kjt} + v_{ikjt}, \\ \text{PL}_{ikjt}^{*} &=& \alpha^{\text{PL}} + \beta^{\text{PL}} \text{SR}_{jt} + \gamma^{\text{PL}} B_{it} + \sum_{j=1}^{J} \psi_{j}^{\text{PL}} I_{j} +\sum_{t=1993}^{2002} \delta_{t}^{\text{PL}} D_{t} + u_{ikjt}. \end{array} $$
$$\begin{array}{@{}rcl@{}} v_{ikjt},u_{ikjt}\sim N(\mu,\Sigma), \mu= \left[\begin{array}{cc} 0\\0 \end{array}\right], \Sigma= \left[\begin{array}{cc} 1&\rho \\ \rho&1 \end{array}\right] \end{array} $$
The sample includes only continuing workers or workers experiencing a permanent layoff. Thus, the indicator variable PL ikjt equals 1 if a worker experiences a permanent layoff with \(\text {PL}_{ikjt}^{*} \geq 0\) and 0 if a worker continues employment. A second indicator variable, FS ikjt , equals 1 if a firm remains active with \(\text {FS}_{ikjt}^{*} \geq 0 \) and 0 otherwise. SR jt is the annual shutdown rate in industry j in period t. The PL equation includes individual-, firm-, and industry-specific control variables: (i) B it is a set of worker-specific controls including age categories, marital status, job tenure and tenure squared, region of residence, union membership, and earnings in year t−1; (ii) I j is a set of industry-specific dummy variables; and (iii) D t is a set of year-specific dummy variables. We break the sample of workers into subsamples for estimation purposes based on their firm's employment size. The FS equation includes all the relevant variables from the PL equation but adds Z kjt as the exclusion restrictions at both the firm (k) and industry (j) levels. For a technical discussion of this method, please refer to Maddala (1983).
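For readers interested in how such a model can be estimated, the sketch below writes out the log-likelihood of the censored bivariate probit directly, with the permanent layoff outcome observed only for continuing firms. It is a simplified illustration rather than the estimation code used for the paper; in particular, the tanh reparameterization of ρ is only a convenient implementation choice.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import minimize

def biv_cdf(a, b, rho):
    """Bivariate standard-normal CDF evaluated pointwise over paired arrays."""
    cov = [[1.0, rho], [rho, 1.0]]
    return np.array([multivariate_normal.cdf([ai, bi], mean=[0.0, 0.0], cov=cov)
                     for ai, bi in zip(a, b)])

def neg_loglik(params, X_fs, X_pl, fs, pl):
    """BPWS likelihood: PL is observed only when the firm continues (FS = 1)."""
    k_fs = X_fs.shape[1]
    b_fs, b_pl = params[:k_fs], params[k_fs:-1]
    rho = np.tanh(params[-1])                     # keeps rho inside (-1, 1)
    xb_fs, xb_pl = X_fs @ b_fs, X_pl @ b_pl
    ll = np.empty(len(fs), dtype=float)
    shut = fs == 0                                # firm shuts down; layoff unobserved
    ll[shut] = norm.logcdf(-xb_fs[shut])
    s1 = (fs == 1) & (pl == 1)
    ll[s1] = np.log(biv_cdf(xb_fs[s1], xb_pl[s1], rho))
    s0 = (fs == 1) & (pl == 0)
    ll[s0] = np.log(biv_cdf(xb_fs[s0], -xb_pl[s0], -rho))
    return -ll.sum()

# Usage sketch: X_fs stacks the PL covariates plus the exclusion restrictions Z,
# and the last element of the parameter vector is atanh(rho).
# start = np.zeros(X_fs.shape[1] + X_pl.shape[1] + 1)
# fit = minimize(neg_loglik, start, args=(X_fs, X_pl, fs, pl), method="BFGS")
```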
Identification strategy
The BPWS model given in Eq. 2 identifies the impact of selection in two ways: (1) through the correlation parameter (ρ) of the joint model and (2) through exclusion restrictions (Z jt ). The correlation parameter achieves identification through functional form. Han and Vytlacil (2017) prove that identification is achievable in bivariate models without exclusion restrictions (i.e., instruments) if there are common exogenous regressors in both equations. They also show that an exclusion restriction is necessary and sufficient for identification in models without common exogenous covariates, but only sufficient in models with common exogenous covariates.
The second method requires at least one variable that affects whether a firm continues but does not affect, contemporaneously, whether a worker experiences a permanent layoff. There are two exclusion restrictions. The first is the industry-level US–Canada bilateral real exchange rate:
$$\begin{array}{@{}rcl@{}} \text{RER}_{jt}= P_{jt}^{\text{US}}/P_{jt}^{\text{CDN}} \times e_{t}, \end{array} $$
where \(P_{jt}^{\text {US}}\) is the US industry gross output price index, \(P_{jt}^{\text {CDN}}\) is the Canadian industry gross output price index, and e t is the nominal bilateral exchange rate between Canada and the USA in year t. The choice of RER jt as the exclusion restriction is motivated by the fact that the USA is Canada's major trading partner. The real exchange rate affects Canadian export and import propensities with the USA. Short-run profits of Canadian firms likely fluctuate with these export/import propensities. Thus, real exchange rate movements likely affect the probability that a Canadian firm continues to operate or shuts down temporarily; see, for example, Huynh et al. (2010). For employment, the impact of exchange rates differs. Huang et al. (2014) provide empirical evidence that exchange rate movements have little effect on manufacturing employment and no effect on non-manufacturing employment in Canada over the period 1994 to 2010. Commodity prices and exchange rate movements are tied together: the authors show that commodity price movements are a main driver of the employment changes in manufacturing that result from exchange rate movements. Further, Campa and Goldberg (2001) show that real exchange rate movements in the USA affect wages and hours worked but have negligible effects on total employment and the number of jobs. Based on these empirical findings, we argue that fluctuations of the real exchange rate are correlated with firm exit rates but are unlikely to affect the contemporaneous probability that a worker experiences a permanent separation.
The second exclusion restriction is a relative firm-to-industry variable. We compute the logarithm of the ratio of the wage bill of firm k at time t to the average wage bill of firms in industry j and size class s at time t:
$$\begin{array}{@{}rcl@{}} \log \overline{\text{wage bill}}_{kjst} = \log \bigg(\frac{{\text{wage bill}}_{kjst}} {\overline{\text{wage bill}}_{jst}} \bigg). \end{array} $$
This variable is strongly correlated with whether a firm continues operations, as it proxies for how competitive a firm is relative to its industry peers. Controlling for the employment size of a firm, the relative wage bill provides a measure of firm efficiency/productivity within an industry. More productive firms pay higher wages and, thus, have a higher wage bill as discussed in Abowd et al. (1999), Michelacci and Quadrini (2009) and Moscarini and Postel-Vinay (2012). More productive firms with higher wage bills should be more likely to continue operations. However, the contemporaneous relative wage bill of a firm is unlikely to contain information about worker layoff probabilities at continuing firms.
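Both exclusion restrictions are simple transformations of observable series. The sketch below shows one way they could be constructed; the column names and numerical values are hypothetical and serve only to make the definitions concrete.

```python
import numpy as np
import pandas as pd

# Hypothetical industry price indexes and nominal CAD/USD exchange rate.
prices = pd.DataFrame({
    "industry": ["A", "A"], "year": [1995, 1996],
    "P_us": [1.00, 1.04], "P_cdn": [1.00, 1.02], "e_nominal": [1.37, 1.36],
})
# RER_jt = (P_US_jt / P_CDN_jt) * e_t
prices["RER"] = prices["P_us"] / prices["P_cdn"] * prices["e_nominal"]

# Hypothetical firm records for the relative wage bill.
firms = pd.DataFrame({
    "firm_id": [1, 2, 3], "industry": ["A", "A", "A"], "year": [1995] * 3,
    "size_class": ["S", "S", "M"], "wage_bill": [200.0, 300.0, 900.0],
})
# Log of firm k's wage bill relative to the industry-size-class average.
cell_mean = firms.groupby(["industry", "size_class", "year"])["wage_bill"].transform("mean")
firms["log_rel_wage_bill"] = np.log(firms["wage_bill"] / cell_mean)
print(prices[["industry", "year", "RER"]])
print(firms[["firm_id", "log_rel_wage_bill"]])
```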
The BPWS results provide estimates of the impact of industry shutdown rates on worker layoffs with an additional selection control for whether a firm is active or not. Table 7 presents estimation coefficients for the probability of a permanent layoff when controlling for firm shutdown selection effects for men while Table 8 provides estimation coefficients for women.
Table 7 Bivariate probability of permanent layoff: men
Table 8 Bivariate probability of permanent layoff: women
The descriptive statistics illustrate that there is substantial variation in shutdown rates across industries and over time. Therefore, the impact of industry shutdown rates on permanent layoffs should be well identified. A likelihood ratio test reveals that selection is statistically significant in all cases for men and in three out of five cases for women; the exceptions are women at large and extra large firms. Therefore, selection via firm shutdown affects the probability that a worker experiences a permanent layoff. Most of the discussion that follows emphasizes the variable of interest, industry shutdown rates.
With the exception of women at small-sized firms, the coefficient on the shutdown rate is positive for both men and women across the firm size classes. Thus, these estimates indicate that the impact of industry shutdown rates on worker layoff rates is positive. Figure 2 provides estimated marginal effects of an increase in the industry shutdown rate on the probability of a worker layoff across the firm size classes. For comparison, this figure also provides the estimated marginal effects without accounting for selection.8 For both men and women, the quantitative impact of the industry shutdown rate on the permanent layoff probability changes when accounting for selection. After controlling for selection, the results for men indicate that a 1% increase in the industry shutdown rate causes between a 0.04 and 0.14% increase in the probability of a permanent layoff. For women, the marginal effects vary across the firm size classes; a 1% increase in industry shutdown rates implies (i) a 0.01% decrease in the probability of a permanent layoff at extra small-sized firms and (ii) a 0.11, 0.03, 0.01, and 0.05% increase in the probability of a permanent layoff at small-, medium-, large-, and extra large-sized firms, respectively.
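Marginal effects like those in Fig. 2 can be traced out numerically from the fitted linear indices. The sketch below takes a finite difference of P(PL = 1 | FS = 1) with respect to the shutdown rate; every parameter value in it is made up purely for illustration and does not correspond to the estimates in Tables 7 and 8.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def p_layoff_given_continuing(sr, beta_sr_fs, beta_sr_pl, base_fs, base_pl, rho):
    """P(PL = 1 | FS = 1) at shutdown rate sr; the remaining covariates are
    absorbed into the hypothetical linear-index constants base_fs and base_pl."""
    a = base_fs + beta_sr_fs * sr                       # firm-survival index
    b = base_pl + beta_sr_pl * sr                       # permanent-layoff index
    joint = multivariate_normal.cdf([a, b], mean=[0.0, 0.0],
                                    cov=[[1.0, rho], [rho, 1.0]])
    return joint / norm.cdf(a)

# Finite-difference effect of a one-percentage-point rise in SR_jt.
sr0, h = 0.10, 0.01
args = dict(beta_sr_fs=-1.5, beta_sr_pl=2.0, base_fs=1.0, base_pl=-1.8, rho=-0.45)
me = (p_layoff_given_continuing(sr0 + h, **args)
      - p_layoff_given_continuing(sr0, **args))
print(f"Change in P(permanent layoff | continuing firm): {me:.4f}")
```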
Probability of permanent layoff and the effect of selection. Note: The figure provides the marginal effects of industry shutdown rates on the probability of a permanent layoff for a worker across various size classes of firms. Selection corresponds to estimates from Tables 7 and 8 for men and women, respectively. For comparison, no selection gives the estimates obtained when not accounting for selection effects of a firm continuing or shutting down
Returning to Tables 7 and 8, coefficients on the other control variables remain fairly constant across the firm size classifications and are qualitatively identical for men and women. The probability of a permanent layoff falls with a worker's income. Tenure effects are concave in shape. Married workers have a lower probability of a permanent layoff, while unionized workers have a higher permanent separation probability. Across the regions, workers in the Atlantic provinces experience the highest probability of a permanent layoff, while the lowest permanent layoff probability occurs for workers in the Prairie provinces. Tables 7 and 8 also report the coefficients on the log of the firm's wage bill and the log of the real exchange rate, which are our exclusion restriction variables in the selection equation. The coefficient on the wage bill variable is always positive and significant. This result likely captures the effect of firm size on firm survival, as larger firms tend to have higher survival rates. The coefficient on the log of the real exchange rate varies between negative and positive and is only statistically significant, with a negative value, for men at small-sized firms and women at large-sized firms. For men, the correlation in the error terms between the two equations is approximately −0.45 in the extra small-, small-, and medium-sized firm categories and approximately 0.9 in the large- and extra large-sized firm categories. A negative correlation implies that a positive shock to a firm remaining active has a negative impact on the probability of a male worker being permanently laid off. This correlation also varies for women across firm size classes.
Earnings transitions—intensive margin
The previous section discusses permanent layoffs, or the extensive margin of employment. In this section, we discuss workers' earnings transitions, or the intensive margin of permanent layoffs, by looking at the earnings growth of workers experiencing a permanent layoff. We do not use the identification strategy found in Abowd et al. (1999), where worker and firm fixed effects enter additively. The LWF allows us to follow a worker's transition from a separation (layoff) to possible employment at another firm. Eeckhout and Kircher (2011) provide motivation for using transitions: they show that the estimated worker and firm fixed effects from the log-linear wage equation do not directly identify the underlying worker skill and firm productivity heterogeneity. In particular, the correlation between the estimated worker and firm fixed effects does not identify sorting in the matching between worker skill and firm productivity.
Earnings and selection
Similar to the previous selection problem, the estimated earnings growth model must account for selection effects due to firm shutdown. To deal with this selection problem, we estimate the effect of the transitions on the change in log wage using a Heckman-selection model. Again, the selection equation describes the probability of a firm continuing \(\left (\text {FS}_{kjt}^{*}\right)\), while the outcome equation describes the log wage \(\left (\ln w_{ikjt}^{*}\right)\) of a specific transition:
$$\begin{array}{@{}rcl@{}} \text{FS}_{ikjt}^{*} &=& \alpha^{\text{FS}} + \beta^{\text{FS}} \text{SR}_{jt} + \gamma^{\text{FS}} B_{it} + \sum_{j=1}^{J} \psi_{j}^{\text{FS}} I_{j} + \sum_{t=1993}^{2002} \delta_{t}^{\text{FS}} D_{t} + \lambda Z_{kjt} + v_{ikjt}, \\ \Delta \log w_{ikjt}^{*} &=& \alpha^{w} + \beta^{w} \text{SR}_{jt} + \gamma^{w} B_{it} + \sum_{j=1}^{J} \psi_{j}^{w} I_{j} +\sum_{t=1993}^{2002} \delta_{t}^{w} D_{t} + u_{ikjt}. \end{array} $$
where Δ logw ikjt is the wage growth of worker i from firm k in industry j at time t, and the errors u ikjt and v ikjt are normally distributed with zero means and correlation ρ. The other variables are defined as in the BPWS model in Eq. 2. The analysis examines wage growth as a way to control for potentially unobservable factors. For example, there may be wage differentials due to job risk, education, or occupations with higher layoff rates. The analysis includes industry, location, and firm size variables, which partially capture some of these differentials. Further, these unobservable, time-invariant worker or job characteristics are unlikely to affect wage growth. Dostie (2005) and Abowd et al. (2005) show that unobserved heterogeneity affects the level of logw ikjt . However, the analysis of wage growth, Δ logw ikjt , differences out time-invariant factors and, thus, removes these unobservable variables. In contrast to the BPWS model, the exclusion restriction only includes the firm-to-industry relative wage bill (\(\log \overline {\text {wage bill}}_{ikjt}\)). The specification does not include the real exchange rate as an exclusion restriction: Campa and Goldberg (2001) show an impact of the real exchange rate on wages, which justifies this change from the worker separation analysis.
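Footnote 9 refers to full-information maximum likelihood estimation of this model. For intuition, the simulation sketch below shows the classic two-step version of the same selection correction, in which the inverse Mills ratio from a first-stage probit for firm continuation enters the wage-growth regression and its coefficient estimates λ = ρ × σ. This is our illustration, not the paper's estimation code; the data are simulated, and the second regressor plays the role of the wage-bill exclusion restriction.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=(n, 2))                 # z[:, 1] acts as the exclusion restriction
x = z[:, :1]                                # wage-growth regressor
u, v = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n).T
fs = (0.5 + z @ np.array([0.8, 1.0]) + v > 0).astype(int)   # continuing-firm indicator
dlogw = 0.1 - 1.5 * x[:, 0] + u                              # latent wage growth

# Step 1: probit for firm continuation, then the inverse Mills ratio.
Xsel = sm.add_constant(z)
probit = sm.Probit(fs, Xsel).fit(disp=0)
xb = Xsel @ probit.params
mills = norm.pdf(xb) / norm.cdf(xb)

# Step 2: OLS on the selected sample (continuing firms) with the Mills term.
sel = fs == 1
Xout = sm.add_constant(np.column_stack([x[sel, 0], mills[sel]]))
ols = sm.OLS(dlogw[sel], Xout).fit()
print(ols.params)   # constant, slope on x, and the selection term lambda = rho * sigma
```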
Tables 9 and 10 present the coefficient estimates for the earnings regression accounting for selection effects for men and women, respectively. The selection parameter (λ) is significant for all size classes except the small- and large-sized firm categories for men and the small- and extra large-sized firm categories for women.9 The insignificant cases reflect the small correlation (ρ) between the two equations. For comparison purposes, Fig. 3 provides coefficient estimates on the industry shutdown variable for the selection and non-selection models. For men, the coefficient on the industry shutdown rate variable becomes positive and statistically significant for workers at medium-sized firms, while the coefficients remain negative and statistically significant and increase slightly in magnitude for workers at the other size classes when moving from the non-selection to the selection model. For women, there is no change in the qualitative findings and little change in the quantitative effects of the industry shutdown rate after accounting for selection. Thus, the impact of firm-shutdown selection effects is small when examining worker earnings growth. With the exception of men at medium-sized firms, the correlation between the error terms in the two equations, ρ, is positive. A positive correlation indicates that firms with unexplained increases in the probability of remaining active also have unexplained increases in the wages they pay.
Δ logw ikjt and the effect of selection. Note: The figure provides the coefficient estimates of the industry shutdown rate on worker wage growth (Δ logw ikjt ) across various size classes of firms. Selection corresponds to estimates from Tables 9 and 10 for men and women, respectively. For comparison, no selection gives the estimates obtained when not accounting for selection effects of a firm continuing or shutting down
Table 9 Earnings regression with selection: men
Table 10 Earnings regression with selection: women
The change in the logarithm of worker wages measures the wage growth for a worker. Thus, the coefficient on the industry shutdown rate variable gives the response of worker wage growth to changes in the industry shutdown rate. Equivalently, this coefficient gives an elasticity or the percentage change in worker earnings in response to a 1% change in the industry shutdown rate. The estimated coefficient values indicate economic significance in that worker wage growth is highly responsive to industry shutdown rates. For men, extra small-sized firms show the least response of wage growth to industry shutdown rates with a coefficient of −0.98, while men at large-sized firms have the most response with a coefficient of −3.05. For women, workers at the extra small-sized firms have the largest response as the coefficient estimate indicates a 1% increase in industry shutdown rate causes a 3% decrease in worker wage growth.
The coefficients on the other variables indicate similar patterns across firm size classes and genders. Earnings growth falls with age and rises with being married or working at a unionized firm. The effect of job tenure is nonlinear: wage growth initially falls with tenure but begins to rise after approximately 11 years on a job. We investigate worker earnings while controlling for the possible association between a worker's firm size class and changes in earnings. There are two potential reasons for a worker's firm size class to change. First, the worker moves to a different firm belonging to a different size class. Second, the worker stays at the same firm, but the firm moves to a different size class. Since we look at workers experiencing a permanent layoff, our analysis focuses on the group of workers moving to a different firm. This analysis demonstrates whether a layoff necessarily results in a worse situation for a worker. We examine the impact of firm size class switches on the earnings of laid-off workers since firm size provides a clear dimension for improvement in a worker's earnings. Oi and Idson (1999) document that larger firms pay higher wages. Therefore, workers experiencing a layoff but moving to firms in larger size classes may actually see their wages increase.10
Figure 4 presents the probability distribution function (PDF) of Δlog(wage ikjt ) for those men and women, respectively, who experience a permanent layoff but move to a different firm. Each figure shows PDFs for three subgroups: (i) switch down—worker moves to a firm in a smaller size class; (ii) switch to same size—worker moves to a firm in the same size class; and (iii) switch up—worker moves to a firm in a larger size class. For both men and women, the wage growth PDFs for the switch down, switch to same size, and switch up groups lie to the left, in the middle, and to the right, respectively. These figures indicate that workers who transition to larger sized firms do better than workers who move to a firm in the same size class, while workers who move to smaller sized firms do worse. An asymmetry appears when comparing the distributions across the three groups. For negative values of wage growth, the lower tail for the switch down group of workers is much fatter than for the other two groups, while the lower tails look similar for the switch to same size and switch up groups. For positive values of wage growth, the opposite occurs: the switch down and switch to same size groups have similar upper tails, while the switch up group has a fatter upper tail.
Unconditional probability distribution of Δ logw ikjt . Note: This graph illustrates the unconditional growth rate of wages (Δ logw ikjt ) for male (top graph) and female (bottom graph) workers who experienced a permanent layoff and found a new job. The following three lines are for groups of workers: (1) transition to a smaller size firm (switch down), (2) transition to a larger size firm (switch up), and (3) transition to a same size firm
This unconditional analysis ignores the rich characteristics of firms and workers. So, we amend the wage model with selection (5) to include the firm size class switches. The switchers are treated as exogenous as we focus only on involuntary separations or permanent layoffs. The following specification combines workers experiencing a firm size class switch with the selection wage model:
$$\begin{array}{@{}rcl@{}} \text{FS}_{ikjt}^{*} &=& \alpha^{\text{FS}} \,+\, \beta^{\text{FS}} \text{SR}_{jt} \,+\, \gamma^{\text{FS}} B_{it} \,+\, \sum_{j=1}^{J} \psi_{j}^{\text{FS}} I_{j} \,+\,\! \sum_{t=1993}^{2002} \delta_{t}^{\text{FS}} D_{t} \,+\, \lambda Z_{kjt} \,+\, \sum_{i \in m}\eta^{\text{FS}} \text{SW}_{it} \,+\, v_{ikjt}, \\ \Delta \log w_{ikjt}^{*} &=& \alpha^{w} + \beta^{w} \text{SR}_{jt} + \gamma^{w} B_{it} + \sum_{j=1}^{J} \psi_{j}^{w} I_{j} +\sum_{t=1993}^{2002} \delta_{t}^{w} D_{t} + \sum_{i \in m}\eta^{w} \text{SW}_{it} + u_{ikjt}. \end{array} $$
where SW it is a set of indicator variables for individuals making various firm size transitions between time t−1 and t, and η w are the corresponding coefficients on these indicators. The firm size transition classes, m, are: (i) extra small to small (XS–S); (ii) small to extra small (S–XS); (iii) small to small (S–S); (iv) small to medium (S–M); (v) medium to small (M–S); (vi) medium to medium (M–M); (vii) medium to large (M–L); (viii) large to medium (L–M); (ix) large to large (L–L); (x) large to extra large (L–XL); (xi) extra large to large (XL–L); and (xii) extra large to extra large (XL–XL). Table 11 provides estimates for the earnings regressions controlling for firm size class changes. The industry shutdown rate continues to have a negative impact on worker earnings even with the additional control for switching firm size class. The coefficients on the switching variables have the expected signs: an increase in the firm size class of a worker is associated with an increase in the worker's earnings, while a decrease in firm size class is associated with a fall in earnings.
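Constructing the transition indicators SW it amounts to concatenating the pre- and post-layoff firm size classes, as in the brief sketch below (hypothetical records, illustrative only).

```python
import pandas as pd

size_order = ["XS", "S", "M", "L", "XL"]

# Hypothetical laid-off workers with firm size class before (t-1) and after (t).
moves = pd.DataFrame({
    "worker_id": [1, 2, 3, 4],
    "size_prev": ["S", "M", "L", "XL"],
    "size_new":  ["XS", "L", "L", "L"],
})

# Transition label m (e.g. "M-L") and the corresponding indicator variables SW.
moves["transition"] = moves["size_prev"] + "-" + moves["size_new"]
sw = pd.get_dummies(moves["transition"], prefix="SW")

# Direction of the switch, used for the switch up / same / down comparison.
rank = {s: i for i, s in enumerate(size_order)}
step = moves["size_new"].map(rank) - moves["size_prev"].map(rank)
moves["direction"] = step.map(lambda d: "up" if d > 0 else ("down" if d < 0 else "same"))
print(pd.concat([moves, sw], axis=1))
```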
Table 11 Earnings switcher regression with selection: pooled
Switching from an extra small- to a small-sized firm causes wages to increase by 0.22% for men and 0.18% for women. The magnitude is not as great in the reverse direction: switching from a small- to an extra small-sized firm causes earnings for men to fall by 0.20% and earnings for women to fall by 0.14%. A movement from medium- to large-sized firms causes the earnings of men to increase by 0.11% and the earnings of women to increase by 0.09%, while a movement from large- to medium-sized firms causes the earnings of men and women to fall by 0.06 and 0.01%, respectively. Workers not changing firm size class generally do not see changes in their earnings. The exceptions to this rule are men at medium- and large-sized firms, who see a statistically significant increase in earnings of 6%.
Figure 5 presents the PDFs of the residuals from the regressions in Table 11 for men and women. As in Fig. 4, these workers are broken into three categories based on the pre-layoff to post-layoff size class transition of their firms. The conditioning removes a significant amount of the difference between the distributions across the three categories. Further, the asymmetries in the tails of the distributions across the three categories disappear after the conditioning. A worker does not necessarily end up in a worse position, with a lower-earning job, after being permanently laid off. However, almost 60% of laid-off workers who move to smaller or similarly sized firms see a fall in wages. In contrast, less than 50% of laid-off workers who move to a larger sized firm see their earnings fall. Thus, the type of firm a worker ends up at after being laid off explains a significant amount of the resulting wage outcome.
Conditional probability distribution of Δ logw ikjt . Note: This graph illustrates the conditional growth rate of wages (Δ logw ikjt ) for male (top graph) and female (bottom graph) workers who experienced a permanent layoff and found a new job. The following three lines are for groups of workers: (1) transition to a smaller size firm (switch down), (2) transition to a larger size firm (switch up), and (3) transition to a same size firm. The residual wage growth is generated by the Heckman selection model (6) and results in Table 11
Conclusions
We quantify the effect of industry shutdown rates on worker outcomes: involuntary separations or permanent layoffs (the extensive margin) and wage earnings (the intensive margin). Our empirical work shows that, when controlling for individual- and firm-specific characteristics, industry shutdown rates generally have a positive and significant effect on the probability of a permanent worker layoff. For wage growth, shutdown rates have a negative effect, and the effects are amplified for workers in smaller firms. The unique structure of the LWF database allows us to differentiate among industries in our analysis. We find substantial differences across industries in the roles of individual- and firm-level attributes in permanent layoffs and wage growth. Our analysis controls for firm selection effects on worker outcomes due to firm shutdown. Accounting for selection effects does alter the estimated impact of industry shutdown rates on worker outcomes.
Determining the relative contributions of worker, firm, industry, and time factors to overall employment instability is an essential step in developing training programs to counter the adverse effects of employment loss. If job instability is mostly determined by differences in individual human capital, then future policies may focus on providing opportunities for workers to improve their education or skills. If, on the other hand, job instability mostly reflects industry conditions or, more specifically, firm shutdown within an industry, then education and skill development programs may not be as effective. Hence, understanding the relative impact of individual and firm characteristics on worker turnover is important in determining the effectiveness of specific training and skill-development programs provided both privately and publicly. In light of the recent economic downturn that affected many Western countries, including Canada, the costs and benefits associated with such programs are likely to remain subject to intense policy discussion in the foreseeable future. Our estimates of the impact of industry shutdown rates on earnings growth are in line with other papers that focus on uncertainty and variability, such as Gathmann et al. (2017).
These results demonstrate the necessity of the joint analysis of firm shutdown with either permanent layoff or worker wages. Industry shutdown rates provide a measure of turbulence and firm turnover within an industry. Without controlling for firm selection, the analysis ignores a major portion of workers and firms. Higher industry shutdown rates suggest more turbulence within an industry. Substantial hiring and firing costs lead to a desire by continuing firms to keep and not lay off their workers. These costs factor into a firm's choice to continue operations or shutdown. Higher hiring and firing costs within an industry also factor into a firm's choice between temporary shutdown or permanent exit.
Controlling for a firm's shutdown probability allows the industry shutdown rate to fully capture industry turnover which leads to the positive correlation between industry shutdown and the permanent worker layoff rate. This finding complements the work by Moscarini and Postel-Vinay (2012) who document that the negative correlation between net job creation rates and the unemployment rate is larger for small firms versus large firms.
Job turnover has a rich set of dynamics that cannot necessarily be explored with reduced-form methods. Postel-Vinay and Robin (2006) highlight the role of frictional models of unemployment in modeling job turnover. In these models, job turnover is a dynamic process whose microfoundations are laid out explicitly. However, there is an important opportunity for further research on voluntary separations, that is, a worker quitting a job to find a new one. Recent work by Lise et al. (2016) allows matched agents to undertake on-the-job search and illustrates the complexity of labor outcomes in terms of employment prospects and earnings. A fruitful extension would consider both involuntary separations and voluntary quits.
1 Work in this literature is driven by collection of administrative data, which usually have restricted access. For example, a recent study by Song et al. (2015) shows that rising labor earnings dispersion in the USA is driven by increasing wage dispersion across firms and not by changes to within firm wage dispersion. Haltiwanger et al. (2006) provide a broad overview.
2 Morissette (2004), Morissette et al. (2007), and Morissette et al. (2013) use the LWF database to investigate permanent layoffs and worker reallocation.
3 Job instability has wide-ranging financial and other consequences for individuals and families (Jacobson et al. 1993; Gottschalk and Moffitt 1994, 2009; Beach et al. 2003; Morissette and Ostrovsky 2005). Often, it signals high earnings uncertainty, which may, in turn, lead to lower consumption (Browning and Lusardi 1996) and alter family savings and labor supply decisions (Pistaferri 2003). It may also affect families' schooling and occupational choices (Guiso et al. 2002) and even their fertility behavior (Fraser 2001).
4 We perform separate analysis on men and women as labor market decisions and outcomes are likely to differ; see Killingsworth and Heckman (1987), Loprest (1992), and Altonji and Blank (1999), inter alia.
5 We thank an anonymous referee for pointing out this salient feature.
6 This NAICS coding is partially due to retro-coding by Statistics Canada.
7 Results are available upon request.
8 The coefficients on the other variables are quite similar for the models with and without a firm selection control. A complete set of estimates for the model with no control for selection are available from the authors upon request.
9 In a full-information maximum likelihood estimation, the selection parameter is a function of correlation and variance (σ) or λ=ρ×σ.
10 Other dimensions to look at when investigating worker earnings following layoffs include workers moving to new occupations or industries. Our dataset does not include information regarding worker occupation. Further, there is no clear direction to the change in worker earnings when moving to a new industry or occupation unlike moving to larger firms.
11 A T4 form closely resembles a W-2 form in the USA.
Construction of Longitudinal Worker File
Statistics Canada constructs the LWF database from four data sources. The first data source in the LWF is the T4 Supplementary Tax File, which is a random sample of all individuals who received a T4 supplementary tax form and filed a tax return. A T4 supplementary tax form is issued by an employer to each employee for any earnings that either exceed a certain threshold or trigger income tax, Canada/Quebec Pension Plan (C/QPP), or unemployment insurance premiums. It contains information about the earnings received from an employer in a given year, tax deducted, pension contributions, union dues, and other information.
The second data source is the Record of Employment (ROE), which includes employer-provided information on separations and their reasons. Canadian employers are required by law to provide such information for any separation. The detailed list of reasons for separations includes voluntary and involuntary separations such as shortage of work, labor dispute, injury or illness, quit, pregnancy and parental leave, retirement, and other reasons. The third data source is the Longitudinal Employment Analysis Program (LEAP). Statistics Canada constructs and maintains the LEAP database. This database includes information about the size of the employee's firm and tracks employees who move from one firm to another. The LEAP database covers the entire Canadian economy and includes firms (but not establishments) with at least one dollar in annual payroll. The key information that comes from LEAP is the firm's employment, derived from its payroll using average labor units (ALU). Statistics Canada constructs LEAP, and by extension the LWF database, to handle mergers and acquisitions in a retrospective manner. Suppose two firms, A and B, merge in year t to create firm C. Within the database, a synthetic history for firm C is created for the years prior to t by aggregating information from firms A and B, so that only firm C's information appears in the database. Thus, identified firm exits or shutdowns are not due to merger activity. The final data source is personal income tax files (T1), which add demographic variables such as age, sex, family status, and area of residence. They also provide information about individuals' income sources other than T4 earnings.
Our data were constructed by using information from LEAP to classify firm entries and shutdowns and to compute industry-specific shutdown rates. Identification of firm entries and shutdowns is based on firm payroll transitions from one year to the next. A firm's entry year is the first year in which the firm has a positive payroll. We identify a firm shutdown in year t when a firm has zero payroll in year t but a positive payroll in year t−1. Thus, the entry year is not identifiable for firms existing in 1991, the first year of the LEAP database, while firm shutdown is not identifiable in 2008, the last year of the database. Further, LEAP includes NAICS codes for firms from 1992 onwards. Consequently, NAICS industry-specific shutdown rates can be computed only from 1992 to 2007.
We proceed by extracting individual data from the LWF. Since NAICS codes in the LWF are available only from 1992, we used the LWF data from 1992 to 2008. We kept men and women aged 24 to 64. Total earnings in year t were defined as the individual's total annual paid employment income (wages and salaries) computed from all T4 forms issued to the individual in year t. All earnings are adjusted to 2007 constant dollars using the Consumer Price Index for Canada. For individuals who held multiple jobs in a given year, we retained only the characteristics of the main job, defined as the job with the highest T4 amount in that year.11 To each individual record in the LWF, we added industry-specific shutdown rates by matching firm identifiers in the LWF to those in the LEAP. We excluded individuals who died and those whose employer's industry classification was unknown.
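The deflation, total-earnings, and main-job steps described above reduce to a few lines of panel manipulation. The sketch below is our illustration with hypothetical records and CPI values, not the code used to build the analysis file.

```python
import pandas as pd

# Hypothetical T4 job records: one row per worker-firm-year.
t4 = pd.DataFrame({
    "worker_id": [1, 1, 2],
    "firm_id":   [10, 11, 12],
    "year":      [2000, 2000, 2000],
    "earnings":  [30000.0, 12000.0, 55000.0],
})
cpi = {2000: 0.85, 2007: 1.00}              # illustrative CPI levels, 2007 = 1

# Deflate to 2007 constant dollars; total earnings sum all T4s in the year.
t4["real_earnings"] = t4["earnings"] / t4["year"].map(cpi)
t4["total_earnings"] = t4.groupby(["worker_id", "year"])["real_earnings"].transform("sum")

# Main job per worker-year: the T4 record with the highest amount.
main_job = (t4.sort_values("real_earnings", ascending=False)
              .drop_duplicates(["worker_id", "year"], keep="first"))
print(main_job[["worker_id", "year", "firm_id", "total_earnings"]])
```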
Next, individual employer-employee records from the LWF are matched to industry price information available for the period from 1987 to 2007. US industry prices are taken from Industry Economic Accounts tables available from the Bureau of Economic Analysis, US Department of Commerce (Chain-Type Price Indexes for Gross Output by Industry series). Canadian industry price indexes are computed from the information on gross output and real gross output, by industry (Statistics Canada CANSIM series 383-0022). Although both the US and Canadian industry price indexes are based on the North American Industry Classification System (NAICS) codes, there are some differences between the industries available in each series. We identified 42 industries for which a direct correspondence between the two series could be established. Excluded are primarily industries that are most likely to be represented by the public sector, such as, for instance, public administration, education and healthcare. Three industries ("petroleum and coal product manufacturing," "pipeline transportation," and "waste management") had to be excluded because of insufficient sample size. Therefore, our final sample includes 39 industry categories. The list of included industries is given in Table 1. Finally, the LWF records are also matched to annual Canada/US nominal exchange rates necessary to produce real exchange rates used in the study. The rates used in the study are from the G.5 Foreign Exchange Rates series provided by the Board of Governors of the Federal Reserve System (Series ID: EXCAUS).
Abowd, J, Kramarz F, Roux S (2005) Wages, mobility and firm performance: advantages and insights from using matched worker-firm data. Econ J 116(512): F245–F285.
Abowd, JM, Kramarz F, Margolis DN (1999) High wage workers and high wage firms. Econometrica 67(2): 251–334.
Altonji, JG, Blank RM (1999) Race and gender in the labor market. In: Ashenfelter O, Card D (eds) Handbook of Labor Economics, 3143–3259. Elsevier, Amsterdam.
Beach, CM, Finnie R, Gray D (2003) Earnings variability and earnings instability of women and men in Canada: how do the 1990s compare to the 1980s? Can Public Policy 29(s1): 41–64.
Browning, M, Lusardi A (1996) Household saving: micro theories and micro facts. J Econ Lit 34(4): 1797–1855.
Campa, JM, Goldberg LS (2001) Employment versus wage adjustment and the U.S. dollar. Rev Econ Stat 83(3): 477–489.
Davis, SJ, Wachter TV (2011) Recessions and the costs of job loss. Brook Pap Econ Act 2: 1–72.
Dostie, B (2005) Job turnover and the returns to seniority. J Bus Econ Stat 23(2): 192–199.
Eeckhout, J, Kircher P (2011) Identifying sorting–in theory. Rev Econ Stud 78(3): 872–906.
Fraser, CD (2001) Income risk, the tax-benefit system and the demand for children. Economica 68(269): 105–25.
Farber, HS (1999) Mobility and stability: the dynamics of job change in labor markets. In: Ashenfelter O, Card D (eds) Handbook of Labor Economics, 2439–2483. Elsevier, Amsterdam.
Gathmann, C, Helm I, Schönberg U (2017) Spillover effects of mass layoffs. Working paper, University College London.
Gottschalk, P, Moffitt R (1994) The growth of earnings instability in the U.S. labor market. Brook Pap Econ Act 25(1994-2): 217–272.
Gottschalk, P, Moffitt R (2009) The rising instability of U.S. earnings. J Econ Perspect 23(4): 3–24.
Guiso, L, Jappelli T, Pistaferri L (2002) An empirical analysis of earnings and employment risk. J Bus Econ Stat 20(2): 241–53.
Haltiwanger, J, Brown C, Lane J (2006) Economic turbulence: the impact on workers, firms and economic growth. University of Chicago Press, Chicago.
Han, S, Vytlacil E (2017) Identification in a generalization of bivariate probit models with dummy endogenous regressors. J Econ 199: 63–73.
Huang, H, Pang K, Tang Y (2014) Effects of exchange rates on employment in Canada. Can Public Policy 40(4): 339–352.
Huynh, KP, Petrunia RJ, Voia M (2010) The impact of initial financial state on firm duration across entry cohorts. J Ind Econ 58(3): 661–689.
Jacobson, LS, LaLonde RJ, Sullivan DG (1993) Earnings losses of displaced workers. Am Econ Rev 83(4): 685–709.
Kambourov, G, Manovskii I (2009) Occupational specificity of human capital. Int Econ Rev 50(1): 63–115.
Killingsworth, MR, Heckman JJ (1987) Female labor supply: a survey. In: Ashenfelter O, Layard R (eds) Handbook of Labor Economics, 103–204. Elsevier, Amsterdam.
Lise, J, Meghir C, Robin J-M (2016) Mismatch, sorting and wage dynamics. Rev Econ Dyn 19(1): 63–87.
Loprest, PJ (1992) Gender differences in wage growth and job mobility. Am Econ Rev 82(2): 526–532.
Maddala, G (1983) Limited dependent and qualitative variables in econometrics. Cambridge University Press, Cambridge.
Michelacci, C, Quadrini V (2009) Financial markets and wages. Rev Econ Stud 76(2): 795–827.
Morissette, R (2004) Have permanent layoff rates increased in Canada? Analytical Studies Branch Research Paper No. 218, Statistics Canada.
Morissette, R, Lu Y, Qiu T (2013) Worker reallocation in Canada. Analytical Studies Branch Research Paper No. 348, Statistics Canada.
Morissette, R, Ostrovsky Y (2005) The instability of family earnings and family income in Canada, 1986–1991 and 1996–2001. Can Public Policy 31(3): 273–302.
Morissette, R, Zhang X, Frenette M (2007) Earnings losses of displaced workers: Canadian evidence from a large administrative database on firm closures and mass layoffs. Analytical Studies Branch Research Paper No. 291, Statistics Canada.
Moscarini, G, Postel-Vinay F (2012) The contribution of large and small employers to job creation in times of high and low unemployment. Am Econ Rev 102(6): 2509–39.
Oi, W, Idson T (1999) Firm size and wages. In: Ashenfelter O, Card D (eds) Handbook of Labor Economics, 2165–2214. Elsevier, Amsterdam.
Pistaferri, L (2003) Anticipated and unanticipated wage changes, wage risk, and intertemporal labor supply. J Labor Econ 21(3): 729–754.
Postel-Vinay, F, Robin J-M (2006) Microeconometric search-matching models and matched employer-employee data. Open Access publications from University College London. http://eprints.ucl.ac.uk/.
Quintin, E, Stevens J (2005a) Growing old together: firm survival and employee turnover. Top Macroecon 5(1): 1319–1319.
Quintin, E, Stevens JJ (2005b) Raising the bar for models of turnover. Finance and Economics Discussion Series 2005-23, Board of Governors of the Federal Reserve System (U.S.).
Song, J, Price DJ, Guvenen F, Bloom N, von Wachter T (2015) Firming up inequality. NBER Working Paper 21199, National Bureau of Economic Research, Inc.
Song, J, von Wachter T (2014) Long-term nonemployment and job displacement. Mimeo, UCLA.
von Wachter, T, Song J, Manchester J (2009) Long-term earnings losses due to mass layoffs during the 1982 recession: an analysis using U.S. administrative data from 1974 to 2004. Mimeo.
The assistance and hospitality of Statistics Canada is gratefully acknowledged. Comments and suggestions from the participants at various conferences and seminars are greatly appreciated. The authors thank Kathryn Shaw, Michael Veall, Gueorgui Kambourov, John Stevens, Joni Hersch, and anonymous referees for the valuable comments. The views in this paper represent those of the authors alone and are not those of the Bank of Canada or Statistics Canada. All errors and opinions are our own.
No funding was received for this paper.
Bank of Canada, 234 Wellington Street, Ottawa, K1A 0G9, ON, Canada
Kim P. Huynh
Statistics Canada, 24-J RHC, 100 Tunney's Pasture Driveway, Ottawa, K1A 0T6, ON, Canada
Yuri Ostrovsky
Lakehead University, 955 Oliver Road, Thunder Bay, P7B 5E1, ON, Canada
Robert J. Petrunia
Department of Economics, Carleton University, 1125 Colonel By Drive, Ottawa, K1S 5B6, ON, Canada
Marcel C. Voia
Correspondence to Marcel C. Voia.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Huynh, K.P., Ostrovsky, Y., Petrunia, R.J. et al. Industry shutdown rates and permanent layoffs: evidence from firm-worker matched data. IZA J Labor Econ 6, 7 (2017) doi:10.1186/s40172-017-0057-0
Worker separation
Firm survival
2020 North Central NA Regional Contest
Problem G
Safest Taxi
Consider a town whose road network forms an $N \times M$ grid, where adjacent intersections are connected by roads. All roads are bi-directional. Each direction has an associated number - the time needed to travel from one end-point to another.
Each direction of each road consists of one or more lanes. A lane can serve one of the following functions: left-turn, straight, right-turn, or any combination of them. However, a left-turn lane cannot be placed to the right of a straight or right-turn lane, and a straight lane cannot be placed to the right of a right-turn lane. There are no U-turn lanes.
The rules for crossing intersections are illustrated in the above figure (suppose a car enters the intersection from the south). To make a left turn, it must be in one of the $L$ left-turn lanes; let's number them $1$ through $L$ from left to right. The traffic rule says Lane $i$ must turn into the $i$-th lane (counting from the left) of the target road, except that Lane $L$ may turn into the $L$-th lane or any other lanes to its right.
Similarly, to go straight through an intersection, the car must be in one of the $S$ straight lanes; let's number them $1$ through $S$ from left to right. Lane $i$ must go into the $i$-th lane (counting from the left) of the target road, except that Lane $S$ may go into the $S$-th lane or any other lanes to its right.
To make a right turn, the car must be in one of the $R$ right-turn lanes. For the convenience of discussion, we consider these lanes and those of the target road from right to left. Let's number the right-turn lanes $1$ through $R$ from right to left. Lane $i$ must turn into the $i$-th lane (counting from the right) of the target road, except that Lane $R$ may turn into the $R$-th lane or any other lanes to its left.
It is guaranteed that if at least one left-turn / straight / right-turn lane is present, the target road must exist and have enough lanes to accommodate the left turn / straight / right turn, respectively. The time spent on crossing intersections is negligible.
In addition, a driver may change lanes in the middle of a road. When changing lanes, the taxi can only move into a single neighboring lane. Note that, under the intersection rules above, driving into any of the legal lanes of the target road does not count as a lane change. The time spent on lane changes is negligible.
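As an informal illustration of the turning rules (this helper is not part of the official problem statement or judge material), the mapping from an entering lane to its legal target lanes can be written as:

```python
def legal_target_lanes(i, n_move, n_target):
    """Target-road lanes that lane i may enter, where i is 1-indexed among the
    n_move lanes serving this movement and n_target is the lane count of the
    target road. Lanes are counted from the left for left turns and straight
    movements, and from the right for right turns; the same orientation is
    used for the target road."""
    if i < n_move:
        return {i}                                # inner lanes map one-to-one
    return set(range(n_move, n_target + 1))       # the outermost lane may fan out

# Example: 2 straight lanes entering a road with 3 lanes (counted from the left).
print(legal_target_lanes(1, 2, 3))   # {1}
print(legal_target_lanes(2, 2, 3))   # {2, 3}
```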
A trip starts and ends in the rightmost lane at the midpoint of a road. The time needed to travel from a midpoint to an endpoint is half of the corresponding endpoint-to-endpoint time.
You are running a taxi company called "Safest Taxi" in this town, with the slogan "your safety is in your hands". You let your customers choose the numbers $X$ and $Y$ for their trip, and the driver will make at most $X$ left turns and $Y$ lane changes to accomplish the trip.
What is the shortest time to fulfill each trip given the rules?
Input
The first line consists of three integers $N$ ($2 \leq N \leq 15$), $M$ ($2 \leq M \leq 15$) and $K$ ($1 \leq K \leq 3$), separated by a single space. The town's road network has $N$ intersections north-south and $M$ intersections west-east. Each road has $K$ lanes.
The second line consists of a single integer $D$. The town's road network has $D$ road segments. Every adjacent pair of intersections must appear in the list exactly once.
Each of the next $D$ lines describes a road segment with the following format:
\[ R_0\; C_0\; R_1\; C_1\; T\; L_0\; L_1 ... L_{K-1} \]
This describes a road segment going from the intersection at row $R_0$ column $C_0$ to the intersection at row $R_1$ column $C_1$ ($0 \leq R_0,R_1<N$, $0 \leq C_0,C_1<M$). Rows are numbered $0$ through $N-1$ from north to south, and columns are numbered $0$ through $M-1$ from west to east. The segment must connect two adjacent intersections, i.e., $\mid R_0 - R_1 \mid + \mid C_0 - C_1\mid = 1$. The time to travel through the entire segment is $T$ ($2 \leq T \leq 100$, $T$ must be an even number). The next $K$ strings describe the function of each of the $K$ lanes, from left to right, with the following semantics:
L: Left-turn only
S: Straight only
R: Right-turn only
LR: Left-turn or right-turn
LS: Left-turn or straight
SR: Straight or right-turn
LSR: Left-turn, straight or right-turn
The next line consists of a single integer $P$ ($1 \leq P \leq 50$), the number of trips to fulfill.
Each of the next $P$ lines describes a trip with the following format:
\[ R_{S0}\; C_{S0}\; R_{S1}\; C_{S1}\; R_{D0}\; C_{D0}\; R_{D1}\; C_{D1}\; X\; Y \]
This indicates that the starting point is the midpoint of segment ($R_{S0}, C_{S0}$) $\to $ ($R_{S1}, C_{S1}$), and the destination is the midpoint of segment ($R_{D0}, C_{D0}$) $\to $ ($R_{D1}, C_{D1}$). Both segments must appear in the above list. Both the starting point and the destination are on the rightmost lane. The customer requests that at most $X$ ($0 \leq X \leq 4$) left turns and $Y$ ($0 \leq Y \leq 4$) lane changes are allowed for the trip.
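A small parser sketch for the format above is given below (illustrative only). A full solution would additionally run a shortest-path search over states recording the current segment, lane, number of left turns used, and number of lane changes used.

```python
import sys

def read_instance(stream=sys.stdin):
    """Parse one instance in the input format described above."""
    n, m, k = map(int, stream.readline().split())
    d = int(stream.readline())
    segments = []
    for _ in range(d):
        parts = stream.readline().split()
        r0, c0, r1, c1, t = map(int, parts[:5])
        lanes = parts[5:5 + k]                 # e.g. ["LS", "SR"], left to right
        segments.append((r0, c0, r1, c1, t, lanes))
    p = int(stream.readline())
    trips = [tuple(map(int, stream.readline().split())) for _ in range(p)]
    return n, m, k, segments, trips
```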
Output
Output $P$ lines. The $i$-th line contains a single integer: the shortest time to fulfill the $i$-th trip under the rules above, or $-1$ if no feasible route exists.
Sample Explanation
The first three lines of the sample output are illustrated in the figure below.
If $X = 1$ and $Y = 1$, the shortest path is shown in red: make a lane change before reaching E and make a left turn. The total time is $8/2+8/2=8$;
If $X = 1$ and $Y = 0$, the shortest path is shown in green: go through E-F-I-H-E and make a left turn. The total time is $8/2+16+8+8+8+8/2=48$;
If $X = 0$ and $Y = 0$, the shortest path is shown in blue: go through E-B-C-F-E. The total time is $8/2+16+16+8+18+8/2=66$.
0 0 0 1 6 S R
0 1 0 0 8 L L
0 1 0 2 16 R R
0 2 0 1 18 LS S
0 0 1 0 8 LS S
1 0 0 0 8 R R
0 1 1 1 10 LS SR
1 1 0 1 16 L R
1 0 1 1 6 L SR
1 1 1 0 8 L R
1 2 1 1 18 L SR
2 1 1 1 8 LS SR
2 2 2 1 8 S SR
CPU Time limit 9 seconds
Memory limit 1024 MB
Rongqi Qiu
Source 2020-2021 North Central North America and Southern California ICPC Regional Contest
Improving contig binning of metagenomic data using \( {d}_2^S \) oligonucleotide frequency dissimilarity
Ying Wang ORCID: orcid.org/0000-0001-8766-59501,
Kun Wang1,
Yang Young Lu2 &
Metagenomics sequencing provides deep insights into microbial communities. To investigate their taxonomic structure, binning assembled contigs into discrete clusters is critical. Many binning algorithms have been developed, but their performance is not always satisfactory, especially for complex microbial communities, calling for further development.
According to previous studies, relative sequence compositions are similar across different regions of the same genome, but they differ between distinct genomes. Generally, current tools have used the normalized frequency of k-tuples directly, but this represents an absolute, not relative, sequence composition. Therefore, we attempted to model contigs using relative k-tuple composition, followed by measuring dissimilarity between contigs using \( {d}_2^S \). The \( {d}_2^S \) was designed to measure the dissimilarity between two long sequences or Next-Generation Sequencing data with the Markov models of the background genomes. This method was effective in revealing group and gradient relationships between genomes, metagenomes and metatranscriptomes. With many binning tools available, we do not try to bin contigs from scratch. Instead, we developed \( {d}_2^S\mathrm{Bin} \) to adjust contigs among bins based on the output of existing binning tools for a single metagenomic sample. The tool is taxonomy-free and depends only on k-tuples. To evaluate the performance of \( {d}_2^S\mathrm{Bin} \), five widely used binning tools with different strategies of sequence composition or the hybrid of sequence composition and abundance were selected to bin six synthetic and real datasets, after which \( {d}_2^S\mathrm{Bin} \) was applied to adjust the binning results. Our experiments showed that \( {d}_2^S\mathrm{Bin} \) consistently achieves the best performance with tuple length k = 6 under the independent identically distributed (i.i.d.) background model. Using the metrics of recall, precision and ARI (Adjusted Rand Index), \( {d}_2^S\mathrm{Bin} \) improves the binning performance in 28 out of 30 testing experiments (6 datasets with 5 binning tools). The \( {d}_2^S\mathrm{Bin} \) is available at https://github.com/kunWangkun/d2SBin.
Experiments showed that \( {d}_2^S \) accurately measures the dissimilarity between contigs of metagenomic reads and that relative sequence composition is more reasonable to bin the contigs. The \( {d}_2^S\mathrm{Bin} \) can be applied to any existing contig-binning tools for single metagenomic samples to obtain better binning results.
Metagenomics sequencing provides deep insights into microbial communities [1]. A key step toward investigating their taxonomic structure within metagenomics data involves assigning assembled contigs into discrete clusters known as bins [2]. These bins represent species, genera or higher taxonomic groups [3]. Therefore, efficient and accurate binning of contigs is essential for metagenomics studies.
The binning of contigs remains challenging owing to repetitive sequence regions within or across genomes, sequencing errors, and strain-level variation within the same species [4]. Many studies have reported on binning, essentially highlighting two different strategies [5]: "taxonomy-dependent" supervised classification and "taxonomy-independent" unsupervised clustering. "Taxonomy-dependent" studies are based on sequence alignments [6], phylogenetic models [7, 8] or oligonucleotide patterns [9]. "Taxonomy-independent" studies extract features from contigs to infer bins based on sequence composition [10,11,12,13,14], abundance [15], or hybrids of both sequence composition and abundance [4, 5, 16,17,18]. Therefore, these approaches can be applied to bin contigs from incomplete or uncultivated genomes. Some hybrid binning tools, such as COCACOLA [5], CONCOCT [4], MaxBin2.0 [18] and GroopM [16], are designed to bin contigs based on multiple related metagenomic samples. Contigs with similar coverage profiles are more likely to come from the same genome. Previous studies showed that co-varying coverage profiles across multiple related metagenomes play important roles in contig binning [4, 5]. The multiple related samples should be temporal or spatial samples of a given ecosystem [16] composed of similar microbial organisms, but different abundance levels. However, in many situations, multiple related samples may not be available in the required numbers, and as a result, contig-binning based on single metagenomes is still important.
Contig binning tools based on a single sample generally follow one of three strategies. 1) Sequence composition. It is usually denoted as frequencies of k-tuples (k-mers) with k= 2–6 as genomic signatures of contigs. MetaWatt [12] and SCIMM [11] built multivariate statistics and/or interpolated Markov models of background genomes to bin the contigs. Metacluster 3.0 [14] clustered the contigs using k-tuple frequency and Spearman correlation between the k-tuple frequency vectors. LikelyBin [10] utilized Markov Chain Monte Carlo approaches based on 2- to 5-tuples. 2) Abundance. AbundanceBin [15] estimated the relative abundance levels of species living in the same environment based on Poisson distributions of 20-tuples with an Expectation Maximization (EM) algorithm. The MBBC [19] package estimated the abundance of each genome using the Poisson process. All tools based on abundance are designed to bin short or long reads instead of assembled contigs. 3) Hybrid of composition and abundance. Maxbin1.0 [17] combined 4-tuple frequencies and scaffold coverage levels to populate the genomic bins using single-copy marker genes and an Expectation Maximization (EM) algorithm. MyCC [20] combined genomic signatures, marker genes and optional contig coverages within one or multiple samples.
Contig binning using k-tuple composition is based on the observation that relative sequence compositions are similar across different regions of the same genome, but differ between distinct genomes [21, 22]. The frequency vector of k-tuples is one representation of sequence composition. In general, current tools use the frequency of k-tuples directly, but this represents absolute, not relative, sequence composition. Here, "absolute" frequency refers to the number of occurrences of a k-tuple over the total number of occurrences of all k-tuples. On the other hand, "relative" frequency refers to the difference between the observed frequency of a k-tuple and the corresponding expected frequency under a given background model. Contigs in the same bin are from the same taxonomic group, such as one class, species or strain. Therefore, contigs from the same bin are expected to obey a consistent background model. Several sequence dissimilarity measures based on relative frequencies of k-tuples have been developed, such as CVTree, \( {d}_2^{\ast } \) and \( {d}_2^S \), and recent studies [23,24,25,26,27] have shown that \( {d}_2^S \) is superior to other dissimilarity measures for the comparison of genome sequences based on relative k-tuple frequencies. Therefore, in the present study, we attempted to model the relative sequence composition and measure dissimilarity between contigs with \( {d}_2^S \) for a single metagenomic sample. The \( {d}_2^S \) was designed to measure the dissimilarity between two sequences or next-generation sequencing data by modeling the background genomes [23] using Markov and interpolated Markov chains. Previous studies verified the effectiveness of \( {d}_2^S \) in revealing group and gradient relationships between genomes [24, 25], metagenomes [28] and metatranscriptomes [26, 27]. However, binning contigs directly using \( {d}_2^S \) is computationally expensive and impractical for large metagenomics studies due to the need to construct Markov background models for the sequences and to calculate the expected counts of k-tuples. On the other hand, many binning tools are based on absolute k-tuple frequencies, and the results from such methods are reasonable. Still, these tools and methods can be improved by using \( {d}_2^S \) dissimilarity. Therefore, in the present study, we do not bin the contigs from scratch. Instead, we attempt to adjust contig bins based on the output of any existing binning tool. We model each contig with a Markov chain based on its k-tuple frequency vector. The bin's center is represented by the averaged k-tuple frequency vector of all contigs in this bin and is also modeled with a Markov chain. Then, \( {d}_2^S \) measures the dissimilarity between a contig and a bin's center based on relative sequence composition, as represented by the Markov chains. Finally, a K-means clustering algorithm is applied to cluster the contigs based on the \( {d}_2^S \) dissimilarities, where K is the number of clusters. Such an approach, on the one hand, overcomes the extensive computational cost of using \( {d}_2^S \) directly and, on the other hand, further improves the initial binning results. The method is developed as an open-source package, termed \( {d}_2^S\mathrm{Bin} \), which is available at https://github.com/kunWangkun/d2SBin.
We selected six synthetic and real datasets that had originally been used to evaluate existing tools as testing datasets. \( {d}_2^S\mathrm{Bin} \) was applied to adjust the binning results of five representative binning tools using sequence composition (MetaCluster3.0 [14], MetaWatt [12] and SCIMM [11]) and the hybrid of sequence composition and abundance (MaxBin1.0 [17], MyCC [20]) based on a single metagenomic sample. Tuple length k = 6 and the independent identically distributed (i.i.d.) background model (i.e., Markov order r = 0) are frequently the optimal parameters for \( {d}_2^S\mathrm{Bin} \) to achieve the best performance for metagenomics contig binning. \( {d}_2^S\mathrm{Bin} \) improved the binning results in 28 out of 30 testing experiments for 6 datasets using 5 binning tools, giving significantly better performance in terms of recall, precision and ARI (Adjusted Rand Index).
The framework of \( {d}_2^S\mathrm{Bin} \) is shown in the flowchart of Fig. 1. Any existing contig binning tool is applied with its default settings to bin the contigs in a single metagenomic sample. Each contig is modeled with a Markov chain based on its k-tuple frequency vector. For each bin, the bin's center is also modeled with a Markov chain based on the averaged frequency vector of all contigs in this bin. The \( {d}_2^S \) measures the dissimilarity between a contig and a bin's center based on the background probability models. Assuming that contigs in the same bin come from an identical background model, the \( {d}_2^S \) dissimilarity between contigs from the same bin should be smaller than that between contigs from different bins under correct binning. The K-means algorithm is then applied to adjust the contigs among different bins to minimize the within-bin sum of squares based on \( {d}_2^S \) dissimilarity.
Flowchart of contig binning with \( {d}_2^S\mathrm{Bin} \)
The \( {d}_2^S \) dissimilarity measure between two contigs based on k-tuple sequence signature
The \( {d}_2^S \) is a normalized dissimilarity measure for two sequences, based on either long genomic sequences or NGS short reads, in which the expected word counts are subtracted from the observed counts for each sequence. The background-adjusted word counts are then compared using correlation to measure the dissimilarity between the two sequences [25]. Let \( c_X=\left(c_{X,1},c_{X,2},\cdots,c_{X,4^k}\right) \) and \( c_Y=\left(c_{Y,1},c_{Y,2},\cdots,c_{Y,4^k}\right) \) be the k-tuple frequency vectors of two sequences X and Y, respectively, where \( c_{X,i} \) is the number of occurrences of the i-th k-tuple in sequence X and \( i = 1,\cdots,4^k \). At each position of a tuple there are four possible nucleotides (A, C, G, and T), so there are \( 4^k \) possible k-tuples when the tuple length is k.
The \( {d}_2^S \) dissimilarity is defined as
$$ {d}_2^S\left({\tilde{c}}_X,{\tilde{c}}_Y\right)=\frac{1}{2}\left(1-\frac{D_2^S\left({\tilde{c}}_X,{\tilde{c}}_Y\right)}{\sqrt{\sum_{i=1}^{4^k}\frac{{\tilde{c}}_{X,i}^2}{\sqrt{{\tilde{c}}_{X,i}^2+{\tilde{c}}_{Y,i}^2}}}\sqrt{\sum_{i=1}^{4^k}\frac{{\tilde{c}}_{Y,i}^2}{\sqrt{{\tilde{c}}_{X,i}^2+{\tilde{c}}_{Y,i}^2}}}}\right), $$
$$ {D}_2^S\left({\tilde{c}}_X,{\tilde{c}}_Y\right)=\sum_{i=1}^{4^k}\frac{{\tilde{c}}_{X,i}{\tilde{c}}_{Y,i}}{\sqrt{{\tilde{c}}_{X,i}^2+{\tilde{c}}_{Y,i}^2}}, $$
$$ {\tilde{c}}_{X,i}={c}_{X,i}-{n}_X{p}_{X,i},\kern0.5em {\tilde{c}}_{Y,i}={c}_{Y,i}-{n}_Y{p}_{Y,i}, $$
where \( p_{\bullet,i} \) is the probability of the i-th k-tuple under the Markov model of order r = 0–3 for one long sequence or set of reads, and \( n_{\bullet}=\sum_{i=1}^{4^k}c_{\bullet,i} \), \( \bullet = X \) or \( Y \), is the total number of k-tuple occurrences. The value of \( {d}_2^S \) is between 0 and 1. Here \( p_{X,i} \) is the probability of the i-th k-tuple under the background model for X; it can be the probability under the i.i.d. model or under a Markov chain of a given order. The i-th k-tuple is denoted as \( w = w_1 w_2 \cdots w_k \). Under the r-th order Markov chain \( M_r \), the probability of the k-tuple w, namely its expected frequency, can be computed as
$$ p\left(w|{M}_r\right)=\left\{\begin{array}{l}\prod \limits_{j=1}^kp\left({w}_j\right)\kern5.00em r=0\\ {}p\left({w}_1{w}_2\dots {w}_r\right)\prod \limits_{j=1}^{k-r}p\left({w}_{j+r}|{w}_j{w}_{j+1}\dots {w}_{j+r-1}\right)\kern0.5em 1\le r\le k-1\end{array}\right. $$
where \( p(w_j) \) is the probability of \( w_j \), estimated by the ratio of the number of occurrences of \( w_j \) to the total number of nucleotides. The value of \( p(w_1 w_2 \cdots w_r) \) is estimated by the ratio of the number of occurrences of \( w_1 w_2 \cdots w_r \) to the total number of r-tuple occurrences. The value of \( p(w_{j+r} \mid w_j w_{j+1} \cdots w_{j+r-1}) \) is estimated by the fraction of occurrences of \( w_{j+r} \) conditional on the preceding occurrences of \( w_j w_{j+1} \cdots w_{j+r-1} \).
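To make the definitions above concrete, here is a minimal sketch of the computation for the i.i.d. (order-0) background model. The function names (`tuple_counts`, `centered_counts`, `d2s`) are illustrative only and are not the interface of the released \( {d}_2^S\mathrm{Bin} \) package; higher-order Markov backgrounds are omitted.

```python
import itertools
import numpy as np

BASES = "ACGT"

def tuple_counts(seq: str, k: int) -> np.ndarray:
    """Occurrences of all 4^k k-tuples, in lexicographic ACGT order."""
    index = {"".join(w): i for i, w in enumerate(itertools.product(BASES, repeat=k))}
    counts = np.zeros(4 ** k)
    for j in range(len(seq) - k + 1):
        i = index.get(seq[j:j + k])
        if i is not None:                      # skip windows containing ambiguous bases
            counts[i] += 1
    return counts

def centered_counts(seq: str, k: int) -> np.ndarray:
    """Background-adjusted counts c~_i = c_i - n * p_i under the i.i.d. model."""
    counts = tuple_counts(seq, k)
    n = counts.sum()
    base_p = np.array([seq.count(b) for b in BASES], dtype=float)
    base_p /= base_p.sum()
    # Expected probability of a k-tuple is the product of its base probabilities.
    p = np.array([np.prod([base_p[BASES.index(ch)] for ch in w])
                  for w in itertools.product(BASES, repeat=k)])
    return counts - n * p

def d2s(seq_x: str, seq_y: str, k: int = 6) -> float:
    """d_2^S dissimilarity between two sequences (0 = most similar, 1 = most dissimilar)."""
    cx, cy = centered_counts(seq_x, k), centered_counts(seq_y, k)
    denom = np.sqrt(cx ** 2 + cy ** 2)
    denom[denom == 0] = 1.0                    # tuples absent from both sequences contribute 0
    num = np.sum(cx * cy / denom)
    norm_x = np.sqrt(np.sum(cx ** 2 / denom))
    norm_y = np.sqrt(np.sum(cy ** 2 / denom))
    return 0.5 * (1 - num / (norm_x * norm_y))
```

In practice one would work on precomputed k-tuple count vectors rather than raw sequences; the algebra after background adjustment is the same.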
\( {d}_2^S\mathrm{Bin} \): Contig binning based on the \( {d}_2^S \) measure
Let \( S = \{S_1, S_2, \cdots, S_l\} \) be the partition of all contigs into l bins. Contig X is represented as \( c_X=\left(c_{X,1},c_{X,2},\cdots,c_{X,4^k}\right) \), the vector of k-tuple occurrences within the contig. The center of bin \( S_j \) is represented as the average frequency vector,
$$ c_{S_j}=\frac{1}{n_j}\sum_{X_i\in S_j}c_{X_i}, $$
where \( X_i \) is a contig currently in \( S_j \) and \( n_j \) is the number of contigs in \( S_j \). The value of \( {d}_2^S\left(\tilde{c}_X,\tilde{c}_{S_j}\right) \) quantifies the dissimilarity between contig X and bin \( S_j \).
In our study, when the number of bins is fixed, the metrics of binning call for minimizing the within-bin sum of squares based on \( {d}_2^S \) dissimilarity, that is,
$$ \underset{S}{\arg \min}\sum_{j=1}^l\sum_{X\in S_j}{d}_2^S\left({\tilde{c}}_X,{\tilde{c}}_{S_j}\right). $$
We then used the K-means clustering algorithm to optimize this objective.
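The adjustment loop itself can then be sketched as a small K-means-style iteration over precomputed count vectors. The names below (`center_iid`, `d2s_centered`, `adjust_bins`) are hypothetical stand-ins for the released implementation; in particular, estimating the background base frequencies by marginalizing the k-tuple counts is one reasonable choice made for this sketch, not a statement about the package's exact behavior.

```python
import itertools
import numpy as np

BASES = "ACGT"

def center_iid(counts: np.ndarray, k: int) -> np.ndarray:
    """Subtract expected k-tuple counts under an i.i.d. background estimated from the counts."""
    tuples = ["".join(w) for w in itertools.product(BASES, repeat=k)]
    n = counts.sum()
    base_freq = np.zeros(4)
    for t, c in zip(tuples, counts):
        for ch in t:
            base_freq[BASES.index(ch)] += c
    base_freq /= base_freq.sum()
    p = np.array([np.prod([base_freq[BASES.index(ch)] for ch in t]) for t in tuples])
    return counts - n * p

def d2s_centered(cx: np.ndarray, cy: np.ndarray) -> float:
    """d_2^S from background-adjusted count vectors (same algebra as the formulas above)."""
    denom = np.sqrt(cx ** 2 + cy ** 2)
    denom[denom == 0] = 1.0
    num = np.sum(cx * cy / denom)
    return 0.5 * (1 - num / (np.sqrt(np.sum(cx ** 2 / denom)) * np.sqrt(np.sum(cy ** 2 / denom))))

def adjust_bins(counts: np.ndarray, labels: np.ndarray, k: int = 6, max_iter: int = 5) -> np.ndarray:
    """Reassign each contig to the bin center with the smallest d_2^S, K-means style.

    counts: (n_contigs, 4^k) matrix of raw k-tuple counts
    labels: initial bin index per contig, taken from an existing binning tool
    (assumes every bin keeps at least one contig during the iteration)
    """
    n_bins = labels.max() + 1
    centered_contigs = [center_iid(row, k) for row in counts]
    for _ in range(max_iter):
        centers = [counts[labels == j].mean(axis=0) for j in range(n_bins)]
        centered_centers = [center_iid(c, k) for c in centers]
        new_labels = np.array([
            int(np.argmin([d2s_centered(ci, cc) for cc in centered_centers]))
            for ci in centered_contigs
        ])
        if np.array_equal(new_labels, labels):  # stop early when no contig is moved
            break
        labels = new_labels
    return labels
```

The stopping rule sketched here (no contig moves, or a small fixed number of iterations) matches the convergence behaviour reported later for the K-means iterations.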
The purpose of our study is to improve binning results using \( {d}_2^S\mathrm{Bin} \) based on the output of current existing binning tools. Therefore, we adopted both synthetic and real testing datasets generated, or used, by previous binning tools in order to test the performance of \( {d}_2^S\mathrm{Bin} \), as shown in Table 1. The \( {d}_2^S\mathrm{Bin} \) was applied to the binning results of five contig-binning tools, respectively, to evaluate its performance in improving their binning results.
Table 1 Synthetic and real testing datasets for contig binning
Selection of contig binning tools
The \( {d}_2^S\mathrm{Bin} \) was applied to adjust the contig-binning results from MaxBin1.0 [17], MetaCluster3.0 [14], MetaWatt [12], MyCC [20] and SCIMM [11] to evaluate its performance. These five widely used contig-binning tools employ different strategies to bin the contigs from a single metagenomic sample: 1) Sequence composition: MetaCluster3.0 [14] measures the Spearman distance between 4-tuple frequency vectors and bins contigs with the K-median algorithm. MetaCluster4.0 [29] and 5.0 [30] were designed to bin reads from metagenomic samples with different abundance characteristics. MetaWatt [12] and SCIMM [11] build interpolated Markov models of the background genomes and assign the contigs to bins with maximum likelihood. 2) Hybrid of abundance and sequence composition: MaxBin1.0 [17] measures the Euclidean distance between 4-tuple frequency vectors of contigs and assigns them with an EM algorithm, taking scaffold coverage levels into consideration. MyCC [20] combines genomic signatures, marker genes and optional contig coverages within one or multiple samples.
Five synthetic testing datasets with 10 genomes and 100 genomes
MaxBin1.0 [17] used these five datasets to evaluate its performance. Here we used the same five datasets to evaluate the performance of \( {d}_2^S\mathrm{Bin}. \) Short reads were simulated by MetaSim [31] and assembled to contigs by Velvet [32]. The contigs and their labels are available for downloading from the MaxBin1.0 paper [17]. For the metagenomes containing 10 genomes, 5 million and 20 million paired-end reads were sampled as 20× and 80× average coverage, respectively. For the metagenomes containing 100 genomes, 100 million paired-end reads were sampled with three settings to create simLC+, simMC+ and simHC+. The three datasets represent microbial communities with different levels of complexity, which mimicked the setting of the previous study [33]: simLC simulates low-complexity communities dominated by a single near-clonal population flanked by low-abundance ones. Such datasets result in a near-complete draft assembly of the dominant population in, for example, bioreactor communities [34]. simMC resembles moderately complex communities with more than one dominant population, also flanked by low-abundance ones, as has been observed in an acid mine drainage biofilm [35] and Olavius algarvensis symbionts [36]. These types of communities usually result in substantial assembly of the dominant populations according to their clonality. simHC simulates high-complexity communities lacking dominant populations, such as agricultural soil [37], where no dominant strains are present and minimal assembly results. In addition, the empirical 80-bps error model, which incorporates different error types (deletion, insertion, substitution) at certain positions with empirical error probabilities for Illumina, was produced by MetaSim [31] and used in simulating all metagenomes [17].
One real testing dataset, Sharon
This dataset was applied to test the binning tools COCACOLA [5] and CONCOCT [4]. The dataset is composed of a time-series of 11 fecal microbiome samples from a premature infant [38], denoted as 'Sharon'. All metagenomic sequencing reads from the 11 samples were merged together, and 5579 contigs were assembled. The contigs were annotated with TAXAassign [39], and 2614 contigs were unambiguously aligned to 21 species [5].
The above datasets cover various species diversity, species dissimilarity, sequencing depth, and community complexity. They include synthetic and real data. Therefore, testing on these datasets would yield a comprehensive evaluation of \( {d}_2^S\mathrm{Bin} \).
To evaluate the performance of \( {d}_2^S\mathrm{Bin} \), three criteria commonly used in binning studies [4, 5, 17], namely recall, precision and ARI (Adjusted Rand Index), were applied in our study. As described in COCACOLA [5], the binning result is represented as a K × S matrix \( A = (a_{ks}) \) with K bins and S species, where \( a_{ks} \) indicates the number of contigs shared between the k-th bin and the s-th species. Each contig-binning tool filters out low-quality contigs; therefore, N is the total number of contigs passing through the filter and binned by the tools.
Recall: For each species, we first find the bin that contains the maximum number of contigs from that species. We then sum these maximum numbers over all species and divide by the total number of contigs.
$$ recall=\frac{1}{N}{\sum}_s{max}_k\left\{{a}_{ks}\right\} $$
Precision: For each contig bin, we first find the species with the maximum number of contigs assigned to the bin. We then sum the maximum numbers across all bins and divide by the number of contigs.
$$ precision=\frac{1}{N}{\sum}_k{max}_s\left\{{a}_{ks}\right\} $$
ARI (Adjusted Rand Index): ARI is a unified measure of clustering results that determines how far a binning result falls from a perfect grouping. ARI focuses on whether pairs of contigs belonging to the same species are binned together or not. Detailed descriptions can be found in [4, 5].
$$ ARI=\frac{\sum_{k,s}\binom{a_{ks}}{2}-t_3}{\frac{1}{2}\left(t_1+t_2\right)-t_3} $$
where \( t_1=\sum_k\binom{a_{k\bullet}}{2} \), \( t_2=\sum_s\binom{a_{\bullet s}}{2} \), \( t_3=\frac{2t_1t_2}{N(N-1)} \), and \( a_{k\bullet} = \sum_s a_{ks} \), \( a_{\bullet s} = \sum_k a_{ks} \).
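The three criteria can be computed directly from the K × S matrix A. The sketch below uses illustrative names, takes the expected index in the standard Hubert–Arabie form, and relies on `scipy.special.comb` for the binomial coefficients.

```python
import numpy as np
from scipy.special import comb

def binning_metrics(A: np.ndarray):
    """Recall, precision and ARI from the K x S matrix A of shared contig counts.

    Rows of A are bins, columns are species; A[k, s] is the number of contigs
    shared between the k-th bin and the s-th species.
    """
    n = A.sum()
    recall = A.max(axis=0).sum() / n        # best bin for each species (per column)
    precision = A.max(axis=1).sum() / n     # best species for each bin (per row)
    t1 = comb(A.sum(axis=1), 2).sum()       # contig pairs placed in the same bin
    t2 = comb(A.sum(axis=0), 2).sum()       # contig pairs belonging to the same species
    t3 = t1 * t2 / comb(n, 2)               # expected index under random labelling
    ari = (comb(A, 2).sum() - t3) / (0.5 * (t1 + t2) - t3)
    return recall, precision, ari

# Toy example: two bins recovering two species almost perfectly.
A = np.array([[10, 1],
              [0, 9]])
print(binning_metrics(A))
```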
In the calculation of \( {d}_2^S \) dissimilarity, the tuple length k and the Markov order of the background sequences need to be set. Based on previous studies [4, 5], the tuple length k for \( {d}_2^S \) was generally set to 4–7, and the order of the Markov chain was generally set to 0–2, as in previous applications to metagenomic and metatranscriptomic samples [25, 26]. Therefore, we extended the testing ranges of tuple length and Markov order to 4–8 and 0–3 to assess their effect on contig binning with \( {d}_2^S\mathrm{Bin} \). As shown in Table 2, for the binning results of MaxBin on 10genome-80×, the i.i.d. (that is, 0-order Markov) model obtained the highest three indexes at almost all tuple lengths. The models based on tuple length k = 6 showed superior performance, and the best performance was achieved under the i.i.d. background model with 6-tuples. All three criteria dropped suddenly at k = 8. This experiment offered initial guidance for the selection of tuple length and Markov order.
Table 2 Initial assessments of the effects of tuple length and Markov order of the background sequences on the performance of MaxBin+ \( {d}_2^S\mathrm{Bin} \) in terms of recall, precision and ARI for dataset 10genome-80×
Length selection of k-tuple in \( {d}_2^S\mathrm{Bin} \)
According to Table 2, we calculated \( {d}_2^S \) with tuple lengths of 4–8 under the i.i.d. model based on the output of the existing binning tools. These tools were run with their default tuple length and mode. The datasets 10genome 80× and 100genome-simHC+ were selected to test the effect of tuple length on the performance of \( {d}_2^S\mathrm{Bin} \). For both datasets, \( {d}_2^S\mathrm{Bin} \) based on 6-tuples achieved the best performance on precision, recall and ARI for all five tools. Figures 2 and 3 only plot the curves for tuple lengths k = 4–6 because the severe drop in performance at k = 7, 8 led to an excessively wide Y-axis range, which made the curves for k = 4–6 cluster together and obscured the superiority of k = 6. Therefore, we set k = 6 for \( {d}_2^S \) in the rest of our study.
The effect of tuple length on the binning of contigs with different binning algorithms (MaxBin, MetaCluster, MetaWatt, SCIMM and MyCC) further modified by \( {d}_2^S\mathrm{Bin} \) under the i.i.d. background model for dataset 10genome 80×. a-e show the Recall, Precision and ARI of \( {d}_2^S\mathrm{Bin} \) with 4–6 tuples on the five contig-binning tools. From the figures, it can be clearly seen that 6-tuple \( {d}_2^S\mathrm{Bin} \) achieves the best performance in almost all cases
The effect of tuple length on the binning of contigs with different binning algorithms (MaxBin, MetaCluster, MetaWatt, SCIMM and MyCC) further modified by \( {d}_2^S\mathrm{Bin} \) under the i.i.d. background model for dataset 100genome simHC+. a-e show the Recall, Precision and ARI of \( {d}_2^S\mathrm{Bin} \) with 4–6 tuples on the five contig-binning tools. From the figures, it can be clearly seen that 6-tuple \( {d}_2^S\mathrm{Bin} \) achieves the best performance in almost all cases
Order selection for Markov chain in \( {d}_2^S\mathrm{Bin} \)
To obtain the most suitable Markov order for the background genome, we fixed the tuple length at k = 6 and applied Markov chains of order 0–2 to calculate \( {d}_2^S \) for datasets 10genome 80× and 100genome-simHC+ on the output of the five contig-binning tools. As shown in Figs. 4 and 5, for both datasets, \( {d}_2^S\mathrm{Bin} \) under the i.i.d. model with 6-tuples achieves the best Precision, Recall and ARI on all five tools. According to our previous studies applying \( {d}_2^S \) to compare metagenomic [28] and metatranscriptomic samples [26], \( {d}_2^S \) under the i.i.d. model always achieved the best results for all 12 testing datasets, which illustrates that the i.i.d. model works well for the study of microbial communities. This is probably because each bin is a mixture of several genomes, and no Markov chain model with a fixed order greater than 0 describes the bin better. Therefore, we set tuple length k = 6 and the i.i.d. model in \( {d}_2^S\mathrm{Bin} \).
The effect of the order of the Markov chain on the binning of contigs with different binning algorithms (MaxBin, MetaCluster, MetaWatt, SCIMM and MyCC) further modified by \( {d}_2^S\mathrm{Bin} \) for 6-tuples on dataset 10genome 80×. a-e show the Recall, Precision and ARI for Markov chains of order 0–2 used to calculate \( {d}_2^S\mathrm{Bin} \) on the five contig-binning tools. From the figures, it can be clearly seen that \( {d}_2^S\mathrm{Bin} \) calculated with the 0-order Markov chain achieves the best performance in all cases
The effect of the order of the Markov chain on the binning of contigs with different binning algorithms (MaxBin, MetaCluster, MetaWatt, SCIMM and MyCC) further modified by \( {d}_2^S\mathrm{Bin} \) for 6-tuples on dataset 100genome simHC+. a-e show the Recall, Precision and ARI for Markov chains of order 0–2 used to calculate \( {d}_2^S\mathrm{Bin} \) on the five contig-binning tools. From the figures, it can be clearly seen that \( {d}_2^S\mathrm{Bin} \) calculated with the 0-order Markov chain achieves the best performance in all cases
Experiments on contig binning
The contig-binning tools MaxBin [17], MetaCluster 3.0 [14], MetaWatt [12], SCIMM [11] and MyCC [20] were applied to bin the contigs from the six synthetic and real datasets with their original running modes. Based on the results from these tools, \( {d}_2^S\mathrm{Bin} \) was further applied to adjust the contigs among bins. \( {d}_2^S\mathrm{Bin} \) did not change the number of bins obtained by the original tools. The bar graphs in Fig. 6 illustrate the Recall, Precision and ARI of the output of the five existing tools and after the adjustment of \( {d}_2^S\mathrm{Bin} \) for the six datasets. In most cases, the three criteria were improved by 1%–22%. Additional file 1: Table S1 presents the numerical values of the three indexes and offers more detailed information on all experiments, including the numbers of total and binned contigs and of actual and clustered bins, providing a more comprehensive view of dataset scale, complexity and original binning performance.
Contig binning on the six testing datasets. a-f are the results of six synthetic and real datasets for the five tools. The blue-, green- and red-colored bars are recall, precision and ARI, respectively. The bars without border are the criteria of the original outputs of the five tools. The bordered bars are the criteria after using \( {d}_2^S\mathrm{Bin} \). It is obvious that performance increases in each case after adjustment by \( {d}_2^S\mathrm{Bin} \)
Contig binning on synthetic dataset 10 genome 80× coverage
From Fig. 6a, it is easy to see that the three criteria were improved for all five tools. As shown in Additional file 1: Table S1, 8022 contigs were assembled from simulated metagenomic reads. The best results were obtained on MyCC where \( {d}_2^S\mathrm{Bin} \) increased recall, precision and ARI from 97.21%, 97.21%, and 95.58% to 97.75%, 97.75% and 96.16%, respectively. MaxBin, MetaCluster and MyCC assigned the contigs into 10 bins. MetaWatt and SCIMM obtained 27 and 8 bins, respectively, but \( {d}_2^S\mathrm{Bin} \) still adjusted contigs among these bins to achieve better performance.
Compared with the 20 million reads in the 10 genome 80× data, the 10 genome 20× data have only 5 million reads for the 10 genomes. Fig. 6b shows that \( {d}_2^{\mathrm{S}}\mathrm{Bin} \) improved the binning of MaxBin, MetaWatt, SCIMM and MyCC. As shown in Additional file 1: Table S1, both MaxBin and MetaCluster only produced three bins, and most contigs belonged to the three genomes with the highest abundances because most contigs from the seven low-abundance genomes were discarded during preprocessing owing to their short length [17]. However, \( {d}_2^S\mathrm{Bin} \) only improved precision, but not recall or ARI, on MetaCluster. To gain deeper insight into this deterioration of binning performance, we list the number of contigs from the 10 genomes in each bin, as shown in Additional file 1: Table S2–2 for MetaCluster and MetaCluster+ \( {d}_2^S\mathrm{Bin} \). Each row of the table is one genome, defined by its genome ID and corresponding genome name in NCBI, and each column is a clustered bin, so each element is the number of contigs from one genome inside that bin. Among the 1217 contigs assigned by MetaCluster, 1209 contigs come from four dominant genomes, Flavobacterium branchiophilum, Halothiobacillus neapolitanus, Lactobacillus casei and Acetobacter pasteurianus, each with at least 100 contigs. But MetaCluster only output three bins: the contigs from Flavobacterium branchiophilum, Halothiobacillus neapolitanus and Lactobacillus casei are dominant in the three bins, and the contigs from Acetobacter pasteurianus are scattered across the three bins. After adjustment by \( {d}_2^S\mathrm{Bin} \), the contigs from Acetobacter pasteurianus were merged into the same bin as Halothiobacillus neapolitanus. Acetobacter pasteurianus and Halothiobacillus neapolitanus are both from the phylum Proteobacteria; therefore, Acetobacter pasteurianus is phylogenetically closer to Halothiobacillus neapolitanus than to the other two genomes. From this point of view, \( {d}_2^S\mathrm{Bin} \) indeed improved the binning of MetaCluster although the performance indexes did not show improvement. Additional file 1: Table S2 also gives the details of the contigs' assignments in bins before and after \( {d}_2^S\mathrm{Bin} \) for the other four tools. For MyCC (Additional file 1: Table S2–5), before using \( {d}_2^S\mathrm{Bin} \), MyCC produced 5 bins; the contigs from Halothiobacillus neapolitanus were split between bin 1 and bin 4, and bin 1 included both Halothiobacillus neapolitanus and Lactobacillus casei, which led to a low ARI of 24.76%. After using \( {d}_2^S\mathrm{Bin} \), most contigs from Halothiobacillus neapolitanus were assigned to bin 4, and bin 1 mainly included contigs from Lactobacillus casei. The ARI was increased to 70.48%. The result demonstrates that \( {d}_2^S\mathrm{Bin} \) tends to assign contigs with consistent or similar background models to the same bin.
Contig binning on synthetic dataset 100 genome-simHC+
simHC+ has evenly distributed species abundance levels with no dominant species. According to Fig. 6c, the three criteria were all improved for the five tools. According to Additional file 1: Table S1, among a total of 407,873 contigs, 13,919 were clustered into 87 bins by MaxBin with 80.23%, 76.69% and 64.58% recall, precision and ARI, respectively. After \( {d}_2^S\mathrm{Bin} \), the three indexes were improved to 90.67%, 80.14% and 74.03%, respectively, showing overall superior performance. MetaCluster, MetaWatt, and MyCC produced 97, 129 and 94 bins, respectively, and recall, precision and ARI were improved for all of them by \( {d}_2^S\mathrm{Bin} \). SCIMM only clustered 19 bins, which led to low precision and ARI, but \( {d}_2^S\mathrm{Bin} \) still improved the three metrics.
Contig binning on synthetic dataset 100 genome-simMC+
According to Fig. 6d, the three criteria were improved by \( {d}_2^S\mathrm{Bin} \) for MaxBin, MetaCluster, SCIMM and MyCC. Owing to the poor assembly quality of simMC+ [17], only about 10,000 of the 795,573 contigs passed the minimum length threshold, among which a small portion came from low-abundance genomes. Therefore, only high-abundance genomes were binned: 11 bins were generated for MaxBin and MetaCluster, and 15 bins for MyCC. The large disparity between the number of real species and the number of bins led to low precision and ARI. However, \( {d}_2^S\mathrm{Bin} \) still greatly improved recall, precision and ARI. The exception was MetaWatt. Among the 11,987 clustered contigs, MetaWatt isolated 41 bins. In this case, the contigs of the dominant genome of each bin account for only 7978 of them, meaning that one-third of the contigs interfere with the modeling of the 41 dominant genomes, in turn leading to decreased performance for precision and ARI.
Contig binning on synthetic dataset 100 genome-simLC+
\( {d}_2^S\mathrm{Bin} \) improved the binning performance for all tools. All three metrics were also significantly improved by \( {d}_2^S\mathrm{Bin} \). For SCIMM, \( {d}_2^S\mathrm{Bin} \) increased recall, precision and ARI from 70.99%, 46.29% and 32.64% to 76.42%, 65.46% and 55.24%, respectively, which represents the best performance among the five tools.
Contig binning on real dataset Sharon
For this real dataset, the ground truth of binning was not available. The following two evaluations were implemented: (1) We only binned the 2614 contigs with unambiguous labels belonging to 21 species, and the annotations were considered as the ground truth. MaxBin, MetaCluster, MetaWatt, SCIMM and MyCC originally isolated 11, 10, 23, 19 and 16 bins for Sharon. As shown in Fig. 6f, based on their binning outputs, \( {d}_2^S\mathrm{Bin} \) adjusted the contig binning and increased Recall, Precision and ARI for all tools. (2) We applied CheckM [40] to estimate the approximate contamination and genome completeness of the contigs in the bins without relying on ground truth. Figure 7a shows the number of recovered genome bins by each method at different recall (completeness) thresholds with precision (lack of contamination) > 80%. Although the tools identified 10–23 bins among the 21 species in the Sharon dataset, only 4–6 genome bins were recovered with precision > 80%. \( {d}_2^S\mathrm{Bin} \) did improve recall and precision. For MetaWatt and MyCC, \( {d}_2^S\mathrm{Bin} \) increased the number of bins with precision > 80%. For MetaCluster and SCIMM, \( {d}_2^S\mathrm{Bin} \) not only increased the number of bins with precision > 80% but also increased the number of bins with recall > 90%. \( {d}_2^S\mathrm{Bin} \) also increased the recall of each bin for MaxBin and MyCC. Figure 7b shows the number of recovered genome bins at different precision thresholds with recall > 80%. For all tools, \( {d}_2^S\mathrm{Bin} \) increased the number of bins with recall > 80%. For MaxBin and MyCC, the number of bins with precision > 90% was also increased by \( {d}_2^S\mathrm{Bin} \).
Evaluation of recall and precision of the Sharon dataset with CheckM. a The plot shows the number of recovered genome bins (X-axis) by each method (Y-axis) at different recall (completeness) thresholds (gray scale) with precision (lack of contamination) ≥ 80%. b The plot shows the number of recovered genome bins (X-axis) by each method (Y-axis) at different precision thresholds (gray scale) with recall ≥ 80%. It is clear that \( {d}_2^S\mathrm{Bin} \) improved the recall and precision of each bin compared with the original tools. The number "0" shown on the border means that one or more value intervals were skipped because no genome was recovered in the intervals
Testing on these synthetic and real datasets showed that \( {d}_2^S\mathrm{Bin} \) could achieve obvious improvement on the original outputs of the five testing tools.
Convergence of K-means iteration on \( {d}_2^S\mathrm{Bin} \)
In order to evaluate the convergence of the K-means iteration in \( {d}_2^S\mathrm{Bin} \), we plotted the performance curves of the three indexes for randomly selected tools and datasets, as shown in Fig. 8. During our experiments with ten iterations, the three indexes increased significantly on the first iteration and reached a steady state quickly. The "0" on the horizontal axis indicates the performance of the original binning tool. Therefore, in \( {d}_2^S\mathrm{Bin} \), the K-means iteration of contig binning stops when no contig is adjusted or when the number of iterations reaches 5.
Curves of the three indexes with the K-means iterations. The "0" on the horizontal axis reflects the output performance of the original binning tool, MetaCluster in (a) and SCIMM in (b). The three indexes increase significantly on the first iteration, followed by slight adjustments to reach steady values
Software implementation and running
The code of \( {d}_2^S\mathrm{Bin} \) was implemented with Python and Cython and runs under Linux. Cython is a superset of the Python language that additionally supports calling C functions, and the code can be compiled into a shared library that is called by Python directly. Tested on a server with 128 GB of memory and an Intel(R) Xeon(R) CPU E5–2620 v2 with 6 CPU cores at 2.10 GHz, it takes 16 min to finish the contig-binning adjustment with \( {d}_2^S\mathrm{Bin} \) on 6-tuples for 8022 contigs (4000 bp in length on average) in 10 bins, and the peak memory is 6.7 GB. The source code of \( {d}_2^S\mathrm{Bin} \) is available at https://github.com/kunWangkun/d2SBin.
Our experiments demonstrate that \( {d}_2^S \) can measure the similarity between contigs more accurately. However, \( {d}_2^S \) requires building a background Markov model for each contig, which brings a heavy computational burden. Therefore, in our study, instead of binning de novo from scratch, we attempt to adjust contig bins based on the output of any existing binning tool for a single metagenomic sample. This strategy overcomes the computational issue. When multiple related samples are available, sequence composition contributes less to contig binning than the co-varying coverage profiles across samples, and \( {d}_2^S\mathrm{Bin} \) cannot improve contig binning for multiple metagenomic samples. Tools designed for multiple samples, such as COCACOLA, GroopM, CONCOCT and MaxBin2.0, can achieve satisfactory results when multiple metagenomic samples are available.
Currently, \( {d}_2^S\mathrm{Bin} \) does not merge or split bins. In situations where there are large differences between the number of clustered bins and the ground truth, merging and splitting bins would improve the results. However, algorithms that adjust the number of clusters, such as ISODATA [41], require as input a minimum threshold of between-class dissimilarity and a maximum threshold of within-class dissimilarity. These thresholds depend on the taxonomic level in which the investigators are interested. Once these thresholds are given, we can combine algorithms for merging and splitting bins with \( {d}_2^S\mathrm{Bin} \) to further improve the binning results.
The ability of \( {d}_2^S\mathrm{Bin} \) to achieve improved binning performance is based on the idea that contigs clustered into one bin will come from the same genome and that relative sequence compositions will be similar across different regions of the same genome, but differ between genomes [21, 22]. \( {d}_2^S \) measures the dissimilarity between a contig and the bin's center based on the Markov model of k-tuple sequence compositions.
Our experiments demonstrate that \( {d}_2^S\mathrm{Bin} \) significantly improves binning performance in almost all cases, thus giving credence to the relative sequence composition model over the direct application of absolute sequence composition. We applied \( {d}_2^S\mathrm{Bin} \) to five contig-binning tools with different binning strategies. Irrespective of the strategies employed by these tools, \( {d}_2^S\mathrm{Bin} \) was able to achieve better performance for all tools tested. Finally, the optimal results for \( {d}_2^S\mathrm{Bin} \) are consistently obtained at tuple length k = 6 under the i.i.d. model, with no need to search for optimal parameters.
ARI: Adjusted Rand Index
EM: Expectation Maximization
i.i.d.: independent identically distributed
Riesenfeld CS, Schloss PD, Handelsman J. Metagenomics: genomic analysis of microbial communities. Annu Rev Genet. 2004;38:525–52.
Mande SS, Mohammed MH, Ghosh TS. Classification of metagenomic sequences: methods and challenges. Brief Bioinform. 2012;13(6):669–81.
Sedlar K, Kupkova K, Provaznik I. Bioinformatics strategies for taxonomy independent binning and visualization of sequences in shotgun metagenomics. Comput Struct Biotechnol J. 2017;15:48–55.
Alneberg J, et al. Binning metagenomic contigs by coverage and composition. Nat Methods. 2014;11:1144–6.
Lu YY, et al. COCACOLA: binning metagenomic contigs using sequence COmposition, read CoverAge, CO-alignment, and paired-end read LinkAge. Bioinformatics. 2017;33(6):791–8.
Huson DH, et al. MEGAN analysis of metagenomic data. Genome Res. 2007;17(3):377–86.
Wood DE, Salzberg SL. Kraken: ultrafast metagenomic sequence classification using exact alignments. Genome Biol. 2014;15(3):R46.
Finn RD, et al. The Pfam protein families database: towards a more sustainable future. Nucleic Acids Res. 2016;44(D1):D279–85.
Rosen GL, Reichenberger ER, Rosenfeld AM. NBC: the naive Bayes classification tool webserver for taxonomic classification of metagenomic reads. Bioinformatics. 2011;27(1):127–9.
Kislyuk A, et al. Unsupervised statistical clustering of environmental shotgun sequences. BMC Bioinformatics. 2009;10(1):316.
Kelley DR, Salzberg SL. Clustering metagenomic sequences with interpolated Markov models. BMC Bioinformatics. 2010;11(1):544.
Strous M, et al. The binning of metagenomic contigs for microbial physiology of mixed cultures. Front Microbiol. 2012;3:410.
Laczny CC, et al. VizBin-an application for reference-independent visualization and human-augmented binning of metagenomic data. Microbiome. 2015;3(1):1.
Leung HC, et al. A robust and accurate binning algorithm for metagenomic sequences with arbitrary species abundance ratio. Bioinformatics. 2011;27(11):1489–95.
Wu Y-W, Ye Y. A novel abundance-based algorithm for binning metagenomic sequences using l-tuples. J Comput Biol. 2011;18(3):523–34.
Imelfort M, et al. GroopM: an automated tool for the recovery of population genomes from related metagenomes. PeerJ. 2014;2:e603.
Wu Y-W, et al. MaxBin: an automated binning method to recover individual genomes from metagenomes using an expectation-maximization algorithm. Microbiome. 2014;2(1):26.
Wu Y-W, Simmons BA, Singer SW. MaxBin 2.0: an automated binning algorithm to recover genomes from multiple metagenomic datasets. Bioinformatics. 2016;32(4):605–7.
Wang Y, Hu H, Li X. MBBC: an efficient approach for metagenomic binning based on clustering. BMC Bioinformatics. 2015;16(1):36.
Lin H-H, Liao Y-C. Accurate binning of metagenomic contigs via automated clustering sequences using information of genomic signatures and marker genes. Sci Rep. 2016;6:24175.
Karlin S, Mrazek J, Campbell AM. Compositional biases of bacterial genomes and evolutionary implications. J Bacteriol. 1997;179(12):3899–913.
Dick GJ, et al. Community-wide analysis of microbial genome sequence signatures. Genome Biol. 2009;10(8):R85.
Wan L, et al. Alignment-free sequence comparison (II): theoretical power of comparison statistics. J Comput Biol. 2010;17(11):1467–90.
Ahlgren NA, et al. Alignment-free d2* oligonucleotide frequency dissimilarity measure improves prediction of hosts from metagenomically-derived viral sequences. Nucleic Acids Res. 2017;45(1):39–53.
Song K, et al. Alignment-free sequence comparison based on next-generation sequencing reads. J Comput Biol. 2013;20(2):64–79.
Wang Y, et al. Comparison of metatranscriptomic samples based on k-tuple frequencies. PLoS One. 2014;9(1):e84348.
Liao W, et al. Alignment-free transcriptomic and Metatranscriptomic comparison using sequencing signatures with variable length Markov chains. Sci Rep. 2016;6:37243.
Jiang B, et al. Comparison of metagenomic samples using sequence signatures. BMC Genomics. 2012;13(1):730.
Wang Y, et al. MetaCluster 4.0: a novel binning algorithm for NGS reads and huge number of species. J Comput Biol. 2012;19(2):241–9.
Wang Y, et al. MetaCluster 5.0: a two-round binning approach for metagenomic data for low-abundance species in a noisy sample. Bioinformatics. 2012;28(18):i356–62.
Richter DC, et al. MetaSim—a sequencing simulator for genomics and metagenomics. PLoS One. 2008;3(10):e3373.
Zerbino DR, Birney E. Velvet: algorithms for de novo short read assembly using de Bruijn graphs. Genome Res. 2008;18(5):821–9.
Mavromatis K, et al. Use of simulated data sets to evaluate the fidelity of metagenomic processing methods. Nat Methods. 2007;4(6):495–500.
Hallam SJ, et al. Genomic analysis of the uncultivated marine crenarchaeote Cenarchaeum symbiosum. Proc Natl Acad Sci. 2006;103(48):18296–301.
Tyson GW, et al. Community structure and metabolism through reconstruction of microbial genomes from the environment. Nature. 2004;428(6978):37–43.
Woyke T, et al. Symbiosis insights through metagenomic analysis of a microbial consortium. Nature. 2006;443(7114):950–5.
Tringe SG, et al. Comparative metagenomics of microbial communities. Science. 2005;308(5721):554–7.
Sharon I, et al. Time series community genomics analysis reveals rapid shifts in bacterial species, strains, and phage during infant gut colonization. Genome Res. 2013;23(1):111–20.
Ijaz, U, Quince C. TAXAassign v0.4. https://github.com/umerijaz/taxaassign 2013.
Parks DH, et al. CheckM: assessing the quality of microbial genomes recovered from isolates, single cells, and metagenomes. Genome Res. 2015;25(7):1043–55.
Ball GH, Hall DJ. ISODATA, a novel method of data analysis and pattern classification. Menlo Park CA: Stanford research inst; 1965.
Wu Y-W, et al. MaxBin: an automated binning method to recover individual genomes from metagenomes using an expectation-maximization algorithm. 2014. Accessed 13 Apr 2017. Available from: http://downloads.jbei.org/data/microbial_communities/MaxBin/MaxBin.html.
This research is supported by the National Natural Science Foundation of China (61673324, 61503314), U.S. National Science Foundation grants (DMS-1518001), NIH R01GM120624, China Scholarship Council (201606315011) and Natural Science Foundation of Fujian (2016 J01316). The funding agencies had no role in study design, analysis, interpretation of results, decision to publish, or preparation of the manuscript.
The \( {d}_2^S\mathrm{Bin} \) source codes are available at https://github.com/kunWangkun/d2SBin.
The five synthetic testing datasets were from: http://downloads.jbei.org/data/microbial_communities/MaxBin/MaxBin.html [42].
The real Sharon dataset was from the NCBI short-read archive (SRA052203).
Department of Automation, Xiamen University, Xiamen, Fujian, 361005, China
Ying Wang & Kun Wang
Molecular and Computational Biology Program, University of Southern California, Los Angeles, CA 90089, USA
Yang Young Lu & Fengzhu Sun
Center for Computational Systems Biology, Fudan University, Shanghai, 200433, China
YW and FS planned the project; YW developed the model and designed the experiments; KW realized the models and implemented the experiments; KW and YL analyzed the results; YW and FS wrote the main manuscript. All authors read and approved the final manuscript.
Correspondence to Ying Wang or Fengzhu Sun.
Additional file 1: Table S1.
The file gives the numerical values of three criteria of contig binning on the experiments of the six testing datasets. Table S2. Detailed binning results of the contigs before and after \( {d}_2^S\mathrm{Bin} \) for dataset 10genome-20× based on the five testing tools. (DOCX 38 kb)
Wang, Y., Wang, K., Lu, Y.Y. et al. Improving contig binning of metagenomic data using \( {d}_2^S \) oligonucleotide frequency dissimilarity. BMC Bioinformatics 18, 425 (2017) doi:10.1186/s12859-017-1835-1
Accepted: 11 September 2017
Contig binning
Taxonomy-independent
\( {d}_2^S \) dissimilarity, k-tuple
Sequence analysis (methods)
Accelerated Grade 6
Unit 2 Family Materials
Ratios, Rates, and Percentages
What are Ratios?
A ratio is an association between two or more quantities. For example, say we have a drink recipe made with cups of juice and cups of soda water. Ratios can be represented with diagrams like those below.
Description: A discrete diagram of squares that represent cups of juice and cups of soda water. The top row is labeled "juice, in cups" and contains 6 green squares. The bottom row is labeled "soda water, in cups" and contains 4 white squares.
Here are some correct ways to describe this diagram:
The ratio of cups of juice to cups of soda water is \(6:4\).
The ratio of cups of soda water to cups of juice is 4 to 6.
There are 3 cups of juice for every 2 cups of soda water.
The ratios \(6:4\), \(3:2\), and \(12:8\) are equivalent because each ratio of juice to soda water would make a drink that tastes the same.
Here is a task to try with your student:
There are 4 horses in a stall. Each horse has 4 legs, 1 tail, and 2 ears.
Draw a diagram that shows the ratio of legs, tails, and ears in the stall.
Complete each statement.
The ratio of ________ to ________ to ________ is ________ : ________ : ________.
There are ________ ears for every tail. There are ________ legs for every ear.
Answers vary. Sample response:
Answers vary. Sample response: The ratio of legs to tails to ears is \(16:4:8\). There are 2 ears for every tail. There are 2 legs for every ear.
Representing Equivalent Ratios
There are different ways to represent ratios.
Let's say the 6th grade class is selling raffle tickets at a price of $6 for 5 tickets. Some students may use diagrams with shapes to represent the situation. For example, here is a diagram representing 10 tickets for $12.
Drawing so many shapes becomes impractical. Double number line diagrams are easier to work with. The one below represents the price in dollars for different numbers of raffle tickets all sold at the same rate of $12 for 10 tickets.
Raffle tickets cost $6 for 5 tickets.
How many tickets can you get for $90?
What is the price of 1 ticket?
75 tickets. Possible strategies: Extend the double number line shown and observe that $90 is lined up with 75 tickets. Or, since 90 is 6 times 15, compute 5 times 15.
$1.20. Possible strategies: Divide the number line into 5 equal intervals, as shown. Reason that the price in dollars of 1 ticket must be \(6 \div 5.\)
Who biked faster: Andre, who biked 25 miles in 2 hours, or Lin, who biked 30 miles in 3 hours? One strategy would be to calculate a unit rate for each person. A unit rate is an equivalent ratio expressed as something "per 1." For example, Andre's rate could be written as "\(12\frac12\) miles in 1 hour" or "\(12\frac12\) miles per 1 hour." Lin's rate could be written "10 miles per 1 hour." By finding the unit rates, we can compare the distance each person went in 1 hour to see that Andre biked faster.
Every ratio has two unit rates. In this example, we could also compute hours per mile: how many hours it took each person to cover 1 mile. Although not every rate has a special name, rates in "miles per hour" are commonly called speed and rates in "hours per mile" are commonly called pace.
Andre:
distance (miles): 25, \(12\frac12\)
time (hours): 2, 1
Lin:
distance (miles): 30, 10
time (hours): 3, 1
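Writing out the computations behind these tables, each rider's two unit rates come directly from the given distances and times:

Andre: \(25 \div 2 = 12\frac12\) miles per hour, and \(2 \div 25 = 0.08\) hours per mile.

Lin: \(30 \div 3 = 10\) miles per hour, and \(3 \div 30 = 0.1\) hours per mile.

Since \(12\frac12 > 10\) (equivalently, \(0.08 < 0.1\)), Andre biked faster.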
Dry dog food is sold in bulk: 4 pounds for $16.00.
At this rate, what is the cost per pound of dog food?
At this rate, what is the amount of dog food you can buy per dollar?
$4.00 per pound because \(16 \div 4=4.\)
You get \(\frac14\) or 0.25 of a pound per dollar because \(4 \div16 =0.25.\)
dog food (pounds): 4, 1, \(\frac14\)
cost (dollars): 16, 4, 1
Let's say 440 people attended a school fundraiser last year. If 330 people were adults, what percentage of people were adults? If it's expected that the attendance this year will be 125% of last year, how many attendees are expected this year? A double number line can be used to reason about these questions.
Students use their understanding of "rates per 1" to find percentages, which we can think of as "rates per 100." Double number lines and tables continue to support their thinking. The example about attendees of a fundraiser could also be organized in a table:
Toward the end of the unit, students develop more sophisticated strategies for finding percentages. For example, you can find 125% of 440 attendees by computing \(\frac{125}{100} \boldcdot 440.\) With practice, students will use these more efficient strategies and understand why they work.
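As a worked example with the fundraiser numbers above: last year, \(\frac{330}{440} = 0.75\), so 75% of the attendees were adults; for this year, \(\frac{125}{100} \boldcdot 440 = 1.25 \boldcdot 440 = 550\), so 550 attendees are expected.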
For each question, explain your reasoning. If you get stuck, try creating a table or double number line for the situation.
A bottle of juice contains 16 ounces, and you drink 25% of the bottle. How many ounces did you drink?
You get 9 questions right in a trivia game, which is 75% of the questions. How many questions are in the game?
You planned to walk 8 miles, but you ended up walking 12 miles. What percentage of your planned distance did you walk?
Any correct reasoning that a student understands and can explain is acceptable. Sample reasoning:
4. 25% of the bottle is \(\frac14\) of the bottle, and \(\frac14\) of 16 is 4.
12. If 9 questions is 75%, we can divide each by 3 to know that 3 questions is 25%. Multiplying each by 4 shows that 12 questions is 100%.
150%. If 8 miles is 100%, then 4 miles is 50%, and 12 miles is 150%.
IM 6–8 Math was originally developed by Open Up Resources and authored by Illustrative Mathematics®, and is copyright 2017-2019 by Open Up Resources. It is licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0). OUR's 6–8 Math Curriculum is available at https://openupresources.org/math-curriculum/.
Adaptations and updates to IM 6–8 Math are copyright 2019 by Illustrative Mathematics, and are licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
Adaptations to add additional English language learner supports are copyright 2019 by Open Up Resources, and are licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
Adaptations and additions to create IM 6–8 Math Accelerated are copyright 2020 by Illustrative Mathematics, and are licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
The second set of English assessments (marked as set "B") are copyright 2019 by Open Up Resources, and are licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
This site includes public domain images or openly licensed images that are copyrighted by their respective owners. Openly licensed images remain under the terms of their respective licenses. See the image attribution section for more information.
Advances in Homotopy Theory II
Time: 2022/5/2-2022/5/4
Venue: Zoom
Organizer: BIMSA
Speaker: BIMSA
Host: Beijing Institute of Mathematical Sciences and Applications (BIMSA)
This is the second edition of a twice-yearly workshop that will alternate between the Southampton Centre for Geometry, Topology and Applications (CGTA) and the Beijing Institute of Mathematical Sciences and Applications (BIMSA). The aims are to promote exciting new work in homotopy theory, with an emphasis on that by younger mathematicians, and to showcase the wide relevance of the subject to other areas of mathematics and science.
Passcode: BIMSA
Anthony Bahri (Rider)
Matthew Burfitt (Aberdeen)
Sebastian Chenery (Southampton)
Xin Fu (Ajou University)
Alexander Grigor'yan (Bielefeld)
Fedor Pavutnitskiy (HSE)
Tseleung So (Regina)
Vladimir Vershinin (Montpellier)
Juxin Yang (Hebei)
Schedule of talks
London Time Beijing Time Speaker
12:00-12:50 19:00-19:50 Alexander Grigor'yan
13:00-13:50 20:00-20:50 Matthew Burfitt
14:00-14:50 21:00-21:50 Fedor Pavutnitskiy
12:00-12:50 19:00-19:50 Xin Fu
13:00-13:50 20:00-20:50 Sebastian Chenery
14:00-14:50 21:00-21:50 Anthony Bahri
12:00-12:50 19:00-19:50 Vladimir Vershinin
13:00-13:50 20:00-20:50 Juxin Yang
14:00-14:50 21:00-21:50 Tseleung So
Titles and Abstracts
Title: Symmetric products and a realization of generators in the cohomology of a polyhedral product
Speaker: Anthony Bahri
Polyhedral products, which are determined by a simplicial complex and a family of CW pairs, behave sufficiently well with respect to symmetric products as to allow for a description of the cohomology in terms of the link structure of the simplicial complex and the cohomology of the CW pairs. The process works particularly well under certain freeness conditions which include the use of field coefficients. Moreover, generators obtained via this process are robust enough to compute products. This talk, however, will focus on the way in which symmetric products are used to obtain the additive structure. Applications include the computation of Poincaré series. The results are the culmination of a project which had its genesis in 2014 and are joint work with Martin Bendersky, Fred Cohen and Sam Gitler.
Title: Topological data analysis of Fast Field-Cycling MRI images
Speaker: Matthew Burfitt
Fast Field-Cycling MRI (FFC MRI) has the potential to recover new biomarkers from a range of diseases by scanning a number of low magnetic field strengths simultaneously. The images produced by an FFC scanner can be interpreted as a sequence of time series of 2-dimensional grey-scale images, with each time series corresponding to one of the magnetic field strengths. I will investigate the applications of topological data analysis and machine learning to brain stroke images obtained by the FFC MRI scanner.
A main obstacle to achieving good results lies in multiplicative brightness errors occurring in the data. A simple solution might be to consider pixelwise image feature vectors initially only up to multiplication by a constant. This can be thought of as splitting the data point cloud within a product by first embedding into standard n-simplices. We observe that this point cloud embedding can provide good information on tissue types which, when modelled against the other component of the data point cloud, can be used to highlight stroke-damaged tissue.
A drawback of the first method is that it discards the pixel spatial location information of the image. However, this can be captured by persistent homology in a parameter-choice-free process. A direct comparison between pixel intensity histograms and the Betti curves reveals that homology captures and emphasises tissue signals. Moreover, additional geometric and topological information about the images is captured with persistent homology. From here the ultimate aim is to extract persistent homology features useful for machine learning and to develop new visual diagnostics for the FFC data.
Title: The rational homotopy type of homotopy fibrations over connected sums
Speaker: Sebastian Chenery
We provide a simple condition on rational cohomology for the total space of a pullback fibration over a connected sum to have the rational homotopy type of a connected sum, after looping. This takes inspiration from recent work of Jeffrey and Selick, in which they study pullback fibrations of this type, but under stronger hypotheses compared to our result.
Title: The homotopy classification of four-dimensional toric orbifolds
Speaker: Xin Fu
Quasitoric manifolds are compact, smooth 2n-manifolds with a locally standard $T^n$-action whose orbit space is a simple polytope. The cohomological rigidity problem in toric topology asks whether quasitoric manifolds are distinguished by their cohomology rings. A toric orbifold is a generalized notion of a quasitoric manifold, and there are examples of toric orbifolds that do not satisfy cohomological rigidity. In this talk, we see that certain toric orbifolds in four dimensions, though not cohomologically rigid, are homotopy equivalent if their integral cohomology rings are isomorphic. We achieve this goal by decomposing those spaces up to homotopy. This is joint work with Tseleung So (University of Regina) and Jongbaek Song (KIAS).
Title: Path homology and join of digraphs
Speaker: Alexander Grigor'yan
We introduce the path homology theory on digraphs (=directed graphs) and present Künneth-like formulas for the path homology of various joins of digraphs.
Title: Homology of Lie rings
Speaker: Fedor Pavutnitskiy
Homology of Lie algebras over fields is usually defined in terms of the Chevalley-Eilenberg chain complex. Similarly to group homology, there are other equivalent definitions in terms of simplicial resolutions and Tor functors. It turns out that these definitions are in general no longer equivalent for Lie algebras over commutative rings. In the talk we will discuss these different approaches to homology of Lie rings and some theorems relating them. This is joint work with Sergei O. Ivanov, Vladislav Romanovskii and Anatoliy Zaikovskii.
Title: Suspension splittings of manifolds and their applications
Speaker: Tseleung So
In order to study the topology of a space, it is useful to decompose the space into smaller pieces, analyse the pieces and reassemble into a whole. We say that a space has a suspension splitting if its suspension decomposes into a wedge of smaller spaces. In this talk I will talk about suspension splittings of 4-dimensional and 6-dimensional smooth manifolds, and their applications to computing generalized cohomology theories and gauge groups of 4- and 6-dimensional smooth manifolds.
Title: On homotopy braids
Speaker: Vladimir Vershinin
The homotopy braid group $\widehat{B}_n$ is the subject of this work. First, linearity of $\widehat{B}_n$ over the integers is proved. Then we prove that the group $\widehat{B}_3$ is torsion free. We also conjecture that the homotopy braid groups are torsion free for all n.
The talk is based on joint work with Valerii Bardakov and Wu Jie, Forum Math. 34 (2022), no. 2, 447–454.
Title: On the homotopy groups of the suspended Quaternionic projective plane
Speaker: Juxin Yang
In this talk, I'll report on my computation of the homotopy groups $\pi_{r+k}(\Sigma^k\mathbb{H}P^2)$ (for r ≤ 15 and k ≥ 0) localized at 2 or 3, especially the unstable ones. Then I'll give some applications of them, including two classification theorems for a kind of 3-local CW complexes, and some decompositions of the self smash products.
The link of the first Advances in Homotopy Theory Workshop: https://www.southampton.ac.uk/cgta/pages/soton-bimsa-biannual-2021-09.page
Cardiac output and stroke volume variation measured by the pulse wave transit time method: a comparison with an arterial pressure-based cardiac output system
Takeshi Suzuki ORCID: orcid.org/0000-0003-3703-93231,
Yuta Suzuki1,
Jun Okuda1,
Rie Minoshima1,
Yoshi Misonoo2,
Tomomi Ueda1,
Jungo Kato1,
Hiromasa Nagata1,
Takashige Yamada1 &
Hiroshi Morisaki1
Journal of Clinical Monitoring and Computing volume 33, pages 385–392 (2019)
Hemodynamic monitoring is mandatory for perioperative management of cardiac surgery. Recently, the estimated continuous cardiac output (esCCO) system, which can monitor cardiac output (CO) non-invasively based on pulse wave transit time, has been developed. Patients who underwent cardiovascular surgeries with hemodynamics monitoring using arterial pressure-based CO (APCO) were eligible for this study. Hemodynamic monitoring using esCCO and APCO was initiated immediately after intensive care unit admission. CO values measured using esCCO and APCO were collected every 6 h, and stroke volume variation (SVV) data were obtained every hour while patients were mechanically ventilated. Correlation and Bland–Altman analyses were used to compare APCO and esCCO. Welch's analysis of variance, and four-quadrant plot and polar plot analyses were performed to evaluate the effect of time course, and the trending ability. A p-value < 0.05 was considered statistically significant. Twenty-one patients were included in this study, and 143 and 146 datasets for CO and SVV measurement were analyzed. Regarding CO, the correlation analysis showed that APCO and esCCO were significantly correlated (r = 0.62), and the bias ± precision and percentage error were 0.14 ± 1.94 (L/min) and 69%, respectively. The correlation coefficient, bias ± precision, and percentage error for SVV evaluation were 0.4, − 3.79 ± 5.08, and 99%, respectively. The time course had no effects on the biases between CO and SVV. Concordance rates were 80.3 and 75.7% respectively. While CO measurement with esCCO can be a reliable monitor after cardiovascular surgeries, SVV measurement with esCCO may require further improvement.
Measurement of cardiac output (CO) has been a mainstay of hemodynamic monitoring in the perioperative management of cardiovascular surgery, and a pulmonary artery catheter (PAC) has been considered as a gold standard for measurement of CO. However, the frequency of use of PAC is gradually decreasing due to the lack of studies demonstrating clinical benefits [1, 2]. Alternative and minimally invasive techniques for measuring CO, such as arterial pressure-based CO (APCO) [3], transthoracic bioimpedance [4], and transthoracic echocardiography [5], have been developed and are widely used to avoid risks associated with PAC use.
Recently, the estimated continuous CO (esCCO) system, which can continuously monitor CO using the pulse wave transit time (PWTT) technique, has been developed as a non-invasive technique and its efficacy evaluated in various types of patients, including surgical [6, 7] and critically-ill patients in the intensive care unit (ICU) [8]. The principle of CO measurement using esCCO is based on the inverse relationship between PWTT, which is the interval between the peak of the R wave on the electrocardiogram (ECG) and the rising phase of the percutaneous oxygen saturation (SpO2) pulse wave, and stroke volume (SV). Due to this relationship, CO can be calculated using the following formula with PWTT, heart rate (HR), and three numeric constants (K, α, and β), using an ECG monitor and SpO2 waveform only without additional devices [9]:
$${\text{esCCO}}=K \times (\alpha \times {\text{PWTT}}+\beta ) \times {\text{HR}}$$
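For illustration only, the relationship above can be written as a short function. This is a minimal sketch of the published formula, not the vendor's implementation; the constants K, α and β are patient-specific calibration values, and the numbers used in the example call below are hypothetical placeholders rather than coefficients reported in the text.

```python
def estimate_cco(pwtt, heart_rate, K, alpha, beta):
    """esCCO from pulse wave transit time (PWTT) and heart rate (HR).

    The bracketed term acts as a stroke-volume surrogate, exploiting the
    inverse relationship between PWTT and stroke volume described above.
    """
    return K * (alpha * pwtt + beta) * heart_rate

# Purely illustrative numbers: in practice K, alpha and beta come from
# calibration against a reference CO measurement (APCO in this study).
co_estimate = estimate_cco(pwtt=180.0, heart_rate=75.0, K=0.002, alpha=-0.5, beta=120.0)
```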
Among the minimally invasive hemodynamic monitoring devices, APCO has been widely used for CO and stroke volume variation (SVV) measurement, and previous clinical studies have demonstrated that the APCO system can perform precise CO measurements and SVV can be used as a reliable variable to predict fluid responsiveness [10, 11]. Recently, esCCO has also been ameliorated to continuously measure SVV, but its efficacy has not yet been evaluated in the clinical setting. Furthermore, studies directly comparing esCCO and APCO, despite the frequent use of APCO among surgical ICU patients, including cardiovascular surgery patients, have been limited. This study aimed to evaluate the accuracy, precision, and interchangeability of CO and SVV measured using esCCO in the post-operative period among cardiovascular surgery patients, compared with that of APCO.
This prospective observational study was conducted in the 10-bed capacity ICU of Keio University Hospital, which is a 1044-bed capacity teaching hospital. Intensivists treat all patients in cooperation with attending physicians and medical specialists in our ICU. This study was approved by the ethics committee of Keio University School of Medicine (approval number: 20130515) and registered at the UMIN Clinical Trials Registry (http://www.umin.ac.jp/, registration number UMIN000013984) before enrollment of patients. Patients who underwent cardiovascular surgeries under hemodynamic monitoring using the APCO system (Flo Trac system: Edwards Lifesciences, CA, USA) were enrolled in this study. All participants provided written informed consent before surgery. Patients younger than 20 years and with arrhythmias, pacemaker, and circulatory support devices before surgery and those who presented continuous arrhythmias were excluded from the study.
Study protocol
Anesthetic management during the surgical procedure was entrusted to the attending anesthesiologists. All patients were admitted to the ICU after operation with invasive mechanical ventilation. Measurement of CO and SVV by two different methods, esCCO and APCO systems, were initiated as soon as patients were admitted to the ICU. CO and SVV were simultaneously and continuously monitored with two devices after the esCCO calibration using the APCO value. CO values were recorded every 6 h after initiation of measurement (6, 12, 18, 24 h after admission) until discharge from the ICU, and SVV values were recorded every hour while patients were mechanically ventilated under assist/control ventilation mode or synchronized intermittent mandatory ventilation mode with respiratory rate of ≥ 10/min.
Doses of sedative and analgesic drugs (propofol, dexmedetomidine, and fentanyl) were adjusted by the intensivists or attending physicians to maintain Richmond Agitation Sedation Scale scores from − 3 to 0 according to the patient's condition. Delirium, which was evaluated by the attending nurses using Confusion Assessment Method of ICU (CAM-ICU), was treated by intravenous administration of haloperidol as the first choice, and atypical antipsychotics as the second choice. Ventilator setting was also adjusted by the intensivists or attending physicians, and the timing of weaning from mechanical ventilation was determined between the intensivists and attending physicians. Hemodynamic monitoring by means of APCO and esCCO were continued until patients were transferred to the high care unit.
Measurement procedure of esCCO and APCO
ECG was monitored using lead II, and a pulse oximeter probe was placed on the fingertip of an upper limb into which an arterial line for APCO was not inserted. The side for placement of the arterial line was determined by the cardiologist before surgery. ECG and pulse oximetry wave, which are required to calculate pulse wave transit time, were obtained from a BSM-9101 bedside monitor (Nihon Kohden, Tokyo, Japan) and transmitted to a personal computer equipped with a C-compiled program for continuous esCCO calculation. esCCO was calculated using the following formula: esCCO = K × (α × PWTT + β) × HR (K, α, and β are numeric constants). SVV based on the esCCO system (esSVV) was also calculated simultaneously. Invasive blood pressure was continuously monitored through the cannula inserted into the radial artery, through which APCO and SVV based on APCO (APSVV) were measured using an Edwards Vigileo Monitor (Edwards Lifesciences, CA, USA). Calibration had to be performed before the continuous monitoring of esCCO. While α is a numerical constant determined before calibration, β and K are calculated after calibration using four values: CO measured by the alternative method (APCO in this study), and 3–10 min averaged PWTT, HR, and PP (pulse pressure: systolic pressure − diastolic pressure). After the calibration period, esCCO was initiated to monitor CO and SVV continuously.
Based on a previous study, which compared esCCO and thermodilution CO (TDCO) measurement [12], 136 points were considered to be required after calculating the sample size to detect a difference of 0.3 L/min between the two devices at the significance level of 0.05 with a statistical power of 80%, assuming that the standard deviation of the differences was 1.25 L/min.
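As a cross-check of the quoted figure, the standard sample-size formula for detecting a mean difference between two paired measurement methods can be evaluated as below. This is a sketch of the conventional calculation, not the authors' own code, and it reproduces roughly the 136 points stated above.

```python
import math
from scipy.stats import norm

def paired_points_needed(delta, sd, alpha=0.05, power=0.80):
    """Number of paired points needed to detect a mean difference `delta`
    between two devices, given the SD of the differences (two-sided test)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # ~0.84 for 80% power
    return math.ceil(((z_alpha + z_beta) * sd / delta) ** 2)

# Values quoted in the text: difference 0.3 L/min, SD of differences 1.25 L/min
print(paired_points_needed(0.3, 1.25))  # about 136-137 measurement points
```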
Correlation and precision analyses were performed using Spearman's and Bland–Altman analyses, respectively. Multiple measurements per patient were performed for these analyses. Regarding the Bland–Altman analysis, we defined a reliable percentage error as less than 45%, following a previous study [13]. Welch's analysis of variance was conducted at every measurement point to evaluate the changes of biases between the two devices according to the time course. The trending ability of esCCO was evaluated using the four-quadrant plot and polar plot analyses with the exclusion zone being 15% of the mean APCO [14]. Based on the exclusion zone as 15% of the mean APCO, we analyzed the data in which APCO changes between the two consecutive measurements were > 15% of the mean APCO value to evaluate the trending ability more precisely. A concordance rate of > 80% was defined as an acceptable value based on a previous report that compared TDCO and APCO [15], which was evaluated at 30° in the polar plot analysis [16]. The results were presented as mean ± standard deviation or median (interquartile range) when appropriate. P-values < 0.05 were considered statistically significant.
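The agreement and trending metrics described above can be sketched as follows. This is not the authors' analysis code; the limits of agreement use bias ± 2 × precision and the percentage error is computed relative to the mean of the reference method (APCO), as stated in the text, while the exclusion-zone handling in the concordance function is a generic implementation of the four-quadrant method.

```python
import numpy as np

def bland_altman(reference, test):
    """Bias, precision (SD of differences), 95% limits of agreement
    (bias +/- 2 x precision) and percentage error (2 x SD / mean reference)."""
    diff = np.asarray(test, dtype=float) - np.asarray(reference, dtype=float)
    bias = diff.mean()
    precision = diff.std(ddof=1)
    limits = (bias - 2 * precision, bias + 2 * precision)
    percentage_error = 200 * precision / np.mean(reference)
    return bias, precision, limits, percentage_error

def concordance_rate(delta_ref, delta_test, exclusion):
    """Four-quadrant concordance: share of consecutive-measurement changes
    with the same sign, excluding reference changes inside the exclusion zone."""
    delta_ref = np.asarray(delta_ref, dtype=float)
    delta_test = np.asarray(delta_test, dtype=float)
    keep = np.abs(delta_ref) > exclusion
    agree = np.sign(delta_ref[keep]) == np.sign(delta_test[keep])
    return 100 * agree.mean()
```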
Patient characteristics
This study was conducted from July 2014 to September 2015. A total of 21 patients who underwent cardiovascular surgeries were included in this study. Table 1 shows the patient characteristics. The mean age of the participants was 64 ± 15 years, and most patients (17/21) were men. Nineteen patients underwent operation for aortic aneurysm, while only two patients were operated for mitral valve diseases. The durations of invasive mechanical ventilation and measurements of CO and SVV were 24 ± 32 and 55 ± 36 h, respectively.
Analysis of CO measured by esCCO and APCO
A total of 143 measurement points were used for CO analysis. Figure 1 shows the results of correlation and precision analyses between esCCO and APCO evaluated using Spearman's and Bland–Altman analyses, respectively. The correlation analysis demonstrated a significantly strong correlation between esCCO and APCO with a 0.62 correlation coefficient (p < 0.01). Bland–Altman analysis revealed that mean APCO, bias, and precision were 5.68, 0.14, and 1.96 L/min, respectively, and the 95% limits of agreement (bias ± 2 × precision) were − 3.78 to 4.05 L/min. The percentage error of 69%, which was twice the percentage of precision to the mean APCO value and served as an indicator of interchangeability, was higher than the reliable percentage error of 45%.
The results of CO analysis between esCCO and APCO. The values of 143 measurement points were collected and analyzed to compare CO measured by esCCO and APCO. A The result of Spearman analysis. B The result of Bland–Altman analysis
Analysis of SVV measured by esCCO and APCO
To evaluate SVV, a total of 146 measurement points were gathered. Figure 2 shows the results of the correlation and precision analyses. Correlation analysis revealed that the correlation coefficient between esSVV and APSVV was 0.4 (p < 0.01), which was lower compared with that of CO. Precision analysis due to Bland–Altman analysis showed that the mean APSVV, bias, and precision were 10.3, − 3.79, and 5.08% respectively, and the 95% limits of agreement (bias ± 2 × precision) were − 13.94 to 6.37%. The percentage error was 99%, which was high, indicating that the interchangeability was not acceptable.
The results of SVV analysis between esCCO and APCO. The values of 146 measurement points were collected and analyzed to compare SVV measured by esCCO and APCO. A The result of Spearman analysis. B The result of Bland–Altman analysis
The changes of biases in CO and SVV measured by esCCO and APCO according to the time course
Figure 3 presents the changes of biases between esCCO and APCO according to the time course after calibration. No significant differences in CO and SVV biases were observed during the measurement time course after calibration (p = 0.95 for CO and p = 0.94 for SVV). These results indicate that the precision of esCCO is not affected by the time.
The changes of biases during the study period. These figures represented the changes of biases during the study period. A The change of CO bias between esCCO and APCO. Measurement points more than 48 h apart were compiled at 48 h. B The change of SVV bias between esCCO and APCO
The ability of esCCO to trend with changes in CO
The results of the four-quadrant plot and polar plot analyses, which were performed to evaluate the trending ability of esCCO, are shown in Fig. 4. Regarding the four-quadrant plot analysis, the number of measurement points was 71, and the concordance rate was 80.3%, which was acceptable. The polar plot analysis with 37 measurement points showed a mean polar angle of − 4.4% and a concordance rate of 75.7% at 30°, which showed that the trending ability of esCCO was almost acceptable [16].
Evaluation of the trending ability of esCCO. A The result of four-quadrant plot analysis. B The result of polar plot analysis
This is the first study to compare CO and SVV between the esCCO system based on PWTT analysis and APCO system based on arterial waveform analysis directly in postoperative patients in the ICU. Regarding CO, esCCO was significantly correlated with APCO, as shown by the correlation coefficient of 0.62. The Bland–Altman analysis showed that the bias was 0.14 L/min, which was comparable to the value of reference, and the precision of 1.96 L/min was slightly higher than the reference [12]. The percentage error of 69% was slightly higher compared with those reported in previous studies. Regarding the trending ability, the four-quadrant plot and polar plot analyses revealed that the ability of esCCO to trend with changes in CO could be justified. However, with regard to SVV, esSVV did not correlate so well with APSVV (r = 0.4) as compared with esCCO. The bias of − 3.79%, precision of 5.08%, and the percentage error of 99% indicate that SVV evaluated by esCCO was not reliable and acceptable. These results suggest that continuous CO measurement using the esCCO system is a potentially reliable non-invasive hemodynamic monitoring method, while further studies are warranted for esSVV as a reliable hemodynamic monitoring tool in the clinical setting.
Agreement, accuracy, and interchangeability between esCCO and APCO, compared with previous studies
Although few studies compared esCCO and APCO simultaneously and directly among postoperative patients, some previous studies have evaluated the agreement, accuracy, and interchangeability between esCCO and other CO measurement techniques among various types of patients. Among studies which compared esCCO and TDCO measurement, Ishihara et al. compared esCCO with continuous TDCO measured by a pulmonary artery catheter in patients scheduled for elective cardiac surgery and showed that the correlation coefficient was 0.8 and the bias ± precision was − 0.06 ± 0.82 L/min [9]. Another multicenter study conducted by Yamada et al. [12], which included both ICU and intraoperative patients, showed that the correlation between esCCO and bolus TDCO measured by pulmonary artery catheterization was good (r = 0.79), and the bias ± precision was 0.13 ± 1.15 L/min. In the study which included only post off-pump coronary artery bypass grafting surgery patients [6], postoperative evaluation of esCCO compared with intermittent TDCO revealed that the bias was 0.4 L/min with a precision of 1.15 L/min and percentage error of 41%.
Regarding studies that compared esCCO and APCO, Terada et al. evaluated esCCO and APCO simultaneously in 15 patients who underwent kidney transplant surgery, compared with intermittent bolus TDCO, and revealed that the difference (bias ± precision) and percentage error between esCCO and TDCO were − 0.39 ± 1.15 L/min and 35.6%, respectively, while those between APCO and TDCO were 0.04 ± 1.37 L/min and 42.4%, respectively [17]. They concluded that the trending ability of esCCO was comparable with APCO. In another study that compared esCCO and APCO simultaneously [18], the bias between the two systems was reported to be 0.6 L/min, which was a higher value compared with those reported in other studies.
Although the bias of 0.14 L/min in our study was quite comparable to those of previous studies, the correlation coefficient of 0.62 and the precision of 1.96 L/min were slightly inferior to those reported in previous studies. Furthermore, regarding the percentage error (2 × precision/mean APCO), which is an indicator of interchangeability, the value of 69% in our study was higher than 41, 54, and 48.5% reported in previous studies [6, 12, 19], which compared esCCO and TDCO, and was also higher than the upper limit of the 45% percentage error defined as a reliable value [13]. However, considering that TDCO measurement is the gold standard method and there is a deviation between TDCO and APCO, it is quite natural that a larger deviation exists between esCCO and APCO. Since the bias between esCCO and APCO did not change during the study procedure (42 h after ICU admission) in our study, CO monitoring by esCCO could be used for a long period after a single calibration.
Reliability of SVV measured by esCCO (esSVV), compared with SVV based on APCO (APSVV), in evaluating intra-vascular volume status
To our knowledge, studies evaluating SVV based on PWTT analysis in clinical situations have been limited. As shown in previous studies, SVV has been used as a reliable hemodynamic variable to predict fluid responsiveness in critically ill patients, even though several limitations were pointed out [10, 11]. Thus, if SVV could be measured continuously, precisely, and noninvasively by using esCCO, it would be ideal. However, the correlation coefficient of 0.4, the bias ± precision of − 3.79 ± 5.08%, and percentage error of 99% in our study suggest that the SVV measurement system based on PWTT may require further amelioration of the measurement algorithm.
The discrepancy between SVV measured by esCCO and APCO systems may be attributable to the difference in the underlying principles between the two devices. The APCO system, which measures CO based on an arterial waveform analysis, calculates single values of SV, and SVV using maximum, minimum, and average SV during the preceding 20 s. In contrast, the esCCO system, which measures CO based on PWTT analysis, calculates single values of SV, and SVV using the same three values as used in the APCO system during each breath. Thereafter, the esCCO system presents a mean SVV value over 32 breaths. This difference in measurement principles could contribute to the disagreement between SVV values measured by esCCO and APCO systems. In order to evaluate intravascular volume status and accurately predict responsiveness to volume challenge using the esCCO system, further improvement might be necessary for this device to be able to track acute preload changes.
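The difference in windowing described above can be made concrete with a small sketch. The (max − min)/mean form of SVV is the conventional definition implied by the text; the vendors' exact algorithms are not published here, so both functions are illustrative only.

```python
import numpy as np

def svv_percent(stroke_volumes):
    """SVV (%) from the maximum, minimum and mean SV within one analysis
    window (roughly the preceding 20 s of beats for APCO, one breath for esCCO)."""
    sv = np.asarray(stroke_volumes, dtype=float)
    return 100 * (sv.max() - sv.min()) / sv.mean()

def es_svv(per_breath_svv, n_breaths=32):
    """esCCO-style output: mean of the per-breath SVV values over the
    preceding 32 breaths, as described above."""
    window = per_breath_svv[-n_breaths:]
    return sum(window) / len(window)
```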
Regarding the esCCO response to fluid challenge, some previous studies evaluated the ability of esCCO to track changes in CO induced by the changes in preload. In a study which evaluated the changes in esCCO and intermittent bolus TDCO after change of body position, volume challenge, and the Pringle maneuver in patients undergoing partial hepatectomy, Tsutsui et al. showed that the direction of CO change between esCCO and TDCO represented a good concordance rate of 96% and concluded that esCCO could provide good trending abilities despite the wide range of limits of agreement [20]. Feissel et al. assessed the abilities of esCCO to detect CO changes after fluid infusion in septic shock patients as a secondary outcome compared with transthoracic echocardiography, and showed that a threshold of 11% increase in esCCO discriminated responders from non-responders with a sensitivity of 83% and a specificity of 77% [21]. These results suggested that esCCO can be used as a reliable hemodynamic monitoring system for intravascular volume change in critically-ill patients. However, in a study [22] that investigated whether esCCO could track changes in CO induced by volume expansion and passive leg raising in critically-ill patients as compared with transthoracic echocardiography, Biais et al. concluded that esCCO could not track preload-induced CO change accurately by showing the concordance rate of 83% and angular bias of − 20°.
In this study, we evaluated the trending ability of esCCO by the four-quadrant plot and polar plot analyses, even though the number of measurement points per analysis was reduced due to the effect of the 15% exclusion zone. In the analysis of the trending ability, a concordance rate more than 90% is usually considered to indicate reliable trending ability when thermodilution is the reference method [14]. However, considering that the concordance rate of APCO compared with TDCO was reported to be 81% [15], the concordance rates of this study (80.3 and 75.7%, respectively) were comparable to the value reported in the previous report. Although a prospective study to examine the change in esCCO induced by volume resuscitation is required for evaluation of the trending ability of esCCO, the results of this study have shown the possibility of esCCO to trend with changes in CO accurately.
Our study has several limitations that should be considered. First, CO was not evaluated with a pulmonary artery catheter based on the thermodilution technique, which is the gold standard method for measuring CO. However, hemodynamic monitoring using a pulmonary catheter is not routinely used during perioperative management for patients undergoing cardiovascular surgeries in our hospital, and simultaneous monitoring using both TDCO and APCO is not practical in the clinical setting. Since APCO has been a reliable perioperative hemodynamic monitoring system in many previous studies [10, 11], it is quite a valid method to compare esCCO with APCO directly in this study. Second, whether the APCO system can precisely and continuously monitor CO is questionable, given that APCO values, which were calculated based on the wave of invasive arterial pressure line, were sometimes unreliable due to the flexion of the joint where an invasive arterial line was inserted. Considering that the nursing staff taking care of patients in our ICU were always paying attention to ensure that the arterial pressure wave was not affected by the movement of joints, the accuracy of APCO must be guaranteed. Third, we did not evaluate the effects of systemic vascular resistance, which has been shown to affect the esCCO value in previous studies [7, 8, 20]. Although the effects of systemic vascular resistance should be examined in future studies before esCCO application in the clinical setting, we believe that we accomplished the aim of this study to evaluate the accuracy of esCCO in post-cardiovascular surgery patients compared with APCO. Fourth, patients with continuous arrhythmias after surgery were excluded from this study, which might be an obstacle to the generalization of the study results. However, considering that there was only one patient who presented continuous arrhythmias after surgery in this study, the generalizability of the study results should not be denied. Finally, the results of this study are not applicable to other types of ICU patients, since only post-cardiovascular surgery patients were enrolled in this study and the majority of patients underwent operation for aortic aneurysm (19/21). Furthermore, the replaced vascular graft may affect PWTT measurement in comparison with the normal aorta. Further studies seem necessary to evaluate the accuracy and interchangeability of esCCO in other types of critically-ill patients.
In this study, we evaluated the accuracy, precision, and interchangeability of CO and SVV measured by the esCCO system, which can calculate these values based on PWTT, compared with APCO. The correlation and precision analysis revealed that while continuous CO measurement using esCCO was almost acceptable, SVV measured by esCCO was not so reliable. A further improved algorithm may be required for esSVV to predict an intravascular volume status.
Harvey S, Harrison DA, Singer M, Ashcroft J, Jones CM, Elbourne D, Brampton W, Williams D, Young D, Rowan K. Assessment of the clinical effectiveness of pulmonary artery catheters in management of patients in intensive care (PAC-Man): a randomized controlled trial. Lancet. 2005;366:472–7.
Sandham JD, Hull RD, Brant RF, Knox L, Pineo GF, Doig CJ, Laporta DP, Viner S, Passerini L, Devitt H, Kirby A, Jacka M. A randomized, controlled trial of the use of pulmonary-artery catheters in high-risk surgical patients. N Engl J Med. 2003;348:5–14.
Cannesson M, Attof Y, Rosamel P, Joseph P, Bastien O, Lehot JJ. Comparison of FloTrac cardiac output monitoring system in patients undergoing coronary artery bypass grafting with pulmonary artery cardiac output measurements. Eur J Anaesthesiol. 2007;24:832–9.
Zoremba N, Bickenbach J, Krauss B, Rossaint R, Kuhlen R, Schalte G. Comparison of electrical velocimetry and thermodilution techniques for the measurement of cardiac output. Acta Anaesthesiol Scand. 2007;51:1314–9.
McLeans AS, Needham A, Stewart D, Parkin R. Estimation of cardiac output by noninvasive echocardiographic techniques in the critically ill subjects. Anaesth Intensive Care. 1997;25:250–4.
Smetkin AA, Hussain A, Fot EV, Zakharvov VI, Izotova NN, Yudina AS, Dityateva ZA, Gromova YV, Kuzkov VV, Bjertnaes LJ, Kirov MY. Estimated continuous cardiac output based on pulse wave transit time in off-pump coronary artery bypass grafting: a comparison with transpulmonary thermodilution. J Clin Monit Comput. 2017;31:361–70.
Magliocca A, Rezoagli E, Anderson TA, Burns SM, Ichinose F, Chitilian HV. Cardiac output measurements based on the pulse wave transit time and thoracic impedance exhibit limited agreement with thermodilution method during orthotopic liver transplantation. Anesth Analg. 2018;126:85–92.
Bataille B, Bertuit M, Mora M, Mazerolles M, Cocuet P, Masson B, Moussot PE, Ginot J, Silva S, Larche J. Comparison of esCCO and transthoracic echocardiography for non-invasive measurement of cardiac output in intensive care. Br J Anaesth. 2012;109:879–86.
Ishihara H, Okawa H, Tanabe K, Tsubo T, Sugo Y, Akiyama T, Takeda S. A new non-invasive continuous cardiac output trend solely utilizing routine cardiovascular monitors. J Clin Monit Comput. 2004;18:13–20.
Slagt C, Malagon I, Groeneveld AB. Systematic review of uncalibrated arterial pressure waveform analysis to determine cardiac output and stroke volume variation. Br J Anaesth. 2014;112:626–37.
Zhang Z, Lu B, Sheng X, Jin N. Accuracy of stroke volume variation in predicting fluid responsiveness: a systematic review and meta-analysis. J Anesth. 2011;25:904–16.
Yamada T, Tsutsui M, Sugo Y, Akazawa T, Sato N, Yamashita K, Ishihara H, Takeda J. Multicenter study verifying a method of noninvasive continuous cardiac output measurement using pulse wave transit time: a comparison with intermittent bolus thermodilution cardiac output. Anesth Analg. 2012;115:82–7.
Peyton PJ, Chong SW. Minimally invasive measurement of cardiac output during surgery and critical care: a meta-analysis of accuracy and precision. Anesthesiology. 2010;113:1220–35.
Critchley LA, Lee A, Ho AM. A critical review of the ability of continuous cardiac output monitors to measure trends in cardiac output. Anesth Analg. 2010;111:1180–92.
Breukers RM, Sepehrkhouy S, Spiegelenberg SR, Groeneveld AB. Cardiac output measured by a new arterial pressure waveform analysis method without calibration compared with thermodilution after cardiac surgery. J Cardiothorac Vasc Anesth. 2007;21:632–5.
Critchley LA, Yang XX, Lee A. Assessment of trending ability of cardiac output monitors by polar plot methodology. J Cardiothorac Vasc Anesth. 2011;25:536 – 46.
Terada T, Oiwa A, Maemura Y, Robert S, Kessoku S, Ochiai R. Comparison of the ability of two continuous cardiac output monitors to measure trends in cardiac output: estimated continuous cardiac output measured by modified pulse wave transit time and an arterial pulse contour-based cardiac output device. J Clin Monit Comput. 2016;30:621–7.
Dache S, Van Rompaey N, Joosten A, Desebbe O, Saxena S, Eynden FV, Van Aelbrouck C, Huybrechts I, Obbergh LV, Barvais L. Comparison of the ability of esCCO and volume view to measure trends in cardiac output in patients undergoing cardiac surgery. Anaesthesiol Intensive Ther. 2017;49:175–80.
Ishihara H, Sugo Y, Tsutsui M, Yamada T, Sato N, Akazawa T, Sato N, Yamashita K, Takeda J. The ability of a new continuous cardiac output monitor to measure trends in cardiac output following implementation of a patient information calibration and an automated exclusion algorithm. J Clin Monit Comput. 2012;26:465–71.
Tsutsui M, Araki Y, Masui K, Kazama T, Sugo Y, Archer TL, Manecke GR Jr. Pulse wave transit time measurement of cardiac output in patients undergoing partial hepatectomy: a comparison of the esCCO system with thermodilution. Anesth Analg. 2013;117:1307–12.
Feissel M, Aho LS, Georgiev S, Trapponnier R, Badie J, Bruyere R, Quenot JP. Pulse wave transit time measurements of cardiac output in septic shock patients: a comparison of the estimated continuous cardiac output system with transthoracic echocardiography. PLoS ONE. 2015;10:e0130489.
Biais M, Berthezene R, Petit L, Cottenceau V, Sztark F. Ability of esCCO to track changes in cardiac output. Br J Anaesth. 2015;115:403–10.
We are grateful to the management of Nihon Kohden Corporation, Japan, who kindly provided equipment.
Department of Anesthesiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
Takeshi Suzuki, Yuta Suzuki, Jun Okuda, Rie Minoshima, Tomomi Ueda, Jungo Kato, Hiromasa Nagata, Takashige Yamada & Hiroshi Morisaki
Department of Anesthesiology, Saitama Medical Center, 4-9-3 Kitaurawa, Urawa-ku, Saitama-shi, Saitama, 330-0074, Japan
Yoshi Misonoo
Takeshi Suzuki
Yuta Suzuki
Jun Okuda
Rie Minoshima
Tomomi Ueda
Jungo Kato
Hiromasa Nagata
Takashige Yamada
Hiroshi Morisaki
Correspondence to Takeshi Suzuki.
Dr. Hiroshi Morisaki received a research fund from Nihon Kohden Corporation (Tokyo, Japan). The funding institution played no role in this study. The other authors have no conflicts of interest to declare.
Suzuki, T., Suzuki, Y., Okuda, J. et al. Cardiac output and stroke volume variation measured by the pulse wave transit time method: a comparison with an arterial pressure-based cardiac output system. J Clin Monit Comput 33, 385–392 (2019). https://doi.org/10.1007/s10877-018-0171-y
Issue Date: 01 June 2019
Non-invasive hemodynamic monitoring
Cardiovascular surgery patient
Perioperative management
Estimated continuous cardiac output
Arterial pressure-based cardiac output
The physics of turbulence localised to the tokamak divertor volume
Experimental database
This paper focusses on results from the Mega Ampere Spherical Tokamak device, MAST41, during its final experimental campaign in 2013. During these experiments a visible light camera capable of recording in excess of 120,000 frames per second was placed on the divertor with a tangential view into the vessel (see Fig. 1) for several hundred individual plasma discharges. Rather than base this study on individual plasma discharges within this set, a database has been drawn together that covers the widest available parameter range of the plasmas viewed by the camera. Plasma parameters from the database are given in Table 1. The database is constructed of discharges mainly configured in the lower single null (LSN, where only the lower X-point is active) configuration (pictured in Fig. 1) where the data quality is highest, but also considers the impact of resonant magnetic perturbations (used to control violent edge instabilities) and High confinement (H-) mode. The strategy employed in this paper is to compare simulation and experiment, across a wide-ranging experimental database, with robust measurements to draw high-level conclusions around the characteristics of the turbulence, and importantly to validate these aspects of the simulations. With the simulations validated, the flexibility of the code will be leveraged to diagnose the fundamental physics drivers of the turbulence.
Table 1 Survey of plasma parameters from MAST and the STORM code used for analysis. The discharge number, confinement mode of the discharge, plasma density ne,sep and electron temperature Te,sep (measured at the upstream separatrix); plasma current Ip; toroidal magnetic field Btor; and, input heating power PNBI are shown for all discharges/simulations analysed. RMPs refer to Resonant Magnetic Perturbations for Edge Localised Mode (ELM) control.
Imaging analysis
The method developed and deployed in this article for the tomographic inversion of camera images, described and rigorously tested by ref. 30,42, provides a mapping between the complex image recorded by a high speed camera and a two-dimensional plane in the divertor, taken here as the poloidal (radial-vertical) plane around the inner and outer divertor legs. It assumes that the 3D structures being imaged by the camera align to the background magnetic field (an assumption that is confirmed in simulation). This allows for the formation of a basis on which to perform a tomographic inversion using standard minimisation routines. During the pre-processing stage, subtraction of the pixel-wise minimum of a given frame with its 19 predecessors43,44 is applied to isolate fluctuations from the slowly-varying background component of the light. Figure 8 a), b) and d) show an example of a typical camera frame with important features of the plasma indicated. The effect of background subtraction on that frame is shown in panel (b), and the inversion of the background subtracted image onto the inner and outer divertor legs is shown in panel (d). The inversion domain is chosen to isolate the PFR and near SOL region of both divertor legs, avoiding the X-point and core plasma. The light emission contained in the camera images is dominated by Balmer 3 ⇒ 2 emission and is a complex nonlinear function of plasma quantities—density, temperature and neutral density. Without a multi-measurement comparison, which is extremely challenging for turbulent structures and was not practicable for MAST, the direct experimental inference of these thermodynamic quantities and (more importantly) their fluctuations utilising the diagnostic camera images could not be carried out, though previous studies indicate consistency between camera and probe fluctuation measurements26,28. Instead, this study utilises the turbulence code (STORM) for predictions of the plasma turbulent solution to forward model the Balmer 3 ⇒ 2 light emission observed in synthetic camera image measurements. This provides like-for-like comparison of experiment and simulation, ensuring that any systematic uncertainties are respected in both datasets and allowing high-level comparisons and conclusions to be drawn with confidence.
Fig. 8: Example stages of a typical camera data analysis process for divertor turbulence imaging.
a, b Raw and background subtracted camera data. c Synthetic camera data from the STORM simulation (see ref. 36 for simulation details). d Tomographically inverted data on sections of the poloidal plane around the inner (closest to the device center) and outer (furthest from device center) divertor legs. White lines indicate line-segments where the emissivity is extracted for analysis in (e). e The inverted emissivity from the line segments in (d) projected onto the toroidal angle on the ψN = 0.99 flux surface, where ψN is the normalised poloidal flux coordinate. Crosses mark detected peak locations, horizontal lines show local Full-Width Half-Maxima.
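The background-subtraction step described above (pixel-wise minimum over the current frame and its 19 predecessors) can be sketched as follows. This is a schematic reimplementation, assuming a frame stack of shape (time, rows, columns); it is not the analysis code used for the paper.

```python
import numpy as np

def subtract_running_minimum(frames, history=19):
    """Isolate fluctuations by subtracting, from each frame, the pixel-wise
    minimum of that frame and its `history` predecessors."""
    fluctuations = np.empty_like(frames)
    for i in range(frames.shape[0]):
        start = max(0, i - history)
        background = frames[start:i + 1].min(axis=0)  # slowly varying component
        fluctuations[i] = frames[i] - background
    return fluctuations
```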
Computational modelling and synthetic imaging
The STORM model is based on a 3D drift-reduced two-fluid plasma model, with the electron density, n, electron temperature, T, parallel ion velocity U, parallel electron velocity V, and parallel scalar vorticity Ω as dynamic variables. The plasma potential, ϕ, is derived through an inversion of the parallel vorticity, \({{\Omega }}=\nabla \cdot \left({B}^{-2}\nabla \phi \right)\). The set of equations is solved in a field-aligned coordinate system on a grid with toroidal symmetry, with geometric factors derived directly from an equilibrium reconstruction45 of the experimental plasma discharge under study, and Bohm sheath boundary conditions are applied at upper and lower divertor boundaries. The grid and geometric properties of the system are not evolved during the simulation. The simulation evolves the full fields (ie no specification of a background profile) and is driven by a particle source centered on the last closed flux surface to mimic neutral ionisation, and an energy source in the core region of the simulation. These sources are scaled until n and T within the simulation match experiment at the outer midplane separatrix, and do not evolve within the simulation. The model makes the cold-ion, Boussinesq, and electrostatic assumptions to make the system tractable in the complex geometry employed for the simulation. The latter is justified by the high resistivity of the SOL and divertor plasma in MAST, however the former two assumptions may impact the detailed characteristics of turbulence in the simulation. Nevertheless, detailed experimental validation has demonstrated that the STORM model captures the main aspects of SOL turbulence well35,36, and without a more detailed simulation available, is a good basis for a first detailed study of divertor turbulence within this manuscript.
This paper employs synthetic images of the divertor turbulence derived from simulations conducted by ref. 36. Data from the the simulation is interpolated onto a grid identical to that used in the experimental analysis, which is then projected along the path of the magnetic field to produce a camera image accounting for line-integration effects and occlusion by machine structures. The emissivity in the poloidal plane is a complex function of thermodynamic quantities of the plasma and neutral gas, and atomic physics, and is forward-modelled in this paper using the OpenADAS database46 for the Balmer 3 → 2 transition, employing a neutral particle distribution from a complementary laminar simulation including plasma-neutral interactions. This complementary simulation was conducted with the SOLPS-ITER (Scrape-Off Later Plasma Simulation – ITER) code47, with Monte-Carlo neutral transport and diffusive cross-field plasma transport. The frames are then processed in the same manner as the experimental data. A synthetic camera frame is shown in Fig. 8c). By design the image does not account for any emission from the X-point, core plasma or outer-SOL regions to capture only the salient features of the divertor legs allowing for robust comparison between simulation and experiment.
The use of a fixed, axisymmetric neutral distribution from an auxiliary laminar simulation to generate the Dα emissivity for the synthetic simulation images was the best estimate available, but means that experimental and synthetic images cannot be considered entirely alike. The thermodynamic fluctuations in the plasma may induce fluctuations in the ionisation of neutrals, which cannot be captured in the synthetic images used here since there is no interaction between the turbulence and the neutral gas in the simulation. For this reason the magnitude of the emission is not compared between experiment and simulation; only the positional and geometric properties are compared. It is important to recognise that plasma-neutral interactions are neglected in the turbulent simulation, but are present in the experimental situation, meaning that such an approach to comparison should be limited to leading order turbulent characteristics.
The STORM simulation analysed is in the slightly different lower disconnected double null (LDN) configuration', where both X-points are active, but the lower is still the primary X-point. In the shot studied by Riva et al. the gap between primary and secondary separatrix is between 2 mm and 5 mm. In such a configuration between 5% and 30% of the total power entering the SOL is measured on the lower inner divertor48. Reference 48 also shows that an LSN plasma may have up to twice the power to the lower inner target compared to an LDN, however a wide range of input powers in the LSN configuration has been studied here, with no clear leading order variation in fluctuation properties. As such, this potential variation in power between the LSN and LDN configurations is not considered likely to impact the features of the turbulence studied. Therefore, from the perspective of the PFR of the lower divertor which is the area of study in this paper, the STORM simulation in the LDN configuration is considered sufficiently comparable to an LSN plasma to justify the comparison.
Shape, distribution, and spectra of turbulent structures in the divertor
Turbulence is complex and difficult to diagnose with acceptable uncertainty. In order to draw robust conclusions, this article focusses on simple and robust measurements that can be readily compared between divertor legs, and between experiment and simulation. The first such set of measurements forms an assessment of the shape and distribution of turbulence structures across the database by calculating a quasi toroidal mode-number (the number of structures in 2π radians toroidally around the device), calculated by counting peaks in the emission along the projection of a magnetic field line in the R–Z plane, and the poloidal structure width calculated as the full-width half maximum of these identified peaks. A useful radial coordinate is the 'poloidal magnetic flux' normalised using values at the magnetic axis ψax and separatrix ψsep, such that ψN = (ψ − ψax)/(ψax − ψsep). The analysis is carried out on the flux surface at ψN = 0.99 which is sufficiently far into the PFR to avoid questions of magnetic field reconstruction misalignment, but sufficiently close to the separatrix that the flux of turbulent structures across the surface is significant. A systematic offset of the experimental flux-surfaces is present which results in a radial shift of measurements by ΔψN = 0.005, though this has little impact on the conclusions of this study.
In Fig. 8 (d) the embedded white lines show the trajectory of the ψN = 0.99 surface in the R–Z plane in the inner and outer divertor legs, and in (e) the emissivity along the surface is shown in an example discharge. This is cast onto the toroidal angle subtended by the analysed section of the magnetic field line simply by mapping the projection of the magnetic field. By casting this data onto the toroidal angle it is possible to directly compare the features of the inner and outer legs.
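A sketch of the peak-counting measurement on the inverted emissivity along the ψN = 0.99 line segment is given below. The prominence threshold and the extrapolation of the peak count over the viewed toroidal wedge to a full 2π turn are our own assumptions about how the quasi toroidal mode number is obtained, so the function should be read as illustrative rather than as the paper's analysis routine.

```python
import numpy as np
from scipy.signal import find_peaks, peak_widths

def structure_stats(emissivity, toroidal_angle, rel_prominence=0.1):
    """Quasi toroidal mode number and structure widths (FWHM) from the
    inverted emissivity sampled along the flux-surface line segment."""
    emissivity = np.asarray(emissivity, dtype=float)
    toroidal_angle = np.asarray(toroidal_angle, dtype=float)
    peaks, _ = find_peaks(emissivity, prominence=rel_prominence * emissivity.max())
    widths = peak_widths(emissivity, peaks, rel_height=0.5)[0]   # in samples
    dphi = np.mean(np.diff(toroidal_angle))
    fwhm = widths * dphi                                         # widths in toroidal angle
    wedge = abs(toroidal_angle[-1] - toroidal_angle[0])
    mode_number = len(peaks) * 2 * np.pi / wedge                 # structures per 2*pi
    return mode_number, fwhm
```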
Turbulent flow in the inner divertor leg
Since the tomographic inversion employed in this paper produces 2D time-histories in the R–Z plane, flow velocities can be derived by mapping the trajectory of turbulent structures. Velocimetry based on two-point time-delayed cross-correlations has been used here to map the average flow of structures in the inner divertor leg. No clear directive flow was reliably measurable in the outer divertor leg, as demonstrated by the symmetric kθ spectra in Fig. 4.
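Two-point time-delayed cross-correlation velocimetry of the kind used here can be sketched as below. The sign convention and any sub-sample interpolation used in the actual analysis are not specified in the text, so this should be treated as a schematic of the method rather than the implementation.

```python
import numpy as np

def xcorr_velocity(signal_a, signal_b, separation, dt):
    """Average structure velocity from the lag maximising the cross-correlation
    between emissivity time series at two points `separation` apart."""
    a = np.asarray(signal_a, dtype=float)
    b = np.asarray(signal_b, dtype=float)
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    corr = np.correlate(b, a, mode="full")
    lag = np.argmax(corr) - (len(a) - 1)   # samples by which b trails a
    if lag == 0:
        return np.nan                      # delay not resolved at this frame rate
    return separation / (lag * dt)
```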
Turbulence drives in inner and outer divertor legs
To determine the driving mechanisms for turbulence in the divertor a simulation study has been carried out in the manner of refs. 49,50 by eliminating terms from the vorticity equation, which determines the electrostatic potential and therefore regulates turbulence, that are known to drive certain classes of turbulent transport. The vorticity equation in STORM is [Eq. 1]36
$$\frac{\partial \Omega }{\partial t}+U\,\mathbf{b}\cdot \nabla \Omega =-\frac{1}{B}\mathbf{b}\times \nabla \phi \cdot \nabla \Omega +\frac{1}{n}\nabla \times \left(\frac{\mathbf{b}}{B}\right)\cdot \nabla P+\frac{1}{n}\nabla \cdot \left(\mathbf{b}{J}_{\parallel }\right)+{\mu }_{{\Omega }_{0}}{\nabla }_{\perp }^{2}\Omega$$
where ϕ is the plasma potential, Ω = ∇ ⋅ (B⁻²∇⊥ϕ) the scalar vorticity, B the magnetic field strength, P = nT the electron pressure, n and T the electron density and temperature, J∥ = n(U − V) the parallel current with U and V the ion and electron velocities parallel to the magnetic field, b the magnetic field unit vector and μΩ the (small) collisional perpendicular viscosity. This equation has three terms that drive different classes of turbulence. The term \(\frac{1}{n}\nabla \times \left(\frac{\mathbf{b}}{B}\right)\cdot \nabla P\) drives interchange turbulence51, which is analogous to Rayleigh–Taylor turbulence, and is driven by thermodynamic gradients in regions where the curvature of the magnetic field has a destabilising effect. The term \(\frac{1}{B}\mathbf{b}\times \nabla \phi \cdot \nabla \Omega\) drives Kelvin–Helmholtz turbulence via sheared flows52, whilst the term \(\frac{1}{n}\nabla \cdot \left(\mathbf{b}{J}_{\parallel }\right)\) mediates drift-wave turbulence driven ubiquitously by cross-field thermodynamic gradients in a resistive plasma. To test the effect of these three different mechanisms, three simulations were performed beginning from the baseline simulation presented in this paper thus far, with the three turbulent drive terms removed in turn. To remove interchange turbulence from the simulation, \(\nabla \times \left(\frac{\mathbf{b}}{B}\right)\to 0\) was set in the lower divertor. To remove Kelvin–Helmholtz turbulence, the substitution \(\mathbf{b}\times \nabla \phi \cdot \nabla \Omega \to \langle \mathbf{b}\times \nabla \phi \cdot \nabla \Omega \rangle_{\Phi}\) was made in the vorticity equation, whilst to remove drift-waves the substitution \(\frac{1}{n}{\nabla }_{\parallel }P\to \langle \frac{1}{n}{\nabla }_{\parallel }P \rangle_{\Phi}\) is made in parallel Ohm's law (equation 4 from ref. 36), which blocks energy transfer into resistive drift-waves. \(\langle \cdot \rangle_{\Phi}\) indicates a toroidal average in the divertor volume.
Dropout of infertility treatments and related factors among infertile couples
Maryam Ghorbani1,
Fatemeh Sadat Hosseini2,
Masud Yunesian3 &
Afsaneh Keramat4
Reproductive Health volume 17, Article number: 192 (2020)
Dropout of infertility treatments is a global issue, and many factors play a role in this phenomenon. It is one of the greatest challenges in the lives of infertile couples. The purpose of this study was to determine the dropout rate and related factors/reasons in the world and in Iran.
We will conduct a mixed-methods study with a sequential exploratory design (systematic review, qualitative and quantitative phases). In the first stage, a systematic review of the dropout rate of infertility treatments and related factors will be carried out. In the second stage (quantitative–qualitative study), a retrospective cohort study will be conducted on infertile couples to determine the dropout rate of infertility treatments. For patients who have discontinued treatment, the follow-up period to assess discontinuation will be 6 months after treatment cessation. Data will be analyzed with descriptive statistics to determine the proportion and percentage of discontinuation among groups with different causes of infertility, and the Chi-square test will then be used to compare discontinuation rates among these groups. In the qualitative section of the second stage, semi-structured interviews will be performed with infertile women who have a history of failed infertility treatment. In this stage, participants will be selected using purposeful sampling with maximum variation in terms of age, education, occupation, type of infertility, type of treatment, number of unsuccessful treatments and infertility duration. These data will be analyzed using conventional content analysis.
Determining the dropout rate and its related factors/reasons will help future studies to plan suitable interventions for supporting infertile couples. It will also help policymakers gain a better understanding of infertility and its consequences for infertile couples' lives.
In today's world, infertility is a common phenomenon, driven by postponement of childbearing due to older age at marriage, the pursuit of higher education, economic problems, and other factors. Infertility in itself brings many challenges and stresses that are hard to cope with, and the problem worsens when it is accompanied by treatment failure. Many infertile couples cannot tolerate this failure and may decide to discontinue treatment before achieving pregnancy in order to end the many stressors associated with treatment. Childbearing, and having at least one child, holds an important position in some societies, such as Iranian culture; ending treatment before achieving the desired result may therefore have adverse consequences for families, such as divorce, remarriage, and family conflict. Many factors play a role in dropout from infertility treatments, and many studies around the world have suggested factors/reasons for dropout, but there are still many gaps in this subject, especially in Iranian society. This study will be conducted in three consecutive stages. In the first stage, we will carry out a complete review of existing studies worldwide to identify the related factors/reasons for dropout in detail. In the second stage, the dropout rate of infertile couples (380 couples) after at least one unsuccessful treatment cycle will be determined by assessing medical records and conducting telephone interviews. Data from the first and second stages will give us a better view of the issue of dropout and will be used to construct a semi-structured interview for the last stage. Finally, in the third stage, the reasons for dropout will be explored through in-depth interviews with infertile couples. We hope the information from this study will help policymakers better understand and plan for dropout of treatment.
Infertility is a clinical condition defined as the absence of clinical pregnancy after 12 months of regular unprotected intercourse, resulting from a defect in a person's ability to reproduce as an individual or with a partner [1]. It affects 13.2% of couples in Iran [2] and 60–80 million couples of reproductive age worldwide [3]. Among infertile couples, about 56% seek medical help to become pregnant [4]. Assisted reproductive technology (ART) is the term usually applied to medical techniques that increase the probability of pregnancy. These techniques include in vitro fertilization (IVF), stimulation of ovulation, and methods that involve the laboratory manipulation of eggs or sperm or the use of donated eggs and sperm [5, 6].
The success rates of assisted reproductive technologies remain relatively constant, at about 25% live births per cycle, up to the age of 35, after which they decline sharply. This success rate may seem desirable, but it also means that the failure rate is about 75%, which is distressing for people who bear the heavy financial and psychological costs of these treatments [7]. Therefore, many couples do not continue treatment until a reasonable result is achieved.
Dropout from fertility treatment refers to discontinuing further treatment despite a favorable prognosis and the ability to pay for treatment, and it can occur at any stage of treatment [8]. The prevalence of discontinuation varies from 5.6% to 70% across studies [9,10,11,12,13,14,15]. In Iran, the rate of treatment discontinuation was 56.5% in one study [16] and 28.3% in another [17]. The variation in prevalence is due to the use of different concepts and definitions [18]. Research on treatment discontinuation began in 1980 and was motivated by the need to understand its impact on the effectiveness of fertility treatments, as well as the reasons why some couples do not continue treatment [4]. Several studies have sought to understand why patients discontinue treatment. According to these studies, psychological burden was the reason most commonly cited by patients, and relationships were also found between discontinuation and the physical burden of treatment, feelings of futility associated with treatment, marital and personal problems, poorly understood diagnosis, parity, and socio-demographic factors [4, 8, 18,19,20,21].
Owing to the heterogeneity of costs, access to infertility treatment services, reimbursement policies, and other factors, it is difficult to compare treatment withdrawal rates between centers and countries. In addition, most fertility experts are inclined to report positive outcomes and success rates and tend to ignore or forget the "invisible patients" who discontinue treatment. Given the lack of sufficient studies on the rate of treatment discontinuation and its associated factors in Iran, the present study aims to investigate the treatment dropout rate and its relevant factors.
Determining the continuation/dropout rate of treatments and factors affecting the continuation of treatment in infertile couples.
Objectives of the first stage (systematic review)
Identifying the factors affecting the continuation/dropout of infertility treatments in studies worldwide
Objectives of the second stage (conducting a quantitative–qualitative study to determine the acceptance of infertility treatments and its determinants)
Quantitative study objectives
Determining the continuation/dropout rate of treatment in infertile couples who visit the selected infertility treatment centers.
Determining treatment continuation/dropout according to the cause of infertility (male, female, unexplained, or both) in infertile couples who visit the selected infertility treatment centers.
Determining treatment continuation/dropout according to the type of infertility treatment in infertile couples who visit the selected infertility treatment centers.
Objectives of qualitative study
Explaining the concept of treatment continuation/dropout and its determinants in infertile couples
Explaining the dimensions of treatment continuation/dropout and its determinants in infertile couples
This is a sequential exploratory mixed-method study with multiple stages: a systematic review followed by quantitative and qualitative studies.
Stage 1: systematic review
Search strategy
We will carry out a comprehensive search of international databases, including Medline (via PubMed), Embase, Web of Science, CINAHL, ProQuest, and Scopus, to find papers relevant to the subject of the study published from the beginning of 2000 to September 2019. The systematic search will use MeSH terms including "fertility treatment", "artificial insemination", "assisted reproductive technology", "in vitro fertilization", "intra cytoplasmic sperm injection", "intra uterine insemination", "IUI", "ICSI", "IVF", AND "discontinuation" OR "dropout" OR "cessation" OR "end" OR "stop" OR "termination" OR "withdraw" OR "abandon". To find additional relevant papers, we will also perform a manual search in Google Scholar and check the references of retrieved articles.
Stage 2: Quantitative–qualitative study to determine the continuation of treatment and explain the factors affecting the continuation of treatment
In the quantitative stage, a retrospective cohort study will be conducted to determine the continuation/dropout rates of infertility treatment in women referred to infertility treatment centers.
Research population
The statistical population consists of women of reproductive age (18–42 years) who are considered infertile according to the global definition of infertility and who visit infertility treatment centers to receive IVF/ICSI (intracytoplasmic sperm injection) and FET (frozen embryo transfer) services.
Research sample
The research sample consists of 420 infertile women (allowing for 10% attrition), with 105 women in each of the four subgroups defined by the cause of infertility: male, female, unexplained, and both.
The sample size is calculated using the following equation:
$$n=\frac{Z_{1-\frac{\alpha}{2}}^{2}\,P(1-P)}{d^{2}}$$
where \(Z_{1-\frac{\alpha}{2}} = 1.96\) is the percentile of the standard normal distribution corresponding to a Type I error of α = 0.05 (5%),
P = 0.56 is the estimated prevalence of discontinuation of infertility treatment reported in a previous study [16], and
d = 0.1 is the desired precision.
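To make the arithmetic behind these parameters explicit, the following minimal sketch (ours, not part of the protocol) reproduces the calculation. Reading d = 0.1 as the per-subgroup precision is our assumption; we chose it because it reproduces the totals of 380 couples and 420 enrolled women quoted elsewhere in the protocol.

```python
import math

# Sample size for estimating a proportion: n = Z^2 * P * (1 - P) / d^2
z = 1.96   # Z_{1-alpha/2} for alpha = 0.05
p = 0.56   # estimated dropout prevalence from a previous Iranian study [16]
d = 0.10   # desired precision (our reading of "d = 0.1" in the protocol)

n_per_group = math.ceil(z**2 * p * (1 - p) / d**2)    # ~95 women per subgroup
groups = 4                                            # male, female, unexplained, both
n_total = n_per_group * groups                        # ~380 couples before attrition
n_per_group_enrolled = math.ceil(n_per_group * 1.10)  # ~105 per subgroup with 10% attrition
n_total_enrolled = n_per_group_enrolled * groups      # ~420 women enrolled

print(n_per_group, n_total, n_per_group_enrolled, n_total_enrolled)
```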
Characteristics of research samples
Women at the childbearing age of 18–42 years who are considered infertile according to the global definition of infertility.
Having at least one unsuccessful IVF/ICSI or FET cycle
Having a file in the infertility center
Voluntary participation and written consent
The Iranian nationality (couple)
No addiction to narcotics and psychotropic drugs
Fluency in Persian
Exclusion criteria
Unwillingness to continue participating at any stage of the study, or becoming pregnant.
Sampling method
We will perform simple random sampling, given the descriptive nature of this stage. A semi-private facility and a public charity center have been selected for sampling in order to increase the diversity of participants. Because patients' visits to these centers do not follow a specific time pattern, consecutively visiting patients can be considered a random sample. Accordingly, a demographic questionnaire (covering age, marital status, age at marriage, age at first menstruation, number of pregnancies, number of deliveries, number of children, number of abortions, duration of infertility, number of infertility treatments, education level, socio-economic status, the client's employment status and type of job, the spouse's employment status and type of job, the spouse's education level, ethnicity, insurance status, and place of residence) will be completed through interviews, and information from the patients' files will be used to verify the accuracy of the data. The retrospective cohort will cover one year (2018) and will include all couples who are infertile according to the definition, meet the inclusion criteria, and visited the infertility treatment centers. On this basis, the rates of continuation and non-continuation of infertility treatment will be calculated. It should be noted that the follow-up period for assessing treatment discontinuation in patients who have discontinued treatment will be 6 months after treatment cessation, in line with the study by Moini et al. [16].
Data analysis method
Descriptive statistics, including frequency tables, means, and standard deviations, will be used to describe the characteristics of the research units. For example:
We will use means and standard deviations to describe the age of female and male participants, duration of marriage, age at marriage, duration of infertility, and age at menarche. We will also use ANOVA to compare these variables among groups with different causes of infertility.
Frequencies and proportions will be used for the educational level of female and male participants and family income level. We will use the Kruskal–Wallis test to compare these variables among groups with different causes of infertility.
We will use frequencies and proportions to describe the ethnicity of male and female participants, place of residence, number of pregnancies, number of abortions, and number of live children. The Chi-square test will be used to compare these variables among groups with different causes of infertility.
To examine the relationship between variables such as age group, educational level of male and female participants, ethnicity of male and female participants, family income level, and cause of infertility on the one hand and treatment continuation or dropout on the other, we will use the Chi-square test.
We will determine the proportion and percentage of discontinuation among groups with different causes of infertility, and we will then use the Chi-square test to compare discontinuation rates among these groups. A confidence interval of 95% and a significance level of α = 0.05 will be used in all tests. SPSS 21 will be used for data analysis.
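As an illustration of the planned comparison of dropout rates across the four cause-of-infertility groups, the sketch below uses entirely hypothetical counts; only the test itself (a Chi-square test at α = 0.05) follows the protocol, and the real numbers will come from the cohort records.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Illustrative only: hypothetical counts of continued vs. dropped-out treatment
# per cause-of-infertility group.
counts = pd.DataFrame(
    {"continued": [60, 55, 48, 52], "dropout": [45, 50, 57, 53]},
    index=["male", "female", "unexplained", "both"],
)

# Dropout rate per group (descriptive statistics)
counts["dropout_rate_%"] = 100 * counts["dropout"] / counts.sum(axis=1)

# Chi-square test of independence between cause of infertility and dropout
chi2, p_value, dof, expected = chi2_contingency(counts[["continued", "dropout"]])
print(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")  # compare p to alpha = 0.05
```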
In the qualitative stage of the study, since the factors involved in discontinuing infertility treatment are complex, varied, and not well known in Iranian society, a qualitative study using conventional content analysis will be conducted to achieve the second-stage objective, namely "determining the acceptance of infertility treatments and its determinants".
Data will be collected through in-depth semi-structured interviews with infertile women who are eligible to participate in the study at least 6 months after their last unsuccessful IVF/ICSI or FET cycle. Participants will be selected using purposeful sampling with maximum variation in terms of age, education, occupation, infertility duration, type of infertility (primary or secondary), type of infertility treatment, and number of unsuccessful treatments. Purposive sampling is a technique in which the researcher relies on his or her own judgment when choosing members of the population to participate in the study. Therefore, it is not yet possible to determine the sample size (or the number of participants with particular characteristics), and we must await the results of the previous stages to select participants purposefully. By selecting participants from both semi-private and public (charitable) centers, we will also seek to provide the maximum possible diversity. In qualitative studies, the sample size is unpredictable; we therefore cannot specify it in advance and will continue interviewing until data saturation, which is commonly used in qualitative research, is reached. Saturation provides an indication of data validity [22]. While the data are being collected, the interviews will be analyzed using conventional qualitative content analysis.
In addition, before the interviews start, the General Health Questionnaire (GHQ-28) will be completed to assess the mental health status of the participants [23].
Participants' characteristics
Research participants will be infertile women who have been diagnosed as infertile according to the definition of infertility, have a history of at least one unsuccessful IVF/ICSI or FET treatment, and are willing to participate in the research. They must also be able to communicate effectively and convey their experiences relevant to the research topic.
The interviews will be conducted individually in quiet places, including the clinics of the Jihad Daneshgahi Infertility Center and Nekouei Hospital of Qom. The place and time of the interviews will be determined by the interviewees' wishes, preferences, and convenience, and informed written consent will be obtained from the participants beforehand. The approximate duration of each session will be announced to the participants in advance and may be extended or shortened according to their wishes.
The content analysis will be performed with a conventional approach along with data collection using a method proposed by Graneheim and Lundman, including the following steps:
Transcribing each interview in full immediately after it is conducted.
Reading the whole text for a general understanding of its content.
Determining the meaning units and basic codes.
Classifying similar primary codes into more comprehensive classes.
Determining the main themes of classes [24].
Accuracy and stability of data
We will use the four criteria presented by Lincoln and Guba, namely credibility, confirmability, dependability, and transferability, to evaluate the accuracy and reliability of the data [25]. To this end, credibility will be established through peer and faculty member checking to approve and modify the codes and classifications: the supervisors and advisors will review and examine the data obtained from the interviews after transcription and coding. After the data analysis, two participants will be contacted and given the full text of the interview coding to determine whether it reflects their experiences. To establish confirmability, two reproductive health faculty experts will be asked to review the interviews, codes, and themes. To increase dependability, two experts in qualitative research will be asked to repeat the coding process so that any inconsistencies in coding can be identified. The researcher will also re-code the interviews two weeks after the initial coding (code–recode procedure), and the research steps, methodology, coding process, and decisions made at the different stages will be described in detail so that others can follow up the research if necessary. All documents and evidence will be stored safely. To establish transferability, two individuals who meet the inclusion criteria but are not part of the study sample will receive the coding results, and their opinions will be recorded. A comprehensive description of the type of research, the participants' characteristics, and their experiences will be provided for readers in the final report of the study.
Dropout from infertility treatment is a common phenomenon in IVF (in vitro fertilization)/ICSI (intracytoplasmic sperm injection) procedures, and many patients avoid continuing treatment [26,27,28,29,30,31,32]. In the literature, many factors have been suggested for couples' decisions to stop further treatment, such as psychological and physical burden, financial reasons, and poor prognosis [12, 19, 33, 34]. However, a quick look at the studies reporting reasons/factors for dropout shows that almost all of them used quantitative methods, whereas qualitative methods with in-depth interviews seem necessary to gain richer information about this phenomenon. Moreover, knowledge about the rate and related factors of dropout in Iranian society is limited, and many gaps remain. To our knowledge, this is the first mixed-methods study probing the rate and reasons of dropout in several stages, particularly with in-depth interviews. The results of this study will help policymakers better understand the reasons for dropout and design better programs to encourage infertile couples to continue their treatment. Another strength of the present study is the selection of a large number of participants from infertile women with a variety of characteristics. A limitation is that some participants may not fully disclose the truth about their infertility, treatments, and reasons for dropout; however, we will make efforts to gain the participants' trust and establish a relationship with them to mitigate this limitation.
IVF: In vitro fertilization
ICSI: Intracytoplasmic sperm injection
FET: Frozen embryo transfer
GHQ-28: General Health Questionnaire
Zegers-Hochschild F, Adamson GD, Dyer S, Racowsky C, de Mouzon J, Sokol R, et al. The international glossary on infertility and fertility care, 2017. Hum Reprod. 2017;32(9):1786–801.
Direkvand Moghadam A, Delpisheh A, Sayehmiri K. The prevalence of infertility in Iran, a systematic review. Iran J Obst Gynecol Infert. 2013;16(81):1–7.
World Health Organization. International statistical classification of diseases and related health problems. Geneva: World Health Organization; 2004.
Gameiro S, Boivin J, Peronace L, Verhaak CM. Why do patients discontinue fertility treatment? A systematic review of reasons and predictors of discontinuation in fertility treatment. Human Reprod Update. 2012;18(6):652–69.
Wright VC, Schieve LA, Reynolds MA, Jeng G. Assisted reproductive technology surveillance—United States, 2002. Morbidity Mortality Weekly Report. 2005;54(2):1–24.
Rebar RW, DeCherney AH. Assisted reproductive technology in the United States. N Engl J Med. 2004;350(16):1603–4.
Vayena E, Rowe PJ, Griffin PD. Current practices and controversies in assisted reproduction: report of a meeting on medical, ethical and social aspects of assisted reproduction, held at WHO Headquarters in Geneva. Switzerland: World Health Organization; 2002.
Boivin J, Domar AD, Shapiro DB, Wischmann TH, Fauser BC, Verhaak C. Tackling burden in ART: an integrated approach for medical staff. Hum Reprod. 2012;27(4):941–50.
Smeenk JM, Verhaak CM, Braat DD. Psychological interference in in vitro fertilization treatment. Fertil Steril. 2004;81(2):277.
Huppelschoten AG, van Dongen AJ, Philipse I, Hamilton CJ, Verhaak CM, Nelen WL, et al. Predicting dropout in fertility care: a longitudinal study on patient-centredness. Hum Reprod. 2013;28(8):2177–86.
Eisenberg ML, Smith JF, Millstein SG, Nachtigall RD, Adler NE, Pasch LA, et al. Predictors of not pursuing infertility treatment after an infertility diagnosis: examination of a prospective US cohort. Fertil Steril. 2010;94(6):2369–71.
Rajkhowa M, McConnell A, Thomas G. Reasons for discontinuation of IVF treatment: a questionnaire study. Hum Reprod. 2006;21(2):358–63.
Bedrick BS, Anderson K, Broughton DE, Hamilton B, Jungheim ES. Factors associated with early in vitro fertilization treatment discontinuation. Fertil Steril. 2019;112(1):105–11.
Kreuzer VK, Kimmel M, Schiffner J, Czeromin U, Tandler-Schneider A, Krüssel J-S. Possible reasons for discontinuation of therapy: an analysis of 571 071 treatment cycles from the German IVF registry. Geburtshilfe Frauenheilkd. 2018;78(10):984–90.
Domar AD, Rooney K, Hacker MR, Sakkas D, Dodge LE. Burden of care is the primary reason why insured women terminate in vitro fertilization treatment. Fertil Steril. 2018;109(6):1121–6.
Moini A, Salehizadeh S, Moosavi F, Kiani K, Khafri S. Discontinuation decision in assisted reproductive techniques. Int J Fertil Steril. 2009;4:8.
Khalili MA, Kahraman S, Ugur MG, Agha-Rahimi A, Tabibnejad N. Follow up of infertile patients after failed ART cycles: a preliminary report from Iran and Turkey. Eur J Obst Gynecol Reprod Biol. 2012;161(1):38–41.
Pedro J, Sobral MP, Mesquita-Guimarães J, Leal C, Costa ME, Martins MV. Couples' discontinuation of fertility treatments: a longitudinal study on demographic, biomedical, and psychosocial risk factors. J Assist Reprod Genet. 2017;34(2):217–24.
Van den Broeck U, Holvoet L, Enzlin P, Bakelants E, Demyttenaere K, D'Hooghe T. Reasons for dropout in infertility treatment. Gynecol Obstet Invest. 2009;68(1):58–64.
Farr SL, Anderson JE, Jamieson DJ, Warner L, Macaluso M. Predictors of pregnancy and discontinuation of infertility services among women who received medical help to become pregnant, National Survey of Family Growth, 2002. Fertil Steril. 2009;91(4):988–97.
Sharma V, Allgar V, Rajkhowa M. Factors influencing the cumulative conception rate and discontinuation of in vitro fertilization treatment for infertility. Fertil Steril. 2002;78(1):40–6.
Speziale HS, Streubert HJ, Carpenter DR. Qualitative research in nursing: advancing the humanistic imperative. Lippincott Williams & Wilkins; 2011.
Goldberg DP, Hillier VF. A scaled version of the General Health Questionnaire. Psychol Med. 1979;9(1):139–45.
Graneheim UH, Lundman B. Qualitative content analysis in nursing research: concepts, procedures and measures to achieve trustworthiness. Nurse Educ Today. 2004;24(2):105–12.
Lincoln YS, Guba EG. But is it rigorous? Trustworthiness and authenticity in naturalistic evaluation. New Direct Prog Eval. 1986;1986(30):73–84.
Land JA, Courtar DA, Evers JL. Patient dropout in an assisted reproductive technology program: implications for pregnancy rates. Fertil Steril. 1997;68(2):278–81.
De Vries MJ, De Sutter P, Dhont M. Prognostic factors in patients continuing in vitro fertilization or intracytoplasmic sperm injection treatment and dropouts. Fertil Steril. 1999;72(4):674–8.
Osmanagaoglu K, Tournaye H, Camus M, Vandervorst M, Van Steirteghem A, Devroey P. Cumulative delivery rates after intracytoplasmic sperm injection: 5 year follow-up of 498 patients. Hum Reprod. 1999;14(10):2651–5.
Roest J, Van Heusden A, Zeilmaker G, Verhoeff A. Cumulative pregnancy rates and selective drop-out of patients in in-vitro fertilization treatment. Hum Reprod. 1998;13(2):339–41.
Stolwijk A, Wetzels A, Braat D. Cumulative probability of achieving an ongoing pregnancy after in-vitro fertilization and intracytoplasmic sperm injection according to a woman's age, subfertility diagnosis and primary or secondary subfertility. Hum Reprod. 2000;15(1):203–9.
Goverde AJ, McDonnell J, Vermeiden JP, Schats R, Rutten FF, Schoemaker J. Intrauterine insemination or in-vitro fertilisation in idiopathic subfertility and male subfertility: a randomised trial and cost-effectiveness analysis. Lancet. 2000;355(9197):13–8.
Emery JA, Slade P, Lieberman BA. Patterns of progression and nonprogression through in vitro fertilization treatment. J Assist Reprod Genet. 1997;14(10):600–2.
Verberg M, Eijkemans M, Heijnen E, Broekmans F, de Klerk C, Fauser B, et al. Why do couples drop-out from IVF treatment? A prospective cohort study. Human Reprod. 2008;23(9):2050–5.
Olivius C, Friden B, Borg G, Bergh C. Why do couples discontinue in vitro fertilization treatment? A cohort study. Fertil Steril. 2004;81(2):258–61.
The authors thank Shahroud University of Medical Sciences for its financial support. This study is part of a PhD thesis (Grant No. 97198).
This study was supported by Grant No 97198 from Shahroud University of Medical Sciences.
Student Research Committee, School of Nursing and Midwifery, Shahroud University of Medical Sciences, Shahroud, Iran
Maryam Ghorbani
Faculty Member of Medical School, Golestan University of Medical Sciences, Gorgan, Iran
Fatemeh Sadat Hosseini
School of Public Health, Tehran University of Medical Sciences, Tehran, Iran
Masud Yunesian
Reproductive Studies and Women's Health Research Center, Shahroud University of Medical Sciences, Shahroud, Iran
Afsaneh Keramat
MG, AK, FSH, and MY contributed to the conception and design of the study. MG contributed to data collection. MG and MY contributed to the statistical analysis. MG, AK, FSH, and MY contributed to the interpretation of data. AK was responsible for overall supervision. MG drafted the manuscript, which was revised by all authors. All authors read and approved the final manuscript.
Correspondence to Afsaneh Keramat.
This study has been approved by the Ethics Committee of Shahroud University of Medical Sciences, Shahroud, Iran (Approval ID: IR.SHMU.REC.1397.235; approval date: 2019/3/11). All participants will be informed about the aim of the study, and written informed consent will be obtained from each participant.
The authors declare that they have no conflict of interest.
Ghorbani, M., Hosseini, F.S., Yunesian, M. et al. Dropout of infertility treatments and related factors among infertile couples. Reprod Health 17, 192 (2020). https://doi.org/10.1186/s12978-020-01048-w
Keywords: Sequential exploratory mixed-method study; Treatment dropout
Transcription initiation of distant core promoters in a large-sized genome of an insect
Qing Liu1,2,3,
Feng Jiang1,4,
Jie Zhang1,
Xiao Li5 &
Le Kang ORCID: orcid.org/0000-0003-4262-23291,4,5
Core promoters have a substantial influence on various steps of gene expression, including transcription initiation, elongation, termination, polyadenylation, and, ultimately, translation. The characterization of core promoters is crucial for exploring the regulatory code of transcription initiation. However, the current understanding of insect core promoters is focused on those of Diptera (especially Drosophila) species with small genome sizes.
Here, we present an analysis of the transcription start sites (TSSs) in the migratory locust, Locusta migratoria, which has a genome size of 6.5 Gb. The genomic differences, including lower precision of transcription initiation and fewer constraints on the distance from transcription factor binding sites or regulatory elements to TSSs, were revealed in locusts compared with Drosophila insects. Furthermore, we found a distinct bimodal log distribution of the distances from the start codons to the core promoters of locust genes. We found stricter constraints on the exon length of mRNA leaders and widespread expression activity of the distant core promoters in locusts compared with fruit flies. We further compared core promoters in seven arthropod species across a broad range of genome sizes to reinforce our results on the emergence of distant core promoters in large-sized genomes.
In summary, our results provide novel insights into the effects of genome size expansion on distant transcription initiation.
The core promoter, which is located towards the 5′ region of a gene on the sense strand, is an upstream regulatory region facilitating the transcription initiation of a protein-coding gene. Core promoters contain necessary sequence features to recruit RNA polymerase II to form transcription initiation complexes and initiate the transcription of protein-coding genes [1]. Core promoters play important roles in gene expression regulation with respect to many aspects of transcription, including initiation, elongation, termination, polyadenylation, and finally, translation. The gene expression regulation based on the core promoter can be achieved by diverse mechanisms, including the transcription initiation mode, diversity in the core promoter composition, interactions of the basal transcription machinery with the core promoter, enhancer-promoter specificity, core promoter-preferential activation, enhancer RNAs, Pol II pausing, transcription termination, Pol II recycling, and translation [2]. The identification and characterization of core promoters is very important to understand how transcription occurs and how gene expression is regulated. The accurate annotation of core promoter architecture is largely dependent on the empirical determination of the 5′ end of mRNA transcripts by generating the expression profiles of transcription start sites (TSSs). High-throughput sequencing combined with oligo-capping, which yields millions of 5′ end sequences derived from 5′ capped mRNA transcripts produced by RNA polymerase II, can generate a genome-wide scale map of TSSs and efficiently contribute to the annotation of the core promoter architecture [3]. Transcription initiation occurs at multiple nucleotide positions within a core promoter region [4]. Therefore, core promoters contain not only a single TSS but also an array of closely located initiation sites. Conceptually, the core promoter is entirely different from alternative promoters, which generate alternative isoforms with either distinct 5′-untranslated regions or coding sequences. Multiple TSSs within the same core promoter usually respond in a similar manner to external stimuli and exhibit the same patterns of tissue specificity [5].
The precise annotation of core promoters is not only necessary for understanding the cis-regulatory elements controlling protein-coding gene transcription but also crucial for genome annotation. Despite the rapid generation of genome sequences of diverse insect taxa, the current official insect gene sets are mostly derived from RNA sequencing (RNA-seq) assemblies. Owing to the inherent technological limitations of RNA-seq, read coverage is strongly biased towards the 3′ landscape of the transcriptome, and the 5′ ends of transcript models are generally inaccurate [6]. The knowledge of transcription initiation and core promoter features in insects lags far behind that in vertebrates as a consequence of the relatively small number of genome-wide TSS studies conducted so far [7,8,9]. Owing to the absence of TSS studies comparing insect core promoter architecture beyond Diptera, the current understanding of insect core promoters is largely restricted to this order [10]. Therefore, in a large number of studies of insect promoters, the region less than 2 kb upstream of the start codon ATG site (the 2 kb limitation rule) is considered the putative promoter [11,12,13,14]. However, the extent to which this 2 kb limitation rule is valid remains an open question, especially in the large-sized genomes of insect species, owing to the reduced constraints on gene structure size.
The intron position is one of the critical factors in the regulation of transcription initiation [15]. Although a large majority of introns are located within open reading frames, introns in mRNA leaders (5′-UTRs) are common in complex eukaryotes [16]. Introns in mRNA leaders are spliced out before protein translation occurs. Although functions including promoting transcription and nuclear export have been reported for the introns in mRNA leaders [16], their regulatory role is often overlooked. Because intron size and abundance in mRNA leaders have been analyzed in only a few model organisms [15, 16], the effects of introns in mRNA leaders on transcription initiation have been rarely studied within the context of genome size variation.
The migratory locust, Locusta migratoria, is a global species representing a model system with remarkable phenotypic plasticity regulated by gene expression [17,18,19]. Its genome is approximately 6.5 Gb in size, which is at least 30 times larger than the fruit fly genome. The locust genome has undergone a size expansion in intronic and intergenic regions, resulting in a much larger genome and more loosely organized genes than in Drosophila melanogaster [20]. These genomic characteristics make the migratory locust a very important organism for analyzing the effects of genome expansion on core promoter features. However, no study has been performed to identify and characterize the TSSs and core promoters at the genome-wide level in locusts so far. The availability of a comprehensive map of the locust core promoters will provide the opportunity to explore the differences in core promoter characteristics between these two insect species separated by 350 million years of evolution [21].
In this study, we identified TSSs at the genome-wide level using 14 oligo-capping libraries derived from nine tissues or organs of the migratory locust. We identified TSS clusters (TSCs) by clustering individual TSSs along the genome into high-density TSS regions and characterized the core promoter features of locusts. We compared the general characteristics of the core promoter features and dynamics between locusts and fruit flies. Furthermore, we unexpectedly detected widespread distant transcription initiation and explored the distinct aspects of distant core promoters in locusts. In addition, we further identified TSCs in seven arthropod species across a broad range of genome sizes, and we revealed specific characteristics of transcription factor (TF) binding sites (TFBSs) of distant transcription initiation in the context of genome size.
Identification of transcription start sites and their clusters
To identify TSSs in the migratory locust, we mapped the oligo-capping sequencing reads from 14 libraries obtained from nine different tissues and organs, including the ovary, testis, wing, thoracic muscle, pronotum, labipalp, brain, fat body, and antenna (Additional file 1: Table S1). All of the oligo-capping libraries were sequenced using an Illumina NovaSeq 6000 System (150-bp paired-end reads). The sequencing of the oligo-capping libraries yielded 1893 million sequencing reads (284 Gb) in total, providing an unprecedented dataset for investigating the 5′ transcriptional start sites of mRNA transcripts in locusts. Only the read pairs that contained both the 5′ oligo-capping and 3′ oligo-capping adapters were mapped to the locust reference genome, with a mean mapping rate of 74.91%. The sequencing read pairs that were properly mapped to the reference genome were used for further analyses. The individual OTSSs were clustered along the genome into TSS clusters. The nucleotide composition of the OTSSs confirmed the absence of systematic G nucleotide addition bias, which is usually observed in the cap analysis gene expression technique (Additional file 1: Fig. S1). The number of OTSSs identified in each library ranged from 290,320 to 1,555,558, with a mean of 615,362 (Additional file 1: Fig. S2). The OTSS number (5,230,229) identified from the combined data of all the libraries was 3.36–18.02 times the number of TSCs identified in any single library (Additional file 1: Fig. S3), suggesting the necessity of investigating more tissues and organs to obtain a more comprehensive TSS landscape in locusts. The OTSSs located within the 1000 bp upstream of the translation start codon formed a broad distribution (Additional file 1: Fig. S4). As expected, the majority of OTSSs (66.94%) were mapped to the intergenic regions (Fig. 1a), indicating that widespread transcription is initiated from noncoding regions in the locust genome.
Characteristics of TSSs and TSCs in locusts. a Distribution of OTSSs in different genomic regions. b Metaprofile of TSCs across the gene bodies of protein-coding genes in the official gene set of locusts. c Consensus 25-bp sequences surrounding the dominant TSSs in different genomic regions. The symbol height within the stack indicates the relative frequency of each nucleic acid at that position. The frequency of each nucleotide for each position was represented using the R package Seqlogo. d Sunburst charts summarizing the identified TSCs that are derived from TEs
We identified TSCs, which are high-density regions of TSSs, by clustering individual OTSSs along the genome. To avoid false TSCs, the TSCs supported by fewer than 3 tags per million (TPM) were excluded from further analyses because they probably derive from truncated transcripts and cryptic transcripts produced by the inherent nature of the basic transcriptional machinery [22]. The number of TSCs identified in each library ranged from 22,858 to 47,615, with a mean of 36,229. The combined data from all libraries yielded 72,280 TSCs, which were used as the initial set of TSCs. In this initial set, we observed a large proportion (56.39%) of 1-bp-wide TSCs in the intergenic regions (Additional file 1: Fig. S5).
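A minimal sketch of this clustering-and-filtering step is shown below; the 20-bp merge distance and the library-wide normalization are illustrative assumptions of ours, not parameters reported in the text.

```python
from collections import defaultdict

def cluster_tss(tss_counts, max_gap=20, min_tpm=3.0):
    """Cluster single-nucleotide TSS positions into TSS clusters (TSCs).

    tss_counts: dict mapping (chrom, strand, position) -> read-tag count.
    Positions on the same chromosome/strand separated by at most `max_gap` bp
    are merged; clusters supported by fewer than `min_tpm` tags per million
    are discarded as putative truncation/cryptic-transcription artifacts.
    """
    total_tags = sum(tss_counts.values())
    by_strand = defaultdict(list)
    for (chrom, strand, pos), n in tss_counts.items():
        by_strand[(chrom, strand)].append((pos, n))

    clusters = []
    for (chrom, strand), sites in by_strand.items():
        sites.sort()
        start, end, tags = None, None, 0
        for pos, n in sites:
            if start is None or pos - end > max_gap:
                if start is not None:
                    clusters.append((chrom, strand, start, end, tags))
                start, end, tags = pos, pos, n
            else:
                end, tags = pos, tags + n
        if start is not None:
            clusters.append((chrom, strand, start, end, tags))

    # Keep clusters with expression >= min_tpm (tags per million mapped tags)
    return [c for c in clusters if c[4] / total_tags * 1e6 >= min_tpm]
```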
To discover motifs within TSCs, we generated a consensus sequence logo of the 25 bp surrounding the dominant TSSs in each TSC. The composition of [− 1, + 1] initiator dinucleotides revealed a severe overrepresentation of pyrimidine–purine (PyPu) dinucleotides in the non-1-bp-wide TSCs located in the 5′-UTRs, coding DNA sequences (CDSs), and intronic and intergenic regions, but not in those located in the 3′-UTRs (Additional file 1: Fig. S6). However, PyPu dinucleotide initiators were absent in all of the 1-bp-wide TSCs except those located in 5′-UTRs. These results imply that a considerable portion of the 1-bp-wide TSCs were derived from false-positive signals due to the experimental limitations of the oligo-capping method. Thus, we applied additional correction steps to remove putative false TSCs. A large proportion of the identified TSCs showed significant enrichment of the TGAG motif and its 1-bp-substitution variants upstream of the dominant TSS sites of TSCs (Additional file 1: Fig. S7 and Tables S2–S7). These results suggested that these 1-bp-wide TSCs were likely false TSCs derived from mis-hybridization of the 5′ oligo-capping adapters to internal RNA molecules (Additional file 1: Fig. S8). Therefore, the TSCs showing significant enrichment (observed number greater than 20 and a q-value of less than 1e−10) of the TGAG motif and its 1-bp-substitution variants were removed from further analysis. The insert fragments generated during sequencing library construction showed a unimodal distribution with a peak at approximately 300–400 bp. Internal false signals, by contrast, arise from possibly truncated mRNAs, for which the 3′ ends of the insert fragments do not obey these insert-size limits along the mRNA transcripts (Additional file 1: Fig. S9). As expected, the 3′ end distribution (median = 352.3 bp; first quartile to third quartile, 229.1–557.3 bp) of the mean insert fragments, inferred by determining the start sites of the paired R2 (reverse) reads, was consistent with the insert fragment distribution observed in sequencing library construction. The inferred 3′ end distribution of the insert fragments was unimodal and asymmetric, with a long tail to the right (Additional file 1: Fig. S10). Therefore, the TSCs for which the distance from the 3′ ends of the insert fragments to the start sites of the mRNAs deviated above the 90% quantile threshold were considered to come from possibly truncated mRNAs and were removed from further analysis (Additional file 1: Fig. S11). It is worth noting that a total of 8247 intergenic TSCs (2252 1-bp-wide and 5995 non-1-bp-wide TSCs) could be assigned to the mRNAs of protein-coding genes by paired R2 read linking. This result suggests that the start sites of mRNAs in the official gene set do not represent authentic TSS sites because of the technological inability to achieve sufficient 5′-UTR coverage using standard Illumina RNA-seq. Overall, we identified 38,136 TSCs in the final set after removing the false TSCs derived from adapter mis-hybridization and internal truncation sites. The width of most of the identified TSCs in the final set (median = 38 bp and 90% quantile = 133 bp for non-1-bp-wide TSCs) was less than 150 bp, which is consistent with that of Drosophila TSCs (median = 36 bp and 90% quantile = 152 bp for non-1-bp-wide TSCs) identified via the RAMPAGE method [23]. The performance of TSC identification was assessed on the basis of the distribution of the identified TSCs over gene bodies.
Compared with that of the TSCs in the initial set, the higher enrichment of TSCs in the 5′ ends of protein-coding genes in the final set indicated that the application of the correction steps greatly improved the ability to distinguish between authentic TSCs and false TSCs (Fig. 1b). The composition of [− 1, + 1] initiator dinucleotides showed that the PyPu dinucleotide initiators are preferentially used as TSSs in the different genomic regions (Fig. 1c). Thus, these 38,136 TSCs in the final set are reliable TSCs in locusts.
Compared with distance-based promoter identification, promoter identification involving the paired-read-based assignment rule allows the more accurate assignment of core promoters to existing gene models and provides direct evidence for characterizing the transcription start site landscape of protein-coding genes. For each TSC, if an insert fragment with its 5′ end in the TSC and its 3′ end in an annotated exon of a protein-coding gene was identified, the TSC was functionally linked to the gene. The 38,136 reliable TSCs in the final set were linked to annotated protein-coding genes based on gene structure information using the paired-read-based assignment rule. We thus assigned 48.0% (18,305 of 38,136) of the reliable TSCs to annotated protein-coding genes, and the remaining 52.0% represented potential initiation sites of unannotated non-protein-coding genes. The comparison of biological replicates of the tissue and organ data confirmed the ability to quantify promoter expression using the oligo-capping method with excellent quantification reproducibility (Additional file 1: Fig. S12, Ps < 2.2e−16, Pearson's R = 0.99; Pearson correlation coefficients were obtained using the TPM values of all of the detected TSCs). To identify the genic TSCs whose promoter activities are provided by transposable elements (TEs), we searched for TE-containing TSCs that drive the expression of annotated protein-coding genes. We identified a considerable proportion (14.64%, 2679 of 18,305) of reliable TSCs derived from 36 families of TEs that drive the expression of 2190 annotated protein-coding genes. This observation was more prevalent among TSCs located in intergenic regions. Although all three major classes of locust TEs were present (non-LTR, LTR, and DNA), the percentage of genic TSCs whose expression was driven by TEs was not directly proportional to the composition of TEs in the locust genome (Fig. 1d). For example, members of the RTE/BovB subfamily (constituting 244 Mb [4.05%] of the locust genome), which is the most prevalent TE subfamily in the locust genome [20], contributed only 0.07% (2 of 2679) of the genic TSCs.
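The paired-read assignment rule can be sketched as follows; the brute-force interval checks stand in for the interval-indexed implementation one would use in practice, and all data structures shown here are hypothetical.

```python
def overlaps(chrom, strand, pos, interval):
    """True if a single position falls inside an interval (chrom, strand, start, end)."""
    c, s, start, end = interval
    return c == chrom and s == strand and start <= pos <= end

def link_tscs_to_genes(fragments, tscs, gene_exons):
    """Minimal sketch of the paired-read assignment rule: a TSC is linked to a
    gene when at least one insert fragment starts (5' end) inside the TSC and
    ends (3' end) inside an annotated exon of that gene.

    fragments:  iterable of (chrom, strand, five_prime_pos, three_prime_pos).
    tscs:       dict tsc_id -> (chrom, strand, start, end).
    gene_exons: dict gene_id -> list of (chrom, strand, start, end) exons.
    """
    links = set()
    for chrom, strand, p5, p3 in fragments:
        hit_tscs = [t for t, iv in tscs.items() if overlaps(chrom, strand, p5, iv)]
        if not hit_tscs:
            continue
        hit_genes = [g for g, exons in gene_exons.items()
                     if any(overlaps(chrom, strand, p3, e) for e in exons)]
        for t in hit_tscs:
            for g in hit_genes:
                links.add((t, g))
    return links
```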
Characteristics of locust core promoters
We obtained 22,820 genic TSCs in Drosophila melanogaster (Additional file 1: Fig. S13) and used them to compare core promoter characteristics between locusts and fruit flies [7, 23]. Similar initiator elements (PyPu dinucleotides; notably, because no mutational analysis was performed, the identified PyPu dinucleotide is regarded as an overrepresented motif rather than a functionally validated element) were detected in the genic TSCs of locusts and fruit flies (Additional file 1: Fig. S14A). However, the analysis of the nucleotide composition flanking the PyPu dinucleotide of core promoters in locusts revealed a preference for T/A usage 2 bp downstream, which was not observed in fruit flies. By examining the AT contents of the 2-kb regions flanking core promoters, we found two strikingly distinct patterns in the nucleotide composition of locusts and fruit flies. Unlike in fruit flies, enrichment of GC nucleotides was observed in the 500-bp regions flanking locust core promoters, emphasizing the preferential location of locust core promoters in GC-rich regions (Fig. 2a). Although there is evidence of DNA methylation in gene bodies and repeat regions in locusts, the status of promoter methylation has not been explored because of the unavailability of promoter data [20, 24]. The relative depletion of CpG dinucleotides is negatively correlated with DNA methylation. Thus, we performed a normalized CpG content analysis to determine whether the increases in GC content were associated with the existence of DNA methylation. The fruit fly exhibited a unimodal distribution of normalized CpG content (CpG o/e, CpG observed/expected ratio) centered on approximately 0.93 (a signature of the absence of DNA methylation), and the CpG o/e values consistently remained at approximately 0.93 across the 2-kb region flanking the PyPu dinucleotide (Additional file 1: Fig. S15). However, the locust exhibited a broad distribution of CpG o/e values (Additional file 1: Fig. S16), and the 2-kb regions flanking the locust core promoters exhibited signatures of gradual CpG restoration (Fig. 2b) as the distance to the dominant OTSS decreased [25]. Thus, the CpG occurrence peaks (approaching the right side of the bimodal CpG o/e value distribution) at the center of the locust core promoters suggest the absence of DNA methylation in locust core promoters, despite the enrichment of GC nucleotides in the 500-bp flanking regions.
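The windowed GC and normalized CpG profiles can be computed as in the following sketch, which assumes the standard observed/expected definition of CpG content; the 50-bp window follows the window size quoted in the figure legend, while the step size is an illustrative choice.

```python
def gc_and_cpg_oe(seq, window=50, step=50):
    """GC content and normalized CpG content (observed/expected CpG) in
    sliding windows along a sequence centered on a dominant TSS.

    CpG o/e in a window is the observed CpG count divided by the expected
    count (C count * G count / window length) under base independence."""
    seq = seq.upper()
    profiles = []
    for start in range(0, len(seq) - window + 1, step):
        w = seq[start:start + window]
        c, g = w.count("C"), w.count("G")
        cpg = w.count("CG")
        gc_content = (c + g) / window
        expected_cpg = c * g / window
        cpg_oe = cpg / expected_cpg if expected_cpg > 0 else float("nan")
        profiles.append((start, gc_content, cpg_oe))
    return profiles

# Example: profile of a hypothetical 4-kb region centered on a dominant TSS
# print(gc_and_cpg_oe("ACGTGCGC" * 500))
```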
Characterization of core promoters in locusts. a Patterns of the GC content in the 2-kb flanking region of transcription start sites. The deviation of the GC content in sliding windows was determined using the GC content normalized to the mean GC content. b CpG occurrence in the 2-kb flanking region of transcription start sites. Normalized CpG contents (CpG observed/expected, CpG oe) and GC contents were computed in a 50-bp sliding window across 4-kb regions centered on the dominant TSSs. c De novo motif discovery in the 20 to 40 bp region upstream of the dominant OTSSs of core promoters. d TATA-box signals in the upstream regions of PyPu dinucleotides in the core promoters with different promoter shapes. e Density distribution of the PSS values of promoters with ubiquitous and restricted TSC expression. f Shannon index of OTSS diversity in genic core promoters. g Shannon index of OTSS diversity in genic core promoters in the down-sampled data. h Shannon diversity index of OTSSs in the core promoters with different promoter shapes. i Density of SNPs flanking the transcription start sites. The TSCs flanked by repetitive elements were not included in this comparison. The red asterisk indicates P < 2.2e−16 according to the Wilcoxon rank-sum test
To identify the well-accepted Drosophila core promoter elements, the consensus sequences (Additional file 1: Table S8) of the TATA-box, initiator (Inr), polypyrimidine initiator (TCT), motif ten element (MTE), and downstream core promoter element (DPE) were obtained from a recent review [26]. These consensus sequences were used for pattern matching of the putative Drosophila core promoter elements, allowing one mismatch. There was no obvious difference in the percentage of Drosophila core promoter elements between the locust and fruit fly core promoters (Additional file 1: Fig. S14B). We extracted random genomic sequences equal in number to the genic TSCs identified in locusts (N = 18,305) and fruit flies (N = 22,820), respectively. The percentages of the PyPu dinucleotide in the locust (85.38% in the genic TSCs and 6.87% in the random sequences) and fruit fly (86.49% in the genic TSCs and 23.75% in the random sequences) core promoters are statistically different (Ps < 2.2e−16, chi-squared tests) from those in the random sequences. The statistical differences (Ps < 2.2e−16, chi-squared tests) also hold for the TATA-box elements in both locusts (18.19% in the genic TSCs and 10.45% in the random sequences) and fruit flies (17.61% in the genic TSCs and 10.47% in the random sequences).
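A sketch of consensus matching with at most one mismatch is given below; the IUPAC consensus string used in the example is only a placeholder for the published element consensuses listed in Additional file 1: Table S8.

```python
IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "CG", "W": "AT", "K": "GT", "M": "AC",
    "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT",
}

def matches_consensus(seq, consensus, max_mismatch=1):
    """Check whether seq matches an IUPAC consensus, allowing up to
    max_mismatch mismatching positions."""
    if len(seq) != len(consensus):
        return False
    mismatches = sum(base not in IUPAC[c]
                     for base, c in zip(seq.upper(), consensus.upper()))
    return mismatches <= max_mismatch

def scan_promoter(promoter_seq, consensus, region=(0, None), max_mismatch=1):
    """Slide the consensus over a promoter region and report matching offsets."""
    start, end = region
    end = len(promoter_seq) if end is None else end
    k = len(consensus)
    return [i for i in range(start, end - k + 1)
            if matches_consensus(promoter_seq[i:i + k], consensus, max_mismatch)]

# Example with a placeholder TATA-like consensus (illustrative only)
print(scan_promoter("GGGCTATAAAAGGCGTCAGT", "STATAWAAR"))
```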
Like in fruit flies, we observed an increase in AT contents in the 20 to 40 bp regions upstream of the PyPu dinucleotide in locusts; these are typical regions in which TATA-box elements are located (Additional file 1: Fig. S17). We performed a de novo motif discovery analysis to identify the potential enriched motifs in the 20 to 40 bp regions upstream of the PyPu dinucleotide in both species. Two different TATA-box motif variants were present in the 20 to 40 bp regions upstream of the PyPu dinucleotide in both species. In fruit flies, the TATA-box motif variant identified by our de novo motif discovery analysis was identical to the TATA-box motif (matrix profile POL012.1) deposited in the JASPAR database [27]. Although the four core nucleotides (TATA) were present in both species, the distinct hallmark of the TATA-box motif variant in locusts was a G/C preference in the 3 bp upstream region of the TATA core nucleotides (Fig. 2c), consistent with the increases in GC nucleotides in locust core promoters. The GC contents in the 3 bp region upstream of the four core nucleotides in locusts and in fruit flies are 92.3% and 53.5%, respectively.
Imprecise transcription initiation and symmetrical pattern of the SNP density of locust core promoters
Transcription can be initiated at precise genomic regions or dispersed genomic regions, a distinction referred to as promoter shape. Distinct promoter classes are defined based on the shape of the TSS distribution: sharp core promoters or broad core promoters [3, 28]. The sharp and broad structures of core promoters are largely conserved across species and are likely to be associated with different functional motifs, emphasizing distinct functional roles between different shapes [9, 29]. The Promoter Shape Score (PSS), a metric for describing promoter shape, was determined to characterize the shape of the locust core promoters. We classified the core promoters based on PSS values and examined the association between the promoter shape and TATA-box signal. On the basis of a previous study [30], the core promoters were divided into three categories: sharp (PSS > − 10) core promoters, intermediate (PSS ≤ − 10 and PSS > − 20) core promoters, and broad (PSS ≤ − 20) core promoters. Both locusts and fruit flies showed a broad PSS value distribution, suggesting that the transcription can be initiated from precise genomic regions to dispersed genomic regions in insects. We observed an obvious tendency for TATA-box signals to appear in upstream regions of PyPu dinucleotides in sharp core promoters in both locusts and fruit flies. However, the sharp core promoters of locusts showed stronger TATA-box signals than those of fruit flies (Fig. 2d), despite the lower overall TA content in the core promoters of locusts. The broad core promoters driving the transcription of ubiquitously expressed genes were TATA-less promoters in both species [31, 32]. To explore the roles of promoter shape in reflecting TSC expression specificity, we used τ to measure the expression specificity. τ varies from 0 to 1, where 0 indicates ubiquitous expression and 1 indicates restricted expression. Contrary to protein-coding genes with a bimodal distribution of τ scores [33], the τ scores in locusts were skewed towards classifying many core promoters as showing restricted expression (Additional file 1: Fig. S18) [34]. We performed Gene Ontology (GO) enrichment analysis to classify the core promoters with ubiquitous and restricted expression according to the functional annotation of linked protein-coding genes. The sets of ubiquitously expressed core promoters were predominantly enriched for GO categories associated with the general/basic functions such as ncRNA processing, translation, and regulation of RNA metabolic processes. Conversely, the core promoters with restricted expression patterns were enriched for specific biological processes related to synaptic transmission, neuron fate specification, regulating TF activities and signaling pathways, and response to light intensity (Additional file 1: Fig. S19). Like in fruit flies, the genic TSCs with ubiquitous expression patterns tended to form broad core promoters in locusts (mean PSS = − 33.62 in locusts and mean PSS = − 30.91 in fruit flies, Fig. 2e). The genic TSCs with restricted expression patterns tended to form sharp core promoters in fruit flies, whereas the genic TSCs with restricted expression patterns exhibited a broader distribution in terms of promoter width in locusts (mean PSS = − 15.87 in locusts and mean PSS = − 6.48 in fruit flies, P < 2.2e−16, Wilcoxon rank-sum test). Therefore, both sharp and broad core promoters in locusts can drive the transcription of protein-coding genes with restricted expression.
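The specificity index τ referred to here is commonly computed as in Yanai et al.; the sketch below shows that standard formulation, with the caveat that any log-transformation or other preprocessing applied in this study is not specified here and is therefore an assumption.

```python
import numpy as np

def tau(expression):
    """Tissue-specificity index tau: 0 for ubiquitous expression,
    1 for expression restricted to a single tissue.

    expression: non-negative expression values of one TSC (e.g. TPM),
    one entry per tissue/organ."""
    x = np.asarray(expression, dtype=float)
    if x.max() == 0:
        return np.nan
    x_hat = x / x.max()
    return float((1.0 - x_hat).sum() / (len(x) - 1))

# Example: expression of one TSC across nine tissues/organs
print(tau([120, 2, 0, 1, 3, 0, 0, 2, 1]))        # close to 1 -> restricted
print(tau([50, 45, 60, 52, 48, 55, 47, 51, 49]))  # close to 0 -> ubiquitous
```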
In both restricted and ubiquitously expressed TSCs, the locusts showed a greater proportion of PSS values towards the left tail (Fig. 2e) than fruit flies. Therefore, we asked whether the imprecision of transcription initiation in locusts is higher than that in fruit flies. Because imprecise transcription initiation in a genic TSC results in an increased number of OTSSs (OTSS diversity), we compared the number of OTSSs of genic TSCs between locusts and fruit flies. We found that the mean number of OTSSs of genic TSCs in locusts was significantly higher than that in fruit flies (P < 0.05, Wilcoxon rank-sum test). Furthermore, we assessed the OTSS diversity of genic TSCs using the Shannon index (H), which is a diversity index that takes into account not only the OTSS number but also the evenness of the relative usage of different OTSSs [35]. In general, the H values of locusts were significantly higher than those of fruit flies (Fig. 2f, mean H values = 2.84 in locusts and mean H values = 2.35 in fruit flies, P < 2.2e−16, Wilcoxon rank-sum test), suggesting increased OTSS diversity of genic TSCs in locusts. To exclude the potential influences of the different TSS profiling methods applied and unequal sequencing depths in the locust and fruit fly data, we performed down-sampling analyses to examine the robustness of the above results. The P value of the Wilcoxon rank-sum test remained significant in the down-sampled data (Fig. 2g, mean H values = 1.80 in locusts and 1.58 in fruit flies, P < 2.2e−16, Wilcoxon rank-sum test). To describe the mean relationship between TSC expression and OTSS diversity using a partitioning method, we grouped the genic TSCs into 10 bins based on their expression quantile ranges. The binscatter plot shows that the TSC expression in both locusts and fruit flies is significantly positively (Additional file 1: Fig. S20, Pearson's R = 0.75 in locusts and 0.51 in fruit flies; Ps < 2.2e-16) correlated with OTSS diversity, suggesting that increases in TSC expression are generally achieved by the activation of transcription initiation from expanding OTSSs (increasing OTSS diversity within individual genic TSCs) in insects. When the genic TSCs were grouped into three categories based on PSS values, we found that the H values of the nonsharp core promoters in locusts were significantly higher than those in fruit flies (Fig. 2h, P < 2.2e−16 in the broad core promoters and P < 2.2e−16 in the intermediate core promoters, Wilcoxon rank-sum tests). However, a similar observation was not made for the sharp core promoters. Overall, the increased OTSS diversity of genic TSCs indicates a lower precision of transcription initiation of the broad and intermediate core promoters in locusts compared with fruit flies.
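The OTSS diversity measure and the depth-equalizing down-sampling can be sketched as follows; the random-draw scheme shown is an illustrative assumption and may differ from the procedure actually used in the study.

```python
import numpy as np

def shannon_index(tag_counts):
    """Shannon diversity index H of OTSS usage within one genic TSC:
    H = -sum(p_i * ln p_i) over the relative tag frequencies p_i of the
    individual OTSS positions in the cluster."""
    counts = np.asarray(tag_counts, dtype=float)
    counts = counts[counts > 0]
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def downsample_counts(tag_counts, n_tags, seed=0):
    """Randomly draw n_tags tags (without replacement) from a TSC to equalize
    sequencing depth before comparing H between datasets."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(tag_counts, dtype=int)
    positions = np.repeat(np.arange(counts.size), counts)
    drawn = rng.choice(positions, size=min(n_tags, positions.size), replace=False)
    return np.bincount(drawn, minlength=counts.size)

# Example: one TSC with five OTSS positions
tags = [120, 40, 15, 5, 1]
print(shannon_index(tags), shannon_index(downsample_counts(tags, 50)))
```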
Genomic regions flanking TSSs are enriched in functionally important regulatory elements, which show depletion of single-nucleotide polymorphisms (SNPs) due to evolutionary conservation [36]. To determine the sequence variability of the genomic regions flanking TSSs, we extracted 1-kb fragments centered on TSSs and computed the position-specific density of SNPs using the resequencing data of locusts and fruit flies [37, 38]. We found two distinctive patterns of SNP density in the vicinity of dominant OTSSs in the two insect species (Fig. 2i); symmetrical and asymmetrical patterns of SNP density were observed in locusts and in fruit flies, respectively. The steep decline in the SNP density at approximately 250 bp upstream of dominant OTSSs suggests immediate constraints imposed by the presence of TFBSs or regulatory elements. However, the gradual decrease in SNP density from 1 kb upstream to the center of dominant OTSSs in locusts indicated fewer constraints on the distance from TFBSs or regulatory elements to TSSs in locusts than in fruit flies.
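A minimal sketch of the position-specific SNP density computation is shown below; the 10-bp binning and the strand-flipping convention are our assumptions for illustration.

```python
import numpy as np

def snp_density_profile(tss_list, snp_positions, flank=500, bin_size=10):
    """Position-specific SNP density in the +/- flank bp around dominant TSSs.

    tss_list:      list of (chrom, strand, position) of dominant TSSs.
    snp_positions: dict chrom -> sorted numpy array of SNP coordinates.
    Returns per-bin SNP counts normalized by the number of TSSs, oriented so
    that negative offsets are upstream regardless of strand."""
    n_bins = 2 * flank // bin_size
    profile = np.zeros(n_bins)
    for chrom, strand, pos in tss_list:
        snps = snp_positions.get(chrom)
        if snps is None:
            continue
        lo, hi = np.searchsorted(snps, [pos - flank, pos + flank])
        offsets = snps[lo:hi] - pos
        if strand == "-":
            offsets = -offsets  # flip so that upstream is always negative
        bins = (offsets + flank) // bin_size
        bins = bins[(bins >= 0) & (bins < n_bins)]
        np.add.at(profile, bins.astype(int), 1)
    return profile / max(len(tss_list), 1)
```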
Alternative usage of core promoters of protein-coding genes in locusts
Based on the number of assigned core promoters, the protein-coding genes in locusts are divided into two categories: single-core-promoter genes and multicore-promoter genes. Multicore-promoter genes, i.e., those with two or more core promoters, accounted for 38.90% of the assigned protein-coding genes in locusts, a proportion similar to that in fruit flies (38.71%). Compared with the fly genome, in which 37.78% of the genome consists of intergenic regions, the locust genome is greatly expanded, and a total of 73.67% of the locust genome consists of intergenic regions. The average length of the intergenic sequences in locusts is much greater than that in the fruit fly genome (217.63 kb in locusts versus 4.42 kb in fruit flies). Therefore, despite the remarkable discrepancy in the sizes of intergenic regions between the species, the proportions of multicore-promoter genes are similar between locusts and fruit flies.
For multicore-promoter genes, the distributions of OTSSs between core promoters differed considerably in different tissues of locusts. This alternative usage of core promoters is also referred to as core-promoter shifting. To determine the prevalence of core-promoter shifting, we calculated the degree of shift (Ds value), which quantifies changes in the OTSS distribution between core promoters for each multicore-promoter gene. The distribution of Ds values approximately followed a normal distribution (Additional file 1: Fig. S21), implying the dynamic usage of core promoters. We found that 31.09% of multicore-promoter genes underwent a significant shift in core-promoter usage in at least one tissue or organ (Ds values < − 1 or Ds values > 1, P < 0.05 and FDR < 0.1, chi-squared tests, Additional file 1: Fig. S22). This suggests pervasive variability of the 5′-UTR sequences of protein-coding genes in locusts, with implications for translation start site selection in a tissue-specific manner.
Adjacent and distant core promoters in locusts and flies
The density distribution of the distances from the annotated start codons to the farthest upstream genic core promoters showed a distinctive bimodal log distribution with a valley at 3 kb in locusts (Fig. 3a), whereas only a unimodal distribution with a peak at approximately 100 bp was found in fruit flies. Thus, genome size expansion has loosened the spatial coupling between upstream regulatory elements and coding sequences in locusts. This observation held when all of the core promoters of each protein-coding gene were included (Additional file 1: Fig. S23). Therefore, the core promoters in these two species were classified into adjacent and distant core promoters using a threshold of 3 kb. Compared with fruit flies (15.48%), locusts exhibited nearly three times the proportion of distant core promoters (45.02%, P < 2.2e−16, chi-squared test). Given that the mean length of mRNA leaders is much shorter than 3 kb, a considerable portion of protein-coding genes must contain introns located between the coding exons and the 5′-untranslated first exons.
Distant transcription initiation in locusts and fruit flies. a The density distribution of distances from the annotated start codons of protein-coding genes to their farthest upstream core promoters. b Boxplot showing the length difference of the mRNA leaders (5′-UTRs) with different intron numbers. c Boxplot showing the exon length difference in the mRNA leaders with different intron numbers. The red asterisk indicates P < 2.2e−16 according to the Wilcoxon rank-sum test. N.S., not significant. d Standard deviation of exon lengths in the mRNA leaders with different intron numbers. The red asterisk indicates P < 2.2e−16 according to the Wilcoxon rank-sum test. N.S., not significant
For the protein-coding genes with annotated mRNA leaders, the intron number in mRNA leaders was less than 3 in both locusts (99.98%) and fruit flies (99.16%), suggesting strong selective constraints against the presence of many introns in mRNA leaders. In mRNA leaders, the intron lengths in locusts (median length = 14,302 bp, 25% quantile = 1957 bp, and 75% quantile = 34,602 bp for the mRNA leaders that have one intron [MLOI]; median length = 17,627 bp, 25% quantile = 8170 bp, and 75% quantile = 46,448 bp for the mRNA leaders that have two introns [MLTI]) were significantly longer (Ps < 2.2e−16, Wilcoxon rank-sum tests) than those in fruit flies (median length = 929 bp, 25% quantile = 173 bp, and 75% quantile = 3105 bp for MLOI; median length = 3068 bp, 25% quantile = 598 bp, and 75% quantile = 9430 bp for MLTI; Additional file 1: Fig. S24). Furthermore, in both locusts (P = 1.303e−06, Wilcoxon rank-sum test) and fruit flies (P < 2.2e−16, Wilcoxon rank-sum test), the mean intron length in MLTI was significantly longer than that in MLOI. The median length of the mRNA leaders in locusts was 133 bp (25% quantile = 70 bp and 75% quantile = 230 bp), which was similar (P = 0.9652, Wilcoxon rank-sum test) to that in fruit flies (median length = 117 bp, 25% quantile = 67 bp, and 75% quantile = 263 bp). Significant increases (Ps < 2.2e−16, Wilcoxon rank-sum tests) in the length of mRNA leaders were accompanied by increases in intron numbers in both locusts and fruit flies (Fig. 3b). Furthermore, the mRNA leader lengths in locusts were significantly shorter (Ps < 2.2e−16, Wilcoxon rank-sum tests) than those in fruit flies for both MLOI and MLTI. However, no significant length difference between locusts and fruit flies was observed for the mRNA leaders without introns (MLIPs). In the exon-level comparison of mRNA leaders, only in fruit flies were significant increases in exon length (Ps < 2.2e−16, Wilcoxon rank-sum tests) accompanied by increases in intron numbers (Fig. 3c). The exon lengths of both MLOI and MLTI in locusts were significantly shorter (Ps < 2.2e−16, Wilcoxon rank-sum tests) than those in fruit flies. In addition, a comparison of the standard deviations showed that the exon lengths (mean = 118.5 bp) of MLTI varied less than those of either MLIP or MLOI in locusts (Fig. 3d).
Both the single-core-promoter genes (34.23% in locusts and 4.70% in fruit flies are distant core promoters, P < 2.2e−16, chi-squared test) and the multicore-promoter genes (68.24% in locusts and 37.90% in fruit flies are distant core promoters, P < 2.2e−16, chi-squared test) of locusts exhibited a significantly greater proportion of distant core promoters than those of fruit flies. Furthermore, the multicore-promoter genes presented a significantly greater proportion of distant core promoters than the single-core-promoter genes in both locusts and fruit flies (Ps < 2.2e−16, chi-squared tests). One distinctive difference between the two types of core promoters in the two species was their transcriptional abundance. As expected, the majority of the adjacent core promoters in fruit flies showed higher transcriptional abundance than the distant core promoters, and the transcriptional activities of distant core promoters were significantly weaker than those of adjacent core promoters in fruit flies. In contrast, the scatter plot of TSC expression quantiles indicated that a substantial portion of the distant core promoters in locusts showed high transcriptional activity (Fig. 4a). Furthermore, we observed a weak but significant positive correlation (Pearson's R = 0.12 and P = 5.192e−15) between the distances from the annotated start codons to the upstream core promoters and TSC expression (TPM) levels for distant core promoters in locusts. No similar positive correlation was observed for either the adjacent core promoters in locusts or the adjacent/distant core promoters in fruit flies (Fig. 4b). Broad core promoters were detected in the adjacent core promoters of both locusts and fruit flies (Fig. 4c). In fact, distant core promoters were depleted among the broad core promoters of fruit flies, whereas broad core promoters were found in a substantial proportion of distant core promoters in locusts.
Comparison of adjacent and distant core-promoter genes between locusts and fruit flies. a Scatter plot of the TSC expression quantiles and the distances from the annotated start codons of protein-coding genes to the upstream core promoters. The density of points is shown using the smoothScatter kernel-based density function in R. b Correlation between the distances from the annotated start codons to the upstream core promoters and TSC expression (TPM). c Density distribution of the PSS values of adjacent and distant core promoters. d Relationships between genic TSC expression and the shape dynamics of core promoters. All of the core promoters were sorted by the TPM of each genic TSC and were assigned to expression quantiles for each species. For all core promoters, we used a 200-core-promoter window size with a moving step size of 40 core promoters. The data represent the mean PSS values and mean TSC expression, which are normalized on the basis of the maximum TPM value of each category. The smooth lines were plotted with stat_smooth in R using the ggplot2 package
To determine whether the overrepresentation of distant broad core promoters in locusts reflects TSC expression specificity, we used τ to measure expression specificity. No significant differences in the TSC expression specificity of distant core promoters were detected between locusts and fruit flies (Ps > 0.05, Wilcoxon rank-sum test), indicating that both ubiquitous and restricted expression patterns are present among the distant core promoters of locusts and fruit flies. Taken together, these results suggest that the detection of broad core promoters with restricted expression patterns in locusts reflects distinct fundamental aspects of gene regulation in locusts and fruit flies rather than differences in expression specificity, pointing to a lower precision of transcription initiation of locust genic TSCs with restricted expression patterns.
To relate TSC expression to the shape dynamics of core promoters, we performed a sliding window analysis using PSS values and TSC transcriptional abundances. By plotting the PSS values against the expression quantiles of each sliding window, we found that increases in TSC expression were generally accompanied by decreases in PSS values (Fig. 4d); enhanced TSC transcriptional abundance is gradually associated with broader core promoters in both locusts and fruit flies. Therefore, increases in TSC expression are generally achieved by the activation of transcription initiation from expanding OTSSs in insects, rather than by increased expression of sharp promoters with only a few OTSSs, demonstrating that higher gene expression is associated with broader promoter shapes (more negative PSS values). Compared with distant core promoters, adjacent core promoters showed higher expression activity of genic TSCs in locusts and fruit flies. In addition, similar to fruit flies, the spatial distribution of OTSS signals varied considerably among adjacent core promoters in locusts, spanning a range of promoter shapes from sharp to broad. However, the curves for the distant core promoters of locusts approached those of the adjacent core promoters. Thus, unlike in fruit flies, the distant core promoters in locusts showed higher expression activities of genic TSCs and became broader with respect to promoter width.
Distant core promoter emergence in the context of genome size evolution in insects
To examine the presence of distant core promoters in the context of genome size, we further generated oligo-capping data from seven arthropod species (six insect species and one chelicerate species), whose genome sizes are much smaller than that of the locust (Additional file 1: Table S9). The insect species, whose genomes represent a wide range of sizes, included Tribolium castaneum (red flour beetle, Coleoptera, ~ 0.17 Gb), Bombus terrestris (buff-tailed bumblebee, Hymenoptera, ~ 0.25 Gb), Helicoverpa armigera (cotton bollworm, Lepidoptera, ~ 0.34 Gb), Laodelphax striatellus (small brown planthopper, Hemiptera, ~ 0.54 Gb), Acyrthosiphon pisum (pea aphid, Hemiptera, ~ 0.54 Gb), and Aedes aegypti (yellow fever mosquito, Diptera, ~ 1.28 Gb). We also included a chelicerate species, Tetranychus urticae (two-spotted spider mite, Trombidiformes, ~ 0.09 Gb), as it has the smallest genome size among the sequenced arthropod species. The same TSC identification and false TSC removal approaches that were used for the locust data were applied to these arthropod data (Additional file 1: Table S10). The presence of the PyPu dinucleotide initiators supports the authenticity of the identified TSCs (Additional file 1: Fig. S25). The genic core promoters, which were linked to the annotated protein-coding genes using the paired-read-based assignment rule, showed strong enrichment in the 5′ ends of the gene body (Additional file 1: Fig. S26). Except for the yellow fever mosquito, a unimodal log distribution of the distances from the annotated start codons to the farthest/all upstream genic core promoters was observed in all of the other arthropod species, reinforcing our results from the comparison of locusts and fruit flies (Fig. 5a and Additional file 1: Fig. S27). Like the migratory locust, the yellow fever mosquito (~ 1.28 Gb) also exhibited a bimodal log distribution, but with a minor peak shifted towards shorter distances, indicating a lower proportion of distant core promoters in this species. Furthermore, the significant positive correlation between the genome size and the number ratio of distant core promoters to adjacent core promoters suggests that genome size expansion results in the emergence of distant core promoters to initiate distant transcription (Fig. 5b). Because TEs are the dominant contributors to overall genome size variability in metazoan species [39], we next investigated the contribution of TE insertion into the upstream regions from the start codon to its corresponding core promoter. For the dominant portion of adjacent core promoters, TE sequences were not detected in the upstream regions from the start codon to the adjacent core promoter, suggesting strong resistance to TE insertion in adjacent transcription initiation (Fig. 5c, bottom panel). In contrast, in the upstream regions from the start codon to the distant core promoter, TE sequences could be detected in a large portion of distant core promoters. The decrease in genome size was accompanied by decreases in the average TE coverage in the upstream regions from the start codon to the distant core promoter (Fig. 5c, top panel).
Distant core promoter emergence in the context of genome size evolution in insects. a Density distribution of distances from the annotated start codons of protein-coding genes to their farthest upstream core promoters. b Correlation between the genome size and the number ratio of distant core promoters to adjacent core promoters. c Insertion of TEs in the upstream region from the start codon to the core promoter. Top panel: average TE coverage and its standard deviation in the upstream region from the start codon to the distant core promoter. Bottom panel: percentage of core promoters that do not contain TEs in the upstream region from the start codon to the distant core promoter. A, adjacent core promoter; D, distant core promoter. d TFBS abundance between distant and adjacent core promoters using the number of TFBSs per core promoter per TF. The heatmap was constructed using the log2-transformed ratios of the TFBS abundances between distant and adjacent core promoters. The TFBSs showing significant changes (chi-squared tests with Yates' correction) in at least one comparison were included in this analysis. Statistical significances were adjusted by Benjamini–Hochberg FDR multiple-testing correction. The asterisk indicates a significant difference in the TFBS abundance between adjacent and distant core promoters at a threshold of FDR < 0.01. e Estimation of TFBS divergence between distant and adjacent core promoters using normalized SE. The asterisk (P < 0.05) indicates a significant difference in the SE value between adjacent and distant core promoters according to the Wilcoxon rank-sum test. Aedes aegypti, AAEGY; Acyrthosiphon pisum, APISU; Bombus terrestris, BTERR; Helicoverpa armigera, HARMI; Laodelphax striatellus, LSTRI; Tribolium castaneum, TCAST; Tetranychus urticae, TURTI
The genomic organization of sequence-specific TFBSs provides key recognition signals for transcription initiation. We evaluated the extent of TF-mediated regulatory elements by analyzing TFBS occurrence, using the dominant TSSs as the reference point to determine the spatial distribution of TFBSs. TFBSs were enriched from − 125 to 0 bp relative to the dominant TSS, indicating a positioning bias of TFBSs relative to the TSS (Additional file 1: Fig. S28). To explore the different preferences of the sequence context in which TFBS motifs occur, we compared genome-wide TFBS abundances between adjacent and distant core promoters via the number of TFBSs per core promoter per TF (Fig. 5d). A greater number of TFs showed significantly higher TFBS abundance in the distant core promoters, whereas only a few TFs showed significantly higher TFBS abundance in the adjacent core promoters. This suggests that the overall architecture of TF-mediated regulation in distant transcriptional initiation is more variable than that in adjacent transcriptional initiation. To estimate the extent of sequence divergence of TFBSs between the adjacent and distant core promoters, we used a normalized Shannon entropy (SE) as a measure of the degree of conservation of TFBSs for each TF; the higher the SE value, the higher the sequence divergence of the TFBSs. In the migratory locust, the distant core promoters showed significantly higher SE values (P < 0.05, Wilcoxon rank-sum test) than the adjacent core promoters, indicating high variation in TFBSs in distant core promoters (Fig. 5e). In contrast, in the eight other species, the mean SE values were lower in distant core promoters than in adjacent core promoters, and a significantly lower SE value was observed in five of these eight species.
Exon length constraint of locust genes having distant core promoters
The locust genomic architecture is very different from that of the fly genome. Locust genes are characterized by short exons flanked by long introns. Intron size plays a critical role in determining the splicing efficiency and the recognition mode of the splicing program [40, 41]. The mean intron length in the locust genome is 11.12 kb, which is 12 times longer than the mean intron size (0.88 kb) in the fruit fly genome [20]. Compared with those of locusts, the introns of fruit flies are shorter, with more than half between 40 and 80 bp [42]. Regarding the recognition mode of the splicing program, shorter introns favor intron definition, and longer introns favor exon definition. Therefore, exon definition is the major recognition mode for vertebrate genes, while intron definition is common in lower eukaryotes [43]. The switch from intron definition to exon definition occurs when the intron length exceeds 250 bp [40]. In locust mRNA leaders, only 4.73% of introns are shorter than 250 bp, suggesting that the predominant recognition mode in their splicing generally depends on the exon definition model. The intron length distribution in mRNA leaders suggests that the presence of at least one intron in the mRNA leaders results in a transition from adjacent core promoters to distant core promoters in locusts. We found that the exon lengths in the mRNA leaders containing at least one intron in locusts are significantly shorter than those in fruit flies. Notably, the mRNA leaders containing two introns show the least variance in exon length in locusts, with a mean exon length of 118.5 bp. This mean exon length in locusts is consistent with the optimal exon length (~ 150 bp) of the vertebrate exon definition model, which was verified using artificial constructs in previous experimental studies [44]. In contrast, in fruit flies, the mRNA leaders with two introns show the most variance in exon length among all of the comparison groups. This increasing exon length may decrease exon inclusion [45]. These findings suggest that the constraints on the exon length of the mRNA leaders (especially those derived from distant core promoters) are more important than the constraints on the intron length of the mRNA leaders in the locust genome. In the human genome, short first exons lead to increases in H3K4me3 and H3K9ac at promoters and higher expression levels [46]. This is consistent with the detection of high TSC expression and the strict constraints on exon length in the mRNA leaders with two introns in the distant core promoters of locusts. Taken together, these results suggest that genome size expansion has played a determinant role in the prevalence of splicing recognition modes in locusts and that there is a selection constraint on exon length to preserve optimal splicing effectiveness for distant core promoters in locusts.
Benefits of distant transcription initiation caused by the presence of large introns
Because the mean intron length in mRNA leaders is 16.11 kb, the presence of at least one intron in the mRNA leaders results in a transition from adjacent core promoters to distant core promoters in locusts. We observed widespread transcription initiation from distant core promoters, and a considerable proportion of them showed high transcriptional activities, suggesting their functional importance in gene expression. For the protein-coding genes with annotated mRNA leaders, the percentage (32.12%) of protein-coding genes whose mRNA leaders contain introns in locusts is greater than that (24.97%) in fruit flies, despite the better annotation of mRNA leaders in fruit flies. This result implies a greater tolerance of introns in mRNA leaders in locusts. Some specific introns in mRNA leaders have a regulatory impact, promoting the transcription and nuclear export of the corresponding host genes [47, 48]. In addition, the enhancer, repressor, and repetitive elements in the introns of mRNA leaders are crucial for transcriptional regulation [49,50,51]. mRNA leaders containing upstream open reading frames (ORFs) have been identified in approximately one third to half of mRNAs in eukaryotes, and some upstream ORFs have been suggested to play regulatory roles [52]. Furthermore, because the splicing order under exon definition does not generally follow the direction of transcription, the distant transcription initiation observed in locusts can overcome the deleterious effects of large intron insertion in mRNA leaders [53]. Taken together, the regulatory codes of gene expression in locusts may benefit from distant transcription initiation caused by the presence of large introns in mRNA leaders.
2 kb limitation rule in promoter function studies of insects
We found that a total of 45.97% of core promoters in locusts are located more than 2 kb upstream of the start codon (ATG). Compared with locusts (~ 6.5 Gb), fruit flies (180 Mb) have a dramatically smaller genome. Even in the small fruit fly genome, a considerable portion (19.41%) of the core promoters are located more than 2 kb upstream of the start codon, implying a positive correlation between genome size and the distance from the core promoter to the start codon. Because no TSS studies have previously been conducted beyond Diptera, the current catalog of insect core promoters includes only data from this order [8, 10]. The region less than 2 kb upstream of the start codon has been considered the putative promoter region in a large number of promoter function studies and comparative genomic studies in insects [11,12,13]. This 2 kb limitation rule has also been used to identify regulatory elements in promoter regions in insects with large genome sizes [14, 54, 55]. Notably, the alpine grasshopper, Podisma pedestris, has the largest genome size identified to date among insects (1C value = 16.93 pg), which is approximately 170-fold larger than the smallest insect genome studied (0.1 pg), indicating that genome sizes vary greatly among insects [56]. These results suggest that researchers should be cautious when applying the 2 kb limitation rule for promoter function verification, emphasizing the importance of the accurate identification of transcription start sites across diverse insect taxa.
Imprecise transcription initiation within the core promoter
The increased OTSS diversity within core promoters suggests a less precise transcription initiation of the broad and intermediate core promoters in locusts than in fruit flies. A large majority of the human genome is pervasively transcribed, and protein-coding genes account for only a small proportion of the total transcriptional output [57]. Conversely, little evidence of pervasive transcription has been found in fruit flies [58], implying that genome size differences may contribute to the emergence of pervasive transcription. Compared with fruit flies, both locusts and humans have genomes that are one order of magnitude larger. This suggests that the imprecise transcription initiation within core promoters in locusts may be similar in nature to pervasive transcription in humans, in line with the absence of a steep decline in SNP density in the vicinity of locust TSSs. The functionality of the majority of pervasively transcribed transcripts in humans is unknown. Many of these transcripts are thought to represent transcriptional noise that is expressed at a low level and results from imprecise transcription initiation due to the promiscuity of RNA polymerase II [59, 60]. However, it has been suggested that cells can benefit from allowing random transcription to occur rather than suppressing nonspecific transcription [61]. Increases in TSC expression are generally achieved by increasing OTSS diversity within individual core promoters in locusts. These results suggest that imprecise transcription initiation within core promoters contributes significantly to the activation of protein-coding gene expression in locusts, arguing against a role restricted to generating transcriptional noise. However, it remains unknown whether OTSS selection to increase OTSS diversity occurs randomly or is driven by specific regulatory mechanisms in locusts.
Genome size and distant transcription initiation
In the comparison of core promoters across multiple species, the significant positive correlation between the genome size and the number ratio of distant core promoters to adjacent core promoters suggests that genome size expansion results in the emergence of distant core promoters to initiate distant transcription. However, based on the comparison between locusts and fruit flies, the mRNA leader lengths in the large-sized genome were significantly shorter than those in the small-sized genome for genes having at least one intron in their mRNA leaders. These results suggest that in the large-sized genome, the emergence of distant core promoters was accompanied by increases in the size of the introns located in mRNA leaders. Large portions of the intergenic regions are accessible to the transcription machinery and can initiate transcriptional noise at inappropriate positions within intergenic regions [59]. Such transcriptional noise generates long noncoding RNAs, which are usually expressed at a low level [62]. However, a substantial portion of the distant core promoters in locusts showed high transcriptional activity. Furthermore, compared with that in fruit flies, the significantly shorter length of intron-containing mRNA leaders in locusts is consistent with the negative correlation between mRNA leader length and gene expression level [63]. In addition, we linked the distant core promoters in locusts to protein-coding genes using the paired-read-based assignment rule. The results of TFBS abundance and TFBS divergence indicate that the overall architecture of TF-mediated regulation in distant transcriptional initiation is more variable than that in adjacent transcriptional initiation. Cis-regulatory elements are composed of motifs that bind TFs, which determine regulatory activity. Thus, the gains and sequence divergence of TFBSs have the potential to modify the regulatory activity of cis-regulatory elements. These results imply that as the genome size has expanded, more sophisticated regulatory mechanisms have appeared to drive distant transcription initiation. Taken together, all these results suggest that distant transcription initiation in locusts is not the byproduct of transcriptional noise from lowly expressed long noncoding RNAs in intergenic regions.
In this study, we generated high-resolution transcription initiation datasets to define a comprehensive atlas of TSCs in locusts, contributing to the expansion of non-Drosophila taxonomic representation and revealing distinct genomic features to deepen the understanding of transcription initiation in insects. After conservative computational correction steps, we identified a total of 38,136 reliable TSCs, including 18,305 genic TSCs, in the locust genome. The availability of the locust TSC atlas offers an unprecedented opportunity for the comparative analysis of insect core promoters. The comparison of locust and fruit fly data showed a number of distinct features: the nucleotide composition of the PyPu dinucleotide; the strength and motifs of TATA-box signals; the distribution of the CpG o/e content; the effects of promoter shape on TSC expression specificity and transcription initiation accuracy; and the SNP density patterns around core-promoter regions. Furthermore, we revealed a distinctive bimodal log distribution of the distance from the annotated start codons to the core promoters of locust protein-coding genes and defined the adjacent and distant core promoters using a threshold of 3 kb. We found stricter constraints on the exon length of mRNA leaders and widespread higher TSC expression activities of the distant core promoters in locusts compared with fruit flies, implying an important role of distant transcription initiation in locusts. We further compared core promoters in the seven arthropod species across a broad range of genome sizes to reinforce our results on the emergence of distant core promoters in large-sized genomes, and we also revealed the changes in abundance and divergence of TFBSs of distant transcription initiation in the context of genome size.
Insect rearing
The migratory locusts (Locusta migratoria) were reared in large, well-ventilated cages (40 cm × 40 cm × 40 cm) at a density of 500–1000 insects per container. These locust colonies were reared under a 14:10 light/dark photoperiod at 30 °C and were fed fresh wheat seedlings and wheat bran.
RNA isolation, library preparation and sequencing
The tissues and organs were dissected from the fifth-instar nymph of the locust on the third day after molting. Total RNA was isolated using TRIzol reagent (Invitrogen) according to the manufacturer's instructions. Genomic DNA was removed using TURBO DNase (Invitrogen), and poly-A RNA was enriched twice using a Dynabeads Oligo (dT) 25 kit (Thermo Fisher Scientific). A total of 500 ng of poly-A RNA was treated with calf intestinal alkaline phosphatase (CIP) at 37 °C for 1 h. The reaction mixture was purified using 2.2X Agencourt RNAClean XP beads (Beckman Coulter). The resulting CIP-treated RNA was treated with Tobacco Acid Pyrophosphatase (TAP) at 37 °C for 1 h and purified with 2.2X Agencourt RNAClean XP beads. A 5′ RNA adapter (5′-AGGCACGGGCUAUGAG-3′) was ligated to the CIP/TAP-treated RNA using T4 RNA ligase at 25 °C for 4 h. First-strand cDNA synthesis was carried out using SuperScript IV Reverse Transcriptase (Invitrogen) with 3′-specific random hexamer primers (5′-GATGGAGCGTGTTAGCGNNNNNN-3′). The cDNA was purified with double size selection using 0.6× followed by 0.9× AMPure XP beads (Beckman Coulter) and was amplified by PCR with Phusion High-Fidelity DNA Polymerase (Thermo Fisher Scientific) in conjunction with the 5A (5′-AGGCACGGGCTATGAG-3′) and 3A (5′-GATGGAGCGTGTTAGCG-3′) primers. Finally, sequencing libraries were constructed using the NEBNext Ultra II DNA Library Prep Kit for Illumina (NEB) following the manufacturer's instructions. A total of 14 oligo-capping libraries were generated and sequenced on the Illumina NovaSeq 6000 System. To generate the oligo-capping data for the seven arthropod species, total RNA was extracted from whole bodies and used for library construction with the protocols described above.
Identification of transcription start sites and transcription start site clusters
The raw sequencing reads were preprocessed to remove Illumina adapter sequences and low-quality reads using the Trimmomatic program version 0.36 [64]. The detection of the 5′ oligo-capping and 3′ oligo-capping adapters was achieved using the Cutadapt program version 2.5 [65]. Only the read pairs that contained both the 5′ oligo-capping and 3′ oligo-capping adapters were kept for further analysis. The resulting reads were aligned to the locust genome using the HISAT2 program version 2.1.0 [20, 66]. Soft-clipped alignments were not allowed, to avoid false TSSs. According to a previous study [23], PCR duplicates, defined as read pairs that share similar alignment coordinates (5′ start of inserts, 3′ end of inserts, and splice sites), were removed. The resulting BAM files were used for the identification of oligo-capping transcription start sites (OTSSs) using the CAGEr package version 1.24.0 in the R version 3.5.1 and Bioconductor version 3.8 environments [67]. OTSSs with a tags-per-million (TPM) value of at least 3 were used as raw signals for TSS clustering to identify TSCs. OTSSs separated by less than 20 bp were clustered into a TSC. To minimize false TSCs, TSCs supported by fewer than 3 TPM were not used in further analysis. The TSC boundaries were calculated based on the cumulative distribution of the sequencing reads to determine the interval between the 10th and 90th percentiles. TSCs were identified from each library separately, and the initial set of TSCs was obtained by merging the TSCs identified from all libraries using a distance threshold of 100 bp. The dominant TSS was defined as the TSS with the highest number of sequencing reads in each TSC. The TSCs mapped to rRNA sequences (28S, 18S, 5.8S, and 5S) were removed from the final set of TSCs.
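The clustering step can be summarized in a short sketch. The Python code below is illustrative only (it is not the CAGEr implementation); it assumes OTSSs are given as (position, TPM) pairs on one strand of one chromosome, and it omits library merging and rRNA filtering.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TSC:
    # OTSSs belonging to this cluster as (genomic position, TPM) pairs
    otss: List[Tuple[int, float]] = field(default_factory=list)

    @property
    def total_tpm(self) -> float:
        return sum(t for _, t in self.otss)

    @property
    def dominant_tss(self) -> int:
        # the OTSS carrying the highest signal within the cluster
        return max(self.otss, key=lambda x: x[1])[0]

    def boundaries(self, lower: float = 0.1, upper: float = 0.9) -> Tuple[int, int]:
        """10th-90th percentile boundaries of the cumulative TPM distribution."""
        otss = sorted(self.otss)
        total, cum = self.total_tpm, 0.0
        start, end = otss[0][0], otss[-1][0]
        for pos, tpm in otss:
            prev, cum = cum, cum + tpm
            if prev < lower * total <= cum:
                start = pos
            if prev < upper * total <= cum:
                end = pos
        return start, end

def cluster_otss(otss: List[Tuple[int, float]], tpm_min: float = 3.0,
                 max_gap: int = 20) -> List[TSC]:
    """Cluster single-base OTSSs into TSCs on one strand of one chromosome.

    OTSSs below the TPM threshold are ignored, and OTSSs separated by less
    than `max_gap` bp are merged into the same cluster."""
    kept = sorted(p for p in otss if p[1] >= tpm_min)
    clusters: List[TSC] = []
    for pos, tpm in kept:
        if clusters and pos - clusters[-1].otss[-1][0] < max_gap:
            clusters[-1].otss.append((pos, tpm))
        else:
            clusters.append(TSC([(pos, tpm)]))
    return clusters
```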
Characterization of core promoters of protein-coding genes
Although a TSC is not the same as a promoter, these two terms are used interchangeably in TSS studies because of the strong association between TSCs and promoters [3, 35]. Thus, in this study, the identified TSCs were considered putative core promoters for brevity. The TSCs in the final set were linked to annotated protein-coding genes based on gene structure using paired-end information. For each TSC, if an insert fragment had its 5′ end in the TSC and its 3′ end in an annotated exon of a protein-coding gene, the TSC was functionally linked to that gene. If a TSC could potentially be tied to multiple protein-coding genes, the TSC was functionally linked to the closest gene. TSCs showing significant enrichment (observed number greater than 20 and a q-value of less than 1e−10) of the TGAG motif and its 1-bp-substitution variants were filtered out before further analysis; the significantly enriched motifs were identified using the locust data. The false TSCs derived from internally truncated sites were also filtered out. The proximal and distal core promoters were defined by comparing two core promoters based on their distances to the annotated start codon: the core promoter located closer to the downstream start codon of the linked protein-coding gene is considered proximal, whereas the other is considered distal. The adjacent (less than 3 kb) and distant (greater than 3 kb) core promoters were defined based on the distance to the annotated start codon using a threshold of 3 kb. In the intron-level comparison, we did not adopt the rebuilt transcript models that were assembled from the oligo-capping paired-end sequencing reads and the official gene set, owing to the potential assembly errors in isoform reconstruction [68]. For expression quantification of genic TSCs, the total R1 (forward) reads for each TSC were calculated and normalized to the total library size (defined as the total number of R1 reads derived from any genic TSCs). HOMER version 4.9 was used to perform a de novo motif analysis with the findMotifs.pl tool [69]. RepeatModeler version 2.0.1 was used to generate a de novo repeat library, and the resulting consensus sequences were used to identify genome-wide repeat sequences with RepeatMasker version 4.1.0 [70].
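As a rough illustration of the paired-read-based assignment rule and the adjacent/distant classification, the sketch below uses simplified, hypothetical data structures (coordinate intervals and per-gene exon lists); strand handling, the motif-based filtering, and the closest-gene tie-breaking used in the actual pipeline are reduced to a simple distance proxy.

```python
from typing import Dict, List, Optional, Tuple

ADJACENT_MAX_BP = 3_000  # threshold separating adjacent from distant core promoters

def assign_tsc_to_gene(tsc_interval: Tuple[int, int],
                       fragments: List[Tuple[int, int]],
                       exons_by_gene: Dict[str, List[Tuple[int, int]]]) -> Optional[str]:
    """Link a TSC to a protein-coding gene if an insert fragment starts (5' end)
    inside the TSC and ends (3' end) inside an annotated exon of that gene.
    If several genes qualify, keep the one closest to the TSC (simplified proxy)."""
    tsc_start, tsc_end = tsc_interval
    candidates = set()
    for frag_start, frag_end in fragments:
        if tsc_start <= frag_start <= tsc_end:
            for gene, exons in exons_by_gene.items():
                if any(s <= frag_end <= e for s, e in exons):
                    candidates.add(gene)
    if not candidates:
        return None
    # distance from the TSC to the start of the gene's first exon
    return min(candidates, key=lambda g: abs(exons_by_gene[g][0][0] - tsc_start))

def promoter_distance_class(distance_to_start_codon: int) -> str:
    """Classify a core promoter by its distance to the annotated start codon."""
    return "adjacent" if distance_to_start_codon < ADJACENT_MAX_BP else "distant"
```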
Expression analyses
As a measure of tissue specificity of TSC expression, τ (tau, the tissue specificity index), which is the best metric to measure tissue specificity [33], was calculated using log TPM expression data based on the following equation:
$$ \tau = \frac{\sum_{i=1}^{n}\left(1-\hat{x}_i\right)}{n-1}; \qquad \hat{x}_i = \frac{x_i}{\max\limits_{1\le i\le n} x_i} $$
The values of τ vary from 0 to 1, where 0 < τ < 0.1 indicates ubiquitous expression and 0.9 < τ < 1 indicates restricted expression.
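A minimal implementation of τ on log-transformed TPM values, assuming a simple per-tissue expression vector (the function name is illustrative), could look as follows:

```python
import numpy as np

def tau(expression) -> float:
    """Tissue-specificity index tau for one TSC.

    `expression` holds the log-transformed TPM of the TSC in each of the n
    tissues; the TSC is assumed to be expressed in at least one tissue.
    tau ranges from 0 (ubiquitous) to 1 (restricted)."""
    x = np.asarray(expression, dtype=float)
    x_hat = x / x.max()                       # normalize by the maximal tissue value
    return float((1.0 - x_hat).sum() / (len(x) - 1))

# example: expressed almost exclusively in one of five tissues -> tau close to 1
print(round(tau([0.1, 0.05, 6.0, 0.2, 0.1]), 2))
```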
The biological process enrichment was determined using the clusterProfiler and GO.db packages of the R version 3.5.0 program [71]. The results of GO enrichment analysis were visualized as a scatter plot using the REViGO webserver [72].
Promoter shape and promoter shifting
The promoter shape was determined using the promoter shape score (PSS), which quantifies the promoter shape based on the promoter width and distribution of oligo-capping reads within a promoter [30]. The PSS was defined by the following equation:
$$ \mathrm{PSS} = \log_2(w) \sum_{i \in L} p_i \log_2 p_i $$
where pi is the probability of observing an OTSS at base position i within a promoter, L is the set of base positions with normalized TSS expression above a threshold of 3 TPM, and w is the promoter width, defined as the interval between the 10th and 90th quantiles. PSS values decrease (become more negative) as core promoters become wider, and a PSS value of 0 indicates that all transcription of a core promoter is initiated in a narrow genomic region. The core promoters can be classified into three categories: sharp (PSS > − 10) core promoters, intermediate (− 20 < PSS ≤ − 10) core promoters, and broad (PSS ≤ − 20) core promoters [30].
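Under these definitions, the PSS of a single core promoter can be computed along the lines of the following sketch; the 3-TPM filter and the 10th–90th interquantile width follow the description above, while the helper names are illustrative.

```python
import numpy as np

def promoter_shape_score(tpm_by_position: dict) -> float:
    """Promoter shape score (PSS) of one core promoter.

    `tpm_by_position` maps OTSS base positions to normalized signals (TPM);
    positions below 3 TPM are excluded, and w is the 10th-90th interquantile width."""
    kept = {pos: t for pos, t in tpm_by_position.items() if t >= 3.0}
    positions = sorted(kept)
    signal = np.array([kept[p] for p in positions], dtype=float)
    p = signal / signal.sum()                 # per-position initiation probability
    cum = np.cumsum(p)
    q10 = positions[np.searchsorted(cum, 0.10)]
    q90 = positions[np.searchsorted(cum, 0.90)]
    w = q90 - q10 + 1                         # promoter width in bp
    return float(np.log2(w) * np.sum(p * np.log2(p)))

def shape_class(pss: float) -> str:
    """Classify a core promoter by its PSS value."""
    if pss > -10:
        return "sharp"
    return "intermediate" if pss > -20 else "broad"
```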
The promoter shifting was assessed using the degree of shift (Ds value), which quantifies changes in the OTSS distribution between core promoters for each multicore-promoter gene. The Ds value was defined by the following equation:
$$ D_s = \log_2\left(\frac{P_t/D_t}{P_c/D_c}\right) $$
where Pt and Dt are the expression abundance values (TPM) of the proximal and distal core promoters in the tested tissue and Pc and Dc are the expression abundance values of the proximal and distal core promoters in the control. A Ds value = 0 indicates the absence of promoter shifting. A Ds value > 1 and a Ds value < −1 indicate promoter shifting towards the proximal and distal core promoters, respectively. To account for multiple comparisons, the raw P values from the chi-squared tests were adjusted using the Benjamini–Hochberg method to control the false discovery rate (FDR).
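A hedged sketch of the Ds calculation and the accompanying chi-squared test is shown below; the pseudocount used to avoid division by zero is an assumption of this sketch, not part of the published definition, and the BH correction is applied across genes as described above.

```python
import math
from scipy.stats import chi2_contingency

def degree_of_shift(p_t: float, d_t: float, p_c: float, d_c: float,
                    pseudo: float = 0.5) -> float:
    """Degree of shift (Ds) between the proximal (P) and distal (D) core promoters
    of one multicore-promoter gene, comparing a tested tissue (t) to the control (c).
    A small pseudocount guards against division by zero (assumption of this sketch)."""
    ratio_t = (p_t + pseudo) / (d_t + pseudo)
    ratio_c = (p_c + pseudo) / (d_c + pseudo)
    return math.log2(ratio_t / ratio_c)

def shift_p_value(p_t: float, d_t: float, p_c: float, d_c: float) -> float:
    """Chi-squared test on the 2x2 table of proximal/distal signal, tissue vs control."""
    _, p_value, _, _ = chi2_contingency([[p_t, d_t], [p_c, d_c]])
    return p_value

# A gene is called "shifted" when |Ds| > 1 and the raw P value (P < 0.05) remains
# significant after Benjamini-Hochberg correction (FDR < 0.1), as in the text.
```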
Nucleotide diversity and OTSS diversity of genic TSCs
The resequencing data of locusts and fruit flies were retrieved from the NCBI Sequence Read Archive (SRA) database under BioProject accessions PRJNA256231 and PRJNA433455 [37, 38]. The sequencing reads were subjected to quality trimming and were aligned to their corresponding genomes using BWA version 0.7.17 with a minimum mapping quality of Q30 [73]. Duplicated reads were filtered out, and local realignment and base quality recalibration were performed with GATK version 4.1.7. SNPs were identified using the GATK HaplotypeCaller, and raw variants were filtered out based on the following parameters: QD < 2.0 || MQ < 40.0 || FS > 60.0 || SOR > 3.0 || MQRankSum < − 12.5 || ReadPosRankSum < − 8.0 || DP < 10 || DP > 800. To exclude single-nucleotide polymorphism (SNP) calling errors, only SNPs with a minor allele frequency greater than 0.5% were kept for subsequent analysis. To determine the nucleotide diversity of the genomic regions flanking TSSs, 1-kb fragments centered on the TSSs were extracted to compute the position-specific density of SNPs.
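The position-specific SNP density around TSSs can be computed along the lines of the following sketch, which assumes per-chromosome lists of TSS and SNP coordinates and, for brevity, ignores strand orientation.

```python
import numpy as np

def snp_density_profile(tss_positions, snp_positions, flank: int = 500) -> np.ndarray:
    """Position-specific SNP density in 1-kb windows centered on TSSs.

    Returns, for each offset in [-flank, +flank), the fraction of TSSs that
    carry a SNP at that offset on the same chromosome."""
    snps = set(snp_positions)
    counts = np.zeros(2 * flank, dtype=float)
    for tss in tss_positions:
        for i, pos in enumerate(range(tss - flank, tss + flank)):
            if pos in snps:
                counts[i] += 1
    return counts / len(tss_positions)
```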
The Shannon index (H) of OTSS diversity for a genic TSC was defined by the following equation:
$$ H = -\sum_{i=1}^{S} p_i \ln p_i $$
where S is the number of OTSSs in a genic TSC and pi is the proportion of reads at the ith OTSS relative to the total number of reads in the TSC. Following a previous TSS diversity study, genic TSCs with fewer than 10 sequencing reads were excluded from the Shannon index calculations because of the poor estimation of diversity when the sample size is too small [35]. To remove potential bias caused by the different proportions of TSCs transcribed from extremely narrow regions in locusts and fruit flies, genic TSCs with fewer than two OTSSs were excluded. We performed down-sampling analyses to exclude the potential influences of the different TSS profiling methods used and the unequal TSC sequencing depths in the comparison between the locust and fruit fly data: we randomly sampled 10 OTSS reads per TSC for the TSCs with at least 10 sequencing reads to generate the down-sampled data.
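A minimal sketch of the Shannon index and the per-TSC down-sampling, assuming integer read counts per OTSS (function names are illustrative), is shown below:

```python
import math
import random
from collections import Counter

def shannon_index(read_counts) -> float:
    """Shannon index H of OTSS diversity within one genic TSC.

    `read_counts` holds the number of reads observed at each OTSS of the TSC."""
    total = sum(read_counts)
    return -sum((c / total) * math.log(c / total) for c in read_counts if c > 0)

def downsample_tsc(read_counts, n: int = 10, seed: int = 0):
    """Randomly sample n OTSS reads from a TSC (used to equalize sequencing depth).

    Assumes the TSC has at least n reads, as required in the text."""
    pool = [i for i, c in enumerate(read_counts) for _ in range(c)]
    sampled = Counter(random.Random(seed).sample(pool, n))
    return [sampled.get(i, 0) for i in range(len(read_counts))]
```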
Analysis of TFBSs
HOMER version 4.9 was used to perform a de novo motif analysis with the findMotifs.pl tool [69]. Position weight matrices (PWMs) for TFs were taken from the DMMPMM (Bigfoot), iDMMPMM, and JASPAR non-redundant insect collections [27]. Find Individual Motif Occurrences (FIMO) in the MEME suite version 5.0.5 was used to determine the occurrences of specific motifs using PWMs with a cut-off of P < 1e−5 [74]. In TFBS prediction, each PWM was scanned across the − 500 to + 100 bp region relative to the dominant TSS of each core promoter. Because most of the PWMs used are derived from Drosophila, we could not exclude the possibility that these TFs have evolved in different species and thus have modified or alternative TFBS sequence constraints. The predicted TFBSs are considered putative TFBSs because no functional analysis of TFBSs was performed in the arthropod species involved. The statistical significance of the TFBS abundance between adjacent and distant core promoters was calculated using the chi2_contingency function of the Python scipy.stats module, and the resulting P values were adjusted by the Benjamini–Hochberg FDR multiple-testing correction. For each TF, a normalized Shannon entropy (SE) of the multiple sequence alignment of its TFBSs was computed to evaluate TFBS divergence, as defined by the following equation:
$$ \mathrm{SE} = -\frac{1}{L}\sum_{j=1}^{L}\sum_{i=1}^{M} P_{ij} \log_2 P_{ij} $$
where Pij is the fraction of nucleotide bases of type i at alignment position j, M is the number of nucleotide base types, and L is the length of the identified TFBS.
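Assuming the FIMO hits of one TF are collected as equal-length sequences, the normalized SE can be sketched as follows (illustrative only):

```python
import math
from collections import Counter

def normalized_shannon_entropy(tfbs_sequences) -> float:
    """Normalized Shannon entropy (SE) of the aligned TFBS sequences of one TF.

    All sequences are assumed to be motif hits of equal length L; SE is the
    per-position base entropy averaged over the motif length."""
    length = len(tfbs_sequences[0])
    n = len(tfbs_sequences)
    entropy = 0.0
    for j in range(length):
        column = Counter(seq[j] for seq in tfbs_sequences)
        for count in column.values():
            p = count / n
            entropy -= p * math.log2(p)
    return entropy / length
```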
The dataset described or used in this study is available in the NCBI Sequence Read Archive under BioProject accession number PRJNA637188.
Landolin JM, Johnson DS, Trinklein ND, Aldred SF, Medina C, Shulha H, Weng Z, Myers RM. Sequence features that drive human promoter function and tissue specificity. Genome Res. 2010;20(7):890–8. https://doi.org/10.1101/gr.100370.109.
Danino YM, Even D, Ideses D, Juven-Gershon T. The core promoter: at the heart of gene expression. Biochim Biophys Acta. 2015;1849(8):1116–31. https://doi.org/10.1016/j.bbagrm.2015.04.003.
Yokomori R, Shimai K, Nishitsuji K, Suzuki Y, Kusakabe TG, Nakai K. Genome-wide identification and characterization of transcription start sites and promoters in the tunicate Ciona intestinalis. Genome Res. 2016;26(1):140–50. https://doi.org/10.1101/gr.184648.114.
Kawaji H, Frith MC, Katayama S, Sandelin A, Kai C, Kawai J, Carninci P, Hayashizaki Y. Dynamic usage of transcription start sites within core promoters. Genome Biol. 2006;7(12):R118. https://doi.org/10.1186/gb-2006-7-12-r118.
Mogilenko DA, Shavva VS, Dizhe EB, Orlov SV. Characterization of distal and proximal alternative promoters of the human ApoA-I gene. Mol Biol. 2019;53(3):485–96.
Wang Z, Gerstein M, Snyder M. RNA-Seq: a revolutionary tool for transcriptomics. Nat Rev Genet. 2009;10(1):57–63. https://doi.org/10.1038/nrg2484.
Batut PJ, Gingeras TR. Conserved noncoding transcription and core promoter regulatory code in early Drosophila development. eLife. 2017;6 https://doi.org/10.7554/eLife.29005.
Mwangi S, Attardo G, Suzuki Y, Aksoy S, Christoffels A. TSS seq based core promoter architecture in blood feeding Tsetse fly (Glossina morsitans morsitans) vector of Trypanosomiasis. BMC Genomics. 2015;16(1):722. https://doi.org/10.1186/s12864-015-1921-6.
Main BJ, Smith AD, Jang H, Nuzhdin SV. Transcription start site evolution in Drosophila. Mol Biol Evol. 2013;30(8):1966–74. https://doi.org/10.1093/molbev/mst085.
Raborn RT, Brendel VP. Using RAMPAGE to identify and annotate promoters in insect genomes. In: Insect Genomics. New York, NY: Humana Press; 2019. p. 99–116. https://doi.org/10.1007/978-1-4939-8775-7_9.
Chen X, Tan A, Palli SR. Identification and functional analysis of promoters of heat-shock genes from the fall armyworm, Spodoptera frugiperda. Sci Rep. 2020;10(1):2363. https://doi.org/10.1038/s41598-020-59197-8.
Liu N, Zhang L. Identification of two new cytochrome P450 genes and their 5′-flanking regions from the housefly, Musca domestica. Insect Biochem Mol Biol. 2002;32(7):755–64. https://doi.org/10.1016/S0965-1748(01)00158-8.
Simola DF, Wissler L, Donahue G, Waterhouse RM, Helmkampf M, Roux J, Nygaard S, Glastad KM, Hagen DE, Viljakainen L, Reese JT, Hunt BG, Graur D, Elhaik E, Kriventseva EV, Wen J, Parker BJ, Cash E, Privman E, Childers CP, Munoz-Torres MC, Boomsma JJ, Bornberg-Bauer E, Currie CR, Elsik CG, Suen G, Goodisman MAD, Keller L, Liebig J, Rawls A, Reinberg D, Smith CD, Smith CR, Tsutsui N, Wurm Y, Zdobnov EM, Berger SL, Gadau J. Social insect genomes exhibit dramatic evolution in gene composition and regulation while preserving regulatory features linked to sociality. Genome Res. 2013;23(8):1235–47. https://doi.org/10.1101/gr.155408.113.
Veenstra JA. The contribution of the genomes of a termite and a locust to our understanding of insect neuropeptides and neurohormones. Front Physiol. 2014;5:454.
Hong X, Scofield DG, Lynch M. Intron size, abundance, and distribution within untranslated regions of genes. Mol Biol Evol. 2006;23(12):2392–404. https://doi.org/10.1093/molbev/msl111.
Lim CS, Wardell SJT, Kleffmann T, Brown CM. The exon-intron gene structure upstream of the initiation codon predicts translation efficiency. Nucleic Acids Res. 2018;46(9):4575–91. https://doi.org/10.1093/nar/gky282.
Jiang F, Liu Q, Liu X, Wang XH, Kang L. Genomic data reveal high conservation but divergent evolutionary pattern of Polycomb/Trithorax group genes in arthropods. Insect Sci. 2019;26(1):20–34. https://doi.org/10.1111/1744-7917.12558.
Jiang F, Zhang J, Liu Q, Liu X, Wang H, He J, Kang L. Long-read direct RNA sequencing by 5′-cap capturing reveals the impact of Piwi on the widespread exonization of transposable elements in locusts. RNA Biol. 2019;16(7):950–9. https://doi.org/10.1080/15476286.2019.1602437.
Wang X, Kang L. Molecular mechanisms of phase change in locusts. Annu Rev Entomol. 2014;59(1):225–44. https://doi.org/10.1146/annurev-ento-011613-162019.
Wang X, Fang X, Yang P, Jiang X, Jiang F, Zhao D, Li B, Cui F, Wei J, Ma C, Wang Y, He J, Luo Y, Wang Z, Guo X, Guo W, Wang X, Zhang Y, Yang M, Hao S, Chen B, Ma Z, Yu D, Xiong Z, Zhu Y, Fan D, Han L, Wang B, Chen Y, Wang J, Yang L, Zhao W, Feng Y, Chen G, Lian J, Li Q, Huang Z, Yao X, Lv N, Zhang G, Li Y, Wang J, Wang J, Zhu B, Kang L. The locust genome provides insight into swarm formation and long-distance flight. Nat Commun. 2014;5(1):2957. https://doi.org/10.1038/ncomms3957.
Wei Y, Chen S, Yang P, Ma Z, Kang L. Characterization and comparative profiling of the small RNA transcriptomes in two phases of locust. Genome Biol. 2009;10(1):R6. https://doi.org/10.1186/gb-2009-10-1-r6.
Yamashita R, Sathira NP, Kanai A, Tanimoto K, Arauchi T, Tanaka Y, Hashimoto S, Sugano S, Nakai K, Suzuki Y. Genome-wide characterization of transcriptional start sites in humans by integrative transcriptome analysis. Genome Res. 2011;21(5):775–89. https://doi.org/10.1101/gr.110254.110.
Batut P, Dobin A, Plessy C, Carninci P, Gingeras TR. High-fidelity promoter profiling reveals widespread alternative promoter usage and transposon-driven developmental gene expression. Genome Res. 2013;23(1):169–80. https://doi.org/10.1101/gr.139618.112.
Falckenhayn C, Boerjan B, Raddatz G, Frohme M, Schoofs L, Lyko F. Characterization of genome methylation patterns in the desert locust Schistocerca gregaria. J Exp Biol. 2013;216(Pt 8):1423–9. https://doi.org/10.1242/jeb.080754.
Glastad KM, Hunt BG, Yi SV, Goodisman MA. DNA methylation in insects: on the brink of the epigenomic era. Insect Mol Biol. 2011;20(5):553–65. https://doi.org/10.1111/j.1365-2583.2011.01092.x.
Vo Ngoc L, Wang YL, Kassavetis GA, Kadonaga JT. The punctilious RNA polymerase II core promoter. Genes Dev. 2017;31(13):1289–301. https://doi.org/10.1101/gad.303149.117.
Fornes O, Castro-Mondragon JA, Khan A, van der Lee R, Zhang X, Richmond PA, Modi BP, Correard S, Gheorghe M, Baranasic D, et al. JASPAR 2020: update of the open-access database of transcription factor binding profiles. Nucleic Acids Res. 2020;48(D1):D87–92. https://doi.org/10.1093/nar/gkz1001.
Hoskins RA, Landolin JM, Brown JB, Sandler JE, Takahashi H, Lassmann T, Yu C, Booth BW, Zhang D, Wan KH, Yang L, Boley N, Andrews J, Kaufman TC, Graveley BR, Bickel PJ, Carninci P, Carlson JW, Celniker SE. Genome-wide analysis of promoter architecture in Drosophila melanogaster. Genome Res. 2011;21(2):182–92. https://doi.org/10.1101/gr.112466.110.
Rach EA, Yuan HY, Majoros WH, Tomancak P, Ohler U. Motif composition, conservation and condition-specificity of single and alternative transcription start sites in the Drosophila genome. Genome Biol. 2009;10(7):R73. https://doi.org/10.1186/gb-2009-10-7-r73.
Lu Z, Lin Z. Pervasive and dynamic transcription initiation in Saccharomyces cerevisiae. Genome Res. 2019;29(7):1198–210. https://doi.org/10.1101/gr.245456.118.
Yang C, Bolotin E, Jiang T, Sladek FM, Martinez E. Prevalence of the initiator over the TATA box in human and yeast genes and identification of DNA motifs enriched in human TATA-less core promoters. Gene. 2007;389(1):52–65. https://doi.org/10.1016/j.gene.2006.09.029.
Schor IE, Degner JF, Harnett D, Cannavo E, Casale FP, Shim H, Garfield DA, Birney E, Stephens M, Stegle O, et al. Promoter shape varies across populations and affects promoter evolution and expression noise. Nat Genet. 2017;49(4):550–8. https://doi.org/10.1038/ng.3791.
Kryuchkova-Mostacci N, Robinson-Rechavi M. A benchmark of gene expression tissue-specificity metrics. Brief Bioinform. 2017;18(2):205–14.
Kryuchkova-Mostacci N, Robinson-Rechavi M. Tissue-specificity of gene expression diverges slowly between orthologs, and rapidly between paralogs. PLoS Comput Biol. 2016;12(12):e1005274. https://doi.org/10.1371/journal.pcbi.1005274.
Xu C, Park JK, Zhang J. Evidence that alternative transcriptional initiation is largely nonadaptive. Plos Biol. 2019;17(3):e3000197. https://doi.org/10.1371/journal.pbio.3000197.
Neininger K, Marschall T, Helms V. SNP and indel frequencies at transcription start sites and at canonical and alternative translation initiation sites in the human genome. Plos One. 2019;14(4):e0214816. https://doi.org/10.1371/journal.pone.0214816.
Bergland AO, Behrman EL, O'Brien KR, Schmidt PS, Petrov DA. Genomic evidence of rapid and stable adaptive oscillations over seasonal time scales in Drosophila. Plos Genet. 2014;10(11):e1004775. https://doi.org/10.1371/journal.pgen.1004775.
Ding D, Liu G, Hou L, Gui W, Chen B, Kang L. Genetic variation in PTPN1 contributes to metabolic adaptation to high-altitude hypoxia in Tibetan migratory locusts. Nat Commun. 2018;9(1):4991. https://doi.org/10.1038/s41467-018-07529-8.
Kidwell MG, Lisch DR. Transposable elements and host genome evolution. Trends Ecol Evol. 2000;15(3):95–9. https://doi.org/10.1016/S0169-5347(99)01817-0.
Fox-Walsh KL, Dou Y, Lam BJ, Hung SP, Baldi PF, Hertel KJ. The architecture of pre-mRNAs affects mechanisms of splice-site pairing. Proc Natl Acad Sci U S A. 2005;102(45):16176–81. https://doi.org/10.1073/pnas.0508489102.
Gelfman S, Burstein D, Penn O, Savchenko A, Amit M, Schwartz S, Pupko T, Ast G. Changes in exon-intron structure during vertebrate evolution affect the splicing pattern of exons. Genome Res. 2012;22(1):35–50. https://doi.org/10.1101/gr.119834.110.
Pai AA, Henriques T, McCue K, Burkholder A, Adelman K, Burge CB. The kinetics of pre-mRNA splicing in the Drosophila genome and the influence of gene architecture. eLife. 2017;6 https://doi.org/10.7554/eLife.32537.
Niu DK. Exon definition as a potential negative force against intron losses in evolution. Biol Direct. 2008;3(1):46. https://doi.org/10.1186/1745-6150-3-46.
Khodor YL, Menet JS, Tolan M, Rosbash M. Cotranscriptional splicing efficiency differs dramatically between Drosophila and mouse. RNA. 2012;18(12):2174–86. https://doi.org/10.1261/rna.034090.112.
Sterner DA, Berget SM. In vivo recognition of a vertebrate mini-exon as an exon-intron-exon unit. Mol Cell Biol. 1993;13(5):2677–87. https://doi.org/10.1128/MCB.13.5.2677.
Bieberstein NI, Carrillo Oesterreich F, Straube K, Neugebauer KM. First exon length controls active chromatin signatures and transcription. Cell Rep. 2012;2(1):62–8. https://doi.org/10.1016/j.celrep.2012.05.019.
Gallegos JE, Rose AB. The enduring mystery of intron-mediated enhancement. Plant Sci. 2015;237:8–15. https://doi.org/10.1016/j.plantsci.2015.04.017.
We thank Prof. Weiwei Zhai for the insightful discussions on nucleotide diversity calculations and Dr. Yanli Wang for the experimental assistance. We thank Prof. Chenzhu Wang, Prof. Xianhui Wang, Prof. Feng Cui, Prof. Zhen Zou, and Prof. Aihua Zheng for providing insect samples. The computational resources were provided by the Research Network of Computational Biology and the Supercomputing Center at Beijing Institutes of Life Science, Chinese Academy of Sciences.
This study was supported by grants from the National Natural Science Foundation of China (32088102 and 31672353) and the Key Research Program of Frontier Sciences of Chinese Academy of Sciences (QYZDY-SSW-SMC009).
Qing Liu and Feng Jiang contributed equally to this work.
Beijing Institutes of Life Science, Chinese Academy of Sciences, Beijing, China
Qing Liu, Feng Jiang, Jie Zhang & Le Kang
Sino-Danish College, University of Chinese Academy of Sciences, Beijing, China
Qing Liu
Department of Biology, University of Copenhagen, Copenhagen, Denmark
CAS Center for Excellence in Biotic Interactions, University of Chinese Academy of Sciences, Beijing, China
Feng Jiang & Le Kang
State Key Laboratory of Integrated Management of Pest Insects and Rodents, Institute of Zoology, Chinese Academy of Sciences, Beijing, 100101, China
Xiao Li & Le Kang
QL and FJ contributed equally to this work. FJ and LK designed and supervised the project. QL, FJ, and LK wrote the manuscript. QL and FJ performed the bioinformatic analysis. JZ performed the oligo-capping experiments. XL contributed to the insect collection and rearing. The authors read and approved the final manuscript.
Correspondence to Le Kang.
Additional file 1: Figure S1.
Nucleotide composition of OTSSs and nucleotide distribution of sequencing reads. Figure S2. Distribution of identified OTSSs in different genomic regions. Figure S3. Correlation of the number of tissues involved and the identified OTSSs. Figure S4. Distribution of the distance between the identified OTSSs and start codon. Figure S5. Width distribution of transcription start site clusters (TSCs) in different genomic regions. Figure S6. Consensus 25-bp sequences surrounding the dominant TSSs in different genomic regions. Figure S7. A significant enrichment of the TGAG motif and its 1-bp-substitution variants in the 1-bp-wide TSCs. Figure S8. Mis-hybridization of the 5′ oligo-capping adaptors and internal RNA sites results in overrepresentation of the TGAG motif. Figure S9. False TSCs derived from internal signals in the possible truncated mRNAs. Figure S10. Density histogram of the 3′ end of insert fragments along the mRNA transcript with lognormal fit. Figure S11. Percentage of removed TSCs by the 3′ end distribution of insert fragments. Figure S12. Quantification reproducibility for individual TSCs for two biological replicates. Figure S13. Number of identified TSCs per annotated protein-coding gene in the migratory locust and fruit fly. Figure S14. Summary of Drosophila core promoter elements in the core promoters of locusts and fruit flies. Figure S15. CpG distribution in the 4-kb flanking region of transcription start sites. Figure S16. Normalized CpG contents of locusts and fruit flies. Figure S17. Mean AT contents in the 10 to 50 bp regions upstream of dominant OTSSs of core promoters in locusts and fruit flies. Figure S18. Distribution of the tissue-specificity index (tau) of genic TSCs in locusts. Figure S19. Scatterplot of enriched GO terms of ubiquitously (Tau = 0) and restricted (tau = 1) TSC expression of locust core promoters. Figure S20. Correlation between TSC expression and OTSS diversity via binscatter estimation. Figure S21. The alternative usage of core promoters (promoter shifting) in the ovary sample when compared to the muscle sample as a control. Figure S22. The alternative usage of core promoters (promoter shifting) of protein-coding genes in different tissue or organ samples when compared to the muscle samples as controls in locusts. Figure S23. Distant transcription initiation in locusts and fruit flies. Figure S24. Mean intron length in mRNA leaders of locusts and fruit flies. Figure S25. Consensus sequences of the 25 bps surrounding the dominant TSSs. Figure S26. Meta-profile of TSCs over the gene body of protein-coding genes in the official gene sets. Figure S27. The density distribution of distances from the annotated start codon of protein-coding genes to the upstream genic core promoters. Figure S28. The abundance distribution of the distances from the TFBSs to the dominant transcription starting site (TSS) in protein-coding genes. Table S1. Sequencing data generated in this study for locusts. Table S2. Overrepresented motifs of TGAG and its variants in the 1-bp-wide TSCs located in the intergenic region. Table S3. Over-represented motifs of TGAG and its variants in the non-1-bp-wide TSCs located in the intergenic region. Table S4. Overrepresented motifs of TGAG and its variants in the 1-bp-wide TSCs located in the intronic region. Table S5. Over-represented motifs of TGAG and its variants in the non-1-bp-wide TSCs located in the intronic region. Table S6. 
Over-represented motifs of TGAG and its variants in the 1-bp-wide TSCs located in the coding sequence (CDS) region. Table S7. Overrepresented motifs of TGAG and its variants in the non-1-bp-wide TSCs located in the coding sequence (CDS) region. Table S8. Consensus sequences of Drosophila core promoter elements. Table S9. Sequencing data generated in this study for the arthropod species. Table S10. The identified TSCs in the arthropod species.
Liu, Q., Jiang, F., Zhang, J. et al. Transcription initiation of distant core promoters in a large-sized genome of an insect. BMC Biol 19, 62 (2021). https://doi.org/10.1186/s12915-021-01004-5
Accepted: 16 March 2021
Transcription initiation
Transcriptional start sites
Core promoter
Genome size
Modular Cauchy kernel for the Hilbert modular surface
arxiv.org. math. Cornell University, 2018. No. 1802.08661.
Sakharova N.
In this paper we construct the modular Cauchy kernel on the Hilbert modular surface, $\Xi_{Hil,m}(z)(z_2-\bar{z_2})$, i.e. the function of two variables, $(z_1,z_2)\in \mathbb{H}\times\mathbb{H}$, which is invariant under the action of the Hilbert modular group, with the first order pole on the Hirzebruch-Zagier divisors. The derivative of this function with respect to $\bar{z_2}$ is the function $\omega_m(z_1,z_2)$ introduced by Don Zagier in \cite{Za1}. We consider the question of the convergence and the Fourier expansion of the kernel function. The paper generalizes the first part of the results obtained in the preprint \cite{Sa}.
Priority areas: mathematics
Keywords: modular forms
On M-functions associated with modular forms
Lebacque P., Zykin A. I. HAL:archives-ouvertes. HAL. Le Centre pour la Communication Scientifique Directe, 2017
Let f be a primitive cusp form of weight k and level N, let χ be a Dirichlet character of conductor coprime with N, and let L(f ⊗ χ, s) denote either log L(f ⊗ χ, s) or (L′/L)(f ⊗ χ, s). In this article we study the distribution of the values of L when either χ or f vary. First, for a quasi-character ψ : C → C× we find the limit for the average Avg_χ ψ(L(f ⊗ χ, s)), when f is fixed and χ varies through the set of characters with prime conductor that tends to infinity. Second, we prove an equidistribution result for the values of L(f ⊗ χ, s) by establishing analytic properties of the above limit function. Third, we study the limit of the harmonic average Avg^h_f ψ(L(f, s)), when f runs through the set of primitive cusp forms of given weight k and level N → ∞. Most of the results are obtained conditionally on the Generalized Riemann Hypothesis for L(f ⊗ χ, s).
Modular Cauchy kernel corresponding to the Hecke curve
Sakharova N. arxiv.org. math. Cornell University, 2018. No. 1802.03299.
In this paper we construct the modular Cauchy kernel $\Xi_N(z_1, z_2)$, i.e. the modular invariant function of two variables, $(z_1, z_2) \in \mathbb{H} \times \mathbb{H}$, with the first order pole on the curve $$D_N=\left\{(z_1, z_2) \in \mathbb{H} \times \mathbb{H}|~ z_2=\gamma z_1, ~\gamma \in \Gamma_0(N) \right\}.$$ The function $\Xi_N(z_1, z_2)$ is used in two cases and for two different purposes. Firstly, we prove a generalization of the Zagier theorem (\cite{La}, \cite{Za3}) for the Hecke subgroups $\Gamma_0(N)$ of genus $g>0$. Namely, we obtain a kind of ``kernel function'' for the Hecke operator $T_N(m)$ on the space of the weight 2 cusp forms for $\Gamma_0(N)$, which is the analogue of the Zagier series $\omega_{m, N}(z_1,\bar{z_2}, 2)$. Secondly, we consider an elementary proof of the formula for the infinite Borcherds product of the difference of two normalized Hauptmoduls, ~$J_{\Gamma_0(N)}(z_1)-J_{\Gamma_0(N)}(z_2)$, for genus zero congruence subgroup $\Gamma_0(N)$.
Equations D3 and spectral elliptic curves
Golyshev V., Vlasenko M. In bk.: Feynman Amplitudes, Periods and Motives. Iss. 648. AMS, 2015. P. 135-152.
We study modular determinantal differential equations of orders 2 and 3. We show that the expansion of the analytic solution of a nondegenerate modular equation of type D3 over the rational numbers with respect to the natural parameter coincides, under certain assumptions, with the q–expansion of the newform of its spectral elliptic curve and therefore possesses a multiplicativity property. We compute the complete list of D3 equations with this multiplicativity property and relate it to Zagier's list of nondegenerate modular D2 equations.
Sakharova N. Arnold Mathematical Journal. 2018. Vol. 4. No. 3-4. P. 301-313.
Absolutely convergent Fourier series. An improvement of the Beurling-Helson theorem
Vladimir Lebedev. arxiv.org. math. Cornell University, 2011. No. 1112.4892v1.
We obtain a partial solution of the problem on the growth of the norms of exponential functions with a continuous phase in the Wiener algebra. The problem was posed by J.-P. Kahane at the International Congress of Mathematicians in Stockholm in 1962. He conjectured that (for a nonlinear phase) one cannot achieve growth slower than the logarithm of the frequency. Though the conjecture is still not confirmed, the author obtained the first nontrivial results.
Justification of the adiabatic limit for hyperbolic Ginzburg-Landau equations
Palvelev R., Sergeev A. G. Trudy Matematicheskogo Instituta im. V.A. Steklova RAN (Proceedings of the Steklov Institute of Mathematics). 2012. Vol. 277. P. 199-214.
Sabaean Studies
Korotayev A. V. Moscow: Vostochnaya Literatura, 1997.
The parametrix method for diffusions and Markov chains
Konakov V. D. STI. WP BRP. Publishing House of the Board of Trustees of the Faculty of Mechanics and Mathematics, Moscow State University, 2012. No. 2012.
Hypercommutative operad as a homotopy quotient of BV
Khoroshkin A., Markaryan N. S., Shadrin S. arxiv.org. math. Cornell University, 2012. No. 1206.3749.
We give an explicit formula for a quasi-isomorphism between the operads Hycomm (the homology of the moduli space of stable genus 0 curves) and BV/Δ (the homotopy quotient of Batalin-Vilkovisky operad by the BV-operator). In other words we derive an equivalence of Hycomm-algebras and BV-algebras enhanced with a homotopy that trivializes the BV-operator. These formulas are given in terms of the Givental graphs, and are proved in two different ways. One proof uses the Givental group action, and the other proof goes through a chain of explicit formulas on resolutions of Hycomm and BV. The second approach gives, in particular, a homological explanation of the Givental group action on Hycomm-algebras.
Is the function field of a reductive Lie algebra purely transcendental over the field of invariants for the adjoint action?
Colliot-Thélène J., Kunyavskiĭ B., Vladimir L. Popov et al. Compositio Mathematica. 2011. Vol. 147. No. 2. P. 428-466.
Let k be a field of characteristic zero, let G be a connected reductive algebraic group over k and let g be its Lie algebra. Let k(G), respectively, k(g), be the field of k-rational functions on G, respectively, g. The conjugation action of G on itself induces the adjoint action of G on g. We investigate the question whether or not the field extensions k(G)/k(G)^G and k(g)/k(g)^G are purely transcendental. We show that the answer is the same for k(G)/k(G)^G and k(g)/k(g)^G, and reduce the problem to the case where G is simple. For simple groups we show that the answer is positive if G is split of type A_n or C_n, and negative for groups of other types, except possibly G_2. A key ingredient in the proof of the negative result is a recent formula for the unramified Brauer group of a homogeneous space with connected stabilizers. As a byproduct of our investigation we give an affirmative answer to a question of Grothendieck about the existence of a rational section of the categorical quotient morphism for the conjugating action of G on itself.
Cross-sections, quotients, and representation rings of semisimple algebraic groups
V. L. Popov. Transformation Groups. 2011. Vol. 16. No. 3. P. 827-856.
Let G be a connected semisimple algebraic group over an algebraically closed field k. In 1965 Steinberg proved that if G is simply connected, then in G there exists a closed irreducible cross-section of the set of closures of regular conjugacy classes. We prove that in arbitrary G such a cross-section exists if and only if the universal covering isogeny Ĝ → G is bijective; this answers Grothendieck's question cited in the epigraph. In particular, for char k = 0, the converse to Steinberg's theorem holds. The existence of a cross-section in G implies, at least for char k = 0, that the algebra k[G]^G of class functions on G is generated by rk G elements. We describe, for arbitrary G, a minimal generating set of k[G]^G and that of the representation ring of G and answer two of Grothendieck's questions on constructing generating sets of k[G]^G. We prove the existence of a rational (i.e., local) section of the quotient morphism for arbitrary G and the existence of a rational cross-section in G (for char k = 0, this has been proved earlier); this answers the other question cited in the epigraph. We also prove that the existence of a rational section is equivalent to the existence of a rational W-equivariant map T ⇢ G/T where T is a maximal torus of G and W the Weyl group.
Edited by: A. Mikhailov. Iss. 14. Moscow: Faculty of Sociology, Moscow State University, 2012.
Dynamics of Information Systems: Mathematical Foundations
Iss. 20. NY: Springer, 2012.
This proceedings publication is a compilation of selected contributions from the "Third International Conference on the Dynamics of Information Systems" which took place at the University of Florida, Gainesville, February 16–18, 2011. The purpose of this conference was to bring together scientists and engineers from industry, government, and academia in order to exchange new discoveries and results in a broad range of topics relevant to the theory and practice of dynamics of information systems. Dynamics of Information Systems: Mathematical Foundations presents state-of-the-art research and is intended for graduate students and researchers interested in some of the most recent discoveries in information theory and dynamical systems. Scientists in other disciplines may also benefit from the applications of new developments to their own area of study.
Uspekhi Mat. Nauk, 1980, Volume 35, Issue 3(213), Pages 3–22 (Mi umn3355)
This article is cited in 24 scientific papers.
International Topology Conference
Survey lectures
Relations among the invariants of topological groups and their subspaces
A. V. Arkhangel'skii
Abstract: In this paper we study topological properties of topological groups and, first of all, cardinal invariants of topological groups. Many of the relevant questions are subsumed under the following general scheme: how does the compatibility of the topology with the group structure reflect on the relations among the invariants of this topology?
We use the notation and terminology of [4]. Cardinal invariants of a topological group are understood to mean those of its underlying space, which is assumed throughout to be completely regular and $T_1$. Proofs are given in condensed form or omitted altogether.
Russian Mathematical Surveys, 1980, 35:3, 1–23
MSC: 22A05, 54B05, 54A25, 22A25
Citation: A. V. Arkhangel'skii, "Relations among the invariants of topological groups and their subspaces", Uspekhi Mat. Nauk, 35:3(213) (1980), 3–22; Russian Math. Surveys, 35:3 (1980), 1–23
http://mi.mathnet.ru/eng/umn3355
http://mi.mathnet.ru/eng/umn/v35/i3/p3
A. V. Arkhangel'skii, "Classes of topological groups", Russian Math. Surveys, 36:3 (1981), 151–174
V. V. Tkachuk, "On a method of constructing examples of $M$-equivalent spaces", Russian Math. Surveys, 38:6 (1983), 135–136
V. G. Pestov, "Some topological properties preserved by the relation of $M$-equivalence", Russian Math. Surveys, 39:6 (1984), 223–224
A.V. Arhangel'skiǐ, "On biradial topological spaces and groups", Topology and its Applications, 36:2 (1990), 173
A.V. Arhangel'skiǐ, A.P. Kombarov, "On ∇-normal spaces", Topology and its Applications, 35:2-3 (1990), 121
Vladimir G. Pestov, "Universal arrows to forgetful functors from categories of topological algebra", BAZ, 48:2 (1993), 209
Vladimir Pestov, "A remark on embedding topological groups into products", BAZ, 49:3 (1994), 519
Michael G. Tkačenko, "Free topological groups and inductive limits", Topology and its Applications, 60:1 (1994), 1
Vladimir Pestov, "Free Abelian topological groups and the Pontryagin-Van Kampen duality", BAZ, 52:2 (1995), 297
A.V. Arhangel'skiǐ, P.J. Collins, "On submaximal spaces", Topology and its Applications, 64:3 (1995), 219
A.V. Arhangel'skii, "On Lindelöf property and spread in Cp-theory", Topology and its Applications, 74:1-3 (1996), 83
Oleg Okunev, "Homeomorphisms of function spaces and hereditary cardinal invariants", Topology and its Applications, 80:1-2 (1997), 177
S. A. Morris, V. Pestov, "A topological generalization of the Higman-Neumann-Neumann theorem", jgth, 1:2 (1998), 181
Mikhail Tkačenko, "Introduction to topological groups", Topology and its Applications, 86:3 (1998), 179
Dmitri Shakhmatov, "A comparative survey of selected results and open problems concerning topological groups, fields and vector spaces", Topology and its Applications, 91:1 (1999), 51
O. V. Sipacheva, "The topology of free topological groups", J. Math. Sci., 131:4 (2005), 5765–5838
Peter Nickolas, Mikhail Tkachenko, "Local compactness in free topological groups", BAZ, 68:2 (2003), 243
Alexander Arhangel'skii, "Topological vector spaces, compacta, and unions of subspaces", Journal of Mathematical Analysis and Applications, 350:2 (2009), 616
A.V. Arhangel'skii, "Gδ-points in remainders of topological groups and some addition theorems in compacta", Topology and its Applications, 156:12 (2009), 2013
M. Bruguera, M. Tkachenko, "Pontryagin duality in the class of precompact Abelian groups and the Baire property", Journal of Pure and Applied Algebra, 2012
O.T.. Alas, V.V.. Tkachuk, R.G.. Wilson, "Maximal pseudocompact spaces and the Preiss-Simon property", centr.eur.j.math, 12:3 (2014), 500
M. Hrušák, U.A. Ramos-García, "Malykhin's problem", Advances in Mathematics, 262 (2014), 193
Iván Sánchez, "Paratopological groups with a Gδ-diagonal of infinite rank", Topology and its Applications, 178 (2014), 459
Iván Sánchez, "Condensations of paratopological groups", Topology and its Applications, 180 (2015), 124
November 2014, 34(11): 4765-4780. doi: 10.3934/dcds.2014.34.4765
The structure of limit sets for $\mathbb{Z}^d$ actions
Jonathan Meddaugh and Brian E. Raines
Department of Mathematics, Baylor University, Waco, TX 76798-7328, United States
Received February 2013 Revised February 2014 Published May 2014
Central to the study of $\mathbb{Z}$ actions on compact metric spaces is the $\omega$-limit set, the set of all limit points of a forward orbit. A closed set $K$ is internally chain transitive provided for every $x,y\in K$ there is an $\epsilon$-pseudo-orbit of points from $K$ that starts with $x$ and ends with $y$. It is known in several settings that the property of internal chain transitivity characterizes $\omega$-limit sets. In this paper, we consider actions of $\mathbb{Z}^d$ on compact metric spaces. We give a general definition for shadowing and limit sets in this setting. We characterize limit sets in terms of a more general internal property which we call internal mesh transitivity.
Keywords: weak incompressibility, Omega-limit set, $\omega$-limit set, internal chain transitivity, pseudo-orbit tracing property, shadowing.
Mathematics Subject Classification: Primary: 37B50, 37B10, 37B20; Secondary: 54H2.
Citation: Jonathan Meddaugh, Brian E. Raines. The structure of limit sets for $\mathbb{Z}^d$ actions. Discrete & Continuous Dynamical Systems, 2014, 34 (11) : 4765-4780. doi: 10.3934/dcds.2014.34.4765
Bruce Kitchens, Michał Misiurewicz. Omega-limit sets for spiral maps. Discrete & Continuous Dynamical Systems, 2010, 27 (2) : 787-798. doi: 10.3934/dcds.2010.27.787
Zheng Yin, Ercai Chen. The conditional variational principle for maps with the pseudo-orbit tracing property. Discrete & Continuous Dynamical Systems, 2019, 39 (1) : 463-481. doi: 10.3934/dcds.2019019
Changjing Zhuge, Xiaojuan Sun, Jinzhi Lei. On positive solutions and the Omega limit set for a class of delay differential equations. Discrete & Continuous Dynamical Systems - B, 2013, 18 (9) : 2487-2503. doi: 10.3934/dcdsb.2013.18.2487
Flavio Abdenur, Lorenzo J. Díaz. Pseudo-orbit shadowing in the $C^1$ topology. Discrete & Continuous Dynamical Systems, 2007, 17 (2) : 223-245. doi: 10.3934/dcds.2007.17.223
Carlos Arnoldo Morales, M. J. Pacifico. Lyapunov stability of $\omega$-limit sets. Discrete & Continuous Dynamical Systems, 2002, 8 (3) : 671-674. doi: 10.3934/dcds.2002.8.671
Jihoon Lee, Ngocthach Nguyen. Flows with the weak two-sided limit shadowing property. Discrete & Continuous Dynamical Systems, 2021, 41 (9) : 4375-4395. doi: 10.3934/dcds.2021040
Fang Zhang, Yunhua Zhou. On the limit quasi-shadowing property. Discrete & Continuous Dynamical Systems, 2017, 37 (5) : 2861-2879. doi: 10.3934/dcds.2017123
Andrew D. Barwell, Chris Good, Piotr Oprocha, Brian E. Raines. Characterizations of $\omega$-limit sets in topologically hyperbolic systems. Discrete & Continuous Dynamical Systems, 2013, 33 (5) : 1819-1833. doi: 10.3934/dcds.2013.33.1819
Hongyong Cui, Peter E. Kloeden, Meihua Yang. Forward omega limit sets of nonautonomous dynamical systems. Discrete & Continuous Dynamical Systems - S, 2020, 13 (4) : 1103-1114. doi: 10.3934/dcdss.2020065
Olexiy V. Kapustyan, Pavlo O. Kasyanov, José Valero. Chain recurrence and structure of $ \omega $-limit sets of multivalued semiflows. Communications on Pure & Applied Analysis, 2020, 19 (4) : 2197-2217. doi: 10.3934/cpaa.2020096
Lidong Wang, Hui Wang, Guifeng Huang. Minimal sets and $\omega$-chaos in expansive systems with weak specification property. Discrete & Continuous Dynamical Systems, 2015, 35 (3) : 1231-1238. doi: 10.3934/dcds.2015.35.1231
José S. Cánovas. Topological sequence entropy of $\omega$–limit sets of interval maps. Discrete & Continuous Dynamical Systems, 2001, 7 (4) : 781-786. doi: 10.3934/dcds.2001.7.781
Yiming Ding. Renormalization and $\alpha$-limit set for expanding Lorenz maps. Discrete & Continuous Dynamical Systems, 2011, 29 (3) : 979-999. doi: 10.3934/dcds.2011.29.979
Emma D'Aniello, Saber Elaydi. The structure of $ \omega $-limit sets of asymptotically non-autonomous discrete dynamical systems. Discrete & Continuous Dynamical Systems - B, 2020, 25 (3) : 903-915. doi: 10.3934/dcdsb.2019195
Francisco Balibrea, J.L. García Guirao, J.I. Muñoz Casado. A triangular map on $I^{2}$ whose $\omega$-limit sets are all compact intervals of $\{0\}\times I$. Discrete & Continuous Dynamical Systems, 2002, 8 (4) : 983-994. doi: 10.3934/dcds.2002.8.983
Liangwei Wang, Jingxue Yin, Chunhua Jin. $\omega$-limit sets for porous medium equation with initial data in some weighted spaces. Discrete & Continuous Dynamical Systems - B, 2013, 18 (1) : 223-236. doi: 10.3934/dcdsb.2013.18.223
José Ginés Espín Buendía, Víctor Jiménez Lopéz. A topological characterization of the $\omega$-limit sets of analytic vector fields on open subsets of the sphere. Discrete & Continuous Dynamical Systems - B, 2019, 24 (3) : 1143-1173. doi: 10.3934/dcdsb.2019010
Alexander Blokh, Michał Misiurewicz. Dense set of negative Schwarzian maps whose critical points have minimal limit sets. Discrete & Continuous Dynamical Systems, 1998, 4 (1) : 141-158. doi: 10.3934/dcds.1998.4.141
Tatiane C. Batista, Juliano S. Gonschorowski, Fábio A. Tal. Density of the set of endomorphisms with a maximizing measure supported on a periodic orbit. Discrete & Continuous Dynamical Systems, 2015, 35 (8) : 3315-3326. doi: 10.3934/dcds.2015.35.3315
Jaroslav Smítal, Marta Štefánková. Omega-chaos almost everywhere. Discrete & Continuous Dynamical Systems, 2003, 9 (5) : 1323-1327. doi: 10.3934/dcds.2003.9.1323
Rapidly oscillating piston — work or heat?
An ideal gas has its volume controlled by the $x$-coordinate of a piston. At $t=0$, the piston starts oscillating very quickly (compared to the time scale of equilibration in the gas), then after some time it stops at its original $x$-coordinate. Once the gas comes to equilibrium again, it will have the same volume as at $t=0$, but its energy will have increased (due to sound waves produced by the piston). Is this change in energy considered work, or heat?
Assume that the gas and piston are insulated in such a way that the final energy of the gas is independent of the temperature of the piston/surroundings.
If I understand it correctly, the following are the usual definitions of work and heat:
Work: Some parameters of the system's Hamiltonian are declared to be "external parameters," and work is a change in the system's energy due to a change in one or more external parameters.
Heat: Any change in the system's energy that is not due to work.
It seems to me that since the volume is an external parameter, the oscillating piston does work on the gas even though there is no net change in volume.
If instead you answer heat, then my follow-up is: repeat the above experiment with a final volume smaller than the initial volume (i.e., the piston compresses the gas while oscillating rapidly). The gas again gains energy -- do you call this heat, or do you have a way to separate the change in energy into a "work" term and a "heat" term? What if the piston does not oscillate but simply compresses the gas very quickly, again producing sound waves that increase the final equilibrium energy of the gas?
thermodynamics non-equilibrium
marlow
Work is not a state variable. As such, the total change in volume is not what determines the work done on the system. Rather, the work $W$ is given by
$$ W = \int_{\text{initial state}}^{\text{final state}} p\; dV $$
where $p$ is the pressure and $V$ is the volume. Since the piston is moved quickly and non-reversibly, the work done is nonzero -- the internal energy gained by the gas comes from the work done by the piston.
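As a numerical illustration of this point, consider a toy model in which the pressure at the piston face is the quasi-static isothermal value plus a correction proportional to the compression rate, so that rapid compression pushes back harder than rapid expansion. The Python sketch below uses that assumed model (the damping coefficient and all numerical values are illustrative assumptions, not derived from gas properties); it shows that the quasi-static part integrates to zero over a closed cycle while the rate-dependent part leaves net positive work done on the gas.

import numpy as np

# Toy model of a rapid, closed piston cycle. The piston returns to its starting
# position after an integer number of periods, so the quasi-static contribution
# to the cycle integral of p dV vanishes, but the assumed rate-dependent
# correction to the pressure at the piston face does not.
V0, p0 = 1.0e-3, 1.0e5          # initial volume (m^3) and pressure (Pa)
A, omega = 0.1 * V0, 2.0e3      # oscillation amplitude (m^3) and angular frequency (rad/s)
c = 1.0e4                       # assumed damping coefficient (Pa s / m^3)

t = np.linspace(0.0, 10 * 2 * np.pi / omega, 100001)   # ten full periods
V = V0 + A * np.sin(omega * t)
dVdt = A * omega * np.cos(omega * t)

p_eq = p0 * V0 / V              # quasi-static pressure (isothermal, for simplicity)
p_face = p_eq - c * dVdt        # pressure at the piston face in the toy model

def cycle_integral(f, t):
    """Trapezoidal approximation of the integral of f dt over the trajectory."""
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))

W_by_gas_qs = cycle_integral(p_eq * dVdt, t)     # ~ 0 over the closed cycle
W_by_gas = cycle_integral(p_face * dVdt, t)      # negative: the gas receives work

print(f"quasi-static cycle work by gas: {W_by_gas_qs:+.2e} J")
print(f"rapid cycle work by gas:        {W_by_gas:+.2e} J")
print(f"net work done on the gas:       {-W_by_gas:+.2e} J")

For any positive damping coefficient the net work done by the gas over the cycle is negative, i.e. the piston does net work on the gas, consistent with the energy left behind in the sound waves.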
alexvas
$\begingroup$ Is not the net work on each cycle =0? (I am assuming the process is still quasistatic, otherwise it would not be possible to assume that, and the net work could have any sign) $\endgroup$ – user126422 Jan 7 '17 at 2:24
$\begingroup$ @AlbertAspect : I was thinking the same. But can the process be quasi-static if the piston oscillates rapidly? The conditions imply that the process is adiabatic (system thermally insulated from the surroundings), so doesn't that mean it is reversible anyway (since the gas is ideal)? $\endgroup$ – sammy gerbil Jan 7 '17 at 2:34
$\begingroup$ @sammygerbil They way I imagine it is using differentials of different order. So you can have an infinity of temporal scales or hierarchies, one being a differential for the next. But if there is turbulence, you cannot longer assume that. $\endgroup$ – user126422 Jan 7 '17 at 2:46
$\begingroup$ @sammygerbil Adiabatic is not the same as reversible. Free expansion, for instance, is irreversible but adiabatic. $\endgroup$ – user126422 Jan 7 '17 at 2:56
$\begingroup$ @Albert Aspect One argument is that the dissipation of the sound waves is an irreversible process, so therefore the entropy of the gas in the final equilibrium state is higher than the entropy in the initial equilibrium state. The energy of the gas is an increasing function of entropy, so there must have been an energy transfer from the piston to the gas. $\endgroup$ – marlow Jan 16 '17 at 2:39
Energy transfer due to temperature difference is called heat transfer. Everything else is called work (assuming there is no mass transfer etc.). In your example it would be proper to say that the energy of the system increases due to work done on it.
Change in volume is not necessary for using the term "work done"; consider, for example, paddle work.
Deep
$\begingroup$ I agree, given these definitions. But I wonder if these definitions have the disadvantage that an ordinary microwave oven would be said to do work on (e.g.) a glass of water? Any temperature difference between the water and the gadget that produces the microwave radiation is unimportant. It seems to me that the amplitude of the electromagnetic field in the oven is analogous to the volume in the piston example (both oscillate rapidly). $\endgroup$ – marlow Jan 16 '17 at 2:23
I'll concentrate for this question on the energy that has been converted to sound, rather than dissipative losses, which are obviously heat.
In this case, the answer is mostly work, at least initially, but it becomes heat as the sound is absorbed.
I don't believe there is in general a binary, "yes/no" or "heat/work" answer to this kind of energy conversion question - one needs instead to quantify on a continuum how "heaty" or "worky" the added energy is, and one does this by calculating the change in entropy of the system that arises from the addition of this energy.
In the case of sound, the additional energy is stored in well-defined oscillatory components of motion of the gas molecules. Sure, the molecules themselves have highly randomized motions, but the sound represents a well defined average additional motion that is precisely quantified by e.g. the solution of the relevant wave equation that tells you the vector velocities of the changes in motion of the molecules as a function of position and time. So one requires very little knowledge / information to describe how the states of motions of the molecules have changed - that is, the energy addition has wrought very little entropy change in the system.
However, as the wave propagates through the gas, the motion represented by additional kinetic energy becomes less "co-ordinated" as this additional kinetic energy contributes less to the co-ordinated, wave-equation-governed motion and is converted into randomized (thermalized) motion. The sound has dissipated, the distribution of molecular motions has returned to being described wholly by the Boltzmann distribution but with a slightly higher temperature than before.
The above illustrates the obvious quantification of "workiness / heatiness" as follows. Let a small amount $\mathrm{d}E$ of the energy be added to the system. Let the entropy change be $\mathrm{d}S$, and the system's initial temperature be $T$. Then the heat added is $\mathrm{d} Q = T\,\mathrm{d} S$: that part of the energy that goes into thermalized motion. The leftover is of course the work done on the system: it's the part whose addition brings about a change in the system microstate that can be precisely described (in this case through the relevant wave equation solution):
$$\mathrm{d} W = \mathrm{d} E - T\,\mathrm{d} S$$
Of course, entropy is not easily measured like energy and temperature, so this is mostly a way to define working on and heating of a system.
Selene Routley
$\begingroup$ I wonder how you would analyze the case of the piston that compresses non-infinitesimally while rapidly oscillating, then stops? The initial and final states are equilibrium states (if we focus on the long time scale, after the sound waves have dissipated). Given $\Delta E$, $\Delta S$, $T_i$, and $T_f$, it is unclear to me what the heat term should be ($T_i \Delta S$? $T_f \Delta S$? Why?). The temperature is undefined in the intermediate states (because of the sound waves), so we cannot integrate the infinitesimal form $T dS$. What is the heat term? $\endgroup$ – marlow Jan 16 '17 at 2:35
Prediction-based VM provisioning and admission control for multi-tier web applications
Adnan Ashraf,
Benjamin Byholm &
Ivan Porres
We present a prediction-based, cost-efficient Virtual Machine (VM) provisioning and admission control approach for multi-tier web applications. The proposed approach provides automatic deployment and scaling of multiple web applications on a given Infrastructure as a Service (IaaS) cloud. It monitors and uses collected resource utilization metrics itself and does not require a performance model of the applications or the infrastructure dynamics. The approach uses the OSGi component model to share VM resources among deployed applications, reducing the total number of required VMs. The proposed approach comprises three sub-approaches: a reactive VM provisioning approach called ARVUE, a hybrid reactive-proactive VM provisioning approach called Cost-efficient Resource Allocation for Multiple web applications with Proactive scaling (CRAMP), and a session-based adaptive admission control approach called adaptive Admission Control for Virtualized Application Servers (ACVAS). Performance under varying load conditions is guaranteed by automatic adjustment and tuning of the CRAMP and ACVAS parameters. The proposed approach is demonstrated in discrete-event simulations and is evaluated in a series of experiments involving synthetic as well as realistic load patterns.
The resource needs of web applications vary over time, depending on the number of concurrent users and the type of work performed. As the demand for an application grows, so does its demand for resources, until the demand for a key resource outgrows the supply and the performance of the application deteriorates. Users of an application starved for resources tend to notice this as increased latency and lower throughput for requests, or they might receive no service at all if the problem progresses further.
To handle multiple simultaneous users, web applications are traditionally deployed in a three-tiered architecture, where a computer cluster of fixed size represents the application server tier. This cluster provides dedicated application hosting to a fixed amount of users. There are two problems with this approach: firstly, if the amount of users grows beyond the predetermined limit, the application will become starved for resources. Secondly, while the amount of users is lower than this limit, the unused resources constitute waste.
A study by Vogels [36] showed that the underutilization of servers in enterprises is a matter of concern. This inefficiency is mostly due to application isolation: a consequence of dedicated hosting. Sharing of resources between applications leads to higher total resource utilization and thereby to less waste. Thus, the level of utilization can be improved by implementing what is known as shared hosting [35]. Shared hosting is already commonly used by web hosts to serve static content belonging to different customers from the same set of servers, as no sessions need to be maintained.
Cloud computing already allows us to alleviate the utilization problem by dynamically adding or removing available Virtual Machine (VM) instances at the infrastructure level. However, the problem remains to some extent, as Infrastructure as a Service (IaaS) providers operate at the level of VMs, which does not provide high granularity. This can be solved by operating at the Platform as a Service (PaaS) level instead. However, one problem still remains: resources cannot be immediately allocated or deallocated. In many cases, there exists a significant provisioning delay on the order of minutes.
Shared hosting of dynamic content also presents new challenges: capacity planning is complicated, as different types of requests might require varying amounts of a given resource. Application-specific knowledge is necessary for a PaaS provider to efficiently host complex applications with highly varying resource needs. When hosting third-party dynamic content in a shared environment that application-specific knowledge might be unavailable. It is also unfeasible for a PaaS provider to learn enough about all of the applications belonging to their customers.
Traditional performance models based on queuing theory try to capture the behavior of purely open or closed systems [25]. However, web applications often have workloads with sessions, exhibiting a partially-open behavior, which includes components from both the open and the closed model. Given a better performance model of an application, it might be possible to plan the necessary capacity, but the problem of obtaining said model remains.
If the hosted applications are seldom modified it might be feasible to automatically derive the necessary performance models by benchmarking each application in isolation [35]. This might apply to hosting first- or second-party applications. However, when hosting third-party applications under continuous development, they may well change frequently enough for this to be unfeasible.
Another problem is determining the number of VMs to have at a given moment. As one cannot provision fractions of a VM, the actual capacity demand will need to be quantized in one way or another. Figure 1 shows a demand and a possible quantization thereof. Overallocation implies an opportunity cost — underallocation implies lost revenue.
The actual capacity demand has to be quantized at a resolution determined by the capacity of the smallest VM available for provisioning. Overallocation means an opportunity cost, underallocation means lost revenue
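The quantization itself is a simple ceiling operation; the Python sketch below illustrates it (the per-VM capacity figure and the demand values are assumed examples, and in practice the demand estimate would come from the monitored utilization metrics).

import math

def required_vms(demand, vm_capacity):
    """Quantize a continuous capacity demand into a whole number of VMs."""
    if demand <= 0:
        return 0
    return math.ceil(demand / vm_capacity)

# Example: demand in requests/s, smallest VM assumed to handle 250 requests/s.
for demand in (100, 250, 600, 1400):
    vms = required_vms(demand, 250)
    overallocation = vms * 250 - demand   # capacity paid for but not used
    print(f"{demand:5d} req/s -> {vms} VM(s), over-allocated by {overallocation} req/s")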
Finally, there is also the issue of admission control. This is the problem of determining how many users to admit to a server at a given moment in time, so that said server does not become overloaded. Preventive measures are a good way of keeping server overload from occurring at all. This is traditionally achieved by only relying on two possible decisions: rejection or acceptance.
Once more, the elastic nature of the cloud means that we have more resources available at our discretion and can scale up to accommodate the increase in traffic. However, resource allocation still takes a considerable amount of time, due to the provisioning delay, and admitting too much traffic is an unattractive option, even if new resources will arrive in a while.
This article presents a prediction-based, cost-efficient VM provisioning and admission control approach for multi-tier web applications. The proposed approach provides automatic deployment and scaling of multiple simultaneous third-party web applications on a given IaaS cloud in a shared hosting environment. It monitors and uses resource utilization metrics and does not require a performance model of the applications or the infrastructure dynamics. The research applies to PaaS providers and large Software as a Service (SaaS) providers with multiple applications. We deal with stateful Rich Internet Applications (RIAs) over the Hypertext Transfer Protocol (HTTP).
The proposed approach integrates three different mechanisms. It provides a reactive VM provisioning approach called ARVUE [7], a hybrid reactive-proactive VM provisioning approach called Cost-efficient Resource Allocation for Multiple web applications with Proactive scaling (CRAMP) [8], and a session-based adaptive admission control approach called adaptive Admission Control for Virtualized Application Servers (ACVAS) [9]. Both ARVUE and CRAMP provide autonomous shared hosting of third-party Java Servlet applications on an IaaS cloud. However, CRAMP provides better responsiveness and results than the purely reactive scaling of ARVUE. We concluded that admission control might be able to reduce the risk of servers becoming overloaded. Therefore, the proposed approach augments VM provisioning with a session-based adaptive admission control approach called ACVAS. ACVAS implements per-session admission, which reduces the risk of over-admission. Furthermore, instead of relying only on rejection of new sessions, it implements a simple session deferment mechanism that reduces the number of rejected sessions while increasing session throughput. Thus, the admission controller can decide to admit, defer, or reject an incoming new session. Performance under varying load conditions is guaranteed by automatic adjustment and tuning of the CRAMP and ACVAS parameters. The proposed approach is demonstrated in discrete-event simulations and is evaluated in a series of experiments involving synthetic as well as realistic load patterns.
We proceed as follows. Related work section discusses important related works. Architecture section presents the system architecture. The proposed VM provisioning and admission control algorithms are described in Algorithms section. Implementation section presents some important implementation details. In Experimental evaluation section, we present experimental results before concluding in Conclusions section.
Due to the problems mentioned in Introduction section, existing works on PaaS solutions tend to use dedicated hosting on a VM-level for web applications. This gives the level of isolation needed to reliably host different applications without them interfering with each other, as resource management will be handled by the underlying operating system. However, this comes at the cost of disallowing resource sharing among instances.
There are many metrics available for measuring Quality of Service (QoS). A common metric is Round Trip Time (RTT), which is a measure of the time required for sending a request and receiving a response. This approach has a drawback in that different programs might have various expected processing times for requests of different types. This means that application-specific knowledge is required when using RTT as a QoS metric. This information might not be easy to obtain if an application is under constant development. Furthermore, when a server nears saturation, its response time grows exponentially. This makes it difficult to obtain good measurements in a high-load situation. For this reason, we use server Central Processing Unit (CPU) load average and memory utilization as the primary QoS metrics. An overloaded server will fail to meet RTT requirements.
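Both metrics can be sampled without any knowledge of the hosted applications. The following minimal sketch (Python, Linux-only) is an illustration rather than the monitoring code of ARVUE or CRAMP, and it assumes a kernel recent enough to report MemAvailable in /proc/meminfo.

import os

def cpu_load_average():
    """One-minute load average, normalized by the number of CPU cores."""
    one_minute, _, _ = os.getloadavg()
    return one_minute / os.cpu_count()

def memory_utilization():
    """Fraction of physical memory in use, read from /proc/meminfo."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])   # values are reported in kB
    return 1.0 - info["MemAvailable"] / info["MemTotal"]

print(f"normalized CPU load average: {cpu_load_average():.2f}")
print(f"memory utilization:          {memory_utilization():.2f}")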
Reactive scaling works by monitoring user load in the system and reacting to observed variations therein by making decisions for allocation or deallocation. In our previous work [1, 7], we built a prototype of an autonomous PaaS called ARVUE. It implements reactive scaling. However, in many cases, the reactive approach suffers in practice, due to delays of several minutes inherent in the provisioning of VMs [31]. This shortcoming is avoidable with proactive scaling.
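In its simplest form, such a rule acts only on the most recently observed utilization; the thresholds in the sketch below are illustrative placeholders rather than the values used in ARVUE.

def reactive_decision(utilization, upper=0.8, lower=0.3):
    """Purely reactive scaling rule based on the current observation only."""
    if utilization > upper:
        return "scale up"     # request an additional VM; it arrives minutes later
    if utilization < lower:
        return "scale down"   # release a surplus VM
    return "no action"

The provisioning delay is precisely why such a rule can react too late: by the time a newly requested VM is ready, the load that triggered it may already have overloaded the existing servers.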
Proactive scaling attempts to overcome the limitations of reactive scaling by forecasting future load trends and acting upon them, instead of directly acting on observed load. Forecasting usually has the drawback of added uncertainty, as it introduces errors into the system. The error can be mitigated by a hybrid approach, where forecast values are supplemented with error estimates, which affect a blend weight for observed and forecast values. We have developed a hybrid reactive-proactive VM provisioning algorithm called CRAMP [8].
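One plausible way to realize such a blend is sketched below (the function and the weighting rule are illustrative assumptions, not the exact CRAMP formulation): the smaller the recent prediction error, the more the scaling decision leans on the forecast.

def blend_utilization(observed, predicted, nrmse):
    """Blend observed and predicted utilization using the recent forecast error.

    nrmse is the normalized root mean square error of recent predictions,
    clamped to [0, 1]: a perfect predictor lets the forecast dominate, while a
    useless one makes the controller fall back to purely reactive behaviour.
    """
    weight = 1.0 - min(max(nrmse, 0.0), 1.0)
    return weight * predicted + (1.0 - weight) * observed

# Example: observed CPU utilization 0.55, one-interval forecast 0.80, NRMSE 0.25.
print(blend_utilization(0.55, 0.80, 0.25))   # 0.7375, leaning towards the forecast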
Admission control is a strategy for keeping servers from becoming overloaded. This is achieved by limiting the amount of traffic each server receives by means of an intermediate entity known as an admission controller. The admission controller may deny entry to fully utilized servers, thereby avoiding server overload. If a server were to become overloaded, all users of that server, whether existing or arriving, would suffer from deteriorated performance and possible Service-Level Agreement (SLA) violations.
Traditional admission control strategies have mostly been request-based, where admission control decisions would be made for each individual request. This approach is not appropriate for stateful web applications from a user experience point of view. If a request were to be denied in the middle of an active session, when everything was working well previously, the user would have a bad experience. Session-Based Admission Control (SBAC) is an alternative strategy, where the admission decision is made once for each new session and then enforced for all requests inside of a session [27]. This approach is better from the perspective of the user, as it should not lead to service being denied in the middle of a session. This approach has usually been implemented using interval-based on-off control, where the admission controller either admits or rejects all sessions arriving within a predefined time interval. This approach has a flaw in that servers may become overloaded if they accept too many requests in an admission interval, as the decisions are made only at interval boundaries. Per-session admission control avoids this problem by making a decision for each new session, regardless of when it arrives. We have developed ACVAS [9], a session-based admission control approach with per-session admission control. ACVAS uses SBAC with a novel deferment mechanism for sessions, which would have been rejected with the traditional binary choice of acceptance or rejection.
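The resulting per-session decision logic can be sketched as follows; the thresholds, names, and the exact deferment condition are illustrative assumptions rather than the precise ACVAS rules.

from enum import Enum

class Decision(Enum):
    ADMIT = "admit"
    DEFER = "defer"
    REJECT = "reject"

def admit_session(utilization, admit_threshold=0.7, reject_threshold=0.9,
                  scaling_in_progress=False):
    """Decide the fate of one arriving session from the predicted server load."""
    if utilization < admit_threshold:
        return Decision.ADMIT
    if utilization < reject_threshold or scaling_in_progress:
        # Near saturation: hold the session briefly instead of rejecting it,
        # for example until a newly provisioned VM becomes available.
        return Decision.DEFER
    return Decision.REJECT

print(admit_session(0.45))                             # Decision.ADMIT
print(admit_session(0.82))                             # Decision.DEFER
print(admit_session(0.95, scaling_in_progress=True))   # Decision.DEFER
print(admit_session(0.95))                             # Decision.REJECT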
VM provisioning approaches
Most of the existing works on VM provisioning for web-based systems can be classified into two main categories: plan-based approaches and control theoretic approaches [16, 29, 30, 33]. Plan-based approaches can be further classified into workload prediction approaches [6, 17, 31, 39] and performance dynamics model approaches [12, 15, 20–22, 24, 38, 40]. One common difference between all existing works discussed here and the proposed approach is that the proposed approach uses shared hosting. Another distinguishing characteristic of the proposed approach is that in addition to VM provisioning for the application server tier, it also provides dynamic scaling of multiple web applications. In ARVUE [1, 7], we used shared hosting with reactive resource allocation. In contrast, our proactive VM provisioning approach CRAMP [8] provides improved QoS with prediction-based VM provisioning.
Ardagna et al. [6] proposed a distributed algorithm for managing SaaS cloud systems that addresses capacity allocation for multiple heterogeneous applications. Raivio et al. [31] used proactive resource allocation for short message services in hybrid clouds. The main drawback of their approach is that it assumes server processing capacity in terms of messages per second, which is not a realistic assumption for HTTP traffic where different types of requests may require different amounts of processing time.
Zhang et al. [39] introduced a statistical-based resource allocation approach that performs load balancing on Physical Machines (PMs) by predicting VM resource demands. It uses statistical prediction and available resource evaluation mechanisms to make online resource allocation decisions. Gong et al. [17] presented a predictive resource scaling system, which leverages light-weight signal processing and statistical learning methods to predict resource demands of applications and adjusts resource allocations accordingly. Nevertheless, the main challenge in the prediction-based approaches is in making good prediction models that could ensure high prediction accuracy with low computational cost. In our proposed approach, CRAMP is a hybrid reactive-proactive approach. It uses a two-step prediction method [4, 5] with Exponential Moving Average (EMA) and a simple linear regression model [9, 26], which provides high prediction accuracy under soft real-time constraints. Moreover, it gives more or less weight to the predicted utilizations based on the Normalized Root Mean Square Error (NRMSE).
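The two-step idea can be reconstructed roughly as follows (a simplified sketch based on the description above, not the authors' implementation): the utilization samples are first smoothed with an EMA, a line is then fitted to the smoothed values and extrapolated a few control intervals ahead, and the NRMSE of recent one-step predictions is tracked so that the blend described earlier can weight predicted against observed utilization.

import numpy as np

def ema(samples, alpha=0.3):
    """Exponential moving average of a sequence of utilization samples."""
    smoothed = [samples[0]]
    for x in samples[1:]:
        smoothed.append(alpha * x + (1.0 - alpha) * smoothed[-1])
    return np.array(smoothed)

def predict_utilization(samples, steps_ahead=3, alpha=0.3):
    """Smooth with an EMA, fit a line, and extrapolate steps_ahead intervals."""
    y = ema(np.asarray(samples, dtype=float), alpha)
    x = np.arange(len(y))
    slope, intercept = np.polyfit(x, y, 1)      # simple linear regression
    return slope * (len(y) - 1 + steps_ahead) + intercept

def nrmse(predicted, observed):
    """Normalized root mean square error of past predictions."""
    predicted, observed = np.asarray(predicted), np.asarray(observed)
    rmse = np.sqrt(np.mean((predicted - observed) ** 2))
    spread = observed.max() - observed.min()
    return rmse / spread if spread > 0 else 0.0

history = [0.42, 0.45, 0.44, 0.50, 0.55, 0.58, 0.63, 0.66]
print(f"predicted utilization three intervals ahead: {predict_utilization(history):.2f}")

one_step = [predict_utilization(history[:i], steps_ahead=1) for i in range(4, len(history))]
print(f"NRMSE of recent one-step predictions: {nrmse(one_step, history[4:]):.2f}")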
TwoSpot [38] supports hosting of multiple web applications, which are automatically scaled up and down in a dedicated hosting environment. The scaling down is decentralized, which may lead to severe random drops in performance. Hu et al. [22] presented an algorithm for determining the minimum number of required servers, based on the expected arrival rate, service rate, and SLA. In contrast, the proposed approach does not require knowledge about the infrastructure or performance dynamics.
Chieu et al. [15] presented an approach that scales servers for a particular web application based on the number of active user sessions. However, the main challenge is in determining suitable threshold values on the number of user sessions. Carrera et al. [12] presented a utility-based web application placement approach to maximize application performance on clusters of PMs. Iqbal et al. [24] proposed an approach for multi-tier web applications, which uses response time and CPU utilization metrics to determine the bottleneck tier and then scales it by provisioning a new VM. Calinescu et al. [11] presented a tool-supported framework for QoS management and optimization of self-adaptive service-based systems. Zhao et al. [40] addressed the problem of minimizing resource rental cost for running elastic applications in the cloud while satisfying application-level QoS requirements. They proposed a deterministic resource rental planning model, which uses a mixed integer linear program to generate optimal rental decisions based on fixed cost parameters. They also presented a stochastic resource rental planning model that explicitly considers the price uncertainty of the Amazon Elastic Compute Cloud (EC2) spot instances in the rental decision making. However, they did not investigate cloud resource provisioning solutions for time-varying workloads.
Han et al. [21] proposed a reactive resource allocation approach to integrate VM-level scaling with a more fine-grained resource-level scaling. Similarly, Han et al. [20] presented a cost-aware, workload-adaptive reactive scaling approach for multi-tier cloud applications. In contrast, CRAMP supports hybrid reactive-proactive resource allocation with proportional and derivative factors to determine the number of VMs to provision.
Dutreilh et al. [16] and Pan et al. [29] used control theoretic models to design resource allocation solutions for cloud computing. Dutreilh et al. presented a comparison of static threshold-based and reinforcement learning techniques. Pan et al. used Proportional-Integral (PI)-controllers to provide QoS guarantees. Patikirikorala et al. [30] proposed a multi-model framework for implementing self-managing control systems for QoS management. The work is based on a control theoretic approach called the Multi-Model Switching and Tuning (MMST) adaptive control. Roy et al. [33] presented a look-ahead resource allocation algorithm based on the model predictive control. In comparison to the control theoretic approaches, our proposed approach also uses proportional and derivative factors, but it does not require knowledge about the performance models or infrastructure dynamics.
Admission control approaches
Admission control refers to the mechanism of restricting the incoming user load on a server in order to prevent it from becoming overloaded. Server overload prevention is important because an overloaded server fails to maintain its performance, which translates into a subpar service (higher response time and lower throughput) [19]. Thus, if an overloaded server keeps on accepting new user requests, then not only the new users, but also the existing users may experience a deteriorated performance.
The existing works on admission control for web-based systems can be classified according to the scheme presented in Almeida et al. [3]. For instance, Robertsson et al. [32] and Voigt and Gunningberg [37] are control theoretic approaches, while Huang et al. [23] and Muppala and Zhou [27] use machine learning techniques. Similarly, Cherkasova and Phaal [14], Almeida et al. [3], Chen et al. [13], and Shaaban and Hillston [34] are utility-based approaches.
Almeida et al. [3] proposed a joint resource allocation and admission control approach for a virtualized platform hosting a number of web applications, where each VM runs a dedicated web service application. The admission control mechanism uses request-based admission control. The optimization objective is to maximize the provider's revenue, while satisfying the customers' QoS requirements and minimizing the cost of resource utilization. The approach dynamically adjusts the fraction of capacity assigned to each VM and limits the incoming workload by serving only the subset of requests that maximize profits. It combines a performance model and an optimization model. The performance model determines future SLA violations for each web service class based on a prediction of future workloads. The optimization model uses these estimates to make the resource allocation and admission control decisions.
Cherkasova and Phaal [14] proposed an SBAC approach that uses the traditional on-off control. It supports four admission control strategies: responsive, stable, hybrid, and predictive. The hybrid strategy tunes itself to be more stable or more responsive based on the observed QoS. The proposed approach measures server utilizations during predefined time intervals. Using these measured utilizations, it computes predicted utilizations for the next interval. If the predicted utilizations exceed specified thresholds, the admission controller rejects all new sessions in the next time interval and only serves the requests from already admitted sessions. Once the predicted utilizations drop below the given thresholds, the server changes its policy for the next time interval and begins to admit new sessions again.
Chen et al. [13] proposed Admission Control based on Estimation of Service times (ACES), which differentiates and admits requests based on the amount of processing time they require. In ACES, admission of a request is decided by comparing the available computation capacity to the predetermined delay bound of the request. The service time estimation is based on an empirical expression, which is derived from an experimental study on a real web server. Shaaban and Hillston [34] proposed Cost-Based Admission Control (CBAC), which uses a congestion control technique. Rather than rejecting user requests at high load, CBAC uses a discount-charge model to encourage users to postpone their requests to less loaded time periods. However, if a user chooses to go ahead with the request in a high load period, then an extra charge is imposed on the user request. The model is effective for e-commerce web sites when more users place orders that involve monetary transactions. A disadvantage of CBAC is that it requires CBAC-specific web pages to be included in the web application.
Muppala and Zhou [27] proposed the Coordinated Session-based Admission Control (CoSAC) approach, which provides SBAC for multi-tier web applications with per-session admission control. CoSAC also provides coordination among the states of tiers with a machine learning technique using a Bayesian network. The admission control mechanism differentiates and admits user sessions based on their type. For example, browsing mix session, ordering mix session, and shopping mix session. However, it remains unclear how it determines the type of a particular session in the first place.
The on-off control in the SBAC approach of Cherkasova and Phaal [14] turns on or off the acceptance of the new sessions for an entire admission control interval. Therefore, the admission control decisions are made only at the interval boundaries and can not be changed within an interval. Thus, a drawback of the on-off control is that it is highly vulnerable to over-admission, especially when handling a bursty load, which may result in the overloading of the servers. To overcome this vulnerability of the on-off control, CoSAC [27] used per-session admission control. Our proposed admission control approach also implements SBAC with per-session admission control [9]. Thus, it makes an admission control decision for each new session.
Huang et al. [23] proposed admission control schemes for proportional differentiated services. It applies to services with different priority classes. The paper proposes two admission control schemes to enable Proportional Delay Differentiated Service (PDDS) at the application level. Each scheme is augmented with a prediction mechanism, which predicts the total maximum arrival rate and the maximum waiting time for each priority class based on the arrival rate in the current and last three measurement intervals. When a user request belonging to a specific priority class arrives, the admission control algorithm uses the time series predictor to forecast the average arrival rate of the class for the next interval, computes the average waiting time for the class for the next interval, and determines if the incoming user request is admitted to the server. If admitted, the client is placed at the end of the class queue.
Voigt and Gunningberg [37] proposed admission control based on the expected resource consumption of the requests, including a mechanism for service differentiation that guarantees low response time and high throughput for premium clients. The approach avoids overutilization of individual server resources, which are protected by dynamically setting the acceptance rate of resource-intensive requests. The adaptation of the acceptance rates (average number of requests per second) is done by using Proportional-Derivative (PD) feedback control loops. Robertsson et al. [32] proposed an admission control mechanism for a web server system with control theoretic methods. It uses a control theoretic model of a G/G/1 system with an admission control mechanism for nonlinear analysis and design of controller parameters for a discrete-time PI-controller. The controller calculates the desired admittance rate based on the reference value of average server utilization and the estimated or measured load situation (in terms of average server utilization). It then rejects those requests that could not be admitted.
All existing admission control approaches discussed above, except CBAC [34], have a common shortcoming in that they rely only on request rejection to avoid server overloading. However, CBAC has its own disadvantages. The discount-charge model of CBAC requires additional web pages to be included in the web application and it is only effective for e-commerce web sites that involve monetary transactions. In contrast, we introduce a simple mechanism to defer user sessions that would otherwise be rejected. In ACVAS, such sessions are deferred on an entertainment server, which sends a wait message to the user and then redirects the user session to an application server as soon as a new server is provisioned or an existing server becomes less loaded [9]. However, if the entertainment server also approaches its capacity limits, the new session is rejected. Therefore, for each new session request, the admission controller makes one of the three possible decisions: admit the session, defer the session, or reject the session.
Cherkasova and Phaal [14] defined a simple method for computing the predicted resource utilization, yielding predicted resource utilizations by assigning certain weights to the current and the past utilizations. Muppala and Zhou [27] used the EMA method to make utilization predictions. Huang et al. [23] used machine learning techniques called Support Vector Regression and Particle Swarm Optimization for time-series prediction. Shaaban and Hillston [34] assumed a repeating pattern of workload over a suitable time period. Therefore, in their approach, load in a future period is predicted from the cumulative load of the corresponding previous period. These related works clearly indicate that admission control augmented with prediction models tends to produce better results. Therefore, ACVAS also uses a prediction model. However, for efficient runtime decision making, it is essential to avoid prediction models which might require intensive computation, frequent updates to their parameters, or (off-line) training. Thus, ACVAS uses a two-step approach [4, 5], which has been designed to predict future resource loads under soft real-time constraints. The two-step approach consists of a load tracker and a load predictor. We use the EMA method for the load tracker and a simple linear regression model [26] for the load predictor [9].
The system architecture of the proposed VM provisioning and admission control approach is depicted in Fig. 2. It consists of the following components: a load balancer with an accompanying configuration file, the global controller, the admission controller, the cloud provisioner, the application servers containing local controllers, the load predictors, a busy service server, and an application repository.
System architecture of the proposed VM provisioning and admission control approach
The purpose of the external load balancer is to distribute the workload evenly throughout the system, while the admission controller is responsible for admitting users, when deemed possible. The cloud provisioner is also an external component, which represents the control service of the underlying IaaS provider. Application servers are dynamically provisioned VMs belonging to the underlying IaaS cloud, capable of running multiple concurrent applications contained in an application repository.
The purpose of the load balancer is to distribute the workload among the available application servers. When an application request arrives at the load balancer, it gets redirected to a suitable server according to the current configuration. A request for an application not deployed at the moment is briefly sent to a server tasked with entertaining the user and showing that the request is being processed until the application has been successfully deployed, after which it is delivered to the correct server. This initial deployment of an application will take a much longer time than subsequent requests, currently on the order of several seconds.
The global controller is responsible for managing the cluster by monitoring its constituents and reacting to changes in the observed parameters, as reported by the local controllers. It can be viewed as a control loop that implements the VM provisioning algorithms described in Algorithms section.
The admission controller is responsible for admitting users to application servers. It supplements the load balancer in ensuring that the servers do not become overloaded by deciding whether to admit, defer, or reject traffic. It makes admission control decisions per session, not per request. This allows for a smoother user experience in a stateful environment, as a user of an application would not enjoy suddenly having requests to the application denied, when everything was working fine a moment ago. The admission controller implements per-session admission control. Unlike the traditional on-off approach, which makes admission control decisions on an interval basis, the per-session admission approach is not as vulnerable to sudden traffic fluctuations. The on-off approach can lead to servers becoming overloaded if they are set to admit traffic and a sudden traffic spike occurs [9]. The admission control decisions are based on prediction of future load trends combined with server health monitoring, as explained in Admission control section.
The cloud provisioner is an external component, which represents the control service of the underlying IaaS provider. The busy service acts as a default service, which is used whenever the actual service is unavailable. The application servers are dynamically provisioned VMs belonging to the underlying IaaS cloud, capable of concurrently running multiple applications inside an Open Services Gateway initiative (OSGi) environment [28].
Application bundles are contained in an application repository. When an application is deployed to a server, the server fetches the bundle from the repository. This implies that the repository is shared among application servers. A newly provisioned application server is assigned an application repository by the global controller.
The VM provisioning algorithms used by the global controller constitute a hybrid reactive-proactive PD-controller [8]. They implement proportional scaling augmented with derivative control in order to react to changes in the health of the system [7]. The server tier can be scaled independently of the application tier in a shared hosting environment. The VM provisioning algorithms are supplemented by a set of allocation policies. The prototype currently supports the following policies: lowest memory utilization, lowest CPU load, least concurrent sessions, and newest server first. In addition to this, we have also developed an admission control algorithm [9]. A summary of the concepts and notations used to describe the VM provisioning algorithms is available in Table 1. The additional concepts and notations for the admission control algorithm are provided in Table 2.
Table 1 Summary of VM provisioning concepts and their notation
Table 2 Additional concepts and notation for admission control
The input variables are average CPU load and memory usage. Average CPU load is the average Unix-like system load, which is based on the queue length of runnable processes, divided by the number of CPU cores present.
The VM provisioning algorithms have been designed to prevent oscillations in the size of the application server pool. There are several motivating factors behind this choice. Firstly, provisioning VMs takes substantial time. Combined with frequent scaling operations, this may lead to bad performance [38]. Secondly, usage-based billing requires the time to be quantized at some resolution. For example, Amazon EC2 bases billing on full used hours. Therefore, it might not make sense to terminate a VM until it is close to a full billing hour, as it is impossible to pay for less than an entire hour. Thus, no scaling actions are taken until previous operations have been completed. This is why an underutilized server is terminated only after being consistently underutilized for at least \(U_{CT}\) consecutive iterations.
The memory usage metric \(M(s,k)\) for a server s at discrete time k is given in (1). It is based on the amount of free memory \(mem_{free}\), the size of the disk cache \(mem_{cache}\), the buffers \(mem_{buf}\), and the total memory size \(mem_{total}\). The disk cache \(mem_{cache}\) is excluded from the amount of used memory, as the underlying operating system is at liberty to use free memory for such purposes as it sees fit. It will automatically be reduced as the demand for memory increases. The goal is to keep \(M(s,k)\) below the server memory utilization upper threshold \(M_{US}\). Likewise, the memory usage metric for an application a at discrete time k is defined as \(M(a,k)\), which is the amount of memory used by the application deployment plus the memory used by the user sessions divided by the total memory size \(mem_{total}\).
$$ {}M(s,k) = \frac{mem_{total} - ({mem}_{free} + {mem}_{buf} + {mem}_{cache})} {mem_{total}} $$
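As an illustration, the following Python sketch computes this metric from hypothetical memory readings; the variable names are illustrative and do not correspond to the prototype's internal interfaces:

```python
def memory_usage(mem_total, mem_free, mem_buf, mem_cache):
    """Server memory usage metric M(s, k) as in Eq. (1).

    The disk cache and buffers are treated as reclaimable and are
    therefore excluded from the amount of used memory.
    """
    used = mem_total - (mem_free + mem_buf + mem_cache)
    return used / mem_total

# Example: 8192 MiB total, 1024 MiB free, 512 MiB buffers, 2560 MiB cache -> 0.5
print(memory_usage(8192, 1024, 512, 2560))
```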
The proposed approach maintains a fixed minimum number of application servers, known as the base capacity \(N_B\). In addition, it also maintains a dynamically adjusted number of additional application servers \(N_A(k)\), which is computed as in (2), where the aggressiveness factor \(A_A \in [0,1]\) restricts the additional capacity to a fraction of the total capacity, \(S(k)\) is the set of servers at time k, and \(S_{over}(k)\) is the set of overloaded servers at time k. This extra capacity is needed to account for various delays and errors, such as VM provisioning time and sampling frequency. For example, \(A_A = 0.2\) restricts the maximum number of additional application servers to 20 % of the total \(|S(k)|\).
$$ {}N_{A}(k) \,=\, \left\{ \begin{array}{ll} \left\lceil |S(k)|\cdot A_{A} \right\rceil, & \text{if~} |S(k)| - |S_{over}(k)| = 0\\ \left\lceil \frac{|S(k)|}{|S(k)| - |S_{over}(k)|} \cdot A_{A} \right\rceil, & \text{otherwise} \end{array}\right. $$
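A minimal Python sketch of this computation, assuming only the server counts are known, could look as follows; the default aggressiveness factor is the example value from the text:

```python
import math

def additional_capacity(num_servers, num_overloaded, a_a=0.2):
    """Additional capacity N_A(k) as in Eq. (2).

    a_a is the aggressiveness factor A_A in [0, 1]; the default of 0.2
    corresponds to the example given in the text.
    """
    if num_servers - num_overloaded == 0:
        return math.ceil(num_servers * a_a)
    return math.ceil(num_servers / (num_servers - num_overloaded) * a_a)
```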
The number of VMs to provision \(N_P(k)\) is determined by (3), where \(w_p \in [0,1]\) is a real number called the weighting coefficient for VM provisioning. It balances the influence of the proportional factor \(P_P(k)\) relative to the derivative factor \(D_P(k)\). The proportional factor \(P_P(k)\) given by (4) uses a constant aggressiveness factor for VM provisioning \(A_P \in [0,1]\), which determines how many VMs to provision. The derivative factor \(D_P(k)\) is defined by (5). It observes the change in the total number of overloaded servers between the previous and the current iteration.
$$\begin{array}{*{20}l} N_{P}(k) &= \lceil w_{p} \cdot P_{P}(k) + (1 - w_{p}) \cdot D_{P}(k) \rceil \end{array} $$
$$\begin{array}{*{20}l} P_{P}(k) &= |S_{over}(k)| \cdot A_{P} \end{array} $$
$$\begin{array}{*{20}l} D_{P}(k) &= |S_{over}(k)| - |S_{over}(k - 1)| \end{array} $$
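The provisioning decision can be sketched in a few lines of Python; the aggressiveness factor used as a default here is purely illustrative, while \(w_p = 0.5\) is the default mentioned later in the text:

```python
import math

def vms_to_provision(overloaded_now, overloaded_prev, w_p=0.5, a_p=1.0):
    """Number of VMs to provision N_P(k) as in Eqs. (3)-(5).

    overloaded_now and overloaded_prev are |S_over(k)| and |S_over(k-1)|.
    """
    p_p = overloaded_now * a_p                     # proportional factor, Eq. (4)
    d_p = overloaded_now - overloaded_prev         # derivative factor, Eq. (5)
    return math.ceil(w_p * p_p + (1 - w_p) * d_p)  # Eq. (3)
```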
The number of servers to terminate \(N_T(k)\) is computed as in (6). It uses a weighting coefficient for VM termination \(w_t \in [0,1]\), similar to \(w_p\) in (3). The currently required base capacity \(N_B\) and additional capacity \(N_A(k)\) have to be taken into account. The proportional factor for termination \(P_T(k)\) is calculated as in (7). Here \(A_T \in [0,1]\), the aggressiveness factor for VM termination, works like \(A_P\) in (4). Finally, the derivative factor for termination \(D_T(k)\) is given by (8), which observes the change in the number of long-term underutilized servers between the previous and the current iteration.
$$\begin{array}{*{20}l} N_{T}(k) &\,=\, \lceil w_{t} \cdot P_{T}(k)\! +\! (1 \,-\, w_{t}) \cdot D_{T}(k) \rceil - N_{B} - N_{A}(k) \end{array} $$
$$\begin{array}{*{20}l} P_{T}(k) &= |S_{lu}(k)| \cdot A_{T} \end{array} $$
$$\begin{array}{*{20}l} D_{T}(k) &= |S_{lu}(k)| - |S_{lu}(k - 1)| \end{array} $$
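A corresponding sketch for the termination decision is given below; clamping a negative result to zero is our own assumption, and the aggressiveness factor default is illustrative:

```python
import math

def vms_to_terminate(lu_now, lu_prev, n_b, n_a, w_t=0.75, a_t=1.0):
    """Number of VMs to terminate N_T(k) as in Eqs. (6)-(8).

    lu_now and lu_prev are |S_lu(k)| and |S_lu(k-1)|, n_b is the base
    capacity N_B and n_a the additional capacity N_A(k).
    """
    p_t = lu_now * a_t      # proportional factor, Eq. (7)
    d_t = lu_now - lu_prev  # derivative factor, Eq. (8)
    n_t = math.ceil(w_t * p_t + (1 - w_t) * d_t) - n_b - n_a  # Eq. (6)
    return max(n_t, 0)      # never terminate a negative number of servers
```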
Load Prediction
Prediction is performed with a two-step method [4, 5] based on EMA, which filters the monitored resource trends, producing a smoother curve. EMA is the weighted mean of the n samples in the past window, where the weights decrease exponentially. Figure 3 illustrates an EMA over a past window of size n=20, where less weight is given to old samples when computing the mean in each measure.
Example of EMA over a past window of size n=20, where less weight is given to old samples when computing the mean in each measure
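As a concrete illustration, a simple EMA over a past window can be sketched as follows; the smoothing constant \(\alpha = 2/(n+1)\) is a common convention assumed here for illustration only and may differ from the prototype's parametrisation:

```python
def ema(samples, n=20):
    """Exponential moving average over the past window of n samples.

    Older samples receive exponentially smaller weights; alpha is the
    assumed smoothing constant.
    """
    alpha = 2.0 / (n + 1)
    window = samples[-n:]
    value = window[0]
    for x in window[1:]:
        value = alpha * x + (1 - alpha) * value
    return value

cpu_load = [0.42, 0.45, 0.51, 0.48, 0.55, 0.61, 0.58, 0.64]
print(ema(cpu_load))
```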
As we use a hybrid reactive-proactive VM provisioning algorithm, there is a need to blend the measured and predicted values. This is done through linear interpolation [9] with the weights \(w_c\) and \(w_m\) [8], the former for CPU load average and the latter for memory usage. In the current implementation, each of these weights is set to the NRMSE of the predictions so that lower prediction error will favor predicted values over observed values. The NRMSE calculation is given by (9), where \(y_i\) is the latest measured utilization, \(\hat{y_{i}}\) is the latest predicted utilization, n is the number of observations, and max is the maximum value over both the measured and the predicted utilizations in the current interval, while min is defined analogously. More details of our load prediction approach are provided in [8, 9].
$$ NRMSE = \frac{\sqrt{\frac{1}{n} \sum_{i = 1}^{n} (y_{i} - \hat{y_{i}})^{2}}}{max - min} $$
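The following Python sketch illustrates the NRMSE computation and a simplified blending of measured and predicted utilizations; the exact blending used in the prototype is described in [8, 9]:

```python
import math

def nrmse(measured, predicted):
    """Normalized root mean square error of the predictions, Eq. (9)."""
    n = len(measured)
    rmse = math.sqrt(sum((y - yh) ** 2 for y, yh in zip(measured, predicted)) / n)
    values = list(measured) + list(predicted)
    spread = max(values) - min(values)
    return rmse / spread if spread else 0.0

def blend(measured_now, predicted_now, error):
    """Blend measured and predicted utilization by linear interpolation.

    The weight equals the prediction error, so a lower error shifts the
    blend towards the predicted value, as described above.
    """
    w = min(max(error, 0.0), 1.0)
    return w * measured_now + (1 - w) * predicted_now
```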
The server tier
The server tier consists of the application servers, which can be dynamically added to or removed from the cluster. The VM provisioning algorithm for the application server tier is presented in Algorithm 1. At each sampling interval k, the global controller retrieves the performance metrics from each of the local controllers, evaluates them and decides whether or not to take an action. The set of application servers is partitioned into disjoint subsets according to the current state of each server. The possible server states are: overloaded, non-overloaded, underutilized, and long-term underutilized.
The algorithm starts by partitioning the set of application servers into a set of overloaded servers \(S_{over}(k)\) and a set of non-overloaded servers \(S_{\neg over}(k)\) according to the supplied threshold levels (\(C_{US}\) and \(M_{US}\)) of the observed input variables: memory utilization and CPU load (lines 2–4). A server is overloaded if the utilization of any resource exceeds its upper threshold value. All other servers are considered to be non-overloaded (line 6). The applications running on overloaded servers are added to a set of overloaded applications \(A_{over}(k)\) to be deployed on any available non-overloaded application servers as per the allocation policy for applications to servers (line 5). If the number of overloaded application servers exceeds the threshold level, a proportional amount of virtualized application servers is provisioned (line 13) and the overloaded applications are deployed to the new servers as they become available (lines 16–18).
The server tier is scaled down by constructing a set of underutilized servers \(S_u(k)\) (line 20) and a set of long-term underutilized servers \(S_{lu}(k)\) (line 21), where servers are deemed idle if their utilization levels lie below the given lower thresholds (\(C_{LS}\) and \(M_{LS}\)). Long-term underutilized servers are servers that have been consistently underutilized for more than a given number of iterations \(I_{CTS}\). When the number of long-term underutilized servers exceeds the base capacity \(N_B\) plus the additional capacity \(N_A(k)\) (line 22), the remainder are terminated after their active sessions have been migrated to other servers (lines 23–27).
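To summarize the control flow, a simplified, self-contained sketch of one server-tier iteration is given below. It takes the provisioning count and capacities computed as sketched earlier as inputs; the Server fields, thresholds, and idle-iteration limit are illustrative assumptions rather than the prototype's actual values:

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Server:
    name: str
    cpu: float                 # CPU load average
    mem: float                 # memory usage metric M(s, k)
    idle_iterations: int = 0   # consecutive underutilized iterations
    apps: Set[str] = field(default_factory=set)

def server_tier_iteration(servers: List[Server], n_p: int, n_b: int, n_a: int,
                          c_us=0.8, m_us=0.8, c_ls=0.2, m_ls=0.2, i_cts=5):
    """One simplified iteration of the server-tier loop (Algorithm 1)."""
    # Partition into overloaded and non-overloaded servers.
    s_over = [s for s in servers if s.cpu > c_us or s.mem > m_us]
    a_over = {a for s in s_over for a in s.apps}   # apps to redeploy elsewhere

    provision = n_p if s_over else 0               # scale up only when needed

    # Long-term underutilized servers beyond N_B + N_A(k) are terminated
    # after their active sessions have been migrated.
    s_lu = [s for s in servers
            if s.cpu < c_ls and s.mem < m_ls and s.idle_iterations > i_cts]
    surplus = max(len(s_lu) - (n_b + n_a), 0)
    terminate = [s.name for s in s_lu[:surplus]]

    return a_over, provision, terminate
```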
The application tier
Applications can be scaled to run on many servers according to their individual demand. Due to memory constraints, the naïve approach of always running all applications on all servers is unfeasible. Algorithm 2 shows how individual applications are scaled up and down according to their resource utilization. The set of applications is partitioned into disjoint subsets according to the current state of each application. The possible application states are: overloaded, non-overloaded, inactive and long-term inactive.
An application is overloaded when it uses more resources than allotted (line 2). Each overloaded application \(a \in A_{over}(k)\) is deployed to another server according to the allocation policy for applications to servers (lines 4–6). When an application has been running on a server without exceeding the lower utilization thresholds (\(C_{LA}\) and \(M_{LA}\)), possible active sessions are migrated to another deployment of the application and then said application is undeployed (lines 8–15). This makes the memory available to other applications that might need it.
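A simplified sketch of one application-tier iteration is shown below; the field names, thresholds, and idle-iteration count are illustrative assumptions:

```python
def application_tier_iteration(deployments, c_la=0.1, m_la=0.1, i_cta=5):
    """One simplified iteration of the application-tier loop (Algorithm 2).

    Each deployment is a dict such as {"app": "a1", "server": "s3",
    "cpu": 0.02, "mem": 0.01, "over_quota": False, "idle_iterations": 7}.
    """
    scale_up, undeploy = [], []
    for d in deployments:
        if d["over_quota"]:
            # Overloaded application: deploy it on another server
            # according to the application-to-server allocation policy.
            scale_up.append(d["app"])
        elif (d["cpu"] < c_la and d["mem"] < m_la
              and d["idle_iterations"] > i_cta):
            # Long-term inactive deployment: migrate sessions, then undeploy.
            undeploy.append((d["app"], d["server"]))
    return scale_up, undeploy
```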
The admission control algorithm is given as Algorithm 3. It continuously checks for new sessions \(se_n(k)\) or deferred sessions \(se_d(k)\) (line 1). If any are found (line 2), it updates the weighting coefficient \(w \in [0,1]\), representing the weight given to predicted and observed utilizations (line 3). If \(w = 1.0\), no predictions are calculated (lines 5–6). The prediction process uses a two-step approach, providing filtered input data to the predictor [5]. We currently perform automatic adjustment and tuning in a similar fashion to Cherkasova and Phaal [14], where the weighting coefficient w is defined according to (10). It is based on the following metrics: the number of aborted sessions \(|se_a(k)|\), the number of deferred sessions \(|se_d(k)|\), the number of rejected sessions \(|se_r(k)|\), and the number of overloaded servers \(|S_{over}(k)|\).
$$ {\begin{aligned} w = \left\{ \begin{array}{ll} 1, & \text{if } |{se}_{a}(k)| > 0 \vee |{se}_{d}(k)| > 0 \vee |{se}_{r}(k)| > 0\\ 1, & \text{if } |S_{over}(k)| > 0\\ max(0.1, w - 0.01), & \text{otherwise} \end{array}\right. \end{aligned}} $$
For each iteration, a bit more preference is given to the predicted values, up to the limit of 90 %. However, as soon as a problem is detected, full preference is given to the observed values, as the old predictions cannot be trusted. This should help in reducing lag when there are sudden changes in the load trends after long periods of good predictions.
If the algorithm finds servers in good condition (line 12), the session is admitted (lines 13–17); otherwise, the session is deferred to the busy service server (line 20). The session is rejected only if the busy service server is also overloaded (line 22).
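The weighting rule of (10) and the three-way admission decision can be sketched as follows; the health checks themselves (comparing blended utilizations against the upper thresholds) are omitted for brevity:

```python
def update_weight(w, aborted, deferred, rejected, overloaded_servers):
    """Weighting coefficient w between observed and predicted values, Eq. (10).

    Full preference is given to observed values as soon as a problem is
    detected; otherwise predictions are trusted a little more each
    iteration, down to a floor of 0.1 (90 % preference for predictions).
    """
    if aborted > 0 or deferred > 0 or rejected > 0 or overloaded_servers > 0:
        return 1.0
    return max(0.1, w - 0.01)

def admit_session(app_server_available, busy_server_available):
    """Three-way admission decision of Algorithm 3 (structure only)."""
    if app_server_available:
        return "admit"
    if busy_server_available:
        return "defer"
    return "reject"
```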
In this section, we present some important implementation details.
The prototype implementations of ARVUE [1, 7] and CRAMP [8] use the free, lightweight load balancer HAProxy1, which can act as a reverse proxy in either of two modes: Transmission Control Protocol (TCP) or HTTP, which correspond to layers 4 and 7 in the Open Systems Interconnection (OSI) model. We use the HTTP mode, as ARVUE and CRAMP are designed for stateful web applications over HTTP.
HAProxy includes powerful logging capabilities using the Syslog standard. It also supports session affinity, the ability to direct requests belonging to a single session to the same server, and Access Control Lists (ACLs), even in combination with Secure Socket Layer (SSL) since version 1.5.
Session affinity is supported by cookie rewriting or insertion. As the prototype implementations of ARVUE and CRAMP are designed for Vaadin applications [18], which use the Java Servlet technology, applications already use the JSESSIONID cookie, which uniquely identifies the session the request belongs to. Thus, HAProxy only has to intercept the JSESSIONID cookie sent from the application to the client and prefix it with the identifier of the backend in question. Incoming JSESSIONID cookies are similarly intercepted and the inserted prefix is removed before they are sent to the applications.
HAProxy also comes with a built-in server health monitoring system, based on making requests to servers and measuring their response times. However, this system is currently not in use, as the proposed approach does its own health monitoring by observing different metrics.
The load balancer is dynamically reconfigured by the global controller as the properties of the cluster change. When an application is deployed, the load balancer is reconfigured with a mapping between a Uniform Resource Identifier (URI) that uniquely identifies the application and a set of application servers hosting the application, by means of an ACL, a usage declaration and a backend list. Weights for servers are periodically recomputed according to the health of each server, with higher weights assigned to less loaded servers.
The weights are integers in the range \([0, W_{MAX}]\), where higher values mean higher priority. In the case of HAProxy, \(W_{MAX} = 255\). The value 0 is special in that it effectively prevents the server from receiving any new requests. This is explained by the weighting algorithm in Algorithm 4, which distributes the load among the servers so that each server receives a number of requests proportional to its weight divided by the sum of all the weights. This is a simple mapping of the current load to the weight interval. Here, \(S(k)\) is the set of servers at discrete time k, \(C_w(s,k)\) is the weighted load average of server s at time k, \(C(s,k)\) is the measured load average of server s at time k, and similarly \(\hat{C}(s,k)\) is the predicted load average of server s at time k. \(w_c \in [0,1]\) is the weighting coefficient for CPU load average, \(C_{US}\) is the server load average upper threshold, and \(W(s,k)\) is the weight of server s at time k for load balancing. Thus, the algorithm obtains \(C(s,k)\) and \(\hat{C}(s,k)\) of each server \(s \in S(k)\) and uses them along with \(w_c\) to compute \(C_w(s,k)\) of each server (line 1). Afterwards, it uses \(C_w(s,k)\) to compute \(W(s,k)\) of each server s (lines 2–10). The notation used in the algorithm is also defined in Table 1 in Algorithms section.
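As an illustration, the following sketch blends the measured and predicted load averages and maps the result onto the weight range; the linear mapping shown here is a simplification of Algorithm 4:

```python
def server_weight(c_measured, c_predicted, w_c, c_us=0.8, w_max=255):
    """Load-balancing weight W(s, k) of a server, in the spirit of Algorithm 4.

    C_w(s, k) blends measured and predicted load averages; the linear
    mapping onto [0, w_max] is a simplification of the actual algorithm.
    """
    c_w = w_c * c_measured + (1 - w_c) * c_predicted
    if c_w >= c_us:
        return 0  # overloaded servers receive no new requests
    return round((1 - c_w / c_us) * w_max)
```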
Cloud provisioner
The global controller communicates with the cloud provisioner through its custom Application Programming Interface (API) in order to realize the decisions on how to manage the server tier. Proper application of the façade pattern decouples the proposed approach from the underlying IaaS provider. The prototypes [1, 7, 8, 10] currently support Amazon EC2 in homogeneous configurations. For now, we only provision m1.small instances, as our workloads are quite small, but the instance type can be changed easily. Provisioning VMs of different capacity could eventually lead to better granularity and lower operating costs. Support for more providers and heterogeneous configurations is planned for the future.
Busy service server
The busy service amounts to a polling session, notifying the user when the requested service is available and showing a waiting message or other distraction until then. Using server push technology or WebSockets, this waiting functionality could be moved to the client instead.
The prototype implementations of ARVUE [1, 7, 10] and CRAMP [8] use Apache Felix2, which is a free implementation of the OSGi R4 Service Platform and other related technologies.
The OSGi specifications were originally intended for embedded devices, but have since outgrown their original purpose. They provide a dynamic component model, addressing a major shortcoming of Java.
Each application server has a local controller, responsible for monitoring the state of said server. Metrics such as CPU load and memory usage of both the VM and of the individual deployed applications are collected and fed to the global controller for further processing. The global controller delegates application-tier tasks such as deployment and undeployment of bundles to the local controllers, which are responsible for notifying the OSGi environment of any actions to take.
The predictor from CRAMP [8] is also connected to each application server, making predictions based on the values obtained through the two-step prediction process. The prototype implementation computes an error estimate based on the NRMSE of predictions in the past window and uses that as a weighting parameter when determining how to blend the predicted and observed utilization of the monitored resources, as explained in Load Prediction section.
Application repository
The applications are self-contained OSGi bundles, which allows for dynamic loading and unloading of bundles at the discretion of the local controller. The service-oriented nature of the OSGi platform suits this approach well. A bundle is a collection of Java classes and resources together with a manifest file MANIFEST.MF augmented with OSGi headers.
Experimental evaluation
To validate and evaluate the proposed VM provisioning and admission control approaches, we developed discrete-event simulations for ARVUE, CRAMP, and ACVAS and performed a series of experiments involving synthetic as well as realistic load patterns. The synthetic load pattern consists of two artificial load peaks, while the realistic load pattern is based on real world data. In this section, we present experimental results based on the discrete-event simulations.
VM provisioning experiments
This section presents some of the simulations and experiments that have been conducted to validate and evaluate ARVUE and CRAMP VM provisioning algorithms. The goal of these experiments was to test the two approaches and to compare their results.
In order to generate workload, a set of application users was needed. In our discrete-event simulations, we developed a load generator to emulate a given number of user sessions making HTTP requests on the web applications. We also constructed a set of 100 simulated web applications of varying resource needs, designed to require a given amount of work on the hosting server(s). When a new HTTP request arrived at an application, the application would execute a loop for a number of iterations, corresponding to the empirically derived time required to run the loop on an unburdened server. As the objective of the VM provisioning experiments was to compare the results of ARVUE and CRAMP, admission control was not used in these experiments.
Design and setup
We performed two experiments with the proposed VM provisioning approaches: ARVUE and CRAMP. The first experiment used a synthetic load pattern, which was designed to scale up to 1000 concurrent sessions in two peaks with a period of no activity between them. In the second peak, the arrival rate was twice as high as in the first peak.
The second experiment was designed to simulate a load representing a workload trace from a real web-based system. The traces were derived from Squid proxy server access logs obtained from the IRCache project 3. As the access logs did not include session information, we defined a session as a series of requests from the same originating Internet Protocol (IP)-address, where the time between individual requests was less than 15 minutes. We then produced a histogram of sessions per second and used linear interpolation and scaling by a factor of 30 to obtain the load traces used in the experiment.
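The session-counting step can be sketched as follows; the log-parsing details are omitted and the 15-minute gap is taken from the description above:

```python
def count_sessions(requests, gap=15 * 60):
    """Count sessions in a list of (timestamp, client_ip) pairs.

    A session is a series of requests from the same IP address where the
    time between consecutive requests is less than `gap` seconds.
    """
    last_seen = {}
    sessions = 0
    for ts, ip in sorted(requests):
        if ip not in last_seen or ts - last_seen[ip] >= gap:
            sessions += 1
        last_seen[ip] = ts
    return sessions

reqs = [(0, "10.0.0.1"), (60, "10.0.0.1"), (4000, "10.0.0.1"), (30, "10.0.0.2")]
print(count_sessions(reqs))  # -> 3
```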
In a real-world application, there would be different kinds of requests available, requiring different amounts of CPU time. Take the simple case of a web shop: there might be one class of requests for adding items to the shopping basket, requiring little CPU time, and another class of requests requiring more CPU time, like computing the sum total of the items in the shopping basket. Users of an application would make a number of varying requests through their interactions with the application. After each request, there would be a delay while the user was processing the newly retrieved information, like when presented with a new resource. In both experiments, each user was initially assigned a random application and a session duration of 15 minutes. Applications 1 to 10 were assigned to 50 % of all users, applications 11 to 20 were used by 25 %, applications 21 to 30 received 20 % of all users, while the remaining 5 % was shared among the other 70 applications. Each user made requests to its assigned application, none of which was to require more than 10 ms of CPU time on an idle server. In order to emulate the time needed for a human to process the information obtained in response to a request, the simulated users waited up to 20 s between requests. All random variables were uniformly distributed. This means they do not fit the Markovian model.
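For illustration, the assignment of simulated users to applications according to this mix can be sketched as follows:

```python
import random

def assign_application(rng=random):
    """Pick an application for a simulated user according to the stated mix:
    50 % on applications 1-10, 25 % on 11-20, 20 % on 21-30, and the
    remaining 5 % spread over applications 31-100."""
    r = rng.random()
    if r < 0.50:
        return rng.randint(1, 10)
    if r < 0.75:
        return rng.randint(11, 20)
    if r < 0.95:
        return rng.randint(21, 30)
    return rng.randint(31, 100)

random.seed(1)
print([assign_application() for _ in range(5)])
```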
The sampling period was k=10 s. The upper threshold for server load average \(C_{US}\) and the upper threshold for server memory utilization \(M_{US}\) were both set to 0.8. These values are considered reasonable for efficient server utilization [2, 25].
The application-server allocation policy used was lowest load average. The session-server allocation policy was also set to lowest load average, realized through the weighted round-robin policy of HAProxy, where the weights were assigned by the global controller according to the load averages of the servers, as described in Load balancer section.
The weighting coefficient for VM provisioning \(w_p\) was set to its default value 0.5, which gives equal weight to \(P_P(k)\) and \(D_P(k)\). A more suitable value for this coefficient can be determined experimentally. We have used \(w_p = 0.5\) in all our experiments so far. Similarly, the default value for the weighting coefficient for VM termination \(w_t\) is 0.75, which gives more weight to the proportional factor for termination \(P_T(k)\).
Results and analysis
The results from the VM provisioning experiment with the synthetic load pattern are shown in Fig. 4 a and b. The depicted observed parameters are: number of servers, average response time, average server CPU load, average memory utilization, and applications per server. The upper half of Table 3 contains a summary of the results.
Results of VM provisioning experiment with the synthetic load pattern. In this experiment, both ARVUE and CRAMP had similar results, except that CRAMP used fewer servers
Table 3 Results from VM provisioning experiments
The results from the two approaches are compared based on the following criteria: number of servers used, average CPU load average, maximum CPU load average, average memory utilization, maximum memory utilization, average RTT, and maximum RTT. The resource utilizations are ranked according to the utilization error, where over-utilization is considered infinitely bad.
In Fig. 4 a and b, the number of servers plots show that the number of application servers varied in accordance with the number of simultaneous user sessions. In this experiment, ARVUE used a maximum of 16 servers, whereas CRAMP used no more than 14 servers. The RTT remained quite stable around 20 ms, as expected. The server CPU load average and the memory utilization never exceeded 1.0.
The results from the experiment with the synthetic load pattern indicate that the system is working as intended. The use of additional capacity seems to alleviate the problem of servers becoming overloaded due to long reaction times. The conservative VM termination policy of the proposed approach explains why the decrease in the number of servers occurs later than the decrease in the number of sessions. As mentioned in Algorithms section, one of the objectives of the proposed VM provisioning algorithms is to prevent oscillations in the number of application servers used. The results indicate that this was achieved.
Figure 5 a and b present the results of the VM provisioning experiment with the realistic load pattern. The results are also presented in the lower half of Table 3.
Results of VM provisioning experiment with the realistic load pattern. In this experiment, CRAMP used half as many servers as ARVUE, but it still provided similar performance
In this experiment, ARVUE used a maximum of 16 servers, whereas CRAMP used no more than 8 servers. In the case of ARVUE, the maximum response time was 21.3 ms and the average response time was 12.63 ms. In contrast, CRAMP had a maximum response time of 27.43 ms and an average response time of 14.7 ms. For both ARVUE and CRAMP, the server CPU load average and the memory utilization never exceeded 1.0.
The results from the experiment with the realistic load pattern show significantly better performance of CRAMP compared to ARVUE in terms of number of servers. CRAMP used half as many servers as ARVUE, but it still provided similar results in terms of average response time, CPU load average, and memory utilization. The ability to make predictions of future trends is a significant advantage, even if the predictions may not be fully accurate. Still, there were significant problems with servers becoming overloaded due to the provisioning delay. Increasing the safety margins further by lowering the upper resource utilization threshold values or increasing the extra capacity buffer further might not be economically viable. We suspect that an appropriate admission control strategy will be able to prevent the servers from becoming overloaded in an economically viable fashion.
Figure 6 a shows the utilization error in the first experiment that uses the synthetic load pattern. For brevity, we only depict the CPU load in the error analysis. Therefore, error is defined as the absolute difference between the target CPU load average level \(C_{US}\) and the measured value of the CPU load average \(C(s,k)\) averaged over all servers in the system. Initially, the servers are naturally underloaded due to the lack of work. Thereafter, as soon as the first peak of load arrives, the error shrinks significantly and becomes as low as 0.1 for ARVUE and 0.3 for CRAMP. The higher CPU load error for CRAMP at this point was due to the fact that CRAMP results in this experiment were mostly memory-driven, as can be seen in Fig. 4 b. In other words, CRAMP had higher error with respect to the CPU load, but it had lower error with respect to the memory utilization. The error grows again as the period of no activity starts after the first peak of load. In the second peak, both ARVUE and CRAMP showed similar results, where the error becomes as low as 0.25. Finally, as the request rate sinks after the second peak of load, the error grows further due to underutilization. This can be attributed to the intentionally cautious policy for scaling down, which is explained in Algorithms section, and ultimately to the lack of work. A more aggressive policy for scaling down might work without introducing oscillating behavior, but when using a third-party IaaS it would still not make sense to terminate a VM until the current billing interval is coming to an end, as that resource constitutes a sunk cost.
CPU load average error analysis in the VM provisioning experiments. In the first experiment, CRAMP appears to have higher error because its results were mostly memory-driven. In the second experiment, CRAMP had lower error than ARVUE, with the only exceptions being due to underutilization
Error analysis of the second experiment that uses the realistic load pattern can be seen in Fig. 6 b. CRAMP appears to have lower error than ARVUE throughout most of the experiment, with the only exceptions being due to underutilization.
Admission experiments
This section presents experiments with admission control. The goal of these experiments was to test our proposed admission control approach ACVAS [9] and to compare it against an existing SBAC implementation [14], here referred to as the alternative approach. As in the VM provisioning experiments, the experiments in this section also used 100 simulated web applications of various resource requirements. The experiments were conducted through discrete-event simulations.
We performed two experiments with ACVAS and the alternative approach. The first admission experiment used the synthetic load pattern, which was also used in the first VM provisioning experiment described in VM provisioning experiments section. This workload was designed to scale up to 1000 concurrent sessions in two peaks with a period of no activity between them. Similarly, the second admission experiment was designed to use the realistic load pattern, which was also used in the second VM provisioning experiment in VM provisioning experiments section. The sampling period k, the upper threshold for server load average \(C_{US}\), the upper threshold for server memory utilization \(M_{US}\), the application-server allocation policy, and the session-server allocation policy were all the same as in the VM provisioning experiments in VM provisioning experiments section.
In our previous work [9], we proposed a way of measuring the quality of an admission control mechanism based on the trade-off between the number of servers used and six important QoS metrics: number of overloaded servers, session throughput, number of aborted sessions, number of deferred sessions, number of rejected sessions, and average response time for all admitted sessions. The goal is to minimize the values of these metrics, except for session throughput, which should be maximized. The results from the two approaches will be compared based on these criteria.
Figure 7 a and b present the results from the experiment with the synthetic load pattern. A summary of the results is also available in the upper half of Table 4. The prediction accuracy was high: the Root Mean Square Error (RMSE) of the predicted CPU and memory utilization was 0.0163 and 0.0128, respectively. ACVAS used a maximum of 19 servers with 0 overloaded servers, 0 aborted sessions, 30 deferred sessions, and 0 rejected sessions. There were a total of 8620 completed sessions with an average RTT of 59 ms. Thus, ACVAS provided a good trade-off between the number of servers and the QoS requirements. The alternative approach also used a maximum of 19 servers, but with several occurrences of server overloading. On average, there were 0.56 overloaded servers at any given time, with 0 aborted sessions and 488 rejected sessions. A total of 9296 sessions were completed with an average RTT of 112 ms. Thus, in the first experiment, the alternative approach completed 9296 sessions compared to 8620 sessions by ACVAS, but with 488 rejected sessions and several occurrences of server overloading.
Results of admission experiment with the synthetic load pattern. ACVAS performed better than the alternative approach in all aspects but session deferment and throughput
Table 4 Results from admission experiments
Figure 8 a and b show the results of the experiment with the realistic load trace derived from access logs. The lower half of Table 4 shows that ACVAS used a maximum of 16 servers with 0 overloaded servers, 0 aborted sessions, 20 deferred sessions, and 0 rejected sessions. There were a total of 8559 completed sessions with an average RTT of 59 ms. In contrast, the alternative approach used a maximum of 17 servers with 3 occurrences of server overloading. On average, there were 0.0046 overloaded servers at any given time, with 0 aborted sessions and 55 rejected sessions. There were a total of 8577 completed sessions with an average RTT of 72 ms. Thus, the alternative approach used an almost equal number of servers, but it did not prevent them from becoming overloaded. Moreover, it completed 8577 sessions compared to 8559 sessions by ACVAS, but with 55 rejected sessions and 3 occurrences of server overloading.
Results of admission experiment with the realistic load pattern. ACVAS performed better than the alternative approach in all aspects but session deferment and throughput
The results from these two experiments indicate that the ACVAS approach provides significantly better results in terms of the previously mentioned QoS metrics. In the first experiment, ACVAS had the best results in three areas: overloaded servers, rejected sessions, and average RTT. The alternative approach performed better in two areas: there were no deferred sessions, as it did not support session deferment, and it had more completed sessions. In the second experiment, ACVAS performed better in four aspects: number of servers used, overloaded servers, rejected sessions, and average RTT. The alternative approach again showed better performance in the number of completed sessions and in the number of deferred sessions. We can therefore conclude that ACVAS performed better than the alternative approach in both experiments.
The EMA-based predictor appears to be doing a good job on predicting these types of loads. It remains unclear how the system reacts to sudden drops in a previously increasing load trend. Such a scenario could temporarily lead to high preference for predicted results, which are no longer valid.
A plot of the utilization error with the synthetic load pattern can be seen in Fig. 9 a. Likewise, a plot of the utilization error with the realistic load can be seen in Fig. 9 b. Again, we only depict the CPU load, as it played the most significant part. The periods where ACVAS appears to have higher error than the alternative approach are due to underutilization amplified by ACVAS being more effective at keeping the average utilization down, as no servers became overloaded during this time. Overall, the results are quite similar, as they should be, the only difference being the admission controller.
CPU load average error analysis in the admission experiments. In the first experiment, both approaches had a similar error plot. However, in the second experiment, ACVAS appears to have lower error than the alternative approach throughout most of the experiment, with the only exceptions being due to underutilization
We have presented a prediction-based, cost-efficient Virtual Machine (VM) provisioning and admission control approach for multi-tier web applications. It provides automatic deployment and scaling of multiple simultaneous web applications on a given Infrastructure as a Service (IaaS) cloud in a shared hosting environment. The proposed approach comprises three sub-approaches: a reactive VM provisioning approach called ARVUE, a hybrid reactive-proactive VM provisioning approach called Cost-efficient Resource Allocation for Multiple web applications with Proactive scaling (CRAMP), and a session-based adaptive admission control approach called adaptive Admission Control for Virtualized Application Servers (ACVAS). Both ARVUE and CRAMP provide autonomous shared hosting of third-party Java Servlet applications on an IaaS cloud. However, CRAMP provides better responsiveness and results than the purely reactive scaling of ARVUE. ACVAS implements per-session admission, which reduces the risk of over-admission. Moreover, it implements a simple session deferment mechanism that reduces the number of rejected sessions while increasing session throughput. The proposed approach is demonstrated in discrete-event simulations and is evaluated in a series of experiments involving synthetic as well as realistic load patterns.
The results of the VM provisioning experiments showed that both ARVUE and CRAMP provide good performance in terms of average response time, Central Processing Unit (CPU) load average, and memory utilization. Moreover, CRAMP provides significantly better performance in terms of number of servers. It also had lower utilization error than ARVUE in most of the cases.
The evaluation and analysis concerning our proposed admission control approach compared ACVAS against an existing admission control approach available in the literature. The results indicated that ACVAS provides a good trade-off between the number of servers used and the Quality of Service (QoS) metrics. In comparison with the alternative admission control approach, ACVAS provided significant improvements in terms of server overload prevention, reduction of rejected sessions, and average response time.
1 http://www.haproxy.org/
2 http://felix.apache.org/
3 http://www.ircache.net/
4 http://www.apache.org/licenses/.
5 http://zenodo.org/.
6 https://aws.amazon.com/ec2/.
The source code for the platform described in this article, as well as the discrete event simulator used for its design and evaluation, are available under the open source Apache License, version 2. The materials have been placed in the GitHub repository https://github.com/SELAB-AA/arvue-platform and have been archived in Zenodo with DOI: http://dx.doi.org/10.5281/zenodo.47293. The platform is implemented in Java and uses Amazon Elastic Compute Cloud (EC2) as its underlying infrastructure service. However, it can easily be used with other services as long as they support Java and the EC2 API.
AA carried out the literature review, designed the algorithms, and developed the simulations. BB developed the prototype implementation. AA and BB jointly drafted the manuscript. IP provided useful insights and guidance and critically reviewed the manuscript. All authors read and approved the final manuscript.
Faculty of Natural Sciences and Technology, Åbo Akademi University, Turku, Finland
Adnan Ashraf, Benjamin Byholm & Ivan Porres
Correspondence to Adnan Ashraf.
Ashraf, A., Byholm, B. & Porres, I. Prediction-based VM provisioning and admission control for multi-tier web applications. J Cloud Comp 5, 15 (2016). https://doi.org/10.1186/s13677-016-0065-9
Virtual machine provisioning
Cost-efficiency | CommonCrawl |
Detection of movement onset using EMG signals for upper-limb exoskeletons in reaching tasks
Emilio Trigili ORCID: orcid.org/0000-0002-3725-56941 na1,
Lorenzo Grazi1 na1,
Simona Crea1,2,
Alessandro Accogli1,
Jacopo Carpaneto1,
Silvestro Micera1,3,
Nicola Vitiello1,2 na2 &
Alessandro Panarese1 na2
Journal of NeuroEngineering and Rehabilitation volume 16, Article number: 45 (2019) Cite this article
To assist people with disabilities, exoskeletons must be provided with human-robot interfaces and smart algorithms capable of identifying the user's movement intentions. Surface electromyographic (sEMG) signals could be suitable for this purpose, but their applicability in shared control schemes for real-time operation of assistive devices in daily-life activities is limited due to high inter-subject variability, which requires custom calibrations and training. Here, we developed a machine-learning-based algorithm for detecting the user's motion intention based on electromyographic signals, and discussed its applicability for controlling an upper-limb exoskeleton for people with severe arm disabilities.
Ten healthy participants, sitting in front of a screen while wearing the exoskeleton, were asked to perform several reaching movements toward three LEDs, presented in a random order. EMG signals from seven upper-limb muscles were recorded. Data were analyzed offline and used to develop an algorithm that identifies the onset of the movement across two different events: moving from a resting position toward the LED (Go-forward), and going back to the resting position (Go-backward). A set of subject-independent time-domain EMG features was selected according to information theory and their probability distributions corresponding to rest and movement phases were modeled by means of a two-component Gaussian Mixture Model (GMM). The detection of movement onset by two types of detectors was tested: the first type based on features extracted from single muscles, the second on features from multiple muscles. Their performances in terms of sensitivity, specificity and latency were assessed for the two events with a leave-one-subject-out test method.
The onset of movement was detected with a maximum sensitivity of 89.3% for Go-forward and 60.9% for Go-backward events. Best performances in terms of specificity were 96.2 and 94.3% respectively. For both events the algorithm was able to detect the onset before the actual movement, while computational load was compatible with real-time applications.
The detection performances and the low computational load make the proposed algorithm promising for the control of upper-limb exoskeletons in real-time applications. Fast initial calibration makes it also suitable for helping people with severe arm disabilities in performing assisted functional tasks.
Exoskeletons are wearable robots exhibiting a close physical and cognitive interaction with their human users. Over the last years, several exoskeletons have been developed for different purposes, such as augmenting human strength [1], rehabilitating neurologically impaired individuals [2] or assisting people affected by neuro-musculoskeletal disorders in activities of daily life [3]. For all these applications, the design of cognitive Human-Robot Interfaces (cHRIs) is paramount [4]; indeed, understanding the user's intention makes it possible to control the device with the final goal of facilitating the execution of the intended movement. The flow of information from the human user to the robot control unit is particularly crucial when exoskeletons are used to assist people with compromised movement capabilities (e.g. post-stroke or spinal-cord-injured people), by amplifying their movements with the goal of restoring function.
In recent years, different invasive and non-invasive approaches have been pursued to design cHRIs. Implantable electrodes, placed directly into the brain or other electrically excitable tissues, record signals directly from the peripheral or central nervous system or muscles, with high resolution and high precision [5]. Non-invasive approaches exploit different bio-signals: some examples are electroencephalography (EEG) [6], electrooculography (EOG) [7], and brain-machine interfaces (BMI) combining the two of them [8,9,10]. In addition, a well-consolidated non-invasive approach is based on surface electromyography (sEMG) [11], which has been successfully used for controlling robotic prostheses and exoskeletons due to its inherent intuitiveness and effectiveness [12,13,14]. Compared to EEG signals, sEMG signals are easy to acquire and process and provide effective information on the movement that the person is executing or about to start executing. Despite the above-mentioned advantages, the use of surface EMG signals still has several drawbacks, mainly related to their time-varying nature and the high inter-subject variability, due to differences in the activity level of the muscles and in their activation patterns [11, 15], which requires custom calibrations and specific training for each user [16]. For these reasons, notwithstanding the intuitiveness of EMG interfaces, their efficacy and usability in shared human-machine control schemes for upper-limb exoskeletons are still under discussion. Furthermore, the need for significant signal processing can limit the use of EMG signals in on-line applications, for which fast detection is paramount. In this scenario, machine learning methods have been employed to recognize the EMG onset in real time, using different classifiers such as Support Vector Machines, Linear Discriminant Analysis, Hidden Markov Models, Neural Networks, Fuzzy Logic and others [15,16,17]. In this process, a set of features is first selected in the time, frequency, or time-frequency domain [18]. Time-domain features extract information associated with signal amplitude in non-fatiguing contractions; when fatigue effects are predominant, frequency-domain features are more representative; finally, time-frequency domain features better elicit transient effects of muscular contractions. Before feeding the features into the classifier, dimensionality reduction is usually performed, to increase classification performance while reducing complexity [19]. The most common strategies for reduction are: i) feature projection, to map the set of features into a new set with reduced dimensionality (e.g., linear mapping through Principal Component Analysis); ii) feature selection, in which a subset of features is selected according to specific criteria, aimed at optimizing a chosen objective function. All the above-mentioned classification approaches ensure good performance under controlled laboratory conditions. Nevertheless, in order to be used effectively in real-life scenarios, smart algorithms must be developed, which are able to adapt to changes in the environmental conditions and intra-subject variability (e.g. changes of the background noise level of the EMG signals), as well as to the inter-subject variability [20].
In this paper, we exploited a cHRI combining sEMG and an upper-limb robotic exoskeleton to rapidly detect the user's motion intention. We implemented offline an unsupervised machine-learning algorithm, using a set of subject-independent time-domain EMG features, selected according to information theory. The probability distributions of the rest and movement phases of the set of features were modeled by means of a two-component Gaussian Mixture Model (GMM). The algorithm simulates an online application and implements a sequential method to adapt the GMM parameters during the testing phase, in order to deal with changes of background noise levels during the experiment, or fluctuations in EMG peak amplitudes due to muscle adaptation or fatigue. Features were extracted from two different signal sources, defining two types of onset detectors, which were tested offline; their performance in terms of sensitivity (or true positive rate), specificity (or true negative rate) and latency (delay of onset detection) was assessed for two different events, i.e. two transitions from rest to movement at different initial conditions. The two events were selected in order to replicate a possible application scenario of the proposed system. Based on the results we obtained, we discussed the applicability of the algorithm to the control of an upper-limb exoskeleton used as an assistive device for people with severe arm disabilities.
The experimental setup includes: (i) an upper-limb powered exoskeleton (NESM), (ii) a visual interface, and (iii) a commercial EMG recording system (TeleMyo 2400R, Noraxon Inc., AZ, US).
NESM upper-limb exoskeleton
NESM (Fig. 1a) is a shoulder-elbow powered exoskeleton designed for the mobilization of the right upper limb [21, 22], developed at The BioRobotics Institute of Scuola Superiore Sant'Anna (Italy). The exoskeleton mechanical structure hangs from a standing structure and comprises four active and eight passive degrees of freedom (DOFs), along with different mechanisms for size regulations to improve comfort and wearability of the device.
a Experimental setup, comprising NESM, EMG electrodes and the visual interface; b Location of the electrodes for EMG acquisition; c Timing and sequence of action performed by the user during a single trial
The four active DOFs are all rotational joints and are mounted in a serial kinematic chain. Four actuation units, corresponding to the four active DOFs, allow the shoulder adduction/abduction (sAA), flexion/extension (sFE) and internal/external rotation (sIE), and the elbow flexion/extension (eFE). Each actuation unit is realized with a Series Elastic Actuation (SEA) architecture [23], employing a custom torsional spring [24] and two absolute encoders, to measure the joint angle and the joint torque as explained in [21]. SEAs allow reducing the mechanical stiffness of the actuator and easy implementation of position and torque controls.
The NESM control system runs on a real-time controller, namely an sbRIO-9632 (National Instruments, Austin, TX, US), endowed with a 400 MHz processor running an NI real-time operating system and a Xilinx Spartan-3 field-programmable gate array (FPGA). The high-level layer runs at 100 Hz, whereas the low-level layer runs at 1 kHz.
NESM control modes
Low-level control
The low-level layer allows the exoskeleton to be operated in two control modalities, namely joint position and joint torque control modes. In the position control mode, each actuator drives the joint position to follow a reference angle trajectory: this control mode is used if the arm of the user has no residual movement capabilities and needs to be passively guided by the exoskeleton. If the user has residual movement capabilities but is not able to entirely perform a certain motor task, the exoskeleton can be controlled in torque mode: each actuation unit can supply an assistive torque to help the user accomplish the movement; we refer to transparent mode when null torque is commanded as reference. Both control modes are implemented by means of closed-loop controllers, independent for each actuation unit. Controllers are proportional-integrative-derivative (PID) regulators, operating on the error between the desired control variable (angle or torque) and the measured control variable (joint angle or joint torque). Safety checks are implemented when switching from one control mode to the other, in order to avoid undesired movements of the exoskeleton.
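As a rough illustration of this closed-loop structure, a minimal discrete-time PID regulator is sketched below in Python; the gains, the handling of the sampling time, and any anti-windup or filtering used in the actual NESM firmware are not reported in the paper, so the values here are purely illustrative.

```python
class PID:
    """Minimal discrete PID regulator acting on the error between the desired
    and measured control variable (joint angle or joint torque)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, desired, measured):
        """One low-level iteration (1 kHz): returns the actuator command."""
        error = desired - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: torque control with a null reference ("transparent mode")
torque_pid = PID(kp=5.0, ki=1.0, kd=0.01, dt=1e-3)     # illustrative gains
command = torque_pid.step(desired=0.0, measured=0.3)   # measured joint torque [N*m]
```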
High-level control
The high-level layer implements the control strategies to provide the movement assistance. A graphical user interface (GUI) has been implemented in the LabVIEW environment. The GUI allowed the experimenter to (i) set the desired control mode and control parameters, (ii) visualize joint angles, torques and EMG signals, (iii) launch the visual interface, and (iv) save data. The NESM high-level controller also implements a gravity compensation algorithm to counteract the gravity torque due to the exoskeleton weight. A more detailed description of the control modes and their performance can be found in [21, 22].
Visual interface
A visual interface (Fig. 1a) displayed three LEDs (west - W, center - C, and east - E) for the reaching movements, placed on different positions on a computer screen (15 cm apart, at left, center, and right, respectively). The visual interface was implemented in LabVIEW and launched by the NESM GUI.
EMG recording and acquisition system
EMG signals from seven muscles of the right shoulder (Trapezius, Anterior and Posterior Deltoid), arm (Biceps and Triceps Brachii) and forearm (Flexor and Extensor Carpi Ulnaris) were amplified (1000x) and band pass-filtered (10–500 Hz) through a TeleMyo 2400R system (Noraxon Inc., AZ, US). The location of the electrodes is shown in Fig. 1b. The sbRIO-9632 interfaced the TeleMyo analog output channels: EMG signals were sampled by the FPGA layer at 1 kHz and sent to the real-time layer for visualization and data storage.
A total of 10 healthy subjects (8 male, 2 female, age 26 ± 5 years) participated in the experiment, and they all provided written informed consent. The procedures were approved by the Institutional Review Board at The BioRobotics Institute, Scuola Superiore Sant'Anna and complied with the principles of the declaration of Helsinki.
Experimental protocol
Upon arrival, subjects were prepared for the experiment. Participants wore a t-shirt and were prepared for the application of the EMG electrodes over the skin according to the recommendations provided by SENIAM [25]. Then, subjects wore the exoskeleton with the help of the experimenter, and the size regulations were adjusted to fit the user's anthropometry. The subjects sat in front of a screen showing the visual interface, having the center of the right shoulder aligned with the central LED, in order to allow symmetric movements toward left and right LEDs.
Seven sessions per subject were performed, each consisting of 24 reaching movements, with 5 min of rest between sessions to avoid muscular fatigue. The targets (i.e. the LEDs) were presented in random order. For each reaching trial, the subjects were instructed to:
keep a resting position as long as all the LEDs were turned off,
as soon as one LED turned on, move the arm towards it and touch the screen,
keep the position (touching the screen) as long as the LED was turned on,
as soon as the LED turned off, move back to the resting position.
Each trial was set to a duration of T = 12 s; within this duration, the LED was turned on for TON = 6 s (Fig. 1c). When the LED turned ON, the exoskeleton control mode was automatically set to transparent mode, to allow the subject to start the movement and reach the target. After TR1 = 2.5 s the control mode was automatically set to position control for a duration of TR2 = 3.5 s; notably TR1 was set long enough to ensure subjects could reach the target. When the LED turned OFF, subjects were asked to flex the elbow until the eFE measured torque exceeded the threshold τthr = 2 N ∙ m; this value was used to discriminate a voluntary action of the user, to switch again the exoskeleton control mode to transparent mode and let the subject move the arm back to the resting position. The LED was off for TOFF = 6 s and then a new trial was started.
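The per-trial switching logic can be summarized by the schematic sketch below; the timings and the 2 N·m threshold are taken from the protocol above, while the exoskeleton interface functions are illustrative placeholders, not the actual NESM API.

```python
T_ON, T_OFF = 6.0, 6.0      # LED on/off durations [s]
T_R1, T_R2 = 2.5, 3.5       # transparent / position-control sub-phases while the LED is on [s]
TAU_THR = 2.0               # eFE torque threshold [N*m] discriminating a voluntary retraction

def trial_step(t, led_on, exo):
    """Select the exoskeleton control mode at time t within a 12 s trial
    (t = 0 when the LED turns on; t is reset when it turns off)."""
    if led_on:
        if t < T_R1:
            exo.set_transparent_mode()     # user free to reach and touch the target
        else:
            exo.set_position_mode()        # arm held at the target for T_R2
    else:
        # LED off: wait for an elbow-flexion torque above threshold, then release
        if exo.elbow_torque() > TAU_THR:
            exo.set_transparent_mode()     # user moves back to the resting position
```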
EMG data processing and features extraction
The EMG signals were hardware-filtered on the Noraxon TeleMyo device with high-pass and anti-aliasing low-pass filters for all channels, to achieve a pass band between 10 and 500 Hz. Digital signals were then converted to analog by the Noraxon TeleMyo and sent to the analog-digital converter of the NESM FPGA layer, operating at a sampling frequency of 1 kHz. Although the cut-off frequency of the anti-aliasing filter was close to the theoretical Nyquist frequency, it was the best filtering option available with our hardware setup. For offline analysis, an additional high-pass filter (Butterworth, 4th order) with a cut-off frequency of 10 Hz was necessary to remove low-frequency components from data collected from the FPGA. A notch filter at 50 Hz was then used to eliminate residual powerline interference. We considered 14 time-domain features to extract information from the EMG signals [26]. Features were computed within a sliding window of 300 ms (10 ms update interval). A description of the features and their mathematical formulation can be found in the Appendix.
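A minimal Python sketch of the offline conditioning and sliding-window feature extraction follows; only the four features retained later (IAV, SSI, WL and LOG) are shown, and their formulations are the standard ones, which may differ in minor details from the Appendix.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 1000                    # sampling rate [Hz]
WIN = 300                    # window length [samples] = 300 ms
STEP = 10                    # update interval [samples] = 10 ms

def condition(emg):
    """Offline conditioning: 4th-order Butterworth high-pass at 10 Hz + 50 Hz notch."""
    b, a = butter(4, 10 / (FS / 2), btype='highpass')
    x = filtfilt(b, a, emg)
    bn, an = iirnotch(50 / (FS / 2), Q=30)
    return filtfilt(bn, an, x)

# Standard time-domain features (formulations may differ slightly from the Appendix)
def iav(x): return np.sum(np.abs(x))                                  # integrated absolute value
def ssi(x): return np.sum(x ** 2)                                     # simple square integral
def wl(x):  return np.sum(np.abs(np.diff(x)))                         # waveform length
def log_det(x): return np.exp(np.mean(np.log(np.abs(x) + 1e-12)))     # log detector

def sliding_features(emg):
    """Features on 300 ms windows updated every 10 ms; returns shape (n_windows, 4)."""
    rows = []
    for start in range(0, len(emg) - WIN + 1, STEP):
        w = emg[start:start + WIN]
        rows.append([iav(w), ssi(w), wl(w), log_det(w)])
    return np.asarray(rows)
```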
Motion intention detection
For each trial, within each reaching movement, the EMG signals were segmented into two phases: rest, corresponding to the phase in which the upper limb was kept still in the initial resting position, and movement, corresponding to the phase in which the upper limb was moving towards or was voluntarily touching the target. This transition from rest to movement was defined as the Go-forward event.
A similar approach was adopted for retracting movements. The EMG signals were segmented into two phases: rest, corresponding to the phase in which the upper limb was held fixed near the target by the exoskeleton (in position control) and movement, corresponding to the phase in which the upper limb was moving (or trying to move, when the exoskeleton was in position control) to return to the initial resting position. The transition from rest to movement was defined as the Go-backward event. Figure 2a shows, for a representative subject, kinematic and kinetic data used for the discrimination of the two events, together with the raw EMG signals for two representative muscles.
a Sample data acquired from one subject participating the experiments: joint angles from two representative joints of the upper-limb exoskeleton (for Go-forward discrimination); torque on the eFE joint for Go-backward discrimination; raw EMG data from two representative muscles. b Schematic representation of the detectors that were tested
To detect both events, the probability distribution of each feature corresponding to rest and movement phases was modeled by a Gaussian Mixture Model (GMM), in which the density function of each feature is a linear mixture of two Gaussian curves, each representing the distribution of that feature within a given phase.
GMM training phase
The parameters of the two-component GMM were estimated using an unsupervised approach based on the Expectation Maximization (EM) algorithm [27]. The GMM probability density function is given by:
$$ p\left(x,{\lambda}_M\right)={w}_{rest}\bullet \frac{1}{\sqrt{2\pi {\sigma}_{rest}^2}}{e}^{-\frac{{\left(x-{\mu}_{rest}\right)}^2}{2{\sigma}_{rest}^2}}+{w}_{mov}\bullet \frac{1}{\sqrt{2\pi {\sigma}_{mov}^2}}{e}^{-\frac{{\left(x-{\mu}_{mov}\right)}^2}{2{\sigma}_{mov}^2}} $$
or, equivalently:
$$ p\left(x,{\lambda}_M\right)={w}_{rest}\bullet p\left(x| rest,{\lambda}_M\right)+{w}_{mov}\bullet p\left(x| mov,{\lambda}_M\right) $$
where \( {\mu}_{rest} \) and \( {\mu}_{mov} \) are the means, and \( {\sigma}_{rest}^2 \) and \( {\sigma}_{mov}^2 \) are the variances of the Gaussian distributions for the rest and movement phases, respectively. The parameters \( {w}_{rest} \) and \( {w}_{mov} \) represent the a priori probabilities of the rest and movement phases. The modeling problem involves estimating the parameter set \( {\lambda}_M=\left\{{w}_{rest},{w}_{mov},{\mu}_{rest},{\mu}_{mov},{\sigma}_{rest}^2,{\sigma}_{mov}^2\right\} \) from a training window of M ≤ L samples of the observed signal.
Given the training sequence \( {x}_1,{x}_2,\dots ,{x}_M \), a Maximum Likelihood Estimation (MLE) of parameter set \( {\lambda}_M \) can be obtained by solving the problem:
$$ {\lambda}_M=\mathit{\arg}\underset{\lambda }{\mathit{\max}}\left[\ p\left(x,\lambda \right)\right] $$
which was tackled by iteratively applying the steps of the EM algorithm (1), until the difference between two consecutive estimations was lower than \( 10^{-6} \) for all parameters. The two estimated Gaussian distributions were then used to identify an optimal threshold, θ, that minimized the classification error:
$$ {w}_{rest}\bullet p\left(x=\theta | rest,{\lambda}_M\right)={w}_{mov}\bullet p\left(x=\theta | mov,{\lambda}_M\right) $$
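Although the text does not spell it out, Eq. (4) can also be solved in closed form: taking the logarithm of both sides gives

$$ \frac{{\left(\theta -{\mu}_{mov}\right)}^2}{2{\sigma}_{mov}^2}-\frac{{\left(\theta -{\mu}_{rest}\right)}^2}{2{\sigma}_{rest}^2}=\ln \frac{{w}_{mov}\,{\sigma}_{rest}}{{w}_{rest}\,{\sigma}_{mov}} $$

which is a quadratic equation in θ; the root lying between the two component means is the decision threshold (when the two variances are equal, the quadratic term vanishes and a single solution remains).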
The samples with feature value less than θ are classified as rest, while those greater than θ are classified as movement. At the end of the training, the parameters \( {\lambda}_M \) for each considered feature were obtained. These were employed as initial guesses for the parameters of the distributions sequentially estimated during the GMM testing.
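As an illustration, a minimal sketch of the training stage for a single feature is given below in Python (the original analysis was carried out in Matlab/LabVIEW, so names and initialization choices here are ours); component 0 is taken as rest, component 1 as movement, the stopping rule follows the 10^-6 criterion mentioned above, and the threshold is found numerically as the crossing point of the two weighted component densities between the means, equivalent to Eq. (4).

```python
import numpy as np

def gauss(x, mu, var):
    """Univariate normal density."""
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def train_gmm(x, tol=1e-6, max_iter=500):
    """EM fit of a two-component 1-D GMM; component 0 = rest, 1 = movement."""
    mu = np.array([np.percentile(x, 25.0), np.percentile(x, 75.0)])   # crude initialization
    var = np.full(2, np.var(x) + 1e-12)
    w = np.array([0.5, 0.5])
    for _ in range(max_iter):
        # E-step: responsibility of each component for each sample
        pdf = np.stack([w[i] * gauss(x, mu[i], var[i]) for i in range(2)])   # (2, N)
        resp = pdf / pdf.sum(axis=0, keepdims=True)
        # M-step: re-estimate weights, means and variances
        n_k = resp.sum(axis=1)
        w_new = n_k / x.size
        mu_new = (resp * x).sum(axis=1) / n_k
        var_new = (resp * (x - mu_new[:, None]) ** 2).sum(axis=1) / n_k + 1e-12
        done = np.max(np.abs(np.concatenate([w_new - w, mu_new - mu, var_new - var]))) < tol
        w, mu, var = w_new, mu_new, var_new
        if done:
            break
    return w, mu, var

def optimal_threshold(w, mu, var, n_grid=10000):
    """Numerical solution of Eq. (4): crossing point of the weighted component
    densities between the two means."""
    grid = np.linspace(mu.min(), mu.max(), n_grid)
    diff = w[0] * gauss(grid, mu[0], var[0]) - w[1] * gauss(grid, mu[1], var[1])
    return grid[np.argmin(np.abs(diff))]

# Training on one session's feature samples, e.g. the IAV feature of one muscle:
# w, mu, var = train_gmm(iav_samples); theta = optimal_threshold(w, mu, var)
# Samples below theta are classified as rest, above theta as movement.
```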
For each subject, the data of each session were used in turn for the training phase, and the resulting model was tested on the data of the remaining 6 sessions. The outcome measures over all the testing sessions were then averaged.
GMM testing phase
Reiteration of the EM algorithm for each new sample acquired in the testing phase is disadvantageous in terms of both computational load and memory consumption; on the other hand, maintaining a fixed threshold during the testing phase could lead to inaccurate results due to changing background noise levels during the experiment, or varying EMG peak amplitudes as a result of muscle adaptation or fatigue.
Liu et al. [20] proposed a sequential method to adapt GMM parameters during the testing phase, promoting computational efficiency. Here, the model is sequentially updated at each new observation \( x_{l+1} \), every 10 ms, as follows:
$$ {w}_{i,l+1}=\alpha {w}_{i,l}+\left(1-\alpha \right)p\left(i|{x}_{l+1},{\lambda}_l\right) $$
$$ {\mu}_{i,l+1}=\frac{\alpha {w}_{i,l}{\mu}_{i,l}+\left(1-\alpha \right)p\left(i|{x}_{l+1},{\lambda}_l\right){x}_{l+1}}{w_{i,l+1}} $$
$$ {\sigma^2}_{i,l+1}=\frac{\alpha {w}_{i,l}{\sigma^2}_{i,l}+\left(1-\alpha \right)p\left(i|{x}_{l+1},{\lambda}_l\right){\left({x}_{l+1}-{\mu}_{i,l+1}\right)}^2}{w_{i,l+1}} $$
where i ∈ {rest, mov}, \( {\lambda}_l \) is the previous estimate of the GMM parameters, and α indicates the forgetting factor (\( \alpha =\frac{L-1}{L},0<\alpha \le 1 \)). The conditional probability \( p\left(i|{x}_t,{\lambda}_l\right) \) at the generic time instant t is given by:
$$ p\left(i|{x}_t,{\lambda}_l\right)=\frac{w_{i,l}p\left({x}_t|i,{\lambda}_l\right)}{w_{rest,l}p\left({x}_t| rest,{\lambda}_l\right)+{w}_{mov,l}p\left({x}_t| mov,{\lambda}_l\right)} $$
The new estimates of the GMM parameters, \( {\lambda}_{l+1} \), can be derived from \( {\lambda}_l \) and \( x_{l+1} \) using the above sequential scheme. Then, the time-varying threshold \( {\theta}_{l+1} \) can be determined from Equation (4), which decides whether \( x_{l+1} \) is classified as rest or movement.
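A minimal sketch of this sequential adaptation, transcribing Eqs. (5)-(8) directly (again in Python with our own variable names; the parameters from the training phase serve as initial values, and `gauss` and `optimal_threshold` are the helpers from the training sketch above):

```python
import numpy as np

def sequential_update(x_new, w, mu, var, alpha):
    """One-step adaptation of the two-component GMM for a new feature sample
    x_new (Eqs. 5-8); index 0 = rest, index 1 = movement."""
    # posterior probability of each phase given the new sample (Eq. 8)
    lik = np.array([w[i] * gauss(x_new, mu[i], var[i]) for i in range(2)])
    post = lik / lik.sum()
    # recursive updates of weights, means and variances (Eqs. 5-7)
    w_new = alpha * w + (1 - alpha) * post
    mu_new = (alpha * w * mu + (1 - alpha) * post * x_new) / w_new
    var_new = (alpha * w * var + (1 - alpha) * post * (x_new - mu_new) ** 2) / w_new
    return w_new, mu_new, var_new

# Testing phase, executed every 10 ms for each feature of each muscle:
# alpha = (L - 1) / L
# w, mu, var = sequential_update(x_sample, w, mu, var, alpha)
# theta = optimal_threshold(w, mu, var)          # time-varying threshold (Eq. 4)
# label = 'rest' if x_sample < theta else 'movement'
```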
Subject-independent feature set
The selection of a subject-independent set of EMG features was performed by means of information theory tools. First, the information carried by each single feature about the rest and movement phases was quantified for each recorded muscle. Then, the contributions to the computed information due to Redundancy and Synergy effects were assessed according to the information breakdown proposed by [28]. The Redundancy term takes into account the similarities in the distribution across phases of the phase-conditional response probabilities of individual features, whereas the Synergy term quantifies the amount of information available from the feature-feature or movement phase-feature correlations.
In our study, two features are synergic if the information about the events carried when they are considered together is higher than the information conveyed by each feature alone. Similarly, features are redundant if they carry similar information about the events.
According to [28], the mutual information of the variables F (feature) and R (phase) can be written as the sum of four terms:
$$ I\left(\mathcal{R};\mathcal{F}\right)={I}_{lin}+{I}_{sig- sim}+{I}_{cor- ind}+{I}_{cor- dep} $$
where the linear term, \( {I}_{lin} \), quantifies the information obtained if each feature were to convey independent information on the movement phase; the signal-similarity term, \( {I}_{sig- sim} \), quantifies the Redundancy effects; and the correlation components, \( {I}_{cor- ind}+{I}_{cor- dep} \), quantify the Synergy effects.
Each contribution of the information breakdown was computed via the C- and Matlab-based Information Breakdown Toolbox (ibTB) developed by Magri et al. [29]. The selection of subject-independent features was carried out by assessing the Redundancy and Synergy terms of the information breakdown. The criteria for feature selection were: 1) to choose the features that minimize redundancy effects and 2) to maximize synergistic effects.
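The full breakdown of Eq. (9) was computed with the ibTB toolbox; purely as an illustration of the quantity being maximized, a plug-in estimate of the mutual information between the binary phase label and one discretized feature can be sketched as follows (the binning and the absence of bias correction are our simplifications with respect to the toolbox).

```python
import numpy as np

def mutual_information(feature, phase, n_bins=10):
    """Plug-in estimate (in bits) of I(R;F) between the phase label
    (0 = rest, 1 = movement) and a feature discretized into n_bins
    equipopulated bins."""
    # equipopulated binning of the feature values
    edges = np.quantile(feature, np.linspace(0.0, 1.0, n_bins + 1))
    f_bins = np.digitize(feature, edges[1:-1])          # bin index in [0, n_bins - 1]
    # joint and marginal probabilities
    joint = np.zeros((2, n_bins))
    for r, f in zip(phase.astype(int), f_bins):
        joint[r, f] += 1.0
    joint /= joint.sum()
    p_r = joint.sum(axis=1, keepdims=True)              # P(phase)
    p_f = joint.sum(axis=0, keepdims=True)              # P(feature bin)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (p_r @ p_f)[nz])))
```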
Information theory was also exploited to select the best window length for feature extraction, by comparing the information content of the features using 100, 300 and 500 ms. For Go-forward, the information content of the features slightly decreased as the window length increased. The differences were not significant when testing the window length effect on the three samples (Friedman test; p > 0.05). However, when the samples were tested in pairs, the information content at 500 ms was lower than at both 300 and 100 ms (p < 0.05; Wilcoxon sign rank test). Instead, for Go-backward, the information content did not change as the window length increased (both Friedman test and Wilcoxon sign rank test; p > 0.05). Based on these results, we selected 300 ms as the best choice for the window length, in accordance with data found in the literature about the optimal window length for feature calculation. Indeed, several studies report a window length of 300 ms as the maximum limit allowed for feature extraction in online applications [18, 30]. Similar approaches exploiting information theory suggest an optimal window size between 200 and 300 ms [31].
Onset detector type
At each update of the observation window new EMG features were calculated, thus an equivalent number of classification outputs were available. We compared three types of detectors making decisions on whether the new output is rest or movement in different ways, and requiring different amounts of computational load and memory consumption (Fig. 2b):
Type 1 detector: it takes as input a number M of features computed on a single EMG signal; GMM algorithms work in parallel on each feature and the final decision is made by a majority voting procedure on their outputs: the decision is rest if at least \( \frac{M}{2}+1 \) of the outputs are rest, and movement otherwise.
Type 2 detector: it takes as input the features computed on multiple EMG signals; each EMG signal is the input of a type 1 sub-detector, and the final decision is made by a majority vote on their outputs. A number S = 7 of EMG signals are used as input for type 2 detectors in this study. Additionally, a type 2 info-based detector has been tested, which takes as input the features computed on a subset of P < S EMG signals, i.e. the ones carrying the highest information. P = 3 has been chosen as the minimum number of signal sources allowing a majority vote (a sketch of the voting logic is given after this list).
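A minimal Python sketch of the two voting stages, with illustrative label values (0 denotes rest, 1 movement; rest wins only with a strict majority of rest votes, which for an even M equals the M/2 + 1 rule stated above):

```python
import numpy as np

def type1_decision(feature_labels):
    """Type 1 detector: majority vote over the per-feature GMM outputs of one muscle."""
    votes = np.asarray(feature_labels)
    return 0 if np.sum(votes == 0) > votes.size / 2 else 1

def type2_decision(muscle_labels):
    """Type 2 detector: majority vote over the Type 1 outputs of several muscles."""
    votes = np.asarray(muscle_labels)
    return 0 if np.sum(votes == 0) > votes.size / 2 else 1

# Example: per-feature labels {IAV, SSI, WL, LOG} for S = 7 muscles
per_muscle = [[0, 1, 0, 0], [1, 1, 1, 0], [0, 0, 0, 0], [1, 1, 0, 1],
              [0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0]]
decision = type2_decision([type1_decision(v) for v in per_muscle])   # -> 0 (rest)
```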
Three parameters were used to evaluate the performances of the three types of detectors:
Sensitivity (or true positive rate): it measures the proportion of onset events that are correctly identified as such
$$ Sensitivity=\frac{TP}{TP+ FN} $$
Specificity (or true negative rate): it measures the proportion of rest time samples that are correctly not classified as onset
$$ Specificity=\frac{TN}{TN+ FP} $$
Latency: it measures the average delay of onset detection, with respect to the time instant of actual movement initiation, or reference onset time \( t_0 \)
$$ Latency={\left\langle {t}_d-{t}_0\right\rangle}_{trials} $$
where TP, TN, FN and FP are the numbers of true positives, true negatives, false negatives and false positives, respectively; \( t_d \) is the onset time detected by the algorithm and \( t_0 \) is the reference onset time, i.e. the time instant at which the kinematic variables assumed a value corresponding to 10% of their peak values during the movement (Go-forward) or the time instant at which the measured eFE torque exceeded the threshold value (Go-backward). The kinematic variables are the angular positions of the four active joints (sAA, sFE, sIE, eFE).
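As an offline evaluation sketch (this is our reading of the definitions above: sensitivity is counted per event, specificity per rest sample, latency per trial), assuming per-window detector outputs and a reference segmentation are available:

```python
import numpy as np

def trial_metrics(pred, ref_phase, t0_idx, dt=0.01):
    """Evaluate one trial of a detector.
    pred      : 0/1 detector output, one value per 10 ms window
    ref_phase : reference segmentation (0 = rest, 1 = movement)
    t0_idx    : window index of the reference onset time t0
    Returns (onset_detected, false_pos, rest_samples, latency_s)."""
    onset_detected = bool(np.any(pred[ref_phase == 1] == 1))   # event counted as a true positive
    false_pos = int(np.sum((pred == 1) & (ref_phase == 0)))    # rest windows labelled as movement
    rest_samples = int(np.sum(ref_phase == 0))
    det = np.flatnonzero(pred == 1)
    latency_s = (det[0] - t0_idx) * dt if det.size else np.nan
    return onset_detected, false_pos, rest_samples, latency_s

# Aggregation over all trials, following Eqs. (10)-(12):
# sensitivity = detected_events / total_events
# specificity = 1 - sum(false_pos) / sum(rest_samples)
# latency     = mean of latency_s over trials with a detection
```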
In addition, in order to assess the computational load, the algorithm has been implemented on a dual-core 667 MHz real-time processor (sbRIO-9651, National Instruments, US) and runs at a 100 Hz rate. This processor has better performance than the one used in the NESM, and will be employed in future versions of the exoskeleton. Raw sEMG signals are acquired at the FPGA level, running at 1 kHz, and sent by means of a direct memory access (DMA) method to the high-level control layer for signal processing and feature extraction. With this FIFO-based method, during each iteration of the high-level control (i.e. 10 ms), ten sEMG samples (i.e. 10 ms of data) are collected from the FPGA.
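The structure of one high-level iteration can be sketched as follows; this is a schematic Python rendering of the LabVIEW/FPGA implementation, in which `read_fifo`, `detectors` and `gmm_states` are illustrative placeholders and the feature and voting helpers are the ones sketched earlier.

```python
import collections
import numpy as np

FS, WIN, BLOCK, N_CH = 1000, 300, 10, 7   # 1 kHz sampling, 300 ms window, 10 ms block, 7 muscles

# one ring buffer per EMG channel, holding the last 300 ms of samples
buffers = [collections.deque(maxlen=WIN) for _ in range(N_CH)]

def control_iteration(read_fifo, detectors, gmm_states):
    """One 10 ms iteration of the 100 Hz high-level loop: read 10 new samples per
    channel from the FPGA FIFO, update the sliding windows, extract the features,
    adapt the per-feature GMMs and vote on the rest/movement decision."""
    new_samples = read_fifo(BLOCK)                    # illustrative DMA read, shape (N_CH, BLOCK)
    muscle_labels = []
    for ch in range(N_CH):
        buffers[ch].extend(new_samples[ch])
        window = np.asarray(buffers[ch])
        feats = [iav(window), ssi(window), wl(window), log_det(window)]
        # per-feature sequential GMM update + thresholding, then per-muscle vote
        muscle_labels.append(detectors[ch](feats, gmm_states[ch]))
    return type2_decision(muscle_labels)              # final decision of the Type 2 detector
```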
Data recorded during the experimental session, together with the initial GMM parameters obtained after the training phase, were used to run a simulation of the algorithm for 90 s (corresponding to 6 full cycles), and the maximum duration of a single iteration was extracted.
Movement onset analysis
Subject-independent set of EMG features
Figure 3a and b show, for Go-forward and Go-backward onset detection respectively, the information about the rest/movement states carried by each of the 14 features (mean ± SD across subjects and muscles) and by white noise, the latter used as reference. Eight out of fourteen features (IAV, MAV, MMAV1, SSI, VAR, RMS, WL and LOG) carried significantly more information than white noise for both Go-forward and Go-backward (KW test; p < 0.001; Tukey's post hoc), and were selected for further in-depth analysis. Fig. 3c and d show, for each of the fourteen features in the two events, the similarity term (\( {I}_{sig- sim} \)) of the information breakdown, thus the redundancy effect between pairs of features: the eight selected features all proved to be redundant to some degree, i.e. there were similarities in the distribution across rest/movement states of the state-conditional response probabilities of individual features (2). Analogously, in Figure 3e and f, the correlation term (\( {I}_{cor- ind}+{I}_{cor- dep} \)) is reported. Synergistic effects between features are expressed by positive correlation: the IAV, MAV and MMAV1 features were found to be non-synergistic (negative correlation), as were RMS, SSI and VAR. For this reason we discarded MAV, MMAV1, RMS and VAR from the set of the eight most informative features, and considered the set {IAV, SSI, WL, LOG} as the most informative, most synergistic and least redundant subject-independent set of features to be used for EMG-based movement onset detection.
Information (in bit) carried by the fourteen selected features and white noise for Go-forward (a) and Go-backward (b). Colormap of the similarity term of the information breakdown for analysis of redundancy effects between features for Go-forward (c) and Go-backward (d). Colormap of the correlation term of the information breakdown for analysis of synergistic effects between features for Go-forward (e) and Go-backward (f)
Information content of the extracted features
After the optimal set of features had been selected, the information content of the 7 muscles was calculated according to Eq. 9, in order to identify which muscles are more suitable as Type 1 detectors for the two events. The results are reported in Figure 4a and b, for the two events respectively. Although for Go-forward all Type 1 detectors except Flexor and Extensor Carpi Ulnaris carry an information content higher than 0.5 bit (Anterior Deltoid being the most informative one, with 0.87 bit of information), the same detectors are less informative for Go-backward, with the exception of Extensor Carpi Ulnaris (0.68 bit of information).
Information (in bit) carried by the 7 selected muscles considered as Type 1 Detectors for Go-forward (a) and Go-backward (b). Correlation between detector performances (sensitivity x specificity) and the information content of the four features selected for the testing phase in Go-forward (c) and Go-backward (d): 70 points are reported for each plot (10 subjects, 7 testing session for each)
Figure 4c and d inspect the correlation between the performance of the detectors in terms of both sensitivity and specificity (i.e. their product), and the information content of the four selected features, calculated for each of the Type 1 detectors and for each subject. As reported in Table 1, all the features show statistically significant correlation for both events (Pearson correlation coefficient between 0.66 and 0.88, p-value< 0.001).
Table 1 Pearson correlation coefficients between sensitivity and information content of the five selected features
Performance of different EMG-based detectors
Figure 5a and b show, for each of the two events, the performance of the three types of detector that were tested, in terms of sensitivity, specificity and latency. Regarding Go-forward onset detection, Type 1 detectors receiving as input single EMG signals from Anterior Deltoid and Biceps show higher sensitivity with respect to the other muscles. The median sensitivity (and interquartile range) is equal to 81.1% (76.6–86.24%) and 89.3% (90.5–76.5%) respectively. Nevertheless, whereas the Anterior Deltoid detector has the highest specificity among Type 1 detectors (96.2% (93.0–98.3%)), the Biceps detector exhibits the lowest specificity, equal to 80.8% (51.8–91.0%). Median latency values range from − 0.202 s (Biceps) to − 0.029 s (Flexor Carpi Ulnaris). The Type 2 detector (Majority Voting) exhibits the highest specificity (median value of 97.9% (98.4–96.3%)); however, it performs worse than other Type 1 detectors in terms of sensitivity (74.7% (69.7–78.2%)). Median latency is equal to − 0.088 s (− 0.101 – − 0.057 s). By choosing only the most informative muscles as input to the Type 2 detector, the sensitivity increases to 81.7% (74.0–84.9%), while specificity and latency slightly decrease to 96.3% (91.3–98.6%) and − 0.133 s (− 0.168 – − 0.087 s) respectively.
Performance metrics of the detectors for Go-forward (a) and Go-backward (b). Detection results in true positive- false positive rate for Go-forward (c) and Go-backward (d): the mean value and standard deviation bars are reported for each detector
With reference to Go-backward, the Extensor Carpi Ulnaris and Biceps Type 1 detectors exhibit the highest sensitivity (median values of 60.9% (49.7–69.9%) and 59.1% (50.7–64.5%) respectively) with respect to the other types of detectors. The Extensor Carpi Ulnaris Type 1 detector exhibits the highest specificity as well, with a median value of 94.3% (79.5–95.7%), and a median latency equal to − 0.099 s (− 0.163–0.173 s). All other Type 1 detectors have a sensitivity below 50%. The Type 2 detector (majority voting on all muscles) showed poor sensitivity for Go-backward (median value of 50.4% (31.7–54.2%)) and a specificity equal to 91.6% (86.2–97.1%). Median latency is equal to − 0.093 s (− 0.184–0.028 s). When only the most informative muscles are selected for the Type 2 detector, sensitivity and specificity increase to 52.7% (41.2–63.2%) and 94.3% (84.2–96.7%). Latency decreases to − 0.115 s (− 0.217 – − 0.014 s). Table 2 summarizes the best performances for each parameter and for each detector.
Table 2 Performances of the three types of detector for the two events. For Type 1 detectors, performances of the source with the best performances (Sens.*Spec.) are reported, which is Anterior Deltoid (AD) for Go-forward and Biceps Brachii (BB) for Go-backward
Figure 5c and d show, for the two events respectively, the true positive- vs false positive- rate calculated over all the trials performed by each subject and for each detector (mean values and standard deviations are reported for both measures). Type 2 detector exhibits the highest mean ratio for Go-forward, equal to 23.7 (0.71/0.03), which slightly decreases to 19.7 (0.79/0.04) when only the most informative muscles are selected. Among Type 1 detectors, Anterior Deltoid has the highest mean ratio, equal to 13.5 (0.81/0.06). As of Go-backward, Extensor Carpi Ulnaris and Biceps Brachii exhibit the highest ratios among Type 1 detectors, equal to 5.3 (0.58/0.11) and 3.1 (0.58/0.19) respectively. Type 2 Info-based detector performs slightly better than the others, having a ratio equal to 5.5 (0.55/0.10).
Computational load
Results from the simulation on the real-time controller revealed that the maximum iteration duration for event detection is lower than 2 ms. The maximum value of iteration duration allowed by the controller for real-time operation is 10 ms.
Classification methods based on GMM have been implemented for the myoelectric control of assistive devices such as prostheses or robotic arms [17, 32]. In our work, a GMM-based algorithm has been implemented and information theory was used to identify the best set of features able to detect the onset of upper-limb muscular activation, with the final goal of controlling a robotic exoskeleton for assistive tasks. Among the 14 time-domain features selected, a first screening was conducted based on their information content with respect to white noise. The smallest number of features which maximize synergistic effects and minimize redundancy effects is selected, in order to: i) reduce the probability that two different features will share the same information about rest/movement states (redundancy); ii) reduce the probability that the information content of the features alone is higher with respect to the information of the coupled features (synergy). Clearly, selecting the smallest number of features would be recommended for online applications, in order to reduce computational load and achieve faster detection without degradation of the performances. Among the ones we selected, IAV and WL have also been exploited to extract useful information about muscles activation [19, 32,33,34]. Although some of the features analyzed in this study have a similar formulation (such as IAV, MAV, MMAV1, MMAV2), we did not make a biased selection of features based on a priori knowledge of their definition or their similarities and used information theory to rule out redundant and non synergistic features. Indeed, previous works have shown that similar EMG features or their combination can yield significantly different results [35, 36]. We compared performance metrics of both Go-forward and Go-backward using all 14 features with the results obtained by the optimal subset. Only Go-backward sensitivity was slightly but significantly higher when using all features (median difference 0.0086; Wilcoxon sign rank test, p < 0.001). All the other performance indices were not significantly different (Wilcoxon sign rank test, p > 0.05), showing that usage of information theory to reduce the number of features did not affect overall detection performance, allowing in parallel a reduction of the total computational load of the detection algorithms (approx. 80% reduction).
The importance of the information content for the accuracy of onset detection was confirmed by the in-depth analysis on the four chosen features. In particular, the positive correlation between the detector performance (which takes into account both sensitivity and specificity) of the Type 1 detectors and the information content of the extracted features suggests that higher information content can be associated with better event recognition. Thus, an a priori analysis based on the breakdown of information can be useful to identify which features would be more effective for accurate detection of the movement onset. Indeed, among Type 1 detectors, features extracted from Anterior and Posterior Deltoid carry the highest information and have the highest combination of sensitivity and specificity for Go-forward. Similarly, Biceps and Extensor Carpi Ulnaris have the highest information for Go-backward. A similar trend between information content and performances can be observed by comparing Figs. 4 and 5a-b (Sens. x Spec. panel). The lower information content of the features for the Type 1 detectors of Go-backward is reflected in its worse detection performance with respect to Go-forward, for which sensitivity, specificity and information content are overall higher. Indeed, the true positive- vs false-positive rate ratio is always higher than 1 for Go-forward for all detectors, whereas for Go-backward the distributions are widely spread, and some detectors exhibit a ratio lower than 1 or close to it. The high information content carried by most of the Type 1 detectors for Go-forward suggests that a combination of them through majority voting could provide accurate detection as well. Taking into account contributions from all muscles is disadvantageous for the detection of both events, because of the scarce information that some of the single detectors carry about the events. Conversely, given the positive correlation between information and detector performance, the selection of the most informative muscles as input to the Type 2 detector gives acceptable performances in terms of sensitivity and specificity, while reducing the computational load in the training and testing phase with respect to considering all muscles.
The two events have been chosen in order to simulate different conditions of a real-life scenario, which can vary extensively according to the environment and the subjects' residual motion capabilities [37]. As an example, Go-forward is typical of situations where the user wants to initiate a new task or activity and the robot can modulate the level of assistance, up to providing passive mobilization in the worst scenarios. On the contrary, Go-backward reflects situations in which sequential movements must be performed (e.g. a complete functional task composed of different sub-actions) and significant changes in the background noise level of the EMG signals can be encountered in a relatively short time.
Although the highest sensitivity for Go-forward event is higher than 80% when the proper detector is used, sensitivity for Go-backward detection is around 60% in the best case. The differences in the recognition of the two events could be due to the particular conditions of the experiments. Indeed, whereas Go-forward corresponds to a transition from rest to movement state when the user is completely relaxed (the exoskeleton is in transparent mode), the initial condition for Go-backward was with the arm stretched toward the target and the exoskeleton controlled in position mode, restraining any movement of the user. This condition did not allow subjects to relax their muscles before activating the Go-backward transition. In addition, the time interval for which the phase signal is 1 (i.e. movement state) is shorter for Go-backward than for Go-forward and it is dependent on the torque threshold chosen for the activation. This has two main implications: first, the sequential algorithm that adaptively modifies GMM parameters works on a shorter time window, reducing the accuracy in the calculation of the time-varying threshold to discriminate rest/movement states; furthermore, by selecting a higher torque threshold for Go-Backward initiation, a volitional muscular activity could have been better discriminated during the training phase. A low torque threshold has been selected for the experiments in order to reduce fatigue effects on the subjects.
The low sensitivity of the Go-backward detection represents the main limitation of the proposed method, which would make it difficult for the user to retract from the reaching position in real-time applications. Before the Go-backward event, the arm was completely extended toward the target, and held steady by the exoskeleton. In such position, the arm muscles exhibited a residual activation, which was non-optimal for discriminating between rest and movement states. In fact, although the subjects were instructed to keep their muscles as relaxed as possible, such activation increased the "background noise" on the EMG signals, leading to poor detection performance even with the best parameters and the optimal window length. A possible solution to address this problem would require a modification of the experimental protocol with respect to the rest positions for the Go-backward event. For example, a more comfortable posture with the arm not completely extended toward the target could help the subjects to keep their muscles relaxed. As a result, the information content carried by the EMG signals about the event would increase, thus improving detection performance.
The results about latency are comparable to those of other state-of-the-art systems. For example, the multimodal control system in [9] is capable of predicting movement onset via EMG analysis with a prediction time of 0.061 s, which was reduced to a value of 0.057 s when EMG and EEG signals were combined in a hybrid fashion. The GMM method presented in this paper has the advantage of using only EMG signals to predict the onset of the movement from 0.088 s up to 0.134 s in advance (Table 2, Go-forward), reducing the complexity of the system while maintaining acceptable performances in terms of sensitivity and specificity, higher than 80 and 96% respectively for Go-forward. Earlier predictions have also been recorded, up to 0.202 s (median value) for the Biceps Type 1 detector in Go-forward, but with poor overall performances. It is worth noticing that here latency is defined as the time delay between muscular activation onset and kinematic onset, rather than the delay between the algorithm's detection and actual muscle activation. Thus, negative values of latency are preferred in order to design a control strategy able to react promptly to the user's intention. In this case, by taking into account the contribution of a specific muscle or a sub-set of muscles to a certain movement, it would be possible to trigger the robot assistance before having a substantial modification of the kinematic metrics, which would be strenuous for users with highly-reduced mobility of their upper arm.
When used in conjunction with the upper-limb exoskeleton, smart algorithms can be combined in order to reduce the effect of false activations. As an example, a robot-assisted full functional task can be implemented by means of a finite-state machine to split the main task into different sub-actions. Then, event detection can be triggered only when the proper state is activated. Similar approaches have been pursued, but using different interfaces for detecting the user's intention to move [10, 38]. Another study on healthy subjects showed that combining EMG data with kinematic data from the exoskeleton can improve the performance of classification of the movement direction, with respect to using EMG signals alone [39]. A hybrid approach exploiting kinetic data from the exoskeleton could also make it possible to deal with pathological sEMG, as in post-stroke subjects exhibiting arm spasticity. In this case, an involuntary muscle contraction due to a spasm could be detected by the onboard torque sensors of the exoskeleton [40], and the event detection would be neglected. Future studies need to be conducted in order to evaluate the feasibility of such an approach for online applications.
In this paper, we presented an algorithm for the detection of the user's intention to move based on the onset of muscular activity. We found that information theory represents a powerful tool to predict which features could be more representative for an accurate and robust detection of a desired event. For offline analysis, kinematic data of the upper-limb exoskeleton have been exploited to discriminate two different events, reproducing possible scenarios of daily-life activities for people with reduced mobility of their upper arm. The performances of different detectors have been analyzed, showing that information from single muscles or a combination of them can be equivalently effective depending on the kinematic event that is considered. Although the performance of the algorithm has been assessed offline, its applicability to a real scenario has been discussed. The capability to predict the onset of muscular activity before the kinematic event takes place, the accurate detection and the low computational load make the proposed algorithm promising for the control of upper-limb exoskeletons in online applications. The final goal would be aiding people with severe arm disabilities in performing assisted functional tasks. Clearly, additional tests will be required in order to assess the performance of the algorithm when non-physiological muscle activation patterns are used.
AD:
Anterior deltoid
BMI:
Brain-machine interface
cHRI:
Cognitive human-robot interface
DMA:
Direct Memory Access
DOF:
Degree of freedom
EEG:
Electroencephalography
eFE:
Elbow flexion-extension
EM:
Expectation maximization
EOG:
Electrooculography
Extensor Carpi Ulnaris
FIFO:
First-In-First-Out
FN:
False negative
FP:
False positive
FPGA:
Field-programmable gate array
GMM:
Gaussian mixture model
GO-BWR:
Go-backward event
GO-FWR:
Go-forward event
IAV:
Integrated absolute value
KW:
Kruskal-Wallis
MAV:
Mean absolute value
MLE:
Maximum likelihood estimation
MMAV1:
Modified mean absolute value 1
NESM:
NeuroExos Shoulder-Elbow Module
NI:
National Instruments
PID:
Proportional-integrative-derivative
RMS:
Root mean square
sAA:
Shoulder abduction-adduction
SEA:
Series-elastic actuator
sEMG:
Surface electromyography
SENS.:
Sensitivity
sFE:
Shoulder flexion-extension
sIE:
Shoulder internal-external rotation
SPEC.:
Specificity
SSI:
Simple square integral
TN:
True negative
TP:
True positive
VAR:
Variance
WL:
Waveform length
Zoss AB, Kazerooni H, Chu A. Biomechanical Design of the Berkeley Lower Extremity Exoskeleton (BLEEX). IEEE/ASME Trans Mechatronics. 2006;11:128–38. https://doi.org/10.1109/TMECH.2006.871087.
Maciejasz P, Eschweiler J, Gerlach-Hahn K, Jansen-Troy A, Leonhardt S. A survey on robotic devices for upper limb rehabilitation. J Neuroeng Rehabil. 2014;11:3. https://doi.org/10.1186/1743-0003-11-3.
Pazzaglia M, Molinari M. The embodiment of assistive devices-from wheelchair to exoskeleton. Phys Life Rev. 2016;16:163–75.
Pons JL. Rehabilitation exoskeletal robotics. IEEE Eng Med Biol Mag. 2010;29:57–63. https://doi.org/10.1109/MEMB.2010.936548.
Waldert S. Invasive vs. non-invasive neuronal signals for brain-machine interfaces: will one prevail? Front Neurosci. 2016;10:1–4.
Pfurtscheller G, Guger C, Müller G, Krausz G, Neuper C. Brain oscillations control hand orthosis in a tetraplegic. Neurosci Lett. 2000;292:211–4. https://doi.org/10.1016/S0304-3940(00)01471-3.
Úbeda A, Iáñez E, Azorín JM. Wireless and portable EOG-based interface for assisting disabled people. IEEE/ASME Trans Mechatronics. 2011;16:870–3. https://doi.org/10.1109/TMECH.2011.2160354.
Soekadar SR, Witkowski M, Gómez C, Opisso E, Medina J, Cortese M, et al. Hybrid EEG/EOG-based brain/neural hand exoskeleton restores fully independent daily living activities after quadriplegia. Sci Robot. 2016;1:eaag3296. https://doi.org/10.1126/scirobotics.aag3296.
Kirchner EA, Tabie M, Seeland A. Multimodal movement prediction - towards an individual assistance of patients. PLoS One. 2014;9. https://doi.org/10.1371/journal.pone.0085060.
Crea S, Nann M, Trigili E, Cordella F, Baldoni A, Badesa FJ, et al. Feasibility and safety of shared EEG/EOG and vision-guided autonomous whole-arm exoskeleton control to perform activities of daily living. Sci Rep. 2018;8:10823. https://doi.org/10.1038/s41598-018-29091-5.
Singh R, Chatterji S. Trends and challenges in EMG based control scheme of exoskeleton robots-a review. Int J Sci Eng Res. 2012;3:1–8.
Lenzi T, De Rossi SMM, Vitiello N, Carrozza MC. Intention-based EMG control for powered exoskeletons. IEEE Trans Biomed Eng. 2012;59(8):2180–90.
Farina D, Jiang N, Rehbaum H, Holobar A, Graimann B, Dietl H, et al. The extraction of neural information from the surface EMG for the control of upper-limb prostheses: emerging avenues and challenges. IEEE Trans Neural Syst Rehabil Eng. 2014;22:797–809. https://doi.org/10.1109/TNSRE.2014.2305111.
Ferris DP, Lewis CL. Robotic lower limb exoskeletons using proportional myoelectric control; 2013. p. 2119–24. https://doi.org/10.1109/IEMBS.2009.5333984.
Rechy-Ramirez EJ, Hu H. Bio-signal based control in assistive robots: a survey. Digit Commun Networks. 2015;1:85–101. https://doi.org/10.1016/j.dcan.2015.02.004.
Carpi F, De RD. EMG-Based and Gaze-Tracking-Based Man-Machine Interfaces. 1st edition: Elsevier Inc; 2009. https://doi.org/10.1016/S0074-7742(09)86001-7.
Huang Y, Englehart KB, Hudgins B, Chan ADC. A Gaussian mixture model based classification scheme for myoelectric control of powered upper limb prostheses. IEEE Trans Biomed Eng. 2005;52:1801–11. https://doi.org/10.1109/TBME.2005.856295.
Asghari Oskoei M, Hu H. Myoelectric control systems-a survey. Biomed Signal Process Control. 2007;2:275–94. https://doi.org/10.1016/j.bspc.2007.07.009.
Zecca M, Micera S, Carrozza MC, Dario P. Control of multifunctional prosthetic hands by processing the Electromyographic signal. Crit Rev Biomed Eng. 2002;30:459–85. https://doi.org/10.1615/CritRevBiomedEng.v30.i456.80.
Liu J, Ying D, Rymer WZ, Zhou P. Robust muscle activity onset detection using an unsupervised electromyogram learning framework. PLoS One. 2015;10:e0127990. https://doi.org/10.1371/journal.pone.0127990.
Crea S, Cempini M, Moise M, Baldoni A, Trigili E, Marconi D, et al. A novel shoulder-elbow exoskeleton with series elastic actuators. In: Proceedings of the IEEE RAS and EMBS International Conference on Biomedical Robotics and Biomechatronics; 2016. p. 1248–53.
Ercolini G, Trigili E, Baldoni A, Crea S, Vitiello N. A novel generation of ergonomic upper-limb wearable robots: design challenges and solutions. Robotica. 2018:1–17. https://doi.org/10.1017/S0263574718001340.
Pratt J, Krupp B, Morse C. Series elastic actuators for high fidelity force control. Ind Robot An Int J. 2002;29:234–41. https://doi.org/10.1108/01439910210425522.
Giovacchini F, Cempini M, Vitiello N, Carrozza MC. Torsional Transmission Element with Elastic Response; 2015. https://doi.org/10.1103/PhysRevE.92.063302.
Hermens HJ. Development of recommendations for SEMG sensors and sensor placement procedures. J Electromyogr Kinesiol Off J Int Soc Electrophysiol Kinesiol. 2000;10:361–74. https://doi.org/10.1016/S1050-6411(00)00027-4.
Phinyomark A, Limsakul C, Phukpattaranont P. A novel feature extraction for robust EMG pattern recognition. J Comput. 2009;1(1):71–80.
Moon TK. The expectation-maximization algorithm. IEEE Signal Process Mag. 1996:47–60. https://doi.org/10.1109/79.543975.
Pola G, Thiele A, Hoffmann K, Panzeri S. An exact method to quantify the information transmitted by different mechanisms of correlational coding, vol. 14; 2003. p. 35–60. https://doi.org/10.1088/0954-898X/14/1/303.
Magri C, Whittingstall K, Singh V, Logothetis NK, Panzeri S. A toolbox for the fast information analysis of multiple-site LFP, EEG and spike train recordings. BMC Neurosci. 2009;10:81. https://doi.org/10.1186/1471-2202-10-81.
Hamedi M, Salleh SH, Astaraki M, Noor AM, Harris ARA. Comparison of multilayer perceptron and radial basis function neural networks for EMG-based facial gesture recognition. In: Lecture Notes in Electrical Engineering. Singapore: Springer; 2014. p. 285–94.
Chowdhury RH, Reaz MBI, Bin Mohd Ali MA, Bakar AAA, Chellappan K, Chang TG. Surface electromyography signal processing and classification techniques. Sensors (Switzerland). 2013;13(9):12431–66. https://doi.org/10.3390/s130912431.
Artemiadis PK, Kyriakopoulos KJ. An EMG-based robot control scheme robust to time-varying EMG signal features. IEEE Trans Inf Technol Biomed. 2010;14:582–8. https://doi.org/10.1109/TITB.2010.2040832.
Han J-S, Zenn Bien Z, Kim D-J, Lee H-E, Kim J-S. Human-machine interface for wheelchair control with EMG and its evaluation. Proc 25th Annu Int Conf IEEE Eng Med Biol Soc (IEEE Cat No03CH37439). 2003:1602–5. https://doi.org/10.1109/IEMBS.2003.1279672.
Hudgins B, Parker P, Scott RN. A new strategy for multifunction myoelectric control. IEEE Trans Biomed Eng. 1993;40:82–94. https://doi.org/10.1109/10.204774.
Englehart K, Hudgins B. A robust, real-time control scheme for multifunction myoelectric control. IEEE Trans Biomed Eng. 2003;50(7):848–54. https://doi.org/10.1109/TBME.2003.813539.
Farfan FD, Politti JC, Felice CJ. Evaluation of EMG processing techniques using information theory. Biomed Eng Online. 2010;9:72. https://doi.org/10.1186/1475-925X-9-72.
Schasfoort FC, Bussmann JBJ, Stam HJ. Ambulatory measurement of upper limb usage and mobility-related activities during normal daily life with an upper limb-activity monitor: a feasibility study. Med Biol Eng Comput. 2002;40:173–82. https://doi.org/10.1007/BF02348122.
Lauretti C, Cordella F, Ciancio AL, Trigili E, Catalan JM, Badesa FJ, et al. Learning by demonstration for motion planning of upper-limb exoskeletons. Front Neurorobot. 2018;12:1–14.
Accogli A, Grazi L, Crea S, Panarese A, Carpaneto J, Vitiello N, et al. EMG-based detection of user's intentions for human-machine shared control of an assistive upper-limb exoskeleton. Biosyst Biorobotics. 2017;16(i):181–5.
Vitiello N, Cempini M, Crea S, Giovacchini F, Cortese M, Moise M, et al. Functional Design of a Powered Elbow Orthosis toward its clinical Employment. IEEE/ASME Trans Mechatronics. 2016;21(4):1880–91. https://doi.org/10.1109/TMECH.2016.2558646.
We would like to thank Eng. Alessandro Pilla and Eng. Giorgia Ercolini for the technical contribution in the development of the simulator to assess the computational load.
This work was supported in part by the European Union within the AIDE Project H2020-ICT-22-2014 under Grant Agreement 645322 and in part by the Regione Toscana within the RONDA Project (Bando FAS Salute 2014).
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Emilio Trigili and Lorenzo Grazi contributed equally to this work.
Nicola Vitiello and Alessandro Panarese share the senior authorship.
The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy
Emilio Trigili, Lorenzo Grazi, Simona Crea, Alessandro Accogli, Jacopo Carpaneto, Silvestro Micera, Nicola Vitiello & Alessandro Panarese
IRCCS Fondazione Don Carlo Gnocchi, Milan, Italy
Simona Crea & Nicola Vitiello
Bertarelli Foundation Chair in Translational NeuroEngineering, Center for Neuroprosthetics and Institute of Bioengineering, School of Engineering, École Polytechnique Federale de Lausanne, Lausanne, Switzerland
Silvestro Micera
Emilio Trigili
Lorenzo Grazi
Simona Crea
Alessandro Accogli
Jacopo Carpaneto
Nicola Vitiello
Alessandro Panarese
ET, LG and AP wrote the manuscript. ET and LG developed the control system of the upper-limb exoskeleton. AP, AA and JC developed the onset detection algorithm. LG and AA collected the data. AP, ET, LG and SC analyzed and interpreted the data. LG, AP, SC, NV and SM designed the study. AP, SC, NV, SM supervised the study. All authors edited and provided critical feedback on the manuscript, and read and approved its final version.
Correspondence to Emilio Trigili.
The procedures were approved by the Institutional Review Board at The BioRobotics Institute, Scuola Superiore Sant'Anna (Delibera n. 3/2018) and complied with the principles of the declaration of Helsinki. All participants provided written informed consent.
S. Crea, S. Micera and N. Vitiello have commercial interests in IUVO S.r.l., a spinoff company of Scuola Superiore Sant'Anna. Currently, part of the IP protecting the NESM has been licensed to IUVO S.r.l. for commercial exploitation.
Feature Definitions
According to [26], the features considered in this study are defined as follows:
Integrated Absolute Value (IAV): It is calculated as the summation of the absolute values of the signal amplitude in the window frame,
$$ IAV=\sum \limits_{n=1}^N\left|{x}_n\right| $$
where N is the number of samples of the sliding window (N = 100).
Mean Absolute Value (MAV): It is evaluated by taking the average of the absolute values of the signal,
$$ MAV=\frac{1}{N}\sum \limits_{n=1}^N\left|{x}_n\right| $$
Modified Mean Absolute Value 1 (MMAV1): It is an extension of MAV which uses a weighting window function,
$$ MMAV1=\frac{1}{N}\sum \limits_{n=1}^N{w}_n\left|{x}_n\right| $$
$$ {w}_n=\left\{\begin{array}{cc}1& if\ 0.25N\le n\le 0.75N\\ 0.5& otherwise\end{array}\right. $$
Modified Mean Absolute Value 2 (MMAV2): It is similar to MMAV1, with a different weighting function,
$$ MMAV2=\frac{1}{N}\sum \limits_{n=1}^N{\widehat{w}}_n\left|{x}_n\right| $$
$$ {\widehat{w}}_n=\left\{\begin{array}{cc}1& if\ 0.25N\le n\le 0.75N\\ 4n/N& if\ 0.25N>n\\ 4\left(n-N\right)/N& if\ 0.75N<n\end{array}\right. $$
Simple Square Integral (SSI): It is the energy of the signal,
$$ SSI=\sum \limits_{n=1}^N{x_n}^2 $$
Variance (VAR): It is the variance of the signal,
$$ VAR=\frac{1}{N-1}\ \sum \limits_{n=1}^N{\left({x}_n-\mu \right)}^2 $$
where μ is the average value of the signal.
Root Mean Square (RMS): It is the root of the mean squared signal,
$$ RMS=\sqrt{\frac{1}{N}\ \sum \limits_{n=1}^N{x_n}^2} $$
Waveform Length (WL): It is the cumulative length of the waveform,
$$ WL=\sum \limits_{n=1}^{N-1}\left|{x}_{n+1}-{x}_n\right| $$
Willison Amplitude (WAMP): It is the number of times that the difference between two adjacent amplitude values exceeds a predefined threshold,
$$ WAMP=\sum \limits_{n=1}^{N-1}f\left(\left|{x}_{n+1}-{x}_n\right|\right) $$
$$ f(x)=\left\{\begin{array}{cc}1& if\ x\ge threshold\\ 0& otherwise\end{array}\right. $$
Slope Sign Change (SSC): It is the number of changes between positive and negative slope; it is calculated by using a threshold function to minimize the influence of noise in the signal,
$$ SSC=\sum \limits_{n=2}^{N-1}f\left[\left({x}_n-{x}_{n-1}\right)\left({x}_n-{x}_{n+1}\right)\right] $$
Zero-Crossing (ZC): It is the number of times that the amplitude of the signal crosses the zero value,
$$ ZC=\sum \limits_{n=1}^{N-1}\left[\mathit{\operatorname{sign}}\left({x}_n\times {x}_{n+1}\right)\cap \left|{x}_n-{x}_{n+1}\right|\ge threshold\right] $$
$$ \mathit{\operatorname{sign}}(x)=\left\{\begin{array}{cc}1& if\ x\ge threshold\\ 0& otherwise\end{array}\right. $$
Logarithm (LOG): It is the mean of the common logarithm of the absolute value of the signal,
$$ LOG=\frac{1}{N}\sum \limits_{n=1}^N{\log}_{10}\left(\left|{x}_n\right|\right) $$
Skewness (SKEW): It is the third standardized moment, defined as:
$$ SKEW=\frac{\frac{1}{N}\ \sum \limits_{n=1}^N{\left({x}_n-\mu \right)}^3}{{\left(\frac{1}{N-1}\ \sum \limits_{n=1}^N{\left({x}_n-\mu \right)}^2\right)}^{3/2}} $$
Kurtosis (KURT): It is the fourth standardized moment, defined as:
$$ KURT=\frac{\frac{1}{N}\ \sum \limits_{n=1}^N{\left({x}_n-\mu \right)}^4}{{\left(\frac{1}{N-1}\ \sum \limits_{n=1}^N{\left({x}_n-\mu \right)}^2\right)}^2} $$
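For reference, a minimal numpy sketch of a subset of the features defined above is given below. The function names and the 100-sample window are illustrative (N = 100 as in the paper); the zero-crossing count follows the usual reading of the definition, i.e. a sign change whose amplitude jump exceeds the noise threshold:

```python
import numpy as np

def iav(x):        return np.sum(np.abs(x))             # Integrated Absolute Value
def mav(x):        return np.mean(np.abs(x))            # Mean Absolute Value
def ssi(x):        return np.sum(x ** 2)                # Simple Square Integral
def var(x):        return np.var(x, ddof=1)             # Variance, 1/(N-1) normalisation
def rms(x):        return np.sqrt(np.mean(x ** 2))      # Root Mean Square
def wl(x):         return np.sum(np.abs(np.diff(x)))    # Waveform Length
def wamp(x, thr):  return np.sum(np.abs(np.diff(x)) >= thr)   # Willison Amplitude
def zc(x, thr):                                          # Zero Crossings above noise threshold
    return np.sum((x[:-1] * x[1:] < 0) & (np.abs(np.diff(x)) >= thr))
def log_feat(x):   return np.mean(np.log10(np.abs(x)))  # LOG (requires nonzero samples)

window = np.random.randn(100)     # one 100-sample sliding window (N = 100)
print(mav(window), rms(window), wl(window), zc(window, thr=0.01))
```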
Trigili, E., Grazi, L., Crea, S. et al. Detection of movement onset using EMG signals for upper-limb exoskeletons in reaching tasks. J NeuroEngineering Rehabil 16, 45 (2019). https://doi.org/10.1186/s12984-019-0512-1
Upper-limb exoskeleton
Human-robot interface
Onset detection
Improved selection of participants in genetic longevity studies: family scores revisited
Mar Rodríguez-Girondo ORCID: orcid.org/0000-0003-0414-870X1,
Niels van den Berg2,
Michel H. Hof3,
Marian Beekman2 &
Eline Slagboom2
BMC Medical Research Methodology volume 21, Article number: 7 (2021) Cite this article
Although human longevity tends to cluster within families, genetic studies on longevity have had limited success in identifying longevity loci. One of the main causes of this limited success is the selection of participants. Studies generally include sporadically long-lived individuals, i.e. individuals with the longevity phenotype but without a genetic predisposition for longevity. The inclusion of these individuals causes phenotype heterogeneity which results in power reduction and bias. A way to avoid sporadically long-lived individuals and reduce sample heterogeneity is to include family history of longevity as a selection criterion using a longevity family score. A main challenge when developing family scores is the large variation in family size, caused by real differences in sibship sizes or by missing data.
We discussed the statistical properties of two existing longevity family scores, the Family Longevity Selection Score (FLoSS) and the Longevity Relatives Count (LRC) score, and we evaluated their performance in dealing with differential family size. We proposed a new longevity family score, the mLRC score, an extension of the LRC based on random effects modeling, which is robust to family size and missing values. The performance of the new mLRC as a selection tool was evaluated in an intensive simulation study and illustrated in a large real dataset, the Historical Sample of the Netherlands (HSN).
Empirical scores such as the FLoSS and LRC cannot properly deal with differential family size and missing data. Our simulation study showed that mLRC is not affected by family size and provides more accurate selections of long-lived families. The analysis of 1105 sibships of the Historical Sample of the Netherlands showed that the selection of long-lived individuals based on the mLRC score predicts excess survival in the validation set better than the selection based on the LRC score.
Model-based score systems such as the mLRC score help to reduce heterogeneity in the selection of long-lived families. The power of future studies into the genetics of longevity can likely be improved, and their bias reduced, by selecting long-lived cases using the mLRC.
There is strong evidence that longevity, defined as survival to extreme ages, clusters within families and is transmitted across generations [1,2,3,4,5,6,7]. Recent research [5] on two large population-based multi-generational family studies indicates that longevity is transmitted as a quantitative genetic trait. Moreover, associations between environmental factors and familial clustering have rarely been found using historical pedigree data [5, 8,9,10]. Although these findings suggest that human longevity has a genetic component, genetic studies on longevity have had limited success in identifying longevity loci [11,12,13,14,15,16,17]. One of the main causes for this limited success could be the large heterogeneity in criteria for participant selection in longevity studies [5, 18, 19]. Since the study participants must be alive to extract blood or other biomaterials, their longevity phenotype is, by definition, unknown. An additional complication of longevity studies is the ongoing increase in life expectancy due to non-genetic factors [20], such as improvements in nutrition, lifestyle and health care. If only individual age is considered as a selection criterion, these non-genetic factors increase the risk of including sporadically long-lived individuals, i.e. individuals with the longevity phenotype but who do not have an underlying genetic predisposition for longevity.
To obtain a sample with less phenotype heterogeneity, the family history of longevity can be used as a participant selection criterion [5, 18]. Although this approach does not prevent sample selection from being influenced by family-shared non-genetic factors potentially involved in longevity, it is likely that it increases the power in case-control studies to detect novel genetic loci [21, 22]. A natural way to incorporate family history in the study design is to develop a longevity family score to identify families with the heritable longevity trait and to subsequently select alive members of these families for (genetic) longevity studies. A number of longevity family scores have been previously proposed [4, 18, 23,24,25], using different definitions of individual longevity and different ways of summarizing longevity within families. The implications of these choices are not well understood, namely how the interplay among individual longevity definition, family-specific summary measures and family size affects the sample selection process based on longevity family scores. The first challenge when developing longevity family scores is defining individual longevity. It is unclear how extreme the age at death must be to label an individual as long-lived and which scale is most beneficial so that scores reflect differences in extreme survival and not just in overall lifespan. The second challenge when developing longevity family scores is the large variation in family size. These differences imply that the available information per family differs. For a family with 12 members, for instance, more information is available than for a family with 2 members only. Importantly, we typically do not know whether these differences are real differences in sibship sizes or the result of missing data caused by limitations of the data collection. If not properly addressed, differences in family size can lead to biased rankings of long-lived families. This can lead to an increased heterogeneity among selected participants in longevity studies and hence reduce the power of analyses. Instead of studying the genetics of longevity, biased selections can potentially lead to the combined analysis of the genetics of longevity, fertility and other factors affecting family size, such as, for example, socio-economic status. Until now, this important challenge has received little attention, and how to address it remains an open question.
In this paper, we investigate to what extent existing longevity family scores such as the Family Longevity Selection Score (FLoSS) [23] and the Longevity Relatives Count (LRC) score [18], are affected by differential family size. Subsequently, we propose an alternative method based on mixed effects regression modelling to deal with differences in family size when building a longevity family score.
The main novelty of our new approach is to consider the family size as a source of uncertainty when estimating the level of longevity of a family. Hence, we propose to select families accounting for such estimated uncertainty. This new approach will contribute to more robust scores and selection rules in longevity studies.
Existing longevity family scores and family size
Several longevity or excess survival family scores have been previously proposed [4, 18, 23,24,25]. Often, to measure individual survival exceptionality, age at death is transformed to the corresponding survival percentile [18] or a related measure such as the cumulative hazard [4, 23, 25] using life table data of a reference population, typically matching for sex and birth cohort. An alternative approach, based on defining individual survival exceptionality as the difference between the individual's age at death and the sample-based expected age at death correcting for a number of confounders, has also been proposed [24].
We focus on two of the previous proposals, representative of two different ways of summarizing individual survival exceptionality within families: the Family Longevity Selection Score (FLoSS) [23] and the Longevity Relatives Count (LRC) score [18]. The FLoSS relies on a sum to summarize survival exceptionality within families, while the LRC score is representative of the rest of previously proposed longevity scores, which all rely on an empirical expectation as summary, i.e., the mean [4, 24, 25] or a proportion [18] depending on the nature of the individual measure of survival exceptionality. These two types of summary measures (sum versus empirical expectation) have different implications with regard to the influence of family size on the resulting scoring system.
The FLoSS favors large families
The Family Longevity Selection Score (FLoSS) [23] was constructed using siblings included in the Long Life Family Study. The FLoSS is a modification of the SEf score which adds a bonus for the presence of living family members. Since the main properties of SEf transfer to FLoSS, for the sake of simplicity we focus on the properties of the SEf, defined, for each family i, as follows:
$$ {SE}_{fi}=\sum \limits_{j=1}^{N_i}{SE}_{ij}=\sum \limits_{j=1}^{N_i}\left(-\mathit{\log}\left(S\left({t}_{ij}|{bc}_{ij},{sex}_{ij}\right)\right)-1\right)=\sum \limits_{j=1}^{N_i}\left(\varLambda \left({t}_{ij}|{bc}_{ij},{sex}_{ij}\right)-1\right), $$
where tij is the age at death of family member j of family i, with j = 1,…,Ni members, S(tij| bcij, sexij) is the survival probability at age tij given sex and birth cohort in the reference population and Λ(tij| bcij, sexij) is the corresponding cumulative hazard. SEij varies between − 1 (if S(tij| bcij, sexij) = 1) and ∞ (if S(tij| bcij, sexij) = 0). The maximum value of SEij is determined by the maximum age recorded in the used life table. If, for example, this maximum age at death is 99, like in the Dutch life tables [26], and the minimum survival in the population is S(99| bcij, sexij) = 0.01, this provides a maximum SEij = 4.6. The reference value SEij = 0 corresponds to S(tij| bcij, sexij) = 0.37. This means that family members with an age at death beyond the top 37% survivors count positively in the score and those with younger ages at death count negatively. For example, using the Dutch life tables, this cut-off would correspond, for those born around 1900, to an age at death of around 73 years for men and around 80 for women. These thresholds are not in line with recent evidence indicating that higher ages at death need to be considered to capture the heritable longevity trait [5, 18]. This problem can be solved by conditioning survival on being alive at a certain age. For example, a conditioning age of 40 years has previously been proposed [23], which increases the age cut-off associated with SEij = 0. Using the Dutch life tables, this would correspond to a cut-off of around 84 years for women and 78 years for men for individuals born around 1900. These ages correspond to survival percentiles at birth of around 0.28 (oldest 28% survivors of their birth cohort), which are likely not extreme enough to capture the heritable longevity trait. This drawback is somewhat compensated by the strongly skewed distribution of SEij, meaning that the impact of increasing, for example, from 95 to 96 years is greater than the increase from 70 to 71.
An additional problem of the SEf score is that it uses the sum over the available family members to summarize the level of survival exceptionality within the family. This implies that large families are systematically overweighted when using SEf. This phenomenon is illustrated in Fig. 1. Three example populations with twenty sibships each and different levels of enrichment for longevity are considered. In the three examples, we consider sibships of increasing size, Ni = i + 1, i = 1,2,...,20. In the first example population, all sibships have two siblings belonging to the top 5% survivors of their sex-specific birth cohort and the rest of the siblings belonging to the top 30% survivors, so these family members are clearly not long-lived. In the second, all sibships have two siblings belonging to the top 10% survivors of their sex-specific birth cohort and the rest of the siblings belonging to the top 30% survivors. In the third example population all siblings belong to the top 30% survivors, representing a population with no long-lived individuals. The left panel of Fig. 1 illustrates the performance of the score SEf in these three examples. Overall, increasing the sibship size leads to larger values of SEf. Moreover, larger families with lower proportions of long-lived members can present a larger value of SEf than small families with a larger proportion of long-lived members. For example, a family with two members belonging to the top 10% survivors and 8 extra not long-lived siblings has a larger SEf than a family with two members in the top 10% survivors and 5 extra not long-lived siblings (black line). It can also happen that a large family where two siblings are top 10% survivors and the rest not long-lived presents a larger SEf than a smaller family where two siblings are top 5% and the rest are not long-lived. The increasing pink line corresponding to the third scenario illustrates that large families with no long-lived family members can present large values of SEf, with SEf arbitrarily increasing in parallel to family size.
Example of three hypothetical populations with 20 sibships with sizes Ni = 2,3,...,21. In each population families are ranked according to SEf (left panel) and LRC (right panel). The black lines represent a population in which all families have two siblings belonging to the top 5% survivors (long-lived) of their sex-specific birth cohort and the rest of the siblings belonging to the top 30% survivors (not long-lived). The blue lines represent a population in which all families have two siblings belonging to the top 10% survivors (long-lived) of their sex-specific birth cohort and the rest of the siblings belonging to the top 30% survivors. The pink lines represent a population composed of families with all family members not long-lived, belonging to the top 30% survivors. The left panel shows the value of SEf with increasing number of not long-lived family members. The right panel shows the value of LRC with increasing number of not long-lived family members. Because of the definition of LRC, black and blue lines coincide in the right panel
In summary, using SEf and FLoSS in the selection of long-lived families may lead to an overrepresentation of large families and hence undesirable heterogeneity in the selected sample of families. Importantly, the size of the families governs the range of variation of the family score, implying that SEf and FLoSS are not comparable when calculated in populations with different underlying family size patterns. Since this is a highly undesirable feature, we will not focus further on the SEf score (and FLoSS) in the rest of the paper.
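A few lines of code make this size dependence explicit. The sketch below (Python; illustrative survival probabilities, not Long Life Family Study data) computes SEf for a family with two top-5% survivors while progressively adding clearly not long-lived members (top 30% survivors, S = 0.30); the score keeps growing with family size:

```python
import numpy as np

def se_f(surv_probs):
    """SEf of one family: sum over members of (cumulative hazard - 1), where
    surv_probs are S(t_ij | birth cohort, sex) taken from reference life tables."""
    surv_probs = np.asarray(surv_probs, dtype=float)
    return np.sum(-np.log(surv_probs) - 1.0)

for extra in (0, 5, 10, 19):
    family = [0.05, 0.05] + [0.30] * extra   # two top-5% survivors + 'extra' top-30% siblings
    print(f"{extra:2d} not-long-lived siblings -> SEf = {se_f(family):.2f}")
```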
The LRC score favors small families
To mitigate the previously explained bias towards large families, a solution is to use a different summary measure at the family level, like the average [4, 25].
In this line, and based on the results of a recent study which shows that longevity is heritable beyond the top 10% survivors of their birth cohort [5], the Longevity Relatives Count (LRC) score has been proposed [18]. The original definition of the LRC score allows for the inclusion of family members with different degrees of relatedness. Here, we focus on its simplest form, considering only siblings in its construction:
$$ {LRC}_i=\frac{\sum \limits_{j=1}^{N_i}I\left({P}_{ij}\ge 0.9\right)}{N_i} $$
where Pij is the sex and birth cohort specific percentile survival of individual j of family i, i.e., Pij = 1 − S(tij| bcij, sexij). I(Pij ≥ 0.9) is an indicator variable taking value 1 if individual j belongs to the top 10% survivors of his/her sex-specific birth cohort and 0 otherwise. As a result, LRCi is the proportion of members of family i belonging to the group of top 10% survivors, defined as long-lived. The LRC is bounded between 0 and 1, providing a clear interpretation and comparability across populations. A drawback is that it is based on a binary definition of longevity, ignoring differences in longevity beyond the top 10% of survivors.
The LRC score is based on calculating a proportion, and as a consequence, the resulting ranking based on this score indirectly favors small families. It is easier for small families than for large ones to have 100% of their members among the top 10% survivors. Hence, in small families it can be questioned whether a large LRC truly captures the heritable longevity trait.
The problem of this approach is of a different nature than that of the SEf score. While adding not long-lived family members implies an increase in SEf, this is not the case for LRC (Fig. 1, right panel). Instead of a systematic bias, we now face a problem of different uncertainty levels depending on the size of the family, which cannot be properly captured by an empirical proportion. Consider the following example for illustration: two families, both with half of the siblings long-lived, but in the first case the sibship size was 2 and in the second case the sibship size was 10. It is clear that there is more information in the second case and hence the ranking should also take this into account. However, using empirical proportions, small families are favored.
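The converse effect is equally easy to reproduce. In the sketch below (Python; made-up survival percentiles), a sibship of two with a single long-lived member obtains a higher LRC than a sibship of ten with four long-lived members, even though the latter carries far more evidence of familial longevity:

```python
import numpy as np

def lrc(percentiles, cutoff=0.9):
    """LRC of one family: proportion of members whose survival percentile
    P_ij = 1 - S(t_ij | birth cohort, sex) is at or above the cutoff (top 10%)."""
    return float(np.mean(np.asarray(percentiles, dtype=float) >= cutoff))

print(lrc([0.95, 0.40]))                           # 1 of 2 long-lived  -> 0.50
print(lrc([0.95, 0.93, 0.92, 0.91] + [0.40] * 6))  # 4 of 10 long-lived -> 0.40
```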
Accounting for uncertainty in longevity family scores
To deal with the heterogeneity in information between families caused by their size, we propose to use mixed effects regression modelling in the estimation of family scores. In particular, we focus on the LRC, and extend its concept by introducing family specific random effects.
Let Yij = I(Pij ≥ c) be a binary random variable that indicates if Pij is equal to or larger than c, where Pij is the percentile survival of individual j of family i, and c is a pre-specified threshold of longevity, for example c = 0.90. Let ui be a random effect shared by the members of the same family that reflects the unobserved factors contributing to longevity.
Assuming that Yij follows a Bernoulli distribution, the family specific probability to reach c is given by the following logistic regression model with random intercept:
$$ {p}_i=P\left({Y}_{ij}=1|{u}_i\right)=\frac{e^{\beta_0+{u}_i}}{1+{e}^{\beta_0+{u}_i}} $$
We assume that ui follows a normal distribution with mean zero and variance σ2. Then, the parameters β0 and σ2 can be estimated maximizing the resulting likelihood function \( \prod \limits_{i=1}^N{L}_i\left({\beta}_0,\sigma \right)=\int \prod \limits_{j=1}^{N_i}P{\left({Y}_{ij}=1|{u}_i\right)}^{y_{ij}}{\left(1-P\left({Y}_{ij}=1|{u}_i\right)\right)}^{\left(1-{y}_{ij}\right)}f\left({u}_i;{\sigma}^2\right)d{u}_i \), where N is the total number of families, Ni is the number of family members of family i and f is the density function of ui. Maximization of the likelihood cannot be analytically solved and requires numerical approximation techniques (e.g. quadrature methods).
Finally, we can obtain \( {\hat{p}}_i \), the expected value of pi given the observed data of family i and the estimated β0 and σ, denoted by \( {\hat{\beta}}_o \) and \( \hat{\sigma} \), as
$$ {\hat{p}}_i={\int}_{-\infty}^{\infty}\frac{e^{{\hat{\beta}}_0+u}}{1+{e}^{{\hat{\beta}}_0+u}}\ f\left(u|{y}_{i1},\dots, {y}_{i{N}_i},{\hat{\beta}}_0,\hat{\sigma}\right) du $$
where \( f\left(u|{y}_{i1},\dots, {y}_{i{N}_i},{\hat{\beta}}_0,\hat{\sigma}\right) \) is the density of the posterior distribution of the family specific random effect. Using Bayes' rule, this density can be obtained as
$$ f\left(u|{y}_{i1},\dots, {y}_{i{N}_i},{\hat{\beta}}_0,\hat{\sigma}\right)=\frac{f\left({y}_{i1},\dots, {y}_{i{N}_i}|{\hat{\beta}}_0,u\right)f\left(u|\hat{\sigma}\right)}{\int_{-\infty}^{\infty }f\left({y}_{i1},\dots, {y}_{i{N}_i}|{\hat{\beta}}_0,u\right)f\left(u|\hat{\sigma}\right) du} $$
where \( f\left({y}_{i1},\dots, {y}_{i{N}_i}|{\hat{\beta}}_0,u\right)=\prod \limits_{j=1}^{N_i}P{\left({Y}_{ij}=1|{u}_i\right)}^{y_{ij}}{\left(1-P\left({Y}_{ij}=1|{u}_i\right)\right)}^{\left(1-{y}_{ij}\right)} \).
We propose to consider \( {\hat{p}}_i \) as a new longevity family score of family i, and we denote it by mLRCi. In this way, mLRC can be regarded as a model-based version of LRC which includes shrinkage based on Ni. mLRCi can still be interpreted as the proportion of long-lived members of family i but it captures the uncertainty due to family size by the different 'weight' each family receives through its estimated random effect \( {\hat{u}}_i \).
The new mLRC family score, together with the LRC and FLoSS have been implemented in R. The code is provided as supplementary material.
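Independently of the authors' supplementary R code, the estimation can be sketched in a few dozen lines. The Python fragment below is an illustrative implementation (function names and starting values are ours): it approximates the marginal likelihood with Gauss–Hermite quadrature, one of the quadrature methods mentioned above, maximizes it with a simple Nelder–Mead search for β0 and σ, and then computes the posterior mean of pi that defines mLRCi:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def _neg_log_lik(params, counts, sizes, nodes, weights):
    """Negative marginal log-likelihood of the random-intercept logistic model,
    integrating over u ~ N(0, sigma^2) with Gauss-Hermite quadrature."""
    beta0, log_sigma = params
    u = np.sqrt(2.0) * np.exp(log_sigma) * nodes        # quadrature points in u-space
    p = expit(beta0 + u)                                # P(Y = 1 | u) at each node
    ll = 0.0
    for k, n in zip(counts, sizes):                     # k long-lived out of n siblings
        ll += np.log(np.sum(weights * p**k * (1.0 - p)**(n - k)) / np.sqrt(np.pi))
    return -ll

def mlrc_scores(counts, sizes, n_quad=40):
    """counts[i] = number of top-10% survivors in family i, sizes[i] = N_i."""
    counts, sizes = np.asarray(counts, float), np.asarray(sizes, float)
    nodes, weights = np.polynomial.hermite.hermgauss(n_quad)
    fit = minimize(_neg_log_lik, x0=[-2.0, 0.0],
                   args=(counts, sizes, nodes, weights), method="Nelder-Mead")
    beta0, sigma = fit.x[0], np.exp(fit.x[1])
    p = expit(beta0 + np.sqrt(2.0) * sigma * nodes)
    scores = []
    for k, n in zip(counts, sizes):
        post = weights * p**k * (1.0 - p)**(n - k)       # unnormalised posterior weights of u
        scores.append(np.sum(post * p) / np.sum(post))   # posterior mean of p_i = mLRC_i
    return np.array(scores), beta0, sigma

# Toy example: the first two families share the empirical proportion 1/2,
# but the larger one is shrunk less strongly towards the population average.
scores, b0, s = mlrc_scores(counts=[1, 5, 0], sizes=[2, 10, 8])
print(np.round(scores, 3))
```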
Simulation study
Simulated data is generated under the assumption that a latent factor, shared by the members of the same family, controls the degree of longevity of the family. Based on the simulated data, we can measure the level of agreement between the underlying longevity factor and different longevity family scores.
Characteristics of the simulated datasets, such as the number and size of the families, are chosen to mimic our real data set. In each run of the simulation, we simulated N = 1000 families of different sizes, namely 200 families of each of the sizes 2, 3, 8, 10, and 14 individuals. For each individual j of family i, where i = 1,...,N, we sampled survival percentiles pij from a beta distribution with parameters a = exp(0.1) and b = a × exp(−(1 + ui)), where ui was a random effect common to the Ni members of family i. The random effect was sampled from a normal distribution with mean 0 and standard deviation 2. Large values of ui decreased the survival percentile pij, which meant that the families with the lowest values of the random effect were the most enriched for longevity.
For each family, we computed the LRC score and the new model-based LRC (mLRC). Both scores were compared in terms of their relation with family size and performance as selection tools. The simulation was repeated 1000 times.
Table 1 shows the distribution of family size according to the values of LRC and mLRC. The LRC score is strongly affected by family size; families with low sibship sizes tend to have large values of LRC (left column of Table 1). No clear relation between family size and mLRC is observed (right column of Table 1), which is in agreement with the data generation mechanism. Figure 2 shows the comparison between the LRC and mLRC for all the families in one simulation run. For small families, the mLRC score is typically lower than the LRC score when the LRC score is large. This is caused by the penalization of our new method due to lack of information in small families. Analogously, small families are weighted upwards when the LRC score is low, following the same principle of greater uncertainty when the family size is small. Nevertheless, if the level of exceptionality of the observed family members is large, small families can still outperform large families. This is illustrated by small families (for example, with Ni = 2, red dots) appearing at the right part of the graphic in Fig. 2. The ability of mLRC to correctly deal with differences in family size explains why the association between family size and the mLRC score is very low (right column of Table 1).
Table 1 Family size and family scores in simulated data
Comparison of LRC and mLRC with simulated data. For each of the N = 1000 families in one simulation run, we display the LRC score (x-axis) against the mLRC score (y-axis). Every point in the graphic represents a family, colored according to its size. Red dots represent families of size Ni = 2, light blue dots represent families of size Ni = 3, dark blue dots represent families of size Ni = 8, grey dots represent families of size Ni = 10 and black dots represent families of size Ni = 14
To evaluate the performance of selection rules based on the LRC and mLRC scores, we considered two definitions of longevity. First, the 10% of families with the lowest value of the random effect u were defined as truly long-lived. Second, we considered the 5% of families with the lowest value of the random effect u as truly long-lived. For both definitions, we checked the agreement between the truly long-lived families and the selected families based on the LRC and mLRC scores. To perform this selection, the families with the 10% (respectively 5%) largest LRC or mLRC score were labeled as long-lived. Since our main interest was to avoid families not enriched for longevity in our selection, we used the positive predictive value (PPV) as summary measure of our simulations. The PPV is defined as the proportion of truly long-lived families among those classified as long-lived using the score-based selection rule under investigation.
Figure 3 shows the distribution of the positive predictive values from the 1000 simulation runs. When defining the 10% of families with the lowest value of the random effect u as truly long-lived (left panel of Fig. 3), the mean PPV for the selection based on LRC was 54% (sd = 4%), meaning that on average, among the top 10% of the 1000 families classified as long-lived according to LRC, 54% were truly long-lived. The mean PPV increased to 62% (sd = 4%) when using mLRC for selection of the top 10% families. If we focus on the top 5% families (right panel of Fig. 3), the average accuracy of the selection based on LRC decreased (mean PPV = 0.52, sd = 0.13). In addition, we found large variability of the PPV among simulation runs, which indicates unstable performance of the LRC score. In contrast, the accuracy based on mLRC increased in this case (mean PPV = 0.67, sd = 0.06). These results show that selection of families based on mLRC clearly outperforms selection based on LRC.
Evaluation of LRC and mLRC as selection tools with simulated data. Distribution of positive predictive values (PPV) across 1000 simulation runs. For each simulation run, the PPV associated with the selection rule under investigation was computed. Black lines represent the results based on LRC and grey lines represent the results based on mLRC. The left panel shows the results when defining the 10% of families with the lowest value of the random effect u as truly long-lived and the selection criterion is declaring families with the 10% largest values of the score as long-lived. The right panel shows the results for the stricter definition of longevity, based on the 5% lowest values of the random effect u and the selection criterion is declaring families with the 5% largest values of the score as long-lived
Real data: the historical sample of the Netherlands
The Historical Sample of the Netherlands (HSN) Long Lives study [27, 28] is an extensive database which contains lifetime data for the members of 1326 five-generational families, each built around a single proband (Index Person, IP) [29]. We focus on the siblings present in the second (F2) generation, who are the children of the IPs. The selection of a part of these IPs was enriched for longevity. Specifically, the selected IPs were part of a case-control study to compare differences in longevity among descendants of 884 IPs who died at 80 years or beyond (case group) and 442 IPs who died between 40 and 59 years (control group) [18, 30]. After removing individuals with missing age at death, single-child sibships, and individuals belonging to non-extinct birth cohorts at the date of data collection (death dates were updated in 2017 and 110 years was set as the maximum age), the final sample of our analysis consisted of 1105 sibships, children of the aforementioned HSN IPs, corresponding to 5361 individuals.
To evaluate the performance of the new longevity family score mLRC and compare it to the original LRC, we first randomly selected a sample of independent individuals by choosing one individual at random from each of the 1105 available sibships. This set of independent individuals was set aside from the score calculations and subsequently used as a validation set to evaluate score performance. This validation set resembles the potential candidates to be included in, for example, a GWA study of longevity. Then, LRC and mLRC were calculated based on a sample of 4256 individuals. Afterwards, based on both scores, we selected long-lived families and checked whether those selections corresponded to a survival benefit in the validation set using Cox proportional hazards regression.
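This validation step can be reproduced with standard survival-analysis tooling. The sketch below (Python with the lifelines package; the column names and the data are synthetic stand-ins, not the HSN records) fits the adjusted Cox model with a 0/1 selection indicator:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic stand-in for the validation set: one randomly drawn sibling per
# sibship; lifespans are fully observed (extinct birth cohorts), so event = 1.
rng = np.random.default_rng(1)
n = 1105
selected = rng.binomial(1, 0.15, n)          # families flagged as long-lived by a score
sex = rng.binomial(1, 0.5, n)
birth_cohort = rng.integers(1880, 1910, n)
age = np.clip(rng.normal(72 + 4 * selected + 3 * sex, 10, n), 40, 110)

df = pd.DataFrame({"age": age, "event": 1, "selected": selected,
                   "sex": sex, "birth_cohort": birth_cohort})
cph = CoxPHFitter().fit(df, duration_col="age", event_col="event")
print(cph.summary.loc["selected", ["coef", "exp(coef)", "p"]])
# A negative coefficient (hazard ratio below 1) for `selected` indicates excess
# survival among validation individuals from families classified as long-lived.
```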
The sibship size varied widely in the sample (Fig. 4). As expected, LRC is largely affected by family size, and families with large values of LRC present lower sibship sizes (Table 2). We do not observe a pattern in family size according to the estimated level of familial longevity using mLRC. Figure 5 shows the distribution of the LRC and mLRC scores in the analyzed sibships of the HSN dataset.
Sibship size in the HSN data
Table 2 Family size and family scores in the HSN data
Distribution of the LRC (left panel) and mLRC (right panel) scores in the analyzed sibships of the HSN dataset
Previous literature [18] has suggested LRC ≥ 0.3 as a selection criterion to capture the heritable longevity trait. In our sample, LRC ≥ 0.3 corresponds to the selection of the 15% of families with the largest values of the LRC score. We evaluated the performance of this selection criterion by comparing the survival of the individuals of the validation set belonging to the selected families to that of the rest of the individuals in the validation set. Analogously, we selected the top 15% of families according to the ranking resulting from using the mLRC as longevity score, which corresponds to defining families with mLRC ≥ 0.15 as long-lived, and evaluated this selection strategy using the validation set. For each of the proposed selections, we fitted a Cox regression model with the corresponding selection indicator as explanatory variable. Both models were adjusted for gender and birth cohort. Table 3 shows that the selection of long-lived individuals based on the mLRC score predicts excess survival in the validation set better than the selection based on the LRC score (βLRC ≥ 0.3 = − 0.287, βmLRC ≥ 0.15 = − 0.321).
Table 3 Evaluation of selection strategies of long-lived families based on LRC and mLRC scores in the HSN
We proposed a method based on mixed effects regression modelling to estimate longevity family scores and properly account for differences in family size when ranking families according to their longevity and using this ranking for the selection of participants in longevity studies. Our simulation study and real data analysis show that the newly proposed approach (mLRC) yields better results than its empirical counterpart (LRC) in terms of selection of long-lived individuals. We showed that the SEf score and FLoSS increase with the addition of non-long-lived family members and that their interpretation is governed by the underlying family size distribution. We also showed that the LRC score puts too much weight on small, less-informative families. The mLRC score was not affected by sibship size and therefore its resulting ranking better predicted the survival of 1105 independent study participants. The new mLRC score seems to reduce heterogeneity in the selection of families, and its application could potentially help to improve power and reduce bias in longevity studies.
Our current approach has some limitations. First, the binary nature of the current mLRC discards important information which could help to improve its performance. An interesting property of the SEf score and the FLoSS is their continuous nature. Other continuous longevity family scores have been previously proposed [4, 24, 25]. The Longevity Family Score (LFS) [4] and the Family Mortality History Score (FMHS) [25] are closely related to the SEf and FLoSS, since all use the same measure of individual survival exceptionality based on transforming the observed ages at death to survival percentiles in a reference population using life tables. The FMHS is restricted to parental data and hence not subject to differential family size. The LFS, the SEf and the FLoSS are extensions of the FMHS which can deal with sibships of arbitrary size. The Familial Excess Longevity (FEL) score [24] is also continuous, but it does not rely on population life tables. Instead, individual survival exceptionality is defined as the difference between observed and expected age, derived from an accelerated failure time regression model. Both the LFS and the FEL scores are based on the mean as family-specific summary measure and hence share with the LRC score the discussed limitations of empirical expectations.
A potential drawback of all these continuous longevity scores is that relatively young family members can contribute positively to these scores. Even after conditioning on being older than 40, as proposed for the FLoSS, the resulting score is probably influenced by ages at death which are not extreme enough to capture the heritable longevity trait. This is supported by studies that have pointed towards increasing family aggregation of survival when focusing on more extreme ages at death for longevity definition [13, 31] and by recent publications indicating that the longevity trait seems to be heritable considering lifespan thresholds beyond the top 10% survivors of a given birth cohort [5]. A model-based modified version of SEf or the LFS which minimizes the contribution of young family members seems a promising line of future research. However, the extremely skewed distribution of the individual measure of longevity of these scores makes the extension of our method not straightforward.
Another important topic is dealing with alive or lost-to-follow-up (right-censored) individuals when constructing longevity family scores. We have assumed full observation of the lifespans of the siblings included in the calculation of the score, so the scores can be regarded as family history scores of alive relatives who could potentially be selected to participate in a (genetic) longevity study.
The FLoSS score is the extension of the discussed score SEf that allows for the inclusion of right-censored observations. The FLoSS follows a single imputation approach, imputing for alive individuals the sex- and birth-cohort-specific conditional expected age at death. Such single imputation underestimates the uncertainty of estimates and can potentially lead to bias. More advanced methods are possible in the mixed effects setting, and their inclusion is left as a subject of future research. Finally, recent evidence [9] indicates that the inclusion of family members of different degrees of relatedness is of great importance to capture the heritable longevity phenotype, and hence the proposed method should also be extended to this more complex setting.
Finally, it is important to mention that our approach may result in selections that are influenced by family-shared non-genetic factors. Although previous research based on historical pedigree data has led to little evidence for associations between non-genetic factors such as socio-economic status, fertility factors or religious denomination and familial longevity [5, 8,9,10], other socio-behavioral and environmental factors such as personality and lifestyle may influence familial clustering of longevity. Since many of these also have a strong genetic component themselves, it is likely that gene-environment interactions explain part of the familial clustering of longevity. Still, in this complex setting, the use of well-designed family scores is expected to reduce genetic heterogeneity and contribute to a power increase in case-control longevity studies to detect novel genetic loci. Moreover, our mLRC score can be applied in more general longevity studies devoted to investigating the interplay between genetic and non-genetic factors involved in longevity.
Properly accounting for differences in family size is of paramount importance when deriving family scores of longevity and using them for ranking families and selecting participants in longevity studies. The methodology described in this paper is therefore of great relevance and can help to improve the selection of participants in future longevity studies.
The data used for this study will be made freely available at the Data Archiving and Networked Services (DANS) repository but are currently not yet publicly available due to ongoing checks to guarantee that the data sharing process is in accordance with Dutch and international privacy legislation. Data are however available from the authors upon reasonable request.
FLoSS:
Family Longevity Selection Score
FMHS:
Family Mortality History Score
HSN:
Historical Sample of the Netherlands
IP:
Index person
LFS:
Longevity Family Score
LRC:
Longevity Relatives Count
mLRC:
model-based Longevity Relatives Count
SEf:
Survival Exceptionality
van den Berg N, Beekman M, Smith KR, Janssens A, Slagboom PE. Historical demography and longevity genetics: Back to the future. Ageing Res Rev. 2017;38:28–39.
Herskind AM, et al. The heritability of human longevity: a population-based study of 2872 Danish twin pairs born 1870–1900. Hum Genet. 1996;97:319–23.
Perls TT, et al. Life-long sustained mortality advantage of siblings of centenarians. Proc Natl Acad Sci. 2002;99:8442–7.
van den Berg N, et al. Longevity around the turn of the 20th century: life-long sustained survival advantage for parents of Today's nonagenarians. J Gerontol Ser A. 2018;73:1295–302.
van den Berg N, et al. Longevity defined as top 10% survivors and beyond is transmitted as a quantitative genetic trait. Nat Commun. 2019;10:35.
Schoenmaker M, et al. Evidence of genetic enrichment for exceptional survival using a family approach: the Leiden longevity study. Eur J Hum Genet. 2006;14:79–84.
Ljungquist B, Berg S, Lanke J, McClearn GE, Pedersen NL. The effect of genetic factors for longevity: a comparison of identical and fraternal twins in the Swedish twin registry. J Gerontol Ser A Biol Sci Med Sci. 1998;53:441–6.
You D, Danan G, Yi Z. Familial transmission of human longevity among the oldest-old in China. J Appl Gerontol. 2010;29:308–32.
Gavrilov LA, Gavrilova NS. Predictors of exceptional longevity: effects of early-life and midlife conditions, and familial longevity. North Am Actuar J. 2015;19:174–86.
Mourits RJ, et al. Intergenerational transmission of longevity is not affected by other familial factors: evidence from 16,905 Dutch families from Zeeland, 1812-1962. Hist Fam. 2020;25:484–526.
Deelen J, et al. A meta-analysis of genome-wide association studies identifies multiple longevity genes. Nat Commun. 2019;10:3669.
Shadyab AH, LaCroix AZ. Genetic factors associated with longevity: a review of recent findings. Ageing Res Rev. 2015;19:1–7.
Slagboom EP, van den Berg N, Deelen J. Phenome and genome based studies into human ageing and longevity: an overview. Biochim Biophys Acta Mol Basis Dis. 2018;1864:2742–51.
Deelen J, et al. Genome-wide association meta-analysis of human longevity identifies a novel locus conferring survival beyond 90 years of age. Hum Mol Genet. 2014;23:4420–32.
Sebastiani P, et al. Four genome-wide association studies identify new extreme longevity variants. J Gerontol A Biol Sci Med Sci. 2017;72:1453–64.
Flachsbart F, et al. Immunochip analysis identifies association of the RAD50/IL13 region with human longevity. Aging Cell. 2016;15:585–8.
Zeng Y, et al. Novel loci and pathways significantly associated with longevity. Sci Rep. 2016;6:21243.
van den Berg N, et al. Longevity Relatives Count score defines heritable longevity carriers and suggest case improvement in genetic studies. Aging Cell. 2020;19:e13139.
Sebastiani P, Nussbaum L, Andersen SL, Black MJ, Perls TT. Increasing Sibling Relative Risk of Survival to Older and Older Ages and the Importance of Precise Definitions of "Aging," "Life Span," and "Longevity". J Gerontol Ser A Biol Sci Med Sci. 2016;71:340–6.
Oeppen J, Vaupel JW. Demography. Broken limits to life expectancy. Science. 2002;296:1029–31.
Liu JZ, Erlich Y, Pickrell JK. Case–control association mapping by proxy using family history of disease. Nat Genet. 2017;49:325–31 https://doi.org/10.1038/ng.3766.
Hujoel MLA, Gazal S, Loh P, Patterson N, Price AL. Liability threshold modeling of case-control status and family history of disease increases association power. Nat Genet. 2020;52:541–7.
Sebastiani P, et al. A family longevity selection score: ranking Sibships by their longevity, size, and availability for study. Am J Epidemiol. 2009;170:1555–62.
Kerber RA, Brien EO, Smith KR, Cawthon RM. Familial excess longevity in Utah genealogies. J Gerontol Ser A Biol Sci Med Sci. 2001;56:130–9.
Rozing MP, Houwing-Duistermaat JJ, Slagboom PE, et al. Familial longevity is associated with decreased thyroid function. J Clin Endocrinol Metab. 2010;95:4979–84.
van der Meulen A. Life tables and survival analysis. Tech report. The Netherlands: CBS; 2012. https://www.cbs.nl/NR/rdonlyres/C047245B-B20E-492D-A4119F298DE7930C/0/2012LifetablesandSurvivalanalysysart.pdf.
Mandemakers K. Historical sample of the Netherlands. In: Hall PK, McCaa R, Thorvaldsen G, editors. Handbook of International Historical Microdata for Population Research; 2000. p. 149–77.
van den Berg N, et al. Families in comparison: an individual-level comparison of life course and family reconstructions between population and vital event registers. SocArXiv. 2018. https://osf.io/preprints/socarxiv/h2w8t/.
Mandemakers, K. 2010. https://socialhistory.org/en/hsn/hsn-releases. HSN 2010.01 release.
Mandemakers K, Munnik C. Historical Sample of the Netherlands. Project Genes, Germs and Resources. Dataset LongLives. Release 2016.01. International Institute of Social History. https://pure.knaw.nl/portal/en/datasets/historical-sample-of-the-netherlands-project-genes-germs-and-reso.
Gavrilova NS, Gavrilov LA. When does human longevity start?: demarcation of the boundaries for human longevity. Rejuvenation Res. 2001;4:115–24.
Mar Rodríguez-Girondo has received financial support from MTM2017–89422-P (MINECO/AEI/FEDER,UE) project. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Department of Biomedical Data Sciences, section of Medical Statistics, Leiden University Medical Center, Albinusdreef 2, 2333, ZA, Leiden, the Netherlands
Mar Rodríguez-Girondo
Department of Biomedical Data Sciences, Section of Molecular Epidemiology, Leiden University Medical Center, Albinusdreef 2, 2333, ZA, Leiden, the Netherlands
Niels van den Berg, Marian Beekman & Eline Slagboom
Department of Clinical Epidemiology, Biostatistics, and Bioinformatics, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, 1105, AZ, Amsterdam, the Netherlands
Michel H. Hof
Niels van den Berg
Marian Beekman
Eline Slagboom
M.R.G. and M.H.P.H. conceived the new mLRC method. M.R.G. performed the computations and data analysis. N.v.d.B. preprocessed real data and participated in real data analysis. M.B. and E.P.S. supervised the findings of this work. All authors discussed the results and contributed to the final manuscript. The author(s) read and approved the final manuscript.
Correspondence to Mar Rodríguez-Girondo.
No permission from the ethical medical commission was required to collect and analyze the HSN data. The authors got formal permission to analyze and publish the data from the International Institute for Social History (IISG).
Rodríguez-Girondo, M., van den Berg, N., Hof, M.H. et al. Improved selection of participants in genetic longevity studies: family scores revisited. BMC Med Res Methodol 21, 7 (2021). https://doi.org/10.1186/s12874-020-01193-7
Mixed effects modelling
Family history score | CommonCrawl |
Study of structure and magnetic properties of Ni–Zn ferrite nano-particles synthesized via co-precipitation and reverse micro-emulsion technique
M. Abdullah Dar1,2,
Jyoti Shah2,
W. A. Siddiqui1 &
R. K. Kotnala2
Applied Nanoscience volume 4, pages 675–682 (2014)Cite this article
Nano-crystalline Ni–Zn ferrites were synthesized by chemical co-precipitation and by a reverse micro-emulsion technique, with average crystallite sizes of 11 and 6 nm, respectively. The reverse micro-emulsion method has been found to be more appropriate for nano-ferrite synthesis, as the produced particles are mono-disperse and highly crystalline. Zero-field-cooled and field-cooled magnetization studies under different magnetic fields, as well as magnetic hysteresis loops at different temperatures, have been performed. The non-saturated M–H loops and the absence of hysteresis and coercivity at room temperature indicate the presence of super-paramagnetic, single-domain nano-particles in both materials. In sample 'a', the blocking temperature (TB) has been observed to decrease from 255 to 120 K on increasing the magnetic field from 50 to 1,000 Oe, which can be attributed to the reduction of the magneto-crystalline anisotropy constant. The MS and coercivity were found to be higher for sample 'a' than for sample 'b', since surface effects become less important as the crystallite size increases.
Nano-crystalline ferrites are materials of considerable interest because of their unique dielectric, magnetic, and optical properties, which make them appealing both from the scientific and technological points of view (Wang et al. 1998; Abdullah Dar et al. 2010). These magnetic materials are the basis of a very active research field due to the new phenomena taking place at the nano-scale level as a consequence of the interplay of quantum, finite-size, surface, and interfacial effects. Spinel ferrite nano-particles with a high surface area have many technical applications in several fields such as high-density information storage, ferro-fluids, catalysts, drug targeting, hyperthermia, magnetic separation, and magnetic resonance imaging (Cannas et al. 2010). The key questions in these systems are how these nano-structures modify their magnetic and electronic properties and how one can take advantage of those new properties to improve the applications. Consequently, understanding and controlling the effects of the nanostructures on the properties of the particles have become increasingly relevant issues for technological applications (Batlle et al. 2011).
The study of the magnetic behavior of ferrites has attracted considerable attention over the past few decades, in particular because deviations from bulk behavior have been widely reported for particle sizes below about 100 nm. This is because of finite-size effects and the increasing fraction of atoms lying at the surface with lower atomic coordination than in the core as the size of the particle decreases, thus giving rise to a significant decrease in the particle magnetization (Batlle et al. 2011). The surface spin disorder due to symmetry breaking at the surface leads to a smaller saturation magnetization, high-field differential susceptibility, extremely high closure fields, and shifted hysteresis loops after field cooling the sample, together with glassy behavior. This has been explained in terms of the existence of a surface layer of disordered spins that freeze in a spin-glass-like state due to magnetic frustration, yielding both an exchange field acting on the ordered core and an increase in the particle anisotropy. The most obvious and heavily studied finite-size effect is super-paramagnetism. The basic principle is that the magnetic anisotropy energy which keeps a particle magnetized in a particular direction is generally proportional to the particle volume. Therefore, below a critical size at room temperature, the thermal fluctuations are sufficient to rotate the particle magnetization, hence demagnetizing an assembly of such particles. Although this is a well-studied effect, it is understood only on a phenomenological level.
Recently, considerable attention has been paid to ferrites of different morphologies, and their shape- and size-dependent properties as well as the corresponding applications have been investigated. Both physical and chemical methods have been developed for the synthesis of ferrite nano-structures of various morphologies. The chemical methods have advantages over the physical ones, such as low cost, reactions taking place at low temperature, and the possibility of large-scale production. It is widely appreciated that the cation distribution in spinel ferrites, upon which many physical and chemical properties depend, is a complex function of processing parameters and depends on the preparation method of the material (Sepelak et al. 2007). Selection of an appropriate method is the key factor in obtaining ferrites of high quality. Various processing methods have been developed to obtain nano-crystalline ferrites, such as hydrothermal synthesis, chemical co-precipitation, polymeric precursor techniques, sol–gel, shock wave, spray drying, sono-chemical processing, and mechanical alloying (Sivakumar et al. 2011; Kotnala et al. 2010). It is well known that different routes produce different microstructures and crystal sizes. The chemical co-precipitation method is therefore widely used because of its ease and reproducibility, but it leads to the precipitation of nano-crystals with a relatively broad size distribution (Zhang et al. 1998). In the present study a reverse micro-emulsion technique has been selected for the synthesis of nano-crystals. Reverse micelles, which are essentially nano-sized aqueous droplets that exist in micro-emulsions with certain compositions, are known to provide an excellent medium for the synthesis of nano-particles. The particles produced by this method are generally very fine, mono-disperse, morphologically controlled, and highly crystalline as compared with other processes (Li and Park 1999; Yener and Giesche 2001). Current interest in synthesizing nano-structured nickel–zinc ferrite particles is due to their low coercivity, high resistivity, and high saturation magnetization, similar to that of magnetite (Thakur et al. 2007). In this work, we report the synthesis of a nano-crystalline mixed ferrite system, namely Ni0.7Zn0.3Fe2O4, by chemical co-precipitation and by the reverse micro-emulsion method. The structure and morphology analysis has been carried out using X-ray diffraction (XRD) and transmission electron microscopy (TEM). The magnetic properties of the synthesized samples were measured using a vibrating sample magnetometer at different temperatures and are discussed in detail.
Synthesis of Ni0.7Zn0.3Fe2O4 ferrite nano-crystals by reverse micro-emulsion method
The reverse micro-emulsion system consists of cyclo-hexane as the oil phase, cetyl-tri-methyl-ammonium bromide (CTAB) as the surfactant, and iso-amyl-alcohol as the co-surfactant. All the chemicals used in this work were of analytical grade and used as received without further purification. Reverse micro-emulsions were prepared by mixing 10.20 g of CTAB, 12.81 ml of iso-amyl-alcohol, and 30.48 ml of cyclo-hexane with 5.5 wt% of an aqueous solution of the reactants, corresponding to the desired water/[CTAB] ratio of 10.12. The reverse micro-emulsions were sonicated until a clear solution was formed. Two reverse micro-emulsions were prepared for the synthesis of Ni0.7Zn0.3Fe2O4 ferrite nano-crystals, one containing the metal salts, prepared by mixing stoichiometric amounts of 0.2 M Fe(NO3)3·9H2O, 0.07 M Ni(NO3)2·6H2O and 0.03 M Zn(NO3)2·6H2O. In the second reverse micro-emulsion, 0.1 M aqueous NaOH solution was added as the water phase under similar conditions. The two solutions were mixed together quickly with vigorous stirring at constant temperature (80 °C), and the pH of the resulting solution was maintained at 9. The resulting solution was continuously stirred for another 2 h to complete the reaction. An equal volume of acetone and iso-propanol was added to the resulting solution, which was then centrifuged to separate the solid product. The product obtained was washed several times with water and acetone to ensure the complete removal of surfactant molecules. Finally, the product was dried in an air oven at 100 °C for 12 h.
Synthesis of Ni0.7Zn0.3Fe2O4 ferrite nano-crystals by chemical co-precipitation method
In a typical procedure, nano-crystals of Ni0.7Zn0.3Fe2O4 ferrite were synthesized by chemical co-precipitation of Fe(NO3)3·9H2O, Ni(NO3)2·6H2O and Zn(NO3)2·6H2O in an alkaline medium at a constant pH of 9. The stock solutions of all the precursors were prepared with the same concentrations as used in the reverse micro-emulsion method. The stoichiometric amounts of Fe(NO3)3·9H2O, Ni(NO3)2·6H2O, and Zn(NO3)2·6H2O were mixed together. This mixture was then poured into 0.1 M NaOH solution under constant stirring at a constant temperature of 80 °C. The resulting mixture was continuously stirred for 2 h at the same temperature and a pH of 9. The resulting precipitate was then filtered off and washed several times with methanol and double-distilled water, followed by drying in an air oven at 100 °C for 12 h.
X-ray diffraction (XRD) analysis of both samples was carried out using a Rigaku Miniflex diffractometer (step size = 0.02) with CuKα radiation of wavelength λ = 1.5406 Å. The particle size, morphology, and SAED patterns of both ferrite samples were determined by high-resolution transmission electron microscopy (HRTEM) using a JEM 200CX instrument. The variation of magnetization as a function of field, up to a maximum field of 5 k Oe, at different temperatures (80–300 K) was measured using a vibrating sample magnetometer (Lakeshore, 7304). The zero-field-cooled (ZFC) measurement was carried out by cooling the sample in zero magnetic field from 300 to 80 K; the magnetization was then measured while increasing the temperature from 80 to 300 K in a magnetic field of 50 Oe. Field-cooled (FC) measurements were performed in the same manner, with the difference that the cooling of the samples was done in the same field of 50 Oe. In order to study the effect of the applied magnetic field on the blocking temperature (TB) and hence on the magneto-crystalline anisotropy, ZFC–FC measurements were performed at two different magnetic fields (50 and 1,000 Oe).
X-ray diffraction patterns of Ni0.7Zn0.3Fe2O4 ferrite nano-crystals synthesized via chemical co-precipitation and via the reverse micro-emulsion technique are presented in Fig. 1a, b, respectively. The sharp peaks in the diffraction patterns show the crystalline nature of these samples. All the diffraction peaks could be ascribed to reflections from the (220), (311), (400), (422), (511), and (440) planes, which could be indexed to a face-centered cubic Ni–Zn ferrite phase. The reflections were comparatively broad, revealing that the as-prepared crystals are small in size. According to Scherrer's equation (Verma et al. 2010), the average crystallite size determined from the half-width of the most intense peak (311) was found to be 10.3 and 5.7 nm for samples 'a' and 'b', respectively. The difference in crystallite size was probably due to the difference in the preparation conditions followed in this work, which gave rise to different rates of nucleation, growth, coarsening and agglomeration, and hence favored the variation in crystallite size. The X-ray diffraction pattern of the Ni0.7Zn0.3Fe2O4 ferrite nano-crystals obtained from the chemical co-precipitation method exhibits more intense peaks, indicating their higher crystallinity. The lattice constants obtained for the nano-crystals of samples 'a' and 'b', prepared by chemical co-precipitation and by the reverse micro-emulsion method, were found to be 8.35 and 8.41 Å, respectively.
XRD pattern of Ni0.7Zn0.3Fe2O4 ferrite nano-crystals obtained from a co-precipitation and b reverse micro-emulsion method
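As a minimal numerical sketch of the analysis described above, the crystallite size follows from Scherrer's equation, D = Kλ/(β cos θ), and the cubic lattice constant from Bragg's law applied to the (311) reflection; the shape factor K = 0.9 and the illustrative peak position and width used below are assumptions, not the measured values from Fig. 1:

```python
import numpy as np

def scherrer_size(two_theta_deg, fwhm_deg, k=0.9, wavelength=1.5406):
    """Crystallite size in angstrom from the FWHM of a diffraction peak (Cu K-alpha)."""
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)                    # FWHM converted to radians
    return k * wavelength / (beta * np.cos(theta))

def cubic_lattice_constant(two_theta_deg, hkl=(3, 1, 1), wavelength=1.5406):
    """Lattice constant in angstrom of a cubic cell from the Bragg angle of an (hkl) peak."""
    theta = np.radians(two_theta_deg / 2.0)
    d = wavelength / (2.0 * np.sin(theta))         # interplanar spacing d_hkl
    return d * np.sqrt(sum(i * i for i in hkl))

# Illustrative (311) peak near 35.5 deg 2-theta with ~0.8 deg FWHM
print(f"D ~ {scherrer_size(35.5, 0.8) / 10.0:.1f} nm")        # ~10 nm
print(f"a ~ {cubic_lattice_constant(35.5):.2f} angstrom")     # ~8.4 angstrom
```

With these illustrative inputs the routine returns a crystallite size of roughly 10 nm and a lattice constant of roughly 8.4 Å, i.e. the same order as the values quoted above for samples 'a' and 'b'.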
Figure 2a, b shows the transmission electron microscopy (TEM) images of Ni0.7Zn0.3Fe2O4 ferrite nano-crystals synthesized using chemical co-precipitation and the reverse micro-emulsion technique, respectively. The particle size distributions and the selected area electron diffraction (SAED) patterns are included as insets in Fig. 2a, b. The SAED patterns correspond to various diffractions of both samples and confirm spinel phase formation, consistent with the XRD patterns. It is evident from Fig. 2a that the particles obtained by the chemical co-precipitation method are irregular in shape, and the average particle size (~11 nm) obtained from TEM is bigger than that obtained from the X-ray line-broadening technique. This may be due to the agglomeration of fine particles. The reverse micro-emulsion process resulted in a narrower, mono-disperse and nearly spherical particle size distribution with a reduced average diameter (~6 nm), as shown in Fig. 2b. In the reverse micro-emulsion synthesized sample, the average crystallite size obtained from Scherrer's equation was found to be in good agreement with that obtained from TEM, which means that each particle behaves as a single crystal.
TEM images of nano-crystals obtained from a co-precipitation and b reverse micro-emulsion method. The inset in a and b shows SAED pattern and histogram of the particle size distribution
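As a sketch of how the TEM histograms in the insets of Fig. 2 can be summarized, the routine below computes the mean diameter and fits a lognormal distribution, which is commonly assumed for nanoparticle size distributions; the diameters listed are hypothetical placeholders, not values digitized from the micrographs:

```python
import numpy as np
from scipy import stats

# Hypothetical particle diameters (nm) sized from a TEM image
diameters = np.array([9.8, 11.2, 10.5, 12.1, 10.9, 11.6, 10.2, 11.0, 12.4, 10.7])

mean_d = diameters.mean()
std_d = diameters.std(ddof=1)
# Lognormal fit with the location parameter fixed at zero
shape, loc, scale = stats.lognorm.fit(diameters, floc=0)

print(f"mean diameter = {mean_d:.1f} +/- {std_d:.1f} nm")
print(f"lognormal median = {scale:.1f} nm, sigma = {shape:.2f}")
```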
The analysis of the XRD and TEM results reveals that the nano-crystals obtained from the above-mentioned methods have different crystallite sizes. This difference in crystallite size has been attributed to the different preparation conditions followed here, which gave rise to different rates of ferrite formation and hence favored the variation in crystallite size. The reverse micro-emulsion technique allows control over both shape and size via the structure of the surfactant assemblies. It is a technique that allows the preparation of ultrafine nano-particles with diameters in the range of 1–100 nm (Vaidyanathan et al. 2007). The reverse micro-emulsion solutions are mostly transparent, isotropic liquid media with nano-sized water droplets that are dispersed in the continuous oil phase and stabilized by surfactant molecules at the water/oil interface. The nano-environment created by the surfactant-covered water pools not only acts as a nano-reactor for the reaction but also inhibits aggregation, because the surfactant molecules can adsorb on the particle surface when the particle size approaches that of the water pool. As a result, the particles obtained by this technique are generally very fine, mono-disperse, morphologically controlled, and highly crystalline (Li and Park 1999; Yener and Giesche 2001).
Figure 3a, b represents the M–H curves for samples 'a' and 'b', prepared by chemical co-precipitation and by the reverse micro-emulsion method, respectively, measured at 80 and 310 K. It is well known that magnetic nano-particles smaller than 20 nm are usually super-paramagnetic at room temperature. Indeed, all of our measurements at room temperature are in agreement with this view, without any evidence of a significant slowing down in the relaxation of the magnetic domains. Upon cooling, this super-paramagnetic behavior of the magnetization turns into the field dependence typical of magnetic single domains once the temperature drops below the blocking temperature (TB). The typical characteristics of super-paramagnetic behavior, namely the absence of hysteresis and the non-attainment of saturation even up to an applied magnetic field of 5 k Oe, were observed, which is indicative of the presence of super-paramagnetic and single-domain particles in both ferrite samples. At low temperature (80 K), the nano-particles of these ferrites start to show remanence and coercivity and exhibit hysteretic features, as shown in Fig. 3a, b. At this temperature the particles do not have adequate thermal energy to attain complete equilibrium with the applied field during the measurement time, and hence hysteresis appears. The lower MS (31 and 6.1 emu/g for samples 'a' and 'b') of these nano-crystalline ferrites compared with the multidomain bulk value of 70 emu/g (Chikazumi 1964) is attributed to the increased cation disorder and surface effects in nano-crystalline ferrites. This can be explained in terms of the core–shell morphology of the nano-particles, consisting of ferri-magnetically aligned core spins and a spin-canted-like surface layer. The spin disorder at the surface of the nano-particles may essentially modulate the magnetic properties of these materials, especially when the surface/volume ratio is large enough (Lee et al. 2005). When a magnetic field is applied, the core magnetic moments align with the applied field, and at some stage the core magnetization response is exhausted as the core magnetization of the system saturates in a Langevin-like way. Beyond this stage, any increase in the magnetic field acting on the particles has an effect only on the surface layer, and thus the increase in the magnetization of the particles slows down. This specific state of the surface results in an absence of magnetic saturation (Thakur et al. 2009). Further, an increase in the magneto-crystalline anisotropy can result from inter-particle interactions, which are more intense in the case of ferrites because of the super-exchange interactions. The presence of any defect on the surface leads to the weakening of these interactions, inducing a large surface spin disorder (Nathani and Misra 2004). The small value of MS can also be explained on the basis of cation redistribution in these ferrite nano-crystals. In Ni–Zn ferrite, Zn2+ ions occupy the tetrahedral (A) site and Ni2+ ions occupy the octahedral (B) site of the spinel lattice. In nano-scale ferrites, however, the cations occupy lattice sites to a certain degree against their bulk preferences, and this inversion is dependent on particle size. As the crystallite size decreases there is a change in the degree of inversion, i.e., more Ni2+ ions occupy A-sites and more Zn2+ ions occupy B-sites, which results in lower magnetization values for these nano-crystals (Rao et al. 2006).
M–H curves of Ni0.7Zn0.3Fe2O4 ferrite nano-crystals obtained from a co-precipitation and b reverse micro-emulsion method measured with an applied field of 5 k Oe at 80 and 310 K
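A minimal sketch of the Langevin-type picture invoked above is given below: the room-temperature M–H branch is modelled as a superparamagnetic core term plus a linear contribution from the disordered surface layer. The digitized field/magnetization points and the initial guesses are purely illustrative assumptions, not data taken from Fig. 3:

```python
import numpy as np
from scipy.optimize import curve_fit

K_B = 1.380649e-16                      # Boltzmann constant in erg/K (CGS units)

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, with a series expansion for small x."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-3
    safe = np.where(small, 1.0, x)      # placeholder to avoid division by zero
    return np.where(small, x / 3.0, 1.0 / np.tanh(safe) - 1.0 / safe)

def m_of_h(H, Ms, mu, chi, T=310.0):
    """Superparamagnetic core (Langevin) plus a linear high-field term; H in Oe, M in emu/g."""
    return Ms * langevin(mu * H / (K_B * T)) + chi * H

# Hypothetical digitized points from a 310 K loop (H in Oe, M in emu/g)
H = np.array([-5000, -3000, -1000, -200, 0, 200, 1000, 3000, 5000], dtype=float)
M = np.array([-28.0, -24.5, -14.0, -3.2, 0.0, 3.2, 14.0, 24.5, 28.0])

(Ms_fit, mu_fit, chi_fit), _ = curve_fit(m_of_h, H, M, p0=[30.0, 5e-17, 1e-4])
print(f"Ms ~ {Ms_fit:.1f} emu/g, particle moment ~ {mu_fit / 9.274e-21:.0f} mu_B")
```

The fitted particle moment (a few thousand Bohr magnetons for a ~10 nm ferrite particle) and the saturation value of the Langevin term give a compact way of quantifying the "no saturation, no hysteresis" behavior described above.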
The low values of coercivity observed for both samples are listed in Table 1. These low values have been attributed to particle–particle interactions among the nano-crystals owing to their extremely small size. The proximity of the particles has a large effect on the hysteresis, as they either become increasingly exchange coupled or show magnetostatic interactions with decreasing distance between the particles (Verdes et al. 2002). Thus the magnetization increases on lowering the temperature, but the coercivity in sample 'b' follows the opposite trend, that is, it decreases with decreasing measurement temperature. This decrease in coercivity is expected from the lower anisotropy, indicating the highly super-paramagnetic nature of the nano-crystals obtained from the reverse micro-emulsion method (Roca et al. 2007). The Ni–Zn ferrite nano-crystals obtained from chemical co-precipitation have a crystallite size of ~11 nm, which is much bigger than that of the reverse micro-emulsion synthesized particles. The low-temperature coercive field of this small sample is very much larger than expected for coherent magnetization reversal of randomly oriented particles, suggesting that the process is controlled by a different mechanism whose strength is very much larger than that of the magneto-crystalline anisotropy energy (Bates and Wohlfarth 1980). This may be attributed to the appearance of an exchange anisotropy field, which clearly indicates the existence of a magnetically disordered surface layer that becomes frozen at low temperature (Caizer and Stefanescu 2002). Therefore, the observed increase in coercivity can be attributed to the extra energy required for switching the core spins that are pinned by the exchange interactions with the frozen spin-glass-like surface layer. The relevance of the magnetically disordered surface layer for the magnetic properties becomes smaller as the particle size increases and eventually disappears for large enough particles. Consequently, the values of MS and coercivity are higher for sample 'a' than for sample 'b', since surface effects become less important for the larger particles.
Table 1 Variation of lattice parameter and coercivity (at different temperatures) of Ni0.7Zn0.3Fe2O4 ferrite nano-crystals
Figure 4a, b represents the ZFC–FC curves recorded at an applied field of 50 Oe for Ni0.7Zn0.3Fe2O4 nano-crystals synthesized via chemical co-precipitation and via the reverse micro-emulsion method, respectively. The field-cooled (FC) magnetization is obtained by cooling the sample from room temperature to 80 K in a field of 50 Oe and then measuring the magnetization while increasing the temperature. The zero-field-cooled (ZFC) magnetization is obtained by cooling the sample in zero field and then warming it in a field of 50 Oe while measuring the magnetization. From Fig. 4a, b it is clear that on increasing the temperature the ZFC magnetization increases, reaches a maximum at the blocking temperature (TB), and then decreases again. TB is defined as the temperature up to which the nano-particle moments do not relax (are blocked) during the time scale of the measurement. The spins are able to flip only above TB, and the ZFC magnetization then coincides with the FC magnetization (Neel 1953). At temperatures above TB, the thermal energy is larger than the magnetic energy barrier and the material thus shows super-paramagnetism following the Curie–Weiss law. The blocking temperatures of the Ni–Zn ferrite nano-crystals were found to be 225 and 85 K for samples 'a' and 'b', respectively, as is evident from Fig. 4a, b. In order to study the effect of the magnetic field on the blocking temperature, ZFC–FC measurements were also performed with an applied field of 1,000 Oe for sample 'a', as shown in Fig. 4c. From the curves it is clear that the blocking temperature shifts to lower temperature (from 225 to 120 K) with increasing applied field, which can be attributed to the reduction of the magneto-crystalline anisotropy constant. In the FC measurements, the magnetization direction of the nano-particles is frozen along the direction of the applied field when the nano-particles are cooled from room temperature to 80 K. At 80 K, the anisotropy energy barrier prevents the magnetic moments from flipping, and hence a maximum magnetization has been recorded at this temperature. The divergence between the FC and ZFC curves is a characteristic feature of super-paramagnetism and results from the anisotropy energy barrier of the nano-particles. In the ZFC process the nano-particles need to overcome the anisotropy energy barrier, as the moments of the nano-particles are oriented along their easy axes. The anisotropy energy barrier does not have an effect on the FC curve, as the nano-particle moments are aligned with the field during the cooling process. The FC curve appears to be nearly flat below the blocking temperature, as compared with the increasing behavior characteristic of super-paramagnetic systems, which indicates the existence of strong interactions among the nano-particles. However, this feature has been found not to be exclusive to spin glasses, but is also shared by nano-particle systems with random anisotropy and strong dipole–dipole interactions (Jiao et al. 2002). When a magnetic field is applied to a magnetic material, the domain walls rotate in such a way that its multidomain structure changes towards a single-domain structure with increasing field. In addition to extrinsic factors such as defects and lattice strains, another very important factor that plays a crucial role in determining the domain wall motion is the magneto-crystalline anisotropy energy.
ZFC–FC curves of Ni0.7Zn0.3Fe2O4 ferrite nano-crystals obtained from a co-precipitation, b reverse micro-emulsion method recorded at 50 Oe and c recorded at 1,000 Oe for sample a
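A small sketch of how TB can be read off curves such as those in Fig. 4 is given below: TB is taken as the temperature of the ZFC maximum, and the irreversibility point as the temperature where the ZFC and FC branches merge. The toy curves stand in for the measured VSM data, which are assumed to be available as temperature/magnetization arrays:

```python
import numpy as np

# Toy ZFC/FC curves (T in K, M in arbitrary units); in practice these arrays
# would be read directly from the VSM data files.
T = np.linspace(80.0, 300.0, 45)
M_zfc = np.exp(-((T - 225.0) / 90.0) ** 2)                 # peaks near 225 K
M_fc = M_zfc + np.clip((225.0 - T) / 200.0, 0.0, None)     # splits off below ~225 K

T_B = T[np.argmax(M_zfc)]                                  # blocking temperature
merged = np.abs(M_fc - M_zfc) < 0.01 * M_fc.max()
T_irr = T[np.argmax(merged)]                               # first T where ZFC and FC coincide

print(f"T_B ~ {T_B:.0f} K, irreversibility temperature ~ {T_irr:.0f} K")
```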
The magneto-crystalline anisotropy energy of a single-domain particle is given by EA = KV sin²θ, where K is the magneto-crystalline anisotropy constant, V is the volume of the nano-particle, and θ is the angle between the direction of magnetization and the easy axis of the nano-particle. KV is the anisotropy energy barrier for the reversal of magnetic moment (Goya 2004). The cation distribution in a particular ferrite also plays a dominant role in the resultant blocking temperature.
The reversal or switching time, called the Néel relaxation time, is given by relation (1); inverting it gives the anisotropy energy barrier, relation (2):
$$ \tau = \tau_{0} \exp\left( E_{\text{A}} / k_{\text{B}} T_{\text{B}} \right) \qquad (1) $$
$$ E_{\text{A}} = \ln\left( \tau / \tau_{0} \right) k_{\text{B}} T_{\text{B}} , \qquad (2) $$
where τ is the super-paramagnetic relaxation time (60 s), τ0 is a relaxation time constant (∼10⁻¹⁰ s), and TB is the blocking temperature. The magneto-crystalline anisotropy constant is larger for nano-crystalline materials than for the bulk; it increases with decreasing particle size and decreases with increasing applied magnetic field. Assuming the particles to be spherical, the magneto-crystalline anisotropy constant for sample 'a' was found to decrease from 12.63 × 10⁵ to 5.94 × 10⁵ erg/cm³ on increasing the magnetic field from 50 to 1,000 Oe. The magneto-crystalline anisotropy constant was found to be 25.94 × 10⁵ erg/cm³ for sample 'b' at an applied field of 50 Oe, which is much higher than that of bulk ferrites.
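As a rough numerical check of these values (a sketch only; the 60 s measurement time, τ0 ≈ 10⁻¹⁰ s, the 50 Oe blocking temperature of 225 K and a spherical particle of TEM diameter ~11 nm are taken from the text above), the anisotropy constant follows from K = ln(τ/τ0)·kB·TB/V:

```python
import numpy as np

K_B = 1.380649e-16          # Boltzmann constant in erg/K
TAU = 60.0                  # measurement time quoted in the text, s
TAU_0 = 1e-10               # attempt time quoted in the text, s

def anisotropy_constant(T_B, diameter_nm):
    """Magneto-crystalline anisotropy constant (erg/cm^3) of a spherical single-domain particle."""
    E_A = np.log(TAU / TAU_0) * K_B * T_B          # energy barrier E_A = ln(tau/tau_0) k_B T_B
    r_cm = diameter_nm * 1e-7 / 2.0                # radius in cm (1 nm = 1e-7 cm)
    volume = 4.0 / 3.0 * np.pi * r_cm ** 3
    return E_A / volume

# Sample 'a': T_B = 225 K at 50 Oe, TEM diameter ~11 nm
print(f"K ~ {anisotropy_constant(225.0, 11.0):.2e} erg/cm^3")
```

With these inputs the estimate comes out near 1.2 × 10⁶ erg/cm³, i.e. the same order as the 12.63 × 10⁵ erg/cm³ quoted above for sample 'a' at 50 Oe.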
A novel and facile reverse micro-emulsion route has been presented here for the synthesis of Ni–Zn ferrite nano-crystals with a narrow particle size distribution. With this method we have obtained fine, mono-disperse, and nearly spherical nano-particles, in contrast to the irregularly shaped particles obtained by the co-precipitation method. The presence of surfactant molecules in the reverse micelles, which to some extent act as capping agents, may serve to prevent agglomeration of the particles. The saturation magnetization of the Ni–Zn ferrite nano-crystals prepared by the reverse micro-emulsion technique (6.1 emu/g) is lower than that of the particles prepared by co-precipitation (31 emu/g) and that of their bulk counterparts (70 emu/g). This can be explained in terms of the core–shell morphology of the nano-particles, consisting of ferrimagnetically aligned core spins and a spin-canted-like surface layer. In sample 'a', the coercivity has been observed to decrease with temperature; however, the reverse trend was observed in the reverse micro-emulsion synthesized sample. The anisotropy has been observed to increase substantially with decreasing particle size, and to decrease with increasing applied magnetic field.
Abdullah Dar M, Batoo KM, Verma V, Siddiqui WA, Kotnala RK (2010) Synthesis and characterization of nano-sized pure and Al-doped lithium ferrite having high value of dielectric constant. J Alloy Compd 493:553
Bates G (1980) In: Wohlfarth EP (ed) Ferromagnetic materials, vol 2. North-Holland, Amsterdam, p 442
Batlle X, Perez N, Guardia P, Iglesias O, Labarta A, Bartolome F, Garcia LM, Bartolome J, Roca AG, Morales MP, Serna CJ (2011) Recent advances in magnetic nanoparticles with bulk-like properties. J Appl Phys 109:07B524
Caizer C, Stefanescu M (2002) Magnetic characterization of nanocrystalline Ni–Zn ferrite powder prepared by the glyoxylate precursor method. J Phys D 35:3035
Cannas C, Ardu A, Peddis D, Sangregorio C, Piccaluga G, Musinu A (2010) Surfactant-assisted route to fabricate CoFe2O4 individual nanoparticles and spherical assemblies. J Coll Interface Sci 343:415
Chikazumi S (1964) Physics of Magnetism. Wiley, New York, p 498
Goya GF (2004) Magnetic interactions in ball-milled spinel ferrites. J Mater Sci 39:5045
Jiao X, Chen D, Hu Y (2002) Hydrothermal synthesis of nanocrystalline M x Zn1−xFe2O4 (M = Ni, Mn, Co; x = 0.40–0.60) powders. Mater Res Bull 37:1583
Kotnala RK, Abdullah Dar M, Verma V, Singh AP, Siddiqui WA (2010) Minimizing of power loss in Li–Cd ferrite by nickel substitution for power applications. J Magn Magn Mater 322:3714
Lee Y, Lee J, Bae CJ, Park JG, Noh HJ, Park JH, Hyeon T (2005) Large-scale synthesis of uniform and crystalline magnetite nanoparticles using reverse micelles as nanoreactors under reflux conditions. Adv Funct Mater 15:503
Li Y, Park CW (1999) Particle size distribution in the synthesis of nanoparticles using microemulsions. Langmuir 15:952
Nathani H, Misra RDK (2004) Surface effects on the magnetic behavior of nanocrystalline nickel ferrites and nickel ferrite-polymer nanocomposites. Mater Sci Eng B 113:228
Neel L (1953) Thermoremanent magnetization of fine powders. Rev Mod Phys 25:293
Rao BP, Kumar AM, Rao KH, Murthy YLN, Caltun OF, Dumitru I, Spinu L (2006) Synthesis and magnetic studies of Ni–Zn ferrite nanoparticles. J Optoelectr Adv Mater 8:1703
Roca AG, Marco JF, del Puerto Morales M, Serna CJ (2007) Effect of nature and particle size on properties of uniform magnetite and maghemite nanoparticles. J Phys Chem C 111:18577
Sepelak V, Bergmann I, Feldhoff A, Heitjans P, Krumeich F, Menzel D, Litterst FJ, Campbell SJ, Becker KD (2007) Nanocrystalline nickel ferrite, NiFe2O4: mechanosynthesis, nonequilibrium cation distribution, canted spin arrangement, and magnetic behavior. J Phys Chem C 111:5026
Sivakumar P, Ramesh R, Ramanand A, Ponnusamy S, Muthamizhchelvan C (2011) Synthesis and characterization of NiFe2O4 nanosheet via polymer assisted co-precipitation method. Mater Lett 65:483
Thakur S, Katyal SC, Singh M (2007) Structural and magnetic properties of nano nickel–zinc ferrite synthesized by reverse micelle technique. Appl Phys Lett 91:262501
Thakur S, Katyal SC, Singh M (2009) Structural and magnetic properties of nano nickel–zinc ferrite synthesized by reverse micelle technique. J Magn Magn Mater 321:1
Vaidyanathan G, Sendhilnathan S, Arulmurugan R (2007) Structural and magnetic properties of Co1−xZn x Fe2O4 nanoparticles by co-precipitation method. J Magn Magn Mater 313:293
Verdes C, Thompson SM, Chantrell RW, Stancu AL (2002) Computational model of the magnetic and transport properties of interacting fine particles. Phys Rev B 65:174417
Verma V, Abdullah Dar M, Pandey V, Singh A, Annapoorni S, Kotnala RK (2010) Magnetic properties of nano-crystalline Li0.35Cd0.3Fe2.35O4 ferrite prepared by modified citrate precursor method. Mater Chem Phys 122:133
Wang C, Zhang XM, Qian XF, Xie J, Wang WZ, Qian YT (1998) Preparation of nanocrystalline nickel powders through hydrothermal-reduction method. Mater Res Bull 33:1747
Yener DO, Giesche H (2001) Synthesis of pure and manganese-, nickel-, and zinc-doped ferrite particles in water-in-oil microemulsions. J Am Ceram Soc 84:1987
Zhang ZJ, Wang ZL, Chakoumakos BC, Yin JS (1998) Temperature dependence of cation distribution and oxidation state in magnetic Mn–Fe ferrite nanocrystals. J Am Chem Soc 120:1800
One of the authors (M. Abdullah Dar) is highly grateful to CSIR for their financial assistance through grant No. 124069/2K11/1.
Department of Applied Sciences and Humanities, Jamia Millia Islamia University, New Delhi, 110025, India
M. Abdullah Dar & W. A. Siddiqui
National Physical Laboratory, Dr. K. S. Krishnan Road, New Delhi, 110012, India
M. Abdullah Dar, Jyoti Shah & R. K. Kotnala
M. Abdullah Dar
Jyoti Shah
W. A. Siddiqui
R. K. Kotnala
Correspondence to R. K. Kotnala.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Abdullah Dar, M., Shah, J., Siddiqui, W.A. et al. Study of structure and magnetic properties of Ni–Zn ferrite nano-particles synthesized via co-precipitation and reverse micro-emulsion technique. Appl Nanosci 4, 675–682 (2014). https://doi.org/10.1007/s13204-013-0241-x
Issue Date: August 2014
DOI: https://doi.org/10.1007/s13204-013-0241-x
Micro-emulsion
Nano-particles
Super-paramagnetism
Blocking temperature
Coercivity | CommonCrawl |
Particle identification studies with a full-size 4-GEM prototype for the ALICE TPC upgrade (1805.03234)
Z. Ahammed, S. Aiola, J. Alme, T. Alt, W. Amend, A. Andronic, V. Anguelov, H. Appelshäuser, M. Arslandok, R. Averbeck, M. Ball, G.G. Barnaföldi, E. Bartsch, R. Bellwied, G. Bencedi, M. Berger, N. Bialas, P. Bialas, L. Bianchi, S. Biswas, L. Boldizsár, L. Bratrud, P. Braun-Munzinger, M. Bregant, C.L. Britton, E.J. Brucken, H. Caines, A.J. Castro, S. Chattopadhyay, P. Christiansen, L.G. Clonts, T.M. Cormier, S. Das, S. Dash, A. Deisting, S. Dittrich, A.K. Dubey, R. Ehlers, F. Erhardt, N.B. Ezell, L. Fabbietti, U. Frankenfeld, J.J. Gaardhøje, C. Garabatos, P. Gasik, Á. Gera, P. Ghosh, S.K. Ghosh, P. Glässel, O. Grachov, A. Grein, T. Gunji, H. Hamagaki, G. Hamar, J.W. Harris, J. Hehner, E. Hellbär, H. Helstrup, T.E. Hilden, B. Hohlweger, M. Ivanov, M. Jung, D. Just, E. Kangasaho, R. Keidel, B. Ketzer, S.A. Khan, S. Kirsch, T. Klemenz, S. Klewin, A.G. Knospe, M. Kowalski, L. Kumar, R. Lang, R. Langoy, L. Lautner, F. Liebske, J. Lien, C. Lippmann, H.M. Ljunggren, W.J. Llope, S. Mahmood, T. Mahmoud, R. Majka, P. Malzacher, A. Marín, C. Markert, S. Masciocchi, A. Mathis, A. Matyja, M. Meres, D.L. Mihaylov, D. Miskowiec, J. Mitra, T. Mittelstaedt, T. Morhardt, J. Mulligan, R.H. Munzer, K. Münning, M.G. Munhoz, S. Muhuri, H. Murakami, B.K. Nandi, H. Natal da Luz, C. Nattrass, T.K. Nayak, R.A. Negrao De Oliveira, M. Nicassio, B.S. Nielsen, L. Oláh, A. Oskarsson, J. Otwinowski, K. Oyama, G. Paić, R.N. Patra, V. Peskov, M. Pikna, L. Pinsky, M. Planinic, M.G. Poghosyan, N. Poljak, F. Pompei, S.K. Prasad, C.A. Pruneau, J. Putschke, S. Raha, J. Rak, J. Rasson, V. Ratza, K.F. Read, A. Rehman, R. Renfordt, T. Richert, K. Røed, D. Röhrich, T. Rudzki, R. Sahoo, S. Sahoo, P.K. Sahu, J. Saini, B. Schaefer, J. Schambach, S. Scheid, C. Schmidt, H.R. Schmidt, N.V Schmidt, H. Schulte, K. Schweda, I. Selyuzhenkov, N. Sharma, D. Silvermyr, R.N. Singaraju, B. Sitar, N. Smirnov, S.P. Sorensen, F. Sozzi, J. Stachel, E. Stenlund, P. Strmen, I. Szarka, G. Tambave, K. Terasaki, A. Timmins, K. Ullaland, A. Utrobicic, D. Varga, R. Varma, A. Velure, V. Vislavicius, S. Voloshin, B. Voss, D. Vranic, J. Wiechula, S. Winkler, J. Wikne, B. Windelband, C. Zhao
May 8, 2018 physics.ins-det
A large Time Projection Chamber is the main device for tracking and charged-particle identification in the ALICE experiment at the CERN LHC. After the second long shutdown in 2019/20, the LHC will deliver Pb beams colliding at an interaction rate of about 50 kHz, which is about a factor of 50 above the present readout rate of the TPC. This will result in a significant improvement on the sensitivity to rare probes that are considered key observables to characterize the QCD matter created in such collisions. In order to make full use of this luminosity, the currently used gated Multi-Wire Proportional Chambers will be replaced by continuously operated readout detectors employing Gas Electron Multiplier technology, while retaining the performance in terms of particle identification via the measurement of the specific energy loss by ionization d$E$/d$x$. A full-size readout chamber prototype was assembled in 2014 featuring a stack of four GEM foils as an amplification stage. The d$E$/d$x$ resolution of the prototype, evaluated in a test beam campaign at the CERN PS, complies with both the performance of the currently operated MWPC-based readout chambers and the challenging requirements of the ALICE TPC upgrade program. Detailed simulations of the readout system are able to reproduce the data.
Inclusive J/psi production in pp collisions at sqrt(s) = 2.76 TeV (1203.3641)
ALICE Collaboration: B. Abelev, J. Adam, D. Adamova, A.M. Adare, M.M. Aggarwal, G. Aglieri Rinella, A.G. Agocs, A. Agostinelli, S. Aguilar Salazar, Z. Ahammed, A. Ahmad Masoodi, N. Ahmad, S.U. Ahn, A. Akindinov, D. Aleksandrov, B. Alessandro, R. Alfaro Molina, A. Alici, A. Alkin, E. Almaraz Avina, J. Alme, T. Alt, V. Altini, S. Altinpinar, I. Altsybeev, C. Andrei, A. Andronic, V. Anguelov, J. Anielski, C. Anson, T. Anticic, F. Antinori, P. Antonioli, L. Aphecetche, H. Appelshauser, N. Arbor, S. Arcelli, A. Arend, N. Armesto, R. Arnaldi, T. Aronsson, I.C. Arsene, M. Arslandok, A. Asryan, A. Augustinus, R. Averbeck, T.C. Awes, J. Aysto, M.D. Azmi, M. Bach, A. Badala, Y.W. Baek, R. Bailhache, R. Bala, R. Baldini Ferroli, A. Baldisseri, A. Baldit, F. Baltasar Dos Santos Pedrosa, J. Ban, R.C. Baral, R. Barbera, F. Barile, G.G. Barnafoldi, L.S. Barnby, V. Barret, J. Bartke, M. Basile, N. Bastid, B. Bathen, G. Batigne, B. Batyunya, C. Baumann, I.G. Bearden, H. Beck, I. Belikov, F. Bellini, R. Bellwied, E. Belmont-Moreno, G. Bencedi, S. Beole, I. Berceanu, A. Bercuci, Y. Berdnikov, D. Berenyi, C. Bergmann, D. Berzano, L. Betev, A. Bhasin, A.K. Bhati, L. Bianchi, N. Bianchi, C. Bianchin, J. Bielcik, J. Bielcikova, S. Bjelogrlic, F. Blanco, F. Blanco, D. Blau, C. Blume, M. Boccioli, N. Bock, A. Bogdanov, H. Boggild, M. Bogolyubsky, L. Boldizsar, M. Bombara, J. Book, H. Borel, A. Borissov, S. Bose, F. Bossu, M. Botje, S. Bottger, B. Boyer, E. Braidot, P. Braun-Munzinger, M. Bregant, T. Breitner, T.A. Browning, M. Broz, R. Brun, E. Bruna, G.E. Bruno, D. Budnikov, H. Buesching, S. Bufalino, K. Bugaiev, O. Busch, Z. Buthelezi, D. Caballero Orduna, D. Caffarri, X. Cai, H. Caines, E. Calvo Villar, P. Camerini, V. Canoa Roman, G. Cara Romeo, W. Carena, F. Carena, N. Carlin Filho, F. Carminati, C.A. Carrillo Montoya, A. Casanova Diaz, J. Castillo Castellanos, J.F. Castillo Hernandez, E.A.R. Casula, V. Catanescu, C. Cavicchioli, J. Cepila, P. Cerello, B. Chang, S. Chapeland, J.L. Charvet, S. Chattopadhyay, S. Chattopadhyay, I. Chawla, M. Cherney, C. Cheshkov, B. Cheynis, E. Chiavassa, V. Chibante Barroso, D.D. Chinellato, P. Chochula, M. Chojnacki, P. Christakoglou, C.H. Christensen, P. Christiansen, T. Chujo, S.U. Chung, C. Cicalo, L. Cifarelli, F. Cindolo, J. Cleymans, F. Coccetti, F. Colamaria, D. Colella, G. Conesa Balbastre, Z. Conesa del Valle, P. Constantin, G. Contin, J.G. Contreras, T.M. Cormier, Y. Corrales Morales, P. Cortese, I. Cortes Maldonado, M.R. Cosentino, F. Costa, M.E. Cotallo, E. Crescio, P. Crochet, E. Cruz Alaniz, E. Cuautle, L. Cunqueiro, A. Dainese, H.H. Dalsgaard, A. Danu, K. Das, I. Das, D. Das, A. Dash, S. Dash, S. De, G.O.V. de Barros, A. De Caro, G. de Cataldo, J. de Cuveland, A. De Falco, D. De Gruttola, H. Delagrange, E. Del Castillo Sanchez, A. Deloff, V. Demanov, N. De Marco, E. Denes, S. De Pasquale, A. Deppman, G. D Erasmo, R. de Rooij, M.A. Diaz Corchero, D. Di Bari, T. Dietel, C. Di Giglio, S. Di Liberto, A. Di Mauro, P. Di Nezza, R. Divia, O. Djuvsland, A. Dobrin, T. Dobrowolski, I. Dominguez, B. Donigus, O. Dordic, O. Driga, A.K. Dubey, L. Ducroux, P. Dupieux, A.K. Dutta Majumdar, M.R. Dutta Majumdar, D. Elia, D. Emschermann, H. Engel, H.A. Erdal, B. Espagnon, M. Estienne, S. Esumi, D. Evans, G. Eyyubova, D. Fabris, J. Faivre, D. Falchieri, A. Fantoni, M. Fasel, R. Fearick, A. Fedunov, D. Fehlker, L. Feldkamp, D. Felea, G. Feofilov, A. Fernandez Tellez, E.G. Ferreiro, A. Ferretti, R. Ferretti, J. Figiel, M.A.S. Figueredo, S. Filchagin, D. Finogeev, F.M. Fionda, E.M. 
Fiore, M. Floris, S. Foertsch, P. Foka, S. Fokin, E. Fragiacomo, M. Fragkiadakis, U. Frankenfeld, U. Fuchs, C. Furget, M. Fusco Girard, J.J. Gaardhoje, M. Gagliardi, A. Gago, M. Gallio, D.R. Gangadharan, P. Ganoti, C. Garabatos, E. Garcia-Solis, I. Garishvili, J. Gerhard, M. Germain, C. Geuna, A. Gheata, M. Gheata, B. Ghidini, P. Ghosh, P. Gianotti, M.R. Girard, P. Giubellino, E. Gladysz-Dziadus, P. Glassel, R. Gomez, L.H. Gonzalez-Trueba, P. Gonzalez-Zamora, S. Gorbunov, A. Goswami, S. Gotovac, V. Grabski, L.K. Graczykowski, R. Grajcarek, A. Grelli, A. Grigoras, C. Grigoras, V. Grigoriev, A. Grigoryan, S. Grigoryan, B. Grinyov, N. Grion, P. Gros, J.F. Grosse-Oetringhaus, J.-Y. Grossiord, R. Grosso, F. Guber, R. Guernane, C. Guerra Gutierrez, B. Guerzoni, M.Guilbaud, K. Gulbrandsen, T. Gunji, A. Gupta, R. Gupta, H. Gutbrod, O. Haaland, C. Hadjidakis, M. Haiduc, H. Hamagaki, G. Hamar, B.H. Han, L.D. Hanratty, A. Hansen, Z. Harmanova, J.W. Harris, M. Hartig, D. Hasegan, D. Hatzifotiadou, A. Hayrapetyan, S.T. Heckel, M. Heide, H. Helstrup, A. Herghelegiu, G. Herrera Corral, N. Herrmann, K.F. Hetland, B. Hicks, P.T. Hille, B. Hippolyte, T. Horaguchi, Y. Hori, P. Hristov, I. Hrivnacova, M. Huang, S. Huber, T.J. Humanic, D.S. Hwang, R. Ichou, R. Ilkaev, I. Ilkiv, M. Inaba, E. Incani, G.M. Innocenti, P.G. Innocenti, M. Ippolitov, M. Irfan, C. Ivan, V. Ivanov, A. Ivanov, M. Ivanov, O. Ivanytskyi, A. Jacholkowski, P. M. Jacobs, L. Jancurova, H.J. Jang, S. Jangal, M.A. Janik, R. Janik, P.H.S.Y. Jayarathna, S. Jena, R.T. Jimenez Bustamante, L. Jirden, P.G. Jones, H. Jung, A. Jusko, A.B. Kaidalov, V. Kakoyan, S. Kalcher, P. Kalinak, M. Kalisky, T. Kalliokoski, A. Kalweit, K. Kanaki, J.H. Kang, V. Kaplin, A. Karasu Uysal, O. Karavichev, T. Karavicheva, E. Karpechev, A. Kazantsev, U. Kebschull, R. Keidel, M.M. Khan, S.A. Khan, P. Khan, A. Khanzadeev, Y. Kharlov, B. Kileng, M. Kim, J.S. Kim, D.J. Kim, T. Kim, B. Kim, S. Kim, S.H. Kim, D.W. Kim, J.H. Kim, S. Kirsch, I. Kisel, S. Kiselev, A. Kisiel, J.L. Klay, J. Klein, C. Klein-Bosing, M. Kliemant, A. Kluge, M.L. Knichel, A.G. Knospe, K. Koch, M.K. Kohler, A. Kolojvari, V. Kondratiev, N. Kondratyeva, A. Konevskikh, A. Korneev, C. Kottachchi Kankanamge Don, R. Kour, M. Kowalski, S. Kox, G. Koyithatta Meethaleveedu, J. Kral, I. Kralik, F. Kramer, I. Kraus, T. Krawutschke, M. Krelina, M. Kretz, M. Krivda, F. Krizek, M. Krus, E. Kryshen, M. Krzewicki, Y. Kucheriaev, C. Kuhn, P.G. Kuijer, P. Kurashvili, A. Kurepin, A.B. Kurepin, A. Kuryakin, S. Kushpil, V. Kushpil, H. Kvaerno, M.J. Kweon, Y. Kwon, P. Ladron de Guevara, I. Lakomov, R. Langoy, C. Lara, A. Lardeux, P. La Rocca, C. Lazzeroni, R. Lea, Y. Le Bornec, S.C. Lee, K.S. Lee, F. Lefevre, J. Lehnert, L. Leistam, M. Lenhardt, V. Lenti, H. Leon, I. Leon Monzon, H. Leon Vargas, P. Levai, J. Lien, R. Lietava, S. Lindal, V. Lindenstruth, C. Lippmann, M.A. Lisa, L. Liu, P.I. Loenne, V.R. Loggins, V. Loginov, S. Lohn, D. Lohner, C. Loizides, K.K. Loo, X. Lopez, E. Lopez Torres, G. Lovhoiden, X.-G. Lu, P. Luettig, M. Lunardon, J. Luo, G. Luparello, L. Luquin, C. Luzzi, K. Ma, R. Ma, D.M. Madagodahettige-Don, A. Maevskaya, M. Mager, D.P. Mahapatra, A. Maire, M. Malaev, I. Maldonado Cervantes, L. Malinina, D. Mal'Kevich, P. Malzacher, A. Mamonov, L. Manceau, L. Mangotra, V. Manko, F. Manso, V. Manzari, Y. Mao, M. Marchisone, J. Mares, G.V. Margagliotti, A. Margotti, A. Marin, C.A. Marin Tobon, C. Markert, I. Martashvili, P. Martinengo, M.I. Martinez, A. Martinez Davalos, G. Martinez Garcia, Y. Martynov, A. Mas, S. 
Masciocchi, M. Masera, A. Masoni, L. Massacrier, M. Mastromarco, A. Mastroserio, Z.L. Matthews, A. Matyja, D. Mayani, C. Mayer, J. Mazer, M.A. Mazzoni, F. Meddi, A. Menchaca-Rocha, J. Mercado Perez, M. Meres, Y. Miake, L. Milano, J. Milosevic, A. Mischke, A.N. Mishra, D. Miskowiec, C. Mitu, J. Mlynarz, A.K. Mohanty, B. Mohanty, L. Molnar, L. Montano Zetina, M. Monteno, E. Montes, T. Moon, M. Morando, D.A. Moreira De Godoy, S. Moretto, A. Morsch, V. Muccifora, E. Mudnic, S. Muhuri, H. Muller, M.G. Munhoz, L. Musa, A. Musso, B.K. Nandi, R. Nania, E. Nappi, C. Nattrass, N.P.Naumov, S. Navin, T.K. Nayak, S. Nazarenko, G. Nazarov, A. Nedosekin, M. Nicassio, B.S. Nielsen, T. Niida, S. Nikolaev, V. Nikolic, V. Nikulin, S. Nikulin, B.S. Nilsen, M.S. Nilsson, F. Noferini, P. Nomokonov, G. Nooren, N. Novitzky, A. Nyanin, A. Nyatha, C. Nygaard, J. Nystrand, A. Ochirov, H. Oeschler, S.K. Oh, S. Oh, J. Oleniacz, C. Oppedisano, A. Ortiz Velasquez, G. Ortona, A. Oskarsson, P. Ostrowski, J. Otwinowski, G. Ovrebekk, K. Oyama, K. Ozawa, Y. Pachmayer, M. Pachr, F. Padilla, P. Pagano, G. Paic, F. Painke, C. Pajares, S.K. Pal, S. Pal, A. Palaha, A. Palmeri, V. Papikyan, G.S. Pappalardo, W.J. Park, A. Passfeld, B. Pastircak, D.I. Patalakha, V. Paticchio, A. Pavlinov, T. Pawlak, T. Peitzmann, E. Pereira De Oliveira Filho, D. Peresunko, C.E. Perez Lara, E. Perez Lezama, D. Perini, D. Perrino, W. Peryt, A. Pesci, V. Peskov, Y. Pestov, V. Petracek, M. Petran, M. Petris, P. Petrov, M. Petrovici, C. Petta, S. Piano, A. Piccotti, M. Pikna, P. Pillot, O. Pinazza, L. Pinsky, N. Pitz, F. Piuz, D.B. Piyarathna, M. Ploskon, J. Pluta, T. Pocheptsov, S. Pochybova, P.L.M. Podesta-Lerma, M.G. Poghosyan, K. Polak, B. Polichtchouk, A. Pop, S. Porteboeuf-Houssais, V. Pospisil, B. Potukuchi, S.K. Prasad, R. Preghenella, F. Prino, C.A. Pruneau, I. Pshenichnov, S. Puchagin, G. Puddu, J. Pujol Teixido, A. Pulvirenti, V. Punin, M. Putis, J. Putschke, E. Quercigh, H. Qvigstad, A. Rachevski, A. Rademakers, S. Radomski, T.S. Raiha, J. Rak, A. Rakotozafindrabe, L. Ramello, A. Ramirez Reyes, S. Raniwala, R. Raniwala, S.S. Rasanen, B.T. Rascanu, D. Rathee, K.F. Read, J.S. Real, K. Redlich, P. Reichelt, M. Reicher, R. Renfordt, A.R. Reolon, A. Reshetin, F. Rettig, J.-P. Revol, K. Reygers, L. Riccati, R.A. Ricci, T. Richert, M. Richter, P. Riedler, W. Riegler, F. Riggi, M. Rodriguez Cahuantzi, K. Roed, D. Rohr, D. Rohrich, R. Romita, F. Ronchetti, P. Rosnet, S. Rossegger, A. Rossi, F. Roukoutakis, C. Roy, P. Roy, A.J. Rubio Montero, R. Rui, E. Ryabinkin, A. Rybicki, S. Sadovsky, K. Safarik, R. Sahoo, P.K. Sahu, J. Saini, H. Sakaguchi, S. Sakai, D. Sakata, C.A. Salgado, J. Salzwedel, S. Sambyal, V. Samsonov, X. Sanchez Castro, L. Sandor, A. Sandoval, S. Sano, M. Sano, R. Santo, R. Santoro, J. Sarkamo, E. Scapparone, F. Scarlassara, R.P. Scharenberg, C. Schiaua, R. Schicker, H.R. Schmidt, C. Schmidt, S. Schreiner, S. Schuchmann, J. Schukraft, Y. Schutz, K. Schwarz, K. Schweda, G. Scioli, E. Scomparin, P.A. Scott, R. Scott, G. Segato, I. Selyuzhenkov, S. Senyukov, J. Seo, S. Serci, E. Serradilla, A. Sevcenco, I. Sgura, A. Shabetai, G. Shabratova, R. Shahoyan, N. Sharma, S. Sharma, K. Shigaki, M. Shimomura, K. Shtejer, Y. Sibiriak, M. Siciliano, E. Sicking, S. Siddhanta, T. Siemiarczuk, D. Silvermyr, C. Silvestre, G. Simonetti, R. Singaraju, R. Singh, S. Singha, T. Sinha, B.C. Sinha, B. Sitar, M. Sitta, T.B. Skaali, K. Skjerdal, R. Smakal, N. Smirnov, R.J.M. Snellings, C. Sogaard, R. Soltz, H. Son, M. Song, J. Song, C. Soos, F. Soramel, I. 
Sputowska, M. Spyropoulou-Stassinaki, B.K. Srivastava, J. Stachel, I. Stan, I. Stan, G. Stefanek, G. Stefanini, T. Steinbeck, M. Steinpreis, E. Stenlund, G. Steyn, D. Stocco, M. Stolpovskiy, K. Strabykin, P. Strmen, A.A.P. Suaide, M.A. Subieta Vasquez, T. Sugitate, C. Suire, M. Sukhorukov, R. Sultanov, M. Sumbera, T. Susa, A. Szanto de Toledo, I. Szarka, A. Szostak, C. Tagridis, J. Takahashi, J.D. Tapia Takaki, A. Tauro, G. Tejeda Munoz, A. Telesca, C. Terrevoli, J. Thader, D. Thomas, R. Tieulent, A.R. Timmins, D. Tlusty, A. Toia, H. Torii, L. Toscano, F. Tosello, D. Truesdale, W.H. Trzaska, T. Tsuji, A. Tumkin, R. Turrisi, T.S. Tveter, J. Ulery, K. Ullaland, J. Ulrich, A. Uras, J. Urban, G.M. Urciuoli, G.L. Usai, M. Vajzer, M. Vala, L. Valencia Palomo, S. Vallero, N. van der Kolk, P. Vande Vyvre, M. van Leeuwen, L. Vannucci, A. Vargas, R. Varma, M. Vasileiou, A. Vasiliev, V. Vechernin, M. Veldhoen, M. Venaruzzo, E. Vercellin, S. Vergara, D.C. Vernekohl, R. Vernet, M. Verweij, L. Vickovic, G. Viesti, O. Vikhlyantsev, Z. Vilakazi, O. Villalobos Baillie, A. Vinogradov, Y. Vinogradov, L. Vinogradov, T. Virgili, Y.P. Viyogi, A. Vodopyanov, S. Voloshin, K. Voloshin, G. Volpe, B. von Haller, D. Vranic, J. Vrlakova, B. Vulpescu, A. Vyushin, B. Wagner, V. Wagner, R. Wan, Y. Wang, D. Wang, Y. Wang, M. Wang, K. Watanabe, J.P. Wessels, U. Westerhoff, J. Wiechula, J. Wikne, M. Wilde, G. Wilk, A. Wilk, M.C.S. Williams, B. Windelband, L. Xaplanteris Karampatsos, H. Yang, S. Yang, S. Yasnopolskiy, J. Yi, Z. Yin, H. Yokoyama, I.-K. Yoo, J. Yoon, W. Yu, X. Yuan, I. Yushmanov, C. Zach, C. Zampolli, S. Zaporozhets, A. Zarochentsev, P. Zavada, N. Zaviyalov, H. Zbroszczyk, P. Zelnicek, I.S. Zgura, M. Zhalov, X. Zhang, D. Zhou, Y. Zhou, F. Zhou, X. Zhu, A. Zichichi, A. Zimmermann, G. Zinovjev, Y. Zoccarato, M. Zynovyev
Nov. 6, 2012 hep-ex
The ALICE Collaboration has measured inclusive J/psi production in pp collisions at a center of mass energy sqrt(s)=2.76 TeV at the LHC. The results presented in this Letter refer to the rapidity ranges |y|<0.9 and 2.5<y<4 and have been obtained by measuring the electron and muon pair decay channels, respectively. The integrated luminosities for the two channels are L^e_int=1.1 nb^-1 and L^mu_int=19.9 nb^-1, and the corresponding signal statistics are N_J/psi^e+e-=59 +/- 14 and N_J/psi^mu+mu-=1364 +/- 53. We present dsigma_J/psi/dy for the two rapidity regions under study and, for the forward-y range, d^2sigma_J/psi/dydp_t in the transverse momentum domain 0<p_t<8 GeV/c. The results are compared with previously published results at sqrt(s)=7 TeV and with theoretical calculations.
Anomalous centrality evolution of two-particle angular correlations from Au-Au collisions at $\sqrt{s_{\rm NN}}$ = 62 and 200 GeV (1109.4380)
STAR Collaboration: G. Agakishiev, M.M. Aggarwal, Z. Ahammed, A.V. Alakhverdyants, I. Alekseev, J. Alford, B.D. Anderson, C.D. Anson, D. Arkhipkin, G.S. Averichev, J. Balewski, D.R. Beavis, R. Bellwied, M.J. Betancourt, R.R. Betts, A. Bhasin, A.K. Bhati, H. Bichsel, J. Bielcik, J. Bielcikova, L.C. Bland, I.G. Bordyuzhin, W. Borowski, J. Bouchet, E. Braidot, A.V. Brandin, S.G. Brovko, E. Bruna, S. Bueltmann, I. Bunzarov, T.P. Burton, X.Z. Cai, H. Caines, M. Calderon, D. Cebra, R. Cendejas, M.C. Cervantes, P. Chaloupka, S. Chattopadhyay, H.F. Chen, J.H. Chen, J.Y. Chen, L. Chen, J. Cheng, M. Cherney, A. Chikanian, W. Christie, P. Chung, M.J.M. Codrington, R. Corliss, J.G. Cramer, H.J. Crawford, X. Cui, A. Davila Leyva, L.C. De Silva, R.R. Debbe, T.G. Dedovich, J. Deng, A.A. Derevschikov, R. Derradi de Souza, L. Didenko, P. Djawotho, X. Dong, J.L. Drachenberg, J.E. Draper, C.M. Du, J.C. Dunlop, L.G. Efimov, M. Elnimr, J. Engelage, G. Eppley, M. Estienne, L. Eun, O. Evdokimov, R. Fatemi, J. Fedorisin, R.G. Fersch, P. Filip, E. Finch, V. Fine, Y. Fisyak, C.A. Gagliardi, D.R. Gangadharan, F. Geurts, P. Ghosh, Y.N. Gorbunov, A. Gordon, O.G. Grebenyuk, D. Grosnick, A. Gupta, S. Gupta, W. Guryn, B. Haag, O. Hajkova, A. Hamed, L-X. Han, J.W. Harris, J.P. Hays-Wehle, S. Heppelmann, A. Hirsch, G.W. Hoffmann, D.J. Hofman, B. Huang, H.Z. Huang, T.J. Humanic, L. Huo, G. Igo, W.W. Jacobs, C. Jena, J. Joseph, E.G. Judd, S. Kabana, K. Kang, J. Kapitan, K. Kauder, H.W. Ke, D. Keane, A. Kechechyan, D. Kettler, D.P. Kikola, J. Kiryluk, A. Kisiel, V. Kizka, S.R. Klein, D.D. Koetke, T. Kollegger, J. Konzer, I. Koralt, L. Koroleva, W. Korsch, L. Kotchenda, P. Kravtsov, K. Krueger, L. Kumar, M.A.C. Lamont, J.M. Landgraf, S. LaPointe, J. Lauret, A. Lebedev, R. Lednicky, J.H. Lee, W. Leight, M.J. LeVine, C. Li, L. Li, W. Li, X. Li, X. Li, Y. Li, Z.M. Li, L.M. Lima, M.A. Lisa, F. Liu, T. Ljubicic, W.J. Llope, R.S. Longacre, Y. Lu, E.V. Lukashov, X. Luo, G.L. Ma, Y.G. Ma, D.P. Mahapatra, R. Majka, O.I. Mall, R. Manweiler, S. Margetis, C. Markert, H. Masui, H.S. Matis, D. McDonald, T.S. McShane, A. Meschanin, R. Milner, N.G. Minaev, S. Mioduszewski, M.K. Mitrovski, Y. Mohammed, B. Mohanty, M.M. Mondal, B. Morozov, D.A. Morozov, M.G. Munhoz, M.K. Mustafa, M. Naglis, B.K. Nandi, Md. Nasim, T.K. Nayak, L.V. Nogach, S.B. Nurushev, G. Odyniec, A. Ogawa, K. Oh, A. Ohlson, V. Okorokov, E.W. Oldag, R.A.N. Oliveira, D. Olson, M. Pachr, B.S. Page, S.K. Pal, Y. Pandit, Y. Panebratsev, T. Pawlak, H. Pei, T. Peitzmann, C. Perkins, W. Peryt, P. Pile, M. Planinic, J. Pluta, D. Plyku, N. Poljak, J. Porter, C.B. Powell, D. Prindle, C. Pruneau, N.K. Pruthi, P.R. Pujahari, J. Putschke, H. Qiu, R. Raniwala, S. Raniwala, R.L. Ray, R. Redwine, R. Reed, H.G. Ritter, J.B. Roberts, O.V. Rogachevskiy, J.L. Romero, L. Ruan, J. Rusnak, N.R. Sahoo, I. Sakrejda, S. Salur, J. Sandweiss, E. Sangaline, A. Sarkar, J. Schambach, R.P. Scharenberg, J. Schaub, A.M. Schmah, N. Schmitz, T.R. Schuster, J. Seele, J. Seger, I. Selyuzhenkov, P. Seyboth, N. Shah, E. Shahaliev, M. Shao, M. Sharma, S.S. Shi, Q.Y. Shou, E.P. Sichtermann, F. Simon, R.N. Singaraju, M.J. Skoby, N. Smirnov, D. Solanki, P. Sorensen, U.G. de Souza, H.M. Spinka, B. Srivastava, T.D.S. Stanislaus, S.G. Steadman, J.R. Stevens, R. Stock, M. Strikhanov, B. Stringfellow, A.A.P. Suaide, M.C. Suarez, M. Sumbera, X.M. Sun, Y. Sun, Z. Sun, B. Surrow, D.N. Svirida, T.J.M. Symons, A. Szanto de Toledo, J. Takahashi, A.H. Tang, Z. Tang, L.H. Tarini, T. Tarnowsky, D. Thein, J.H. Thomas, J. Tian, A.R. 
Timmins, D. Tlusty, M. Tokarev, T.A. Trainor, S. Trentalange, R.E. Tribble, P. Tribedy, B.A. Trzeciak, O.D. Tsai, T. Ullrich, D.G. Underwood, G. Van Buren, G. van Nieuwenhuizen, J.A. Vanfossen, Jr., R. Varma, G.M.S. Vasconcelos, A.N. Vasiliev, F. Videbaek, Y.P. Viyogi, S. Vokal, M. Wada, M. Walker, F. Wang, G. Wang, H. Wang, J.S. Wang, Q. Wang, X.L. Wang, Y. Wang, G. Webb, J.C. Webb, G.D. Westfall, C. Whitten Jr., H. Wieman, S.W. Wissink, R. Witt, W. Witzke, Y.F. Wu, Z. Xiao, W. Xie, H. Xu, N. Xu, Q.H. Xu, W. Xu, Y. Xu, Z. Xu, L. Xue, Y. Yang, Y. Yang, P. Yepes, K. Yip, I-K. Yoo, M. Zawisza, H. Zbroszczyk, W. Zhan, J.B. Zhang, S. Zhang, W.M. Zhang, X.P. Zhang, Y. Zhang, Z.P. Zhang, F. Zhao, J. Zhao, C. Zhong, X. Zhu, Y.H. Zhu, Y. Zoulkarneeva
June 13, 2012 nucl-ex
We present two-dimensional (2D) two-particle angular correlations on relative pseudorapidity $\eta$ and azimuth $\phi$ for charged particles from Au-Au collisions at $\sqrt{s_{\rm NN}} = 62$ and 200 GeV with transverse momentum $p_t \geq 0.15$ GeV/$c$, $|\eta| \leq 1$ and $2\pi$ azimuth. Observed correlations include a same-side (relative azimuth $< \pi/2$) 2D peak, a closely-related away-side azimuth dipole, and an azimuth quadrupole conventionally associated with elliptic flow. The same-side 2D peak and away-side dipole are explained by semihard parton scattering and fragmentation (minijets) in proton-proton and peripheral nucleus-nucleus collisions. Those structures follow N-N binary-collision scaling in Au-Au collisions until mid-centrality where a transition to a qualitatively different centrality trend occurs within a small centrality interval. Above the transition point the number of same-side and away-side correlated pairs increases rapidly relative to binary-collision scaling, the $\eta$ width of the same-side 2D peak also increases rapidly ($\eta$ elongation) and the $\phi$ width actually decreases significantly. Those centrality trends are more remarkable when contrasted with expectations of jet quenching in a dense medium. Observed centrality trends are compared to HIJING predictions and to the expected trends for semihard parton scattering and fragmentation in a thermalized opaque medium. We are unable to reconcile a semihard parton scattering and fragmentation origin for the observed correlation structure and centrality trends with heavy ion collision scenarios which invoke rapid parton thermalization. On the other hand, if the collision system is effectively opaque to few-GeV partons the observations reported here would be inconsistent with a minijet picture.
Energy and system-size dependence of two- and four-particle $v_2$ measurements in heavy-ion collisions at RHIC and their implications on flow fluctuations and nonflow (1111.5637)
The STAR Collaboration: G. Agakishiev, M.M. Aggarwal, Z. Ahammed, A.V. Alakhverdyants, I. Alekseev, J. Alford, B.D. Anderson, C.D. Anson, D. Arkhipkin, G.S. Averichev, J. Balewski, Z. Barnovska, D.R. Beavis, R. Bellwied, M.J. Betancourt, R.R. Betts, A. Bhasin, A.K. Bhati, H. Bichsel, J. Bielcik, J. Bielcikova, L.C. Bland, I.G. Bordyuzhin, W. Borowski, J. Bouchet, A.V. Brandin, S.G. Brovko, E. Bruna, S. Bueltmann, I. Bunzarov, T.P. Burton, X.Z. Cai, H. Caines, M. Calderon, D. Cebra, R. Cendejas, M.C. Cervantes, P. Chaloupka, S. Chattopadhyay, H.F. Chen, J.H. Chen, J.Y. Chen, L. Chen, J. Cheng, M. Cherney, A. Chikanian, W. Christie, P. Chung, M.J.M. Codrington, R. Corliss, J.G. Cramer, H.J. Crawford, X. Cui, A. Davila Leyva, L.C. De Silva, R.R. Debbe, T.G. Dedovich, J. Deng, R. Derradi de Souza, S. Dhamija, L. Didenko, F. Ding, P. Djawotho, X. Dong, J.L. Drachenberg, J.E. Draper, C.M. Du, L.E. Dunkelberger, J.C. Dunlop, L.G. Efimov, M. Elnimr, J. Engelage, G. Eppley, L. Eun, O. Evdokimov, R. Fatemi, J. Fedorisin, R.G. Fersch, P. Filip, E. Finch, Y. Fisyak, C.A. Gagliardi, D.R. Gangadharan, F. Geurts, P. Ghosh, S. Gliske, Y.N. Gorbunov, O.G. Grebenyuk, D. Grosnick, S. Gupta, W. Guryn, B. Haag, O. Hajkova, A. Hamed, L-X. Han, J.W. Harris, J.P. Hays-Wehle, S. Heppelmann, A. Hirsch, G.W. Hoffmann, D.J. Hofman, S. Horvat, B. Huang, H.Z. Huang, P. Huck, T.J. Humanic, L. Huo, G. Igo, W.W. Jacobs, C. Jena, J. Joseph, E.G. Judd, S. Kabana, K. Kang, J. Kapitan, K. Kauder, H.W. Ke, D. Keane, A. Kechechyan, A. Kesich, D. Kettler, D.P. Kikola, J. Kiryluk, A. Kisiel, V. Kizka, S.R. Klein, D.D. Koetke, T. Kollegger, J. Konzer, I. Koralt, L. Koroleva, W. Korsch, L. Kotchenda, P. Kravtsov, K. Krueger, L. Kumar, M.A.C. Lamont, J.M. Landgraf, S. LaPointe, J. Lauret, A. Lebedev, R. Lednicky, J.H. Lee, W. Leight, M.J. LeVine, C. Li, L. Li, W. Li, X. Li, X. Li, Y. Li, Z.M. Li, L.M. Lima, M.A. Lisa, F. Liu, T. Ljubicic, W.J. Llope, R.S. Longacre, Y. Lu, X. Luo, G.L. Ma, Y.G. Ma, D.P. Mahapatra, R. Majka, O.I. Mall, S. Margetis, C. Markert, H. Masui, H.S. Matis, D. McDonald, T.S. McShane, S. Mioduszewski, M.K. Mitrovski, Y. Mohammed, B. Mohanty, M.M. Mondal, B. Morozov, M.G. Munhoz, M.K. Mustafa, M. Naglis, B.K. Nandi, Md. Nasim, T.K. Nayak, L.V. Nogach, G. Odyniec, A. Ogawa, K. Oh, A. Ohlson, V. Okorokov, E.W. Oldag, R.A.N. Oliveira, D. Olson, M. Pachr, B.S. Page, S.K. Pal, Pan, Y. Pandit, Y. Panebratsev, T. Pawlak, H. Pei, C. Perkins, W. Peryt, P. Pile, M. Planinic, J. Pluta, D. Plyku, N. Poljak, J. Porter, A.M. Poskanzer, C.B. Powell, D. Prindle, C. Pruneau, N.K. Pruthi, P.R. Pujahari, J. Putschke, H. Qiu, R. Raniwala, S. Raniwala, R. Redwine, R. Reed, C.K. Riley, H.G. Ritter, J.B. Roberts, O.V. Rogachevskiy, J.L. Romero, L. Ruan, J. Rusnak, N.R. Sahoo, I. Sakrejda, S. Salur, J. Sandweiss, E. Sangaline, A. Sarkar, J. Schambach, R.P. Scharenberg, A.M. Schmah, N. Schmitz, T.R. Schuster, J. Seele, J. Seger, P. Seyboth, N. Shah, E. Shahaliev, M. Shao, B. Sharma, M. Sharma, S.S. Shi, Q.Y. Shou, E.P. Sichtermann, R.N. Singaraju, M.J. Skoby, N. Smirnov, D. Solanki, P. Sorensen, U.G. de Souza, H.M. Spinka, B. Srivastava, T.D.S. Stanislaus, S.G. Steadman, J.R. Stevens, R. Stock, M. Strikhanov, B. Stringfellow, A.A.P. Suaide, M.C. Suarez, M. Sumbera, X.M. Sun, Y. Sun, Z. Sun, B. Surrow, D.N. Svirida, T.J.M. Symons, A. Szanto de Toledo, J. Takahashi, A.H. Tang, Z. Tang, L.H. Tarini, T. Tarnowsky, D. Thein, J.H. Thomas, J. Tian, A.R. Timmins, D. Tlusty, M. Tokarev, S. Trentalange, R.E. Tribble, P. Tribedy, B.A. 
Trzeciak, O.D. Tsai, T. Ullrich, D.G. Underwood, G. Van Buren, G. van Nieuwenhuizen, J.A. Vanfossen, Jr., R. Varma, G.M.S. Vasconcelos, F. Videbaek, Y.P. Viyogi, S. Vokal, S.A. Voloshin, A. Vossen, M. Wada, F. Wang, G. Wang, H. Wang, J.S. Wang, Q. Wang, X.L. Wang, Y. Wang, G. Webb, J.C. Webb, G.D. Westfall, C. Whitten Jr., H. Wieman, S.W. Wissink, R. Witt, W. Witzke, Y.F. Wu, Z. Xiao, W. Xie, H. Xu, N. Xu, Q.H. Xu, W. Xu, Y. Xu, Z. Xu, L. Xue, Y. Yang, Y. Yang, P. Yepes, Y. Yi, K. Yip, I-K. Yoo, M. Zawisza, H. Zbroszczyk, W. Zhan, J.B. Zhang, S. Zhang, W.M. Zhang, X.P. Zhang, Y. Zhang, Z.P. Zhang, F. Zhao, J. Zhao, C. Zhong, X. Zhu, Y.H. Zhu, Y. Zoulkarneeva
Dec. 4, 2011 hep-ex, nucl-ex
We present STAR measurements of azimuthal anisotropy by means of the two- and four-particle cumulants $v_2$ ($v_2\{2\}$ and $v_2\{4\}$) for Au+Au and Cu+Cu collisions at center-of-mass energies $\sqrt{s_{\mathrm{NN}}} = 62.4$ and 200 GeV. The difference between $v_2\{2\}^2$ and $v_2\{4\}^2$ is related to $v_{2}$ fluctuations ($\sigma_{v_2}$) and nonflow ($\delta_{2}$). We present an upper limit on $\sigma_{v_2}/v_{2}$. Following the assumption that eccentricity fluctuations $\sigma_{\epsilon}$ dominate the $v_2$ fluctuations, i.e., $\frac{\sigma_{v_2}}{v_2} \approx \frac{\sigma_{\epsilon}}{\epsilon}$, we deduce the nonflow implied for several models of eccentricity fluctuations that would be required for consistency with $v_2\{2\}$ and $v_2\{4\}$. We also present results on the ratio of $v_2$ to eccentricity.
Studies of di-jet survival and surface emission bias in Au+Au collisions via angular correlations with respect to back-to-back leading hadrons (1102.2669)
H. Agakishiev, M.M. Aggarwal, Z. Ahammed, A.V. Alakhverdyants, I. Alekseev, J. Alford, B.D. Anderson, C.D. Anson, D. Arkhipkin, G.S. Averichev, J. Balewski, D.R. Beavis, N.K. Behera, R. Bellwied, M.J. Betancourt, R.R. Betts, A. Bhasin, A.K. Bhati, H. Bichsel, J. Bielcik, J. Bielcikova, B. Biritz, L.C. Bland, I.G. Bordyuzhin, W. Borowski, J. Bouchet, E. Braidot, A.V. Brandin, A. Bridgeman, S.G. Brovko, E. Bruna, S. Bueltmann, I. Bunzarov, T.P. Burton, X.Z. Cai, H. Caines, M. Calderon, D. Cebra, R. Cendejas, M.C. Cervantes, Z. Chajecki, P. Chaloupka, S. Chattopadhyay, H.F. Chen, J.H. Chen, J.Y. Chen, L. Chen, J. Cheng, M. Cherney, A. Chikanian, K.E. Choi, W. Christie, P. Chung, M.J.M. Codrington, R. Corliss, J.G. Cramer, H.J. Crawford, S. Dash, A. Davila Leyva, L.C. De Silva, R.R. Debbe, T.G. Dedovich, A.A. Derevschikov, R. Derradi de Souza, L. Didenko, P. Djawotho, S.M. Dogra, X. Dong, J.L. Drachenberg, J.E. Draper, J.C. Dunlop, L.G. Efimov, M. Elnimr, J. Engelage, G. Eppley, M. Estienne, L. Eun, O. Evdokimov, R. Fatemi, J. Fedorisin, R.G. Fersch, P. Filip, E. Finch, V. Fine, Y. Fisyak, C.A. Gagliardi, D.R. Gangadharan, A. Geromitsos, F. Geurts, P. Ghosh, Y.N. Gorbunov, A. Gordon, O.G. Grebenyuk, D. Grosnick, S.M. Guertin, A. Gupta, W. Guryn, B. Haag, O. Hajkova, A. Hamed, L-X. Han, J.W. Harris, J.P. Hays-Wehle, M. Heinz, S. Heppelmann, A. Hirsch, E. Hjort, G.W. Hoffmann, D.J. Hofman, B. Huang, H.Z. Huang, T.J. Humanic, L. Huo, G. Igo, P. Jacobs, W.W. Jacobs, C. Jena, F. Jin, J. Joseph, E.G. Judd, S. Kabana, K. Kang, J. Kapitan, K. Kauder, H.W. Ke, D. Keane, A. Kechechyan, D. Kettler, D.P. Kikola, J. Kiryluk, A. Kisiel, V. Kizka, S.R. Klein, A.G. Knospe, D.D. Koetke, T. Kollegger, J. Konzer, I. Koralt, L. Koroleva, W. Korsch, L. Kotchenda, V. Kouchpil, P. Kravtsov, K. Krueger, M. Krus, L. Kumar, P. Kurnadi, M.A.C. Lamont, J.M. Landgraf, S. LaPointe, J. Lauret, A. Lebedev, R. Lednicky, J.H. Lee, W. Leight, M.J. LeVine, C. Li, L. Li, N. Li, W. Li, X. Li, X. Li, Y. Li, Z.M. Li, M.A. Lisa, F. Liu, H. Liu, J. Liu, T. Ljubicic, W.J. Llope, R.S. Longacre, W.A. Love, Y. Lu, E.V. Lukashov, X. Luo, G.L. Ma, Y.G. Ma, D.P. Mahapatra, R. Majka, O.I. Mall, L.K. Mangotra, R. Manweiler, S. Margetis, C. Markert, H. Masui, H.S. Matis, Yu.A. Matulenko, D. McDonald, T.S. McShane, A. Meschanin, R. Milner, N.G. Minaev, S. Mioduszewski, A. Mischke, M.K. Mitrovski, Y. Mohammed, B. Mohanty, M.M. Mondal, B. Morozov, D.A. Morozov, M.G. Munhoz, M.K. Mustafa, M. Naglis, B.K. Nandi, T.K. Nayak, P.K. Netrakanti, L.V. Nogach, S.B. Nurushev, G. Odyniec, A. Ogawa, K. Oh, A. Ohlson, V. Okorokov, E.W. Oldag, D. Olson, M. Pachr, B.S. Page, S.K. Pal, Y. Pandit, Y. Panebratsev, T. Pawlak, H. Pei, T. Peitzmann, C. Perkins, W. Peryt, S.C. Phatak, P. Pile, M. Planinic, M.A. Ploskon, J. Pluta, D. Plyku, N. Poljak, J. Porter, A.M. Poskanzer, B.V.K.S. Potukuchi, C.B. Powell, D. Prindle, C. Pruneau, N.K. Pruthi, P.R. Pujahari, J. Putschke, H. Qiu, R. Raniwala, S. Raniwala, R.L. Ray, R. Redwine, R. Reed, H.G. Ritter, J.B. Roberts, O.V. Rogachevskiy, J.L. Romero, A. Rose, L. Ruan, J. Rusnak, N.R. Sahoo, S. Sakai, I. Sakrejda, S. Salur, J. Sandweiss, E. Sangaline, A. Sarkar, J. Schambach, R.P. Scharenberg, A.M. Schmah, N. Schmitz, T.R. Schuster, J. Seele, J. Seger, I. Selyuzhenkov, P. Seyboth, E. Shahaliev, M. Shao, M. Sharma, S.S. Shi, Q.Y. Shou, E.P. Sichtermann, F. Simon, R.N. Singaraju, M.J. Skoby, N. Smirnov, P. Sorensen, H.M. Spinka, B. Srivastava, T.D.S. Stanislaus, D. Staszak, S.G. Steadman, J.R. Stevens, R. Stock, M. 
Strikhanov, B. Stringfellow, A.A.P. Suaide, M.C. Suarez, N.L. Subba, M. Sumbera, X.M. Sun, Y. Sun, Z. Sun, B. Surrow, D.N. Svirida, T.J.M. Symons, A. Szanto de Toledo, J. Takahashi, A.H. Tang, Z. Tang, L.H. Tarini, T. Tarnowsky, D. Thein, J.H. Thomas, J. Tian, A.R. Timmins, D. Tlusty, M. Tokarev, T.A. Trainor, V.N. Tram, S. Trentalange, R.E. Tribble, P. Tribedy, O.D. Tsai, T. Ullrich, D.G. Underwood, G. Van Buren, G. van Nieuwenhuizen, J.A. Vanfossen, Jr., R. Varma, G.M.S. Vasconcelos, A.N. Vasiliev, F. Videbaek, Y.P. Viyogi, S. Vokal, S.A. Voloshin, M. Wada, M. Walker, F. Wang, G. Wang, H. Wang, J.S. Wang, Q. Wang, X.L. Wang, Y. Wang, G. Webb, J.C. Webb, G.D. Westfall, C. Whitten Jr., H. Wieman, S.W. Wissink, R. Witt, W. Witzke, Y.F. Wu, Z. Xiao, W. Xie, H. Xu, N. Xu, Q.H. Xu, W. Xu, Y. Xu, Z. Xu, L. Xue, Y. Yang, Y. Yang, P. Yepes, K. Yip, I-K. Yoo, M. Zawisza, H. Zbroszczyk, W. Zhan, J.B. Zhang, S. Zhang, W.M. Zhang, X.P. Zhang, Y. Zhang, Z.P. Zhang, J. Zhao, C. Zhong, W. Zhou, X. Zhu, Y.H. Zhu, R. Zoulkarneev, Y. Zoulkarneeva
Feb. 14, 2011 nucl-ex
We report first results from an analysis based on a new multi-hadron correlation technique, exploring jet-medium interactions and di-jet surface emission bias at RHIC. Pairs of back-to-back high-transverse-momentum hadrons are used as triggers to study associated hadron distributions. In contrast with two- and three-particle correlations using a single trigger with similar kinematic selections, the associated hadron distribution on both trigger sides reveals no modification in either relative pseudo-rapidity or relative azimuthal angle from d+Au to central Au+Au collisions. We determine associated hadron yields and spectra as well as production rates for such correlated back-to-back triggers to gain additional insight into medium properties.
Azimuthal di-hadron correlations in d+Au and Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV from STAR (1004.2377)
STAR Collaboration: M.M. Aggarwal, Z. Ahammed, A.V. Alakhverdyants, I. Alekseev, J. Alford, B.D. Anderson, Daniel Anson, D. Arkhipkin, G.S. Averichev, J. Balewski, L.S. Barnby, S. Baumgart, D.R. Beavis, R. Bellwied, M.J. Betancourt, R.R. Betts, A. Bhasin, A.K. Bhati, H. Bichsel, J. Bielcik, J. Bielcikova, B. Biritz, L.C. Bland, B.E. Bonner, J. Bouchet, E. Braidot, A.V. Brandin, A. Bridgeman, E. Bruna, S. Bueltmann, I. Bunzarov, T.P. Burton, X.Z. Cai, H. Caines, M. Calderon, O. Catu, D. Cebra, R. Cendejas, M.C. Cervantes, Z. Chajecki, P. Chaloupka, S. Chattopadhyay, H.F. Chen, J.H. Chen, J.Y. Chen, J. Cheng, M. Cherney, A. Chikanian, K.E. Choi, W. Christie, P. Chung, R.F. Clarke, M.J.M. Codrington, R. Corliss, J.G. Cramer, H.J. Crawford, D. Das, S. Dash, A. Davila Leyva, L.C. De Silva, R.R. Debbe, T.G. Dedovich, A.A. Derevschikov, R. Derradi de Souza, L. Didenko, P. Djawotho, S.M. Dogra, X. Dong, J.L. Drachenberg, J.E. Draper, J.C. Dunlop, M.R. Dutta Mazumdar, L.G. Efimov, E. Elhalhuli, M. Elnimr, J. Engelage, G. Eppley, B. Erazmus, M. Estienne, L. Eun, O. Evdokimov, P. Fachini, R. Fatemi, J. Fedorisin, R.G. Fersch, P. Filip, E. Finch, V. Fine, Y. Fisyak, C.A. Gagliardi, D.R. Gangadharan, M.S. Ganti, E.J. Garcia-Solis, A. Geromitsos, F. Geurts, V. Ghazikhanian, P. Ghosh, Y.N. Gorbunov, A. Gordon, O. Grebenyuk, D. Grosnick, S.M. Guertin, A. Gupta, N. Gupta, W. Guryn, B. Haag, A. Hamed, L-X. Han, J.W. Harris, J.P. Hays-Wehle, M. Heinz, S. Heppelmann, A. Hirsch, E. Hjort, A.M. Hoffman, G.W. Hoffmann, D.J. Hofman, B. Huang, H.Z. Huang, T.J. Humanic, L. Huo, G. Igo, P. Jacobs, W.W. Jacobs, C. Jena, F. Jin, C.L. Jones, P.G. Jones, J. Joseph, E.G. Judd, S. Kabana, K. Kajimoto, K. Kang, J. Kapitan, K. Kauder, D. Keane, A. Kechechyan, D. Kettler, D.P. Kikola, J. Kiryluk, A. Kisiel, S.R. Klein, A.G. Knospe, A. Kocoloski, D.D. Koetke, T. Kollegger, J. Konzer, I. Koralt, L. Koroleva, W. Korsch, L. Kotchenda, V. Kouchpil, P. Kravtsov, K. Krueger, M. Krus, L. Kumar, P. Kurnadi, M.A.C. Lamont, J.M. Landgraf, S. LaPointe, J. Lauret, A. Lebedev, R. Lednicky, C-H. Lee, J.H. Lee, W. Leight, M.J. LeVine, C. Li, L. Li, N. Li, W. Li, X. Li, X. Li, Y. Li, Z.M. Li, G. Lin, S.J. Lindenbaum, M.A. Lisa, F. Liu, H. Liu, J. Liu, T. Ljubicic, W.J. Llope, R.S. Longacre, W.A. Love, Y. Lu, E.V. Lukashov, X. Luo, G.L. Ma, Y.G. Ma, D.P. Mahapatra, R. Majka, O.I. Mall, L.K. Mangotra, R. Manweiler, S. Margetis, C. Markert, H. Masui, H.S. Matis, Yu.A. Matulenko, D. McDonald, T.S. McShane, A. Meschanin, R. Milner, N.G. Minaev, S. Mioduszewski, A. Mischke, M.K. Mitrovski, B. Mohanty, M.M. Mondal, B. Morozov, D.A. Morozov, M.G. Munhoz, B.K. Nandi, C. Nattrass, T.K. Nayak, J.M. Nelson, P.K. Netrakanti, M.J. Ng, L.V. Nogach, S.B. Nurushev, G. Odyniec, A. Ogawa, V. Okorokov, E.W. Oldag, D. Olson, M. Pachr, B.S. Page, S.K. Pal, Y. Pandit, Y. Panebratsev, T. Pawlak, T. Peitzmann, V. Perevoztchikov, C. Perkins, W. Peryt, S.C. Phatak, P. Pile, M. Planinic, M.A. Ploskon, J. Pluta, D. Plyku, N. Poljak, A.M. Poskanzer, B.V.K.S. Potukuchi, C.B. Powell, D. Prindle, C. Pruneau, N.K. Pruthi, P.R. Pujahari, J. Putschke, H. Qiu, R. Raniwala, S. Raniwala, R.L. Ray, R. Redwine, R. Reed, H.G. Ritter, J.B. Roberts, O.V. Rogachevskiy, J.L. Romero, A. Rose, C. Roy, L. Ruan, R. Sahoo, S. Sakai, I. Sakrejda, T. Sakuma, S. Salur, J. Sandweiss, E. Sangaline, J. Schambach, R.P. Scharenberg, N. Schmitz, T.R. Schuster, J. Seele, J. Seger, I. Selyuzhenkov, P. Seyboth, E. Shahaliev, M. Shao, M. Sharma, S.S. Shi, E.P. Sichtermann, F. Simon, R.N. Singaraju, M.J. 
Skoby, N. Smirnov, P. Sorensen, J. Sowinski, H.M. Spinka, B. Srivastava, T.D.S. Stanislaus, D. Staszak, J.R. Stevens, R. Stock, M. Strikhanov, B. Stringfellow, A.A.P. Suaide, M.C. Suarez, N.L. Subba, M. Sumbera, X.M. Sun, Y. Sun, Z. Sun, B. Surrow, D.N. Svirida, T.J.M. Symons, A. Szanto de Toledo, J. Takahashi, A.H. Tang, Z. Tang, L.H. Tarini, T. Tarnowsky, D. Thein, J.H. Thomas, J. Tian, A.R. Timmins, S. Timoshenko, D. Tlusty, M. Tokarev, T.A. Trainor, V.N. Tram, S. Trentalange, R.E. Tribble, O.D. Tsai, J. Ulery, T. Ullrich, D.G. Underwood, G. Van Buren, M. van Leeuwen, G. van Nieuwenhuizen, J.A. Vanfossen, Jr., R. Varma, G.M.S. Vasconcelos, A.N. Vasiliev, F. Videbaek, Y.P. Viyogi, S. Vokal, S.A. Voloshin, M. Wada, M. Walker, F. Wang, G. Wang, H. Wang, J.S. Wang, Q. Wang, X.L. Wang, Y. Wang, G. Webb, J.C. Webb, G.D. Westfall, C. Whitten Jr., H. Wieman, S.W. Wissink, R. Witt, Y.F. Wu, W. Xie, H. Xu, N. Xu, Q.H. Xu, W. Xu, Y. Xu, Z. Xu, L. Xue, Y. Yang, P. Yepes, K. Yip, I-K. Yoo, Q. Yue, M. Zawisza, H. Zbroszczyk, W. Zhan, J.B. Zhang, S. Zhang, W.M. Zhang, X.P. Zhang, Y. Zhang, Z.P. Zhang, J. Zhao, C. Zhong, J. Zhou, W. Zhou, X. Zhu, Y.H. Zhu, R. Zoulkarneev, Y. Zoulkarneeva
Aug. 10, 2010 nucl-ex
Yields, correlation shapes, and mean transverse momenta $p_T$ of charged particles associated with intermediate to high-$p_T$ trigger particles ($2.5 < p_T < 10$ GeV/c) in d+Au and Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV are presented. For associated particles at higher $p_T \gtrsim 2.5$ GeV/c, narrow correlation peaks are seen in d+Au and Au+Au, indicating that the main production mechanism is jet fragmentation. At lower associated particle $p_T < 2$ GeV/c, a large enhancement of the near-side ($\Delta\phi \sim 0$) and away-side ($\Delta\phi \sim \pi$) associated yields is found, together with a strong broadening of the away-side azimuthal distributions in Au+Au collisions compared to d+Au measurements, suggesting that other particle production mechanisms play a role. This is further supported by the observed significant softening of the away-side associated particle yield distribution at $\Delta\phi \sim \pi$ in central Au+Au collisions.
$K^{*0}$ production in Cu+Cu and Au+Au collisions at $\sqrt{s_{NN}}$ = 62.4 GeV and 200 GeV (1006.1961)
M.M. Aggarwal, Z. Ahammed, A.V. Alakhverdyants, I. Alekseev, J. Alford, B.D. Anderson, Daniel Anson, D. Arkhipkin, G.S. Averichev, J. Balewski, L.S. Barnby, S. Baumgart, D.R. Beavis, R. Bellwied, M.J. Betancourt, R.R. Betts, A. Bhasin, A.K. Bhati, H. Bichsel, J. Bielcik, J. Bielcikova, B. Biritz, L.C. Bland, B.E. Bonner, W. Borowski, J. Bouchet, E. Braidot, A.V. Brandin, A. Bridgeman, E. Bruna, S. Bueltmann, I. Bunzarov, T.P. Burton, X.Z. Cai, H. Caines, M. Calderon, O. Catu, D. Cebra, R. Cendejas, M.C. Cervantes, Z. Chajecki, P. chaloupka, S. Chattopadhyay, H.F. Chen, J.H. Chen, J.Y. Chen, J. Cheng, M. Cherney, A. Chikanian, K.E. Choi, W. Christie, P. Chung, R.F. Clarke, M.J.M. Codrington, R. Corliss, J.G. Cramer, H.J. Crawford, D. Das, S. Dash, A. Davila Leyva, L.C. De Silva, R.R. Debbe, T.G. Dedovich, A.A. Derevschikov, R. Derradi de Souza, L. Didenko, P. Djawotho, S.M. Dogra, X. Dong, J.L. Drachenberg, J.E. Draper, J.C. Dunlop, M.R. Dutta Mazumdar, L.G. Efimov, E. Elhalhuli, M. Elnimr, J. Engelage, G. Eppley, B. Erazmus, M. Estienne, L. Eun, O. Evdokimov, P. Fachini, R. Fatemi, J. Fedorisin, R.G. Fersch, P. Filip, E. Finch, V. Fine, Y. Fisyak, C.A. Gagliardi, D.R. Gangadharan, M.S. Ganti, E.J. Garcia-Solis, A. Geromitsos, F. Geurts, V. Ghazikhanian, P. Ghosh, Y.N. Gorbunov, A. Gordon, O. Grebenyuk, D. Grosnick, S.M. Guertin, A. Gupta, W. Guryn, B. Haag, A. Hamed, L-X. Han, J.W. Harris, J.P. Hays-Wehle, M. Heinz, S. Heppelmann, A. Hirsch, E. Hjort, A.M. Hoffman, G.W. Hoffmann, D.J. Hofman, B. Huang, H.Z. Huang, T.J. Humanic, L. Huo, G. Igo, P. Jacobs, W.W. Jacobs, C. Jena, F. Jin, C.L. Jones, P.G. Jones, J. Joseph, E.G. Judd, S. Kabana, K. Kajimoto, K. Kang, J. Kapitan, K. Kauder, D. Keane, A. Kechechyan, D. Kettler, D.P. Kikola, J. Kiryluk, A. Kisiel, V. Kizka, S.R. Klein, A.G. Knospe, A. Kocoloski, D.D. Koetke, T. Kollegger, J. Konzer, I. Koralt, L. Koroleva, W. Korsch, L. Kotchenda, V. Kouchpil, P. Kravtsov, K. Krueger, M. Krus, L. Kumar, P. Kurnadi, M.A.C. Lamont, J.M. Landgraf, S. LaPointe, J. Lauret, A. Lebedev, R. Lednicky, C-H. Lee, J.H. Lee, W. Leight, M.J. LeVine, C. Li, L. Li, N. Li, W. Li, X. Li, X. Li, Y. Li, Z.M. Li, G. Lin, S.J. Lindenbaum, M.A. Lisa, F. Liu, H. Liu, J. Liu, T. Ljubicic, W.J. Llope, R.S. Longacre, W.A. Love, Y. Lu, E.V. Lukashov, X. Luo, G.L. Ma, Y.G. Ma, D.P. Mahapatra, R. Majka, O.I. Mall, L.K. Mangotra, R. Manweiler, S. Margetis, C. Markert, H. Masui, H.S. Matis, Yu.A. Matulenko, D. McDonald, T.S. McShane, A. Meschanin, R. Milner, N.G. Minaev, S. Mioduszewski, A. Mischke, M.K. Mitrovski, B. Mohanty, M.M. Mondal, B. Morozov, D.A. Morozov, M.G. Munhoz, B.K. Nandi, C. Nattrass, T.K. Nayak, J.M. Nelson, P.K. Netrakanti, M.J. Ng, L.V. Nogach, S.B. Nurushev, G. Odyniec, A. Ogawa, V. Okorokov, E.W. Oldag, D. Olson, M. Pachr, B.S. Page, S.K. Pal, Y. Pandit, Y. Panebratsev, T. Pawlak, T. Peitzmann, C. Perkins, W. Peryt, S.C. Phatak, P. Pile, M. Planinic, M.A. Ploskon, J. Pluta, D. Plyku, N. Poljak, A.M. Poskanzer, B.V.K.S. Potukuchi, C.B. Powell, D. Prindle, C. Pruneau, N.K. Pruthi, P.R. Pujahari, J. Putschke, H. Qiu, R. Raniwala, S. Raniwala, R.L. Ray, R. Redwine, R. Reed, H.G. Ritter, J.B. Roberts, O.V. Rogachevskiy, J.L. Romero, A. Rose, C. Roy, L. Ruan, R. Sahoo, S. Sakai, I. Sakrejda, T. Sakuma, S. Salur, J. Sandweiss, E. Sangaline, J. Schambach, R.P. Scharenberg, N. Schmitz, T.R. Schuster, J. Seele, J. Seger, I. Selyuzhenkov, P. Seyboth, E. Shahaliev, M. Shao, M. Sharma, S.S. Shi, E.P. Sichtermann, F. Simon, R.N. Singaraju, M.J. Skoby, N. Smirnov, P. 
Sorensen, J. Sowinski, H.M. Spinka, B. Srivastava, T.D.S. Stanislaus, D. Staszak, J.R. Stevens, R. Stock, M. Strikhanov, B. Stringfellow, A.A.P. Suaide, M.C. Suarez, N.L. Subba, M. Sumbera, X.M. Sun, Y. Sun, Z. Sun, B. Surrow, D.N. Svirida, T.J.M. Symons, A. Szanto de Toledo, J. Takahashi, A.H. Tang, Z. Tang, L.H. Tarini, T. Tarnowsky, D. Thein, J.H. Thomas, J. Tian, A.R. Timmins, S. Timoshenko, D. Tlusty, M. Tokarev, T.A. Trainor, V.N. Tram, S. Trentalange, R.E. Tribble, O.D. Tsai, J. Ulery, T. Ullrich, D.G. Underwood, G. Van Buren, M. van Leeuwen, G. van Nieuwenhuizen, J.A. Vanfossen, Jr., R. Varma, G.M.S. Vasconcelos, A.N. Vasiliev, F. Videbaek, Y.P. Viyogi, S. Vokal, S.A. Voloshin, M. Wada, M. Walker, F. Wang, G. Wang, H. Wang, J.S. Wang, Q. Wang, X.L. Wang, Y. Wang, G. Webb, J.C. Webb, G.D. Westfall, C. Whitten Jr., H. Wieman, S.W. Wissink, R. Witt, Y.F. Wu, W. Xie, H. Xu, N. Xu, Q.H. Xu, W. Xu, Y. Xu, Z. Xu, L. Xue, Y. Yang, P. Yepes, K. Yip, I-K. Yoo, Q. Yue, M. Zawisza, H. Zbroszczyk, W. Zhan, J.B. Zhang, S. Zhang, W.M. Zhang, X.P. Zhang, Y. Zhang, Z.P. Zhang, J. Zhao, C. Zhong, J. Zhou, W. Zhou, X. Zhu, Y.H. Zhu, R. Zoulkarneev, Y. Zoulkarneeva
June 10, 2010 hep-ex, nucl-ex, nucl-th
We report on $K^{*0}$ production at mid-rapidity in Au+Au and Cu+Cu collisions at $\sqrt{s_{NN}}$ = 62.4 and 200 GeV collected by the Solenoid Tracker at RHIC (STAR) detector. The $K^{*0}$ is reconstructed via the hadronic decays $K^{*0} \to K^+\pi^-$ and $\bar{K}^{*0} \to K^-\pi^+$. Transverse momentum, $p_T$, spectra are measured over a range of $p_T$ extending from 0.2 GeV/c to 5 GeV/c. The center of mass energy and system size dependence of the rapidity density, $dN/dy$, and the average transverse momentum, $\langle p_T \rangle$, are presented. The measured $N(K^{*0})/N(K)$ and $N(\phi)/N(K^{*0})$ ratios favor the dominance of re-scattering of decay daughters of $K^{*0}$ over the hadronic regeneration for the $K^{*0}$ production. In the intermediate $p_T$ region ($2.0 < p_T < 4.0$ GeV/c), the elliptic flow parameter, $v_2$, and the nuclear modification factor, $R_{CP}$, agree with the expectations from the quark coalescence model of particle production.
Strange baryon resonance production in $\sqrt{s_{NN}} = 200$ GeV $p+p$ and $Au+Au$ collisions (nucl-ex/0604019)
The STAR collaboration: B.I. Abelev, M.M. Aggarwal, Z. Ahammed, J. Amonett, B.D. Anderson, M. Anderson, D. Arkhipkin, G.S. Averichev, Y. Bai, J. Balewski, O. Barannikova, L.S. Barnby, J. Baudot, S. Bekele, V.V. Belaga, A. Bellingeri-Laurikainen, R. Bellwied, F. Benedosso, S. Bhardwaj, A. Bhasin, A.K. Bhati, H. Bichsel, J. Bielcik, J. Bielcikova, L.C. Bland, S-L. Blyth, B.E. Bonner, M. Botje, J. Bouchet, A.V. Brandin, A. Bravar, T.P. Burton, M. Bystersky, R.V. Cadman, X.Z. Cai, H. Caines, M. Calderón de la Barca Sánchez, J. Castillo, O. Catu, D. Cebra, Z. Chajecki, P. Chaloupka, S. Chattopadhyay, H.F. Chen, J.H. Chen, J. Cheng, M. Cherney, A. Chikanian, W. Christie, J.P. Coffin, T.M. Cormier, M.R. Cosentino, J.G. Cramer, H.J. Crawford, D. Das, S. Das, S. Dash, M. Daugherity, M.M. de Moura, T.G. Dedovich, M. DePhillips, A.A. Derevschikov, L. Didenko, T. Dietel, P. Djawotho, S.M. Dogra, W.J. Dong, X. Dong, J.E. Draper, F. Du, V.B. Dunin, J.C. Dunlop, M.R. Dutta Mazumdar, V. Eckardt, W.R. Edwards, L.G. Efimov, V. Emelianov, J. Engelage, G. Eppley, B. Erazmus, M. Estienne, P. Fachini, R. Fatemi, J. Fedorisin, K. Filimonov, P. Filip, E. Finch, V. Fine, Y. Fisyak, J. Fu, C.A. Gagliardi, L. Gaillard, M.S. Ganti, L. Gaudichet, V. Ghazikhanian, P. Ghosh, J.E. Gonzalez, Y.G. Gorbunov, H. Gos, O. Grebenyuk, D. Grosnick, S.M. Guertin, K.S.F.F. Guimaraes, N. Gupta, T.D. Gutierrez, B. Haag, T.J. Hallman, A. Hamed, J.W. Harris, W. He, M. Heinz, T.W. Henry, S. Hepplemann, B. Hippolyte, A. Hirsch, E. Hjort, A.M. Hoffman, G.W. Hoffmann, M.J. Horner, H.Z. Huang, S.L. Huang, E.W. Hughes, T.J. Humanic, G. Igo, P. Jacobs, W.W. Jacobs, P. Jakl, F. Jia, H. Jiang, P.G. Jones, E.G. Judd, S. Kabana, K. Kang, J. Kapitan, M. Kaplan, D. Keane, A. Kechechyan, V.Yu. Khodyrev, B.C. Kim, J. Kiryluk, A. Kisiel, E.M. Kislov, S.R. Klein, A. Kocoloski, D.D. Koetke, T. Kollegger, M. Kopytine, L. Kotchenda, V. Kouchpil, K.L. Kowalik, M. Kramer, P. Kravtsov, V.I. Kravtsov, K. Krueger, C. Kuhn, A.I. Kulikov, A. Kumar, A.A. Kuznetsov, M.A.C. Lamont, J.M. Landgraf, S. Lange, S. LaPointe, F. Laue, J. Lauret, A. Lebedev, R. Lednicky, C-H. Lee, S. Lehocka, M.J. LeVine, C. Li, Q. Li, Y. Li, G. Lin, X. Lin, S.J. Lindenbaum, M.A. Lisa, F. Liu, H. Liu, J. Liu, L. Liu, Z. Liu, T. Ljubicic, W.J. Llope, H. Long, R.S. Longacre, W.A. Love, Y. Lu, T. Ludlam, D. Lynn, G.L. Ma, J.G. Ma, Y.G. Ma, D. Magestro, D.P. Mahapatra, R. Majka, L.K. Mangotra, R. Manweiler, S. Margetis, C. Markert, L. Martin, H.S. Matis, Yu.A. Matulenko, C.J. McClain, T.S. McShane, Yu. Melnick, A. Meschanin, J. Millane, M.L. Miller, N.G. Minaev, S. Mioduszewski, C. Mironov, A. Mischke, D.K. Mishra, J. Mitchell, B. Mohanty, L. Molnar, C.F. Moore, D.A. Morozov, M.G. Munhoz, B.K. Nandi, C. Nattrass, T.K. Nayak, J.M. Nelson, P.K. Netrakanti, L.V. Nogach, S.B. Nurushev, G. Odyniec, A. Ogawa, V. Okorokov, M. Oldenburg, D. Olson, M. Pachr, S.K. Pal, Y. Panebratsev, S.Y. Panitkin, A.I. Pavlinov, T. Pawlak, T. Peitzmann, V. Perevoztchikov, C. Perkins, W. Peryt, S.C. Phatak, R. Picha, M. Planinic, J. Pluta, N. Poljak, N. Porile, J. Porter, A.M. Poskanzer, M. Potekhin, E. Potrebenikova, B.V.K.S. Potukuchi, D. Prindle, C. Pruneau, J. Putschke, G. Rakness, R. Raniwala, S. Raniwala, R.L. Ray, S.V. Razin, J. Reinnarth, D. Relyea, F. Retiere, A. Ridiger, H.G. Ritter, J.B. Roberts, O.V. Rogachevskiy, J.L. Romero, A. Rose, C. Roy, L. Ruan, M.J. Russcher, R. Sahoo, T. Sakuma, S. Salur, J. Sandweiss, M. Sarsour, P.S. Sazhin, J. Schambach, R.P. Scharenberg, N. Schmitz, K. Schweda, J. Seger, I. 
Selyuzhenkov, P. Seyboth, A. Shabetai, E. Shahaliev, M. Shao, M. Sharma, W.Q. Shen, S.S. Shimanskiy, E Sichtermann, F. Simon, R.N. Singaraju, N. Smirnov, R. Snellings, G. Sood, P. Sorensen, J. Sowinski, J. Speltz, H.M. Spinka, B. Srivastava, A. Stadnik, T.D.S. Stanislaus, R. Stock, A. Stolpovsky, M. Strikhanov, B. Stringfellow, A.A.P. Suaide, E. Sugarbaker, M. Sumbera, Z. Sun, B. Surrow, M. Swanger, T.J.M. Symons, A. Szanto de Toledo, A. Tai, J. Takahashi, A.H. Tang, T. Tarnowsky, D. Thein, J.H. Thomas, A.R. Timmins, S. Timoshenko, M. Tokarev, T.A. Trainor, S. Trentalange, R.E. Tribble, O.D. Tsai, J. Ulery, T. Ullrich, D.G. Underwood, G. Van Buren, N. van der Kolk, M. van Leeuwen, A.M. Vander Molen, R. Varma, I.M. Vasilevski, A.N. Vasiliev, R. Vernet, S.E. Vigdor, Y.P. Viyogi, S. Vokal, S.A. Voloshin, W.T. Waggoner, F. Wang, G. Wang, J.S. Wang, X.L. Wang, Y. Wang, J.W. Watson, J.C. Webb, G.D. Westfall, A. Wetzler, C. Whitten Jr., H. Wieman, S.W. Wissink, R. Witt, J. Wood, J. Wu, N. Xu, Q.H. Xu, Z. Xu, P. Yepes, I-K. Yoo, V.I. Yurevich, W. Zhan, H. Zhang, W.M. Zhang, Y. Zhang, Z.P. Zhang, Y. Zhao, C. Zhong, R. Zoulkarneev, Y. Zoulkarneeva, A.N. Zubarev, J.X. Zuo
Oct. 11, 2006 nucl-ex
We report the measurements of $\Sigma (1385)$ and $\Lambda (1520)$ production in $p+p$ and $Au+Au$ collisions at $\sqrt{s_{NN}} = 200$ GeV from the STAR collaboration. The yields and the $p_{T}$ spectra are presented and discussed in terms of chemical and thermal freeze-out conditions and compared to model predictions. Thermal and microscopic models do not adequately describe the yields of all the resonances produced in central $Au+Au$ collisions. Our results indicate that there may be a time-span between chemical and thermal freeze-out during which elastic hadronic interactions occur.
Search for DCC in relativistic heavy-ion collisions : Possibilities and Limitations (nucl-ex/0211007)
B. Mohanty, T.K. Nayak, D.P. Mahapatra, Y.P. Viyogi
March 18, 2003 hep-ph, hep-ex, nucl-ex, nucl-th
The experimental observation of disoriented chiral condensates is affected by various physical and detector-related effects. We study and quantify the strength of the experimental signal, the "neutral pion fraction", within the framework of a simple DCC model, using analysis methods based on the multi-resolution discrete wavelet technique and by evaluating the signal-to-background ratio. The scope and limitations of DCC searches in heavy-ion collision experiments using various combinations of detector systems are investigated.
A Fluctuation Probe of Disoriented Chiral Condensates (nucl-ex/0201010)
B. Mohanty, D.P. Mahapatra, T.K. Nayak
Jan. 25, 2002 hep-ph, nucl-ex, nucl-th
We show that the event-by-event fluctuation of the ratio of neutral pions, or the resulting photons, to charged pions can be used as an effective probe for the formation of disoriented chiral condensates. The fact that the neutral pion fraction produced in the case of disoriented chiral condensate formation has a characteristic extended non-Gaussian shape is shown to be the key factor forming the basis of the present analysis.
A Honeycomb Proportional Counter for Photon Multiplicity Measurement in the ALICE Experiment (nucl-ex/0112016)
M.M.Aggarwal, S.K. Badyal, V.S. Bhatia, S. Chattopadhyay, A.K. Dubey, M.R. Dutta Majumdar, M.S. Ganti, P. Ghosh, A. Kumar, T.K. Nayak, S. Mahajan, D.P. Mahapatra, L.K. Mangotra, B. Mohanty, S. Pal, S.C. Phatak, B.V.K.S. Potukuchi, R. Raniwala, S. Raniwala, N.K. Rao, R.N. Singaraju, Bikash Sinha, M.D. Trivedi, R.J. Veenhof, Y.P. Viyogi
Dec. 28, 2001 nucl-ex
A honeycomb detector consisting of a matrix of 96 closely packed hexagonal cells, each working as a proportional counter with a wire readout, was fabricated and tested at the CERN PS. The cell depth and the radial dimensions of the cell were small, in the range of 5-10 mm. The appropriate cell design was arrived at using GARFIELD simulations. Two geometries are described illustrating the effect of field shaping. The charged particle detection efficiency and the preshower characteristics have been studied using pion and electron beams. Average charged particle detection efficiency was found to be 98%, which is almost uniform within the cell volume and also within the array. The preshower data show that the transverse size of the shower is in close agreement with the results of simulations for a range of energies and converter thicknesses.
Localized Domains of Disoriented Chiral Condensates (nucl-ex/9903005)
B.K. Nandi, T.K. Nayak, B. Mohanty, D.P. Mahapatra, Y.P. Viyogi
March 12, 1999 nucl-ex
A new method to search for localized domains of disoriented chiral condensates (DCC) has been proposed by utilising the (eta-phi) phase space distributions of charged particles and photons. Using the discrete wavelet transformation (DWT) analysis technique, it has been found that the presence of DCC domains broadens the distribution of wavelet coefficients in comparison to that of normal events. Strength contours have been derived from the differences in rms deviations of these distributions by taking into account the size of DCC domains and the probability of DCC production in ultra-relativistic heavy ion collisions. This technique can be suitably adopted to experiments measuring multiplicities of charged particles and photons.
A Preshower Photon Multiplicity Detector for the WA98 Experiment (hep-ex/9807026)
M.M. Aggarwal, A. Agnihotri, Z. Ahammed, P.V.K.S. Baba, S.K. Badyal, K.B. Bhalla, V.S. Bhatia, S. Chattopadhyay, A.C. Das, M.R. Dutta Majumdar, M.S. Ganti, T.K. Ghosh, S.K. Gupta, H.H. Gutbrod, S. Kachroo, B.W. Kolb, V. Kumar, I. Langbein, D.P. Mahapatra, G.C. Mishra, D.S. Mukhopadhyay, B.K. Nandi, S.K. Nayak, T.K. Nayak, M.L. Purschke, S. Raniwala, V.S. Ramamurthy, N.K. Rao, S.S. Sambyal, B.C. Sinha, M.D. Trivedi, J. Urbahn, Y.P. Viyogi
July 28, 1998 hep-ex
A high granularity preshower detector has been fabricated and installed in the WA98 Experiment at the CERN SPS for measuring the spatial distribution of photons produced in the forward region in lead ion induced interactions. Photons are counted by detecting the preshower signal in plastic scintillator pads placed behind a 3 radiation length thick lead converter and applying a threshold on the scintillator signal to reject the minimum ionizing particles. Techniques to improve the imaging of the fibre and performance of the detector in the high multiplicity environment of lead-lead collisions are described. Using Monte-Carlo simulation methods and test beam data of pi- and e- at various energies the photon counting efficiency is estimated to be 68% for central and 73% for peripheral Pb+Pb collisions.
Volume 20 Supplement 1
Selected Articles from the BioCreative/OHNLP Challenge 2018 - Part 2
Deep learning with sentence embeddings pre-trained on biomedical corpora improves the performance of finding similar sentences in electronic medical records
Qingyu Chen1,
Jingcheng Du1,2,
Sun Kim1,
W. John Wilbur1 &
Zhiyong Lu1
Capturing sentence semantics plays a vital role in a range of text mining applications. Despite continuous efforts on the development of related datasets and models in the general domain, both datasets and models are limited in biomedical and clinical domains. The BioCreative/OHNLP2018 organizers have made the first attempt to annotate 1068 sentence pairs from clinical notes and have called for a community effort to tackle the Semantic Textual Similarity (BioCreative/OHNLP STS) challenge.
We developed models using traditional machine learning and deep learning approaches. For the post-challenge analysis, we focused on two models: the Random Forest and the Encoder Network. We applied sentence embeddings pre-trained on PubMed abstracts and MIMIC-III clinical notes and updated the Random Forest and the Encoder Network accordingly.
The official results demonstrated that our best submission was the ensemble of eight models. It achieved a Pearson correlation coefficient of 0.8328, the highest performance among 13 submissions from 4 teams. For the post-challenge analysis, the performance of both the Random Forest and the Encoder Network was improved; in particular, the correlation of the Encoder Network was improved by ~ 13%. During the challenge task, no end-to-end deep learning model had better performance than machine learning models that take manually-crafted features. In contrast, with the sentence embeddings pre-trained on biomedical corpora, the Encoder Network now achieves a correlation of ~ 0.84, which is higher than the original best model. The ensembled model taking the improved versions of the Random Forest and Encoder Network as inputs further increased the performance to 0.8528.
Deep learning models with sentence embeddings pre-trained on biomedical corpora achieve the highest performance on the test set. Through error analysis, we find that end-to-end deep learning models and traditional machine learning models with manually-crafted features complement each other by identifying different types of sentences. We suggest that a combination of these models can better find similar sentences in practice.
The ultimate goal of text mining applications is to understand the underlying semantics of natural language. Sentences, as the intermediate blocks in the word-sentence-paragraph-document hierarchy, are a key component for semantic analysis. Capturing the semantic similarity between sentences has many direct applications in biomedical and clinical domains, such as biomedical sentence search [1], evidence sentence retrieval [2] and classification [3] as well as indirect applications, such as biomedical question answering [4] and biomedical document labeling [5].
In the general domain, long-term efforts have been made to develop semantic sentence similarity datasets and associated models [6]. For instance, the SemEval Semantic Textual Similarity (SemEval STS) challenge has been organized for over 5 years and the dataset collectively has close to 10,000 annotated sentence pairs. In contrast, such resources are limited in biomedical and clinical domains and existing models are not sufficient for specific biomedical or clinical applications [7]. The BioCreative/OHNLP organizers have made the first attempt to annotate 1068 sentence pairs from clinical notes and have called for a community effort to tackle the Semantic Textual Similarity (BioCreative/OHNLP STS) challenge [8].
This paper summarizes our attempts at this challenge. We developed models using machine learning and deep learning techniques. The official results show that our model achieved the highest correlation of 0.8328 among 13 submissions from 4 teams [9]. As an extension to the models (the Random Forest and the Encoder Network) developed during the challenge [9], we further (1) applied BioSentVec, a sentence embedding model trained on the entire collection of PubMed abstracts and MIMIC-III clinical notes [10], increasing the performance of the previous Encoder Network by an absolute ~ 13%, (2) re-designed the features for the Random Forest, increasing its performance by an absolute ~ 1.5% with only 14 features, and (3) performed error analysis in both a quantitative and qualitative manner. Importantly, the ensemble model that combines both the Random Forest and the Encoder Network further improved the state-of-the-art performance from 0.8328 to 0.8528.
Figure 1 gives a general overview of our models. We developed three models to capture clinical sentence similarity: the Random Forest, the Encoder Network, and the associated ensembled (stacking) model. This section describes the dataset and the models in detail.
An overview of our models. The Random Forest uses manually crafted features (word tokens, character n-grams, sequence similarity, semantic similarity and named entities). The feature selection of the Random Forest was done on the validation set. The Neural Network uses vectors generated by sentence embeddings as inputs. The validation set was used to monitor the early stopping process of the neural network. The ensembled (stacking) model incorporates both the Random Forest and Neural Network models. The validation set was used to train the ensembled model
Dataset, annotations, and evaluation metrics
The BioCreative/OHNLP2018 MedSTS dataset consists of 1068 sentence pairs derived from clinical notes [8]. 750 pairs are used for the training set; 318 pairs are used for the test set. Each pair in the set was annotated by two medical experts on a scale of 0 to 5, from completely dissimilar to semantically equivalent. The specific annotation guidelines are (1) if the two sentences are completely dissimilar, it will be scored as 0; (2) if the two sentences are not equivalent but share the same topic, it will be scored as 1; (3) if the two sentences are not equivalent but share some details, it will be scored as 2; (4) if the two sentences are roughly equivalent but some important information is different, it will be scored as 3; (5) if the two sentences are mostly equivalent and only minor details differ, it will be scored as 4; and (6) if the two sentences mean the same thing, it will be scored as 5. Two medical experts annotated the dataset and their averaged scores are used as the gold standard similarity score. The agreement between the two annotators had a weighted Cohen's Kappa of 0.67 [8]. Based on the gold standard similarity score distribution, more than half of the sentence pairs have a score from 2 to 4 [8]. The Pearson correlation was used as the official evaluation metric. A good model should have high Pearson correlation with the gold standard scores, suggesting that their score distributions are consistent. More details can be found in the dataset description paper [8].
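Since the Pearson correlation is the official metric, the following minimal sketch shows how a submission can be scored against the gold standard; the score arrays are made-up placeholders, not MedSTS data.

```python
from scipy.stats import pearsonr

# Hypothetical predicted and gold-standard similarity scores on a 0-5 scale.
predicted = [1.2, 3.8, 4.5, 0.7, 2.9]
gold = [1.0, 4.0, 4.3, 0.5, 3.2]

r, p_value = pearsonr(predicted, gold)
print(f"Pearson correlation: {r:.4f}")
```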
Pre-processing sentences
We pre-processed sentences in 4 steps: (1) converting sentences to lower case; (2) separating words joined by punctuation, including "/" (e.g., "cardio/respiratory" -> "cardio / respiratory"), ".-" (e.g., "independently.-ongoing" -> "independently. - ongoing"), "." (e.g., "content.caller" -> "content. caller"), and "'" (e.g., "'spider veins'" -> "' spider veins '"); (3) tokenizing the pre-processed sentences using the TreeBank tokenizer supplied in the NLTK toolkit [11]; and (4) removing punctuation and stopwords. The pre-processed sentences are used for both the Random Forest and the Encoder Network.
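A minimal sketch of this pipeline is shown below; the regular expression for separating joined words is a simplified stand-in for the rules listed above, not the exact patterns used in our system.

```python
import re
import string
from nltk.tokenize import TreebankWordTokenizer
from nltk.corpus import stopwords  # requires nltk.download("stopwords")

tokenizer = TreebankWordTokenizer()
stop_words = set(stopwords.words("english"))

def preprocess(sentence):
    s = sentence.lower()                                # step 1: lower case
    s = re.sub(r"(\w)\s*([/])\s*(\w)", r"\1 \2 \3", s)  # step 2: split joined words (simplified)
    tokens = tokenizer.tokenize(s)                      # step 3: TreeBank tokenization
    return [t for t in tokens                           # step 4: drop punctuation and stopwords
            if t not in string.punctuation and t not in stop_words]

print(preprocess("Cardio/respiratory status was reviewed with the patient."))
```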
The Random Forest
The Random Forest is a popular traditional machine learning model. Traditional machine learning models take human-engineered features as input. They have been used for decades in diverse biomedical and clinical applications, such as finding relevant biomedical documents [12], biomedical named entity recognition [13] and biomedical database curation [14]. According to the overview of the most recent SemEval STS task [6], such traditional approaches are still being applied by top-performing systems.
In this specific task, human-engineered features should be similarity measures that describe the degree of similarity between sentences, i.e., features that better capture the similarity between sentence pairs. Many similarity measures exist, such as the Cosine and Jaccard similarity. Zobel and Moffat [15] analyzed more than 50 similarity measures with different settings in information retrieval and found that there was no one-size-fits-all metric: no metric consistently worked better than the others. We hypothesized that aggregating similarity metrics from different perspectives could better capture the similarity between sentences. We accordingly engineered features from five perspectives to capture sentence similarity: token-based, character-based, sequence-based, semantic-based and entity-based. To select the most effective features, we partitioned the official training set into training (600 pairs) and validation (150 pairs) sets and evaluated the effectiveness of the engineered features on the validation set. Ultimately, a set of 14 features achieved the highest performance on the validation set.
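The sketch below illustrates, with scikit-learn, how such a feature-based regressor can be trained and evaluated on the 600/150 split; the random feature matrices stand in for the 14 similarity features described in the following subsections, and the hyperparameters shown are illustrative rather than the tuned values.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor

# Placeholder data: 14 similarity features per sentence pair, scores on a 0-5 scale.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((600, 14)), rng.random(600) * 5
X_val, y_val = rng.random((150, 14)), rng.random(150) * 5

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

r, _ = pearsonr(model.predict(X_val), y_val)
print(f"Validation Pearson correlation: {r:.4f}")
```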
Token-based features (5 features)
Token-based features consider a sentence as an unordered list of tokens; the similarity between a pair of sentences is effectively measured by the similarity between the corresponding lists of tokens. We employed 5 token-based features: (1) the Jaccard similarity [16], summarized in Eq. 1, using the number of shared elements divided by the number of distinct elements in total; (2) the generalized Jaccard similarity, similar to the Jaccard similarity which considers that two tokens are the same if their similarity is above a pre-defined threshold. (We used the Jaro similarity, a string similarity measure that effectively finds similar short text [17], and empirically set the threshold to 0.6); (3) the Dice similarity [18], summarized in Eq. 2; (4) the Ochiai similarity [19], summarized in Eq. 3; and (5) the tf-idf similarity [20], one of the most popular metrics used in information retrieval.
$$ \mathrm{Jaccard}(X,Y)=\frac{|X\cap Y|}{|X\cup Y|} $$
Jaccard similarity
$$ \mathrm{Dice}(X,Y)=\frac{2\,|X\cap Y|}{|X|+|Y|} $$
Dice similarity
$$ \mathrm{Ochiai}(X,Y)=\frac{|X\cap Y|}{\sqrt{|X|\,|Y|}} $$
Ochiai similarity
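The set-based measures in Eqs. 1-3 are straightforward to compute from the token lists produced by the pre-processing step; the sketch below is a simple reference implementation rather than the exact code used in our system.

```python
def jaccard(x_tokens, y_tokens):
    x, y = set(x_tokens), set(y_tokens)
    return len(x & y) / len(x | y) if x | y else 0.0

def dice(x_tokens, y_tokens):
    x, y = set(x_tokens), set(y_tokens)
    return 2 * len(x & y) / (len(x) + len(y)) if x or y else 0.0

def ochiai(x_tokens, y_tokens):
    x, y = set(x_tokens), set(y_tokens)
    return len(x & y) / (len(x) * len(y)) ** 0.5 if x and y else 0.0

a = ["patient", "denies", "abdominal", "pain"]
b = ["patient", "denies", "sore", "throat"]
print(jaccard(a, b), dice(a, b), ochiai(a, b))
```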
Character-based features (2 features)
Token-based features focus on similarity at the word level. We also employ such measures at the character level. Specifically, we applied the Q-Gram similarity [21]. Each sentence is transformed into a list of substrings of length q (q-grams) by sliding a q-character window over the sentence. We set q to 3 and 4.
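The sketch below generates character q-grams with a sliding window and scores their overlap; expressing the Q-Gram similarity as a Jaccard overlap of q-gram sets is one common convention and is an assumption here.

```python
def qgrams(text, q=3):
    # Slide a q-character window over the sentence.
    text = text.lower()
    return [text[i:i + q] for i in range(max(len(text) - q + 1, 0))]

def qgram_similarity(s1, s2, q=3):
    # One common convention: Jaccard overlap of the q-gram sets (an assumption).
    g1, g2 = set(qgrams(s1, q)), set(qgrams(s2, q))
    return len(g1 & g2) / len(g1 | g2) if g1 | g2 else 0.0

print(qgram_similarity("no barriers to learning were identified",
                       "no barriers to learning identified", q=3))
```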
Sequence-based features (4 features)
The above measures ignore the order of tokens. We adopted sequence-based measures to address this limitation. Sequence-based measures focus on how to transform one sentence into another using three types of edits: insertions, deletions and substitutions; therefore, the similarity between two sentences is related to their number of edits. In fact, sequence-based measures are very effective in clinical and biomedical informatics; for example, they are the primary measures used in previous studies on the detection of redundancy in clinical notes [22] and duplicate records in biological databases [23]. We selected the Bag similarity [24], the Levenshtein similarity [25], the Needleman-Wunsch similarity [26] and the Smith Waterman similarity [27]. While they all measure the similarity at the sequence level, the focus varies: the Bag similarity uses pattern matching heuristics to quickly find potentially similar strings; the Levenshtein similarity is a traditional edit distance method aiming to find strings with a minimal number of edits; the Needleman-Wunsch similarity generalizes the Levenshtein similarity by performing dynamic global sequence alignment (for example, it allows assigning different costs for different operations); and the Smith-Waterman similarity focuses on dynamic sequence alignment at substrings instead of globally.
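As one example of this family, the sketch below implements the Levenshtein similarity by normalizing the classic edit distance over insertions, deletions and substitutions; the other alignment-based measures follow the same pattern with different scoring schemes.

```python
def levenshtein_distance(s1, s2):
    # Dynamic programming over insertions, deletions and substitutions.
    if len(s1) < len(s2):
        s1, s2 = s2, s1
    previous = list(range(len(s2) + 1))
    for i, c1 in enumerate(s1, start=1):
        current = [i]
        for j, c2 in enumerate(s2, start=1):
            cost = 0 if c1 == c2 else 1
            current.append(min(previous[j] + 1,          # deletion
                               current[j - 1] + 1,       # insertion
                               previous[j - 1] + cost))  # substitution
        previous = current
    return previous[-1]

def levenshtein_similarity(s1, s2):
    # Normalize the edit distance to a similarity in [0, 1].
    longest = max(len(s1), len(s2)) or 1
    return 1.0 - levenshtein_distance(s1, s2) / longest

print(levenshtein_similarity("gradual onset of symptoms", "sudden onset of symptoms"))
```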
Semantic-based features (1 feature)
The above features measure how sentences are similar in terms of syntactic structure. However, in natural language, distinct words may have close meanings; sentences containing different words or structures may represent similar semantics. For example, 'each parent would have to carry one non-working copy of the CF gene in order to be at risk for having a child with CF' and 'an individual must have a mutation in both copies of the CFTR gene (one inherited from the mother and one from the father) to be affected' contain distinct words that representing similar meanings (e.g., 'parent' vs 'the mother … and the father', and 'non-working' vs 'mutation') and the structure is rather different; however, the underlying semantics is similar. Using token-based or sequence-based features fails in this case.
We applied BioSentVec [10], the first freely available biomedical sentence encoder, which was pre-trained on both PubMed articles and clinical notes from the MIMIC-III Clinical Database [28]. We used the cosine similarity between the sentence vectors from BioSentVec as a feature.
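A sketch of this feature is given below, assuming the sent2vec Python bindings that the BioSentVec release is distributed with; the model file name is a placeholder and should point to the downloaded BioSentVec binary.

```python
import numpy as np
import sent2vec  # sent2vec bindings assumed; BioSentVec is distributed as a sent2vec model

model = sent2vec.Sent2vecModel()
model.load_model("BioSentVec_PubMed_MIMICIII-bigram_d700.bin")  # placeholder path

def biosentvec_cosine(s1, s2):
    # Embed each pre-processed sentence and compare the vectors with cosine similarity.
    v1 = model.embed_sentence(s1)[0]
    v2 = model.embed_sentence(s2)[0]
    denom = np.linalg.norm(v1) * np.linalg.norm(v2)
    return float(np.dot(v1, v2) / denom) if denom else 0.0

print(biosentvec_cosine("patient denies abdominal pain",
                        "historian denies nausea and vomiting"))
```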
Entity-based features (2 features)
In biomedical and clinical domains, sentences often contain named entities like genes, mutations and diseases [29]. The similarity of these entities embedded in sentence texts could help measure the similarity of the sentence pairs. We leveraged CLAMP (Clinical Language Annotation, Modeling, and Processing Toolkit) [30], which integrates proven state-of-the-art NLP algorithms, to extract clinical concepts (e.g. medication, treatment, problem) from the text. The extracted clinical concepts were then mapped to Unified Medical Language System (UMLS) Concept Unique Identifiers (CUI). We measured the entity similarity in the sentence pairs using Eq. 4. It divides the number of shared CUIs in both sentences by the maximum number of CUIs in a sentence.
$$ \mathrm{Entity\_Similarity}(X,Y)=\frac{\mathrm{len}(\mathrm{concepts}_x \cap \mathrm{concepts}_y)}{\max\bigl(\mathrm{len}(\mathrm{concepts}_x),\,\mathrm{len}(\mathrm{concepts}_y)\bigr)} $$
Entity similarity
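Given the sets of UMLS CUIs extracted by CLAMP for the two sentences, Eq. 4 reduces to a few lines of code; the CUIs below are hypothetical placeholders, and the empty-set guard is an added safety check not specified above.

```python
def entity_similarity(cuis_x, cuis_y):
    # Eq. 4: shared CUIs divided by the larger of the two CUI sets.
    cuis_x, cuis_y = set(cuis_x), set(cuis_y)
    if not cuis_x or not cuis_y:
        return 0.0  # guard against empty concept sets (assumption)
    return len(cuis_x & cuis_y) / max(len(cuis_x), len(cuis_y))

# Hypothetical CUIs extracted by CLAMP for a sentence pair.
print(entity_similarity({"C0000737", "C0027497"}, {"C0000737", "C0042963"}))
```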
In addition, we have observed that clinical notes often contain numbers in addition to named entities. The numbers may be expressed differently; for example, 'cream 1 apply cream 1 topically three times a day as needed' contains two numbers in different formats. We measured the similarity between numbers in two steps. First, we normalized digits to text, e.g., '24' to 'twenty-four'. Second, if both sentences contain numbers, we applied the Word Mover's Distance (WMD) [31] to measure the similarity between the numbers. If neither sentence contains numbers, the similarity is set to 1; if only one sentence in a pair contains numbers, the similarity is set to 0.
Deep learning models
In contrast to traditional machine learning approaches with feature engineering, deep learning models aim to extract features and learn representations automatically, requiring minimal human effort. To date, deep learning has demonstrated state-of-the-art performance in many biomedical and clinical applications, such as medical image classification [32], mental health text mining [33], and biomedical document triage [34].
In general, there are three primary deep learning models that tackle sentence-related tasks, illustrated in Fig. 2. Given a sentence, it is first transformed into a 2-D matrix by mapping each word to its word vector in a word embedding. Then, to further process the matrix, existing studies have employed (1) the Convolutional Neural Network (CNN) [35], the most popular architecture for computer vision, whose main components are convolutional layers (applying convolutional operations to generate feature maps from input layers) and pooling layers (reducing the number of dimensions and parameters from the convolutional layers); (2) the Recurrent Neural Network (RNN) [36], which aims to keep track of sequential information; and (3) the Encoder-Decoder model [37], where the encoder component aims to capture the most important features of a sentence (often in the form of a vector) and the decoder component aims to regenerate the sentence from the encoder output. All three models use fully-connected layers as their final processing stages.
Three primary deep learning models to capture sentence similarity. The first is the Convolutional Neural Network model (1.1 and 1.2 in the figure), which applies image-related convolutional processing to text. The second is the LSTM or Recurrent Neural Network (2 in the figure), which aims to learn the semantics aligned with the input sequence. The third is the Encoder-Decoder network (3 in the figure), where the encoder aims to compress the semantics of a sentence into a vector and the decoder aims to re-generate the original sentence from the vector. FC layers: fully-connected layers. All three models use fully-connected layers as final stages
We tried these three models and found that the encoder-decoder model (Model 3 in Fig. 2) demonstrated the best performance on this task. We developed a model named the Encoder Network. It contains five layers. The first layer is the input layer: for each pair of sentences, it uses BioSentVec to generate the associated semantic vectors and concatenates them with their absolute differences and dot product. The next three layers are fully-connected layers with 480, 240, and 80 hidden units, respectively. The final layer outputs the predicted similarity score. To train the model, we used the stochastic gradient descent optimizer and set the learning rate to 0.0001. The loss function is the mean squared error. We also applied L2 regularization and a dropout rate of 0.5 to prevent overfitting. The training was stopped when the loss on the validation set did not decrease for 200 epochs. The model achieving the highest correlation on the validation set was saved.
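A sketch of this architecture in Keras is shown below. It assumes 700-dimensional BioSentVec vectors and interprets the combined input as [u, v, |u - v|, u * v], a common convention; the activation functions and L2 strength are likewise assumptions, since they are not specified above.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

DIM = 700  # BioSentVec vector size (assumed)

# Inputs: pre-computed BioSentVec vectors for the two sentences of a pair.
u = layers.Input(shape=(DIM,))
v = layers.Input(shape=(DIM,))

# Combined representation [u, v, |u - v|, u * v] (interpretation of the
# "absolute differences and dot product" input; an assumption).
abs_diff = layers.Lambda(tf.abs)(layers.Subtract()([u, v]))
prod = layers.Multiply()([u, v])
features = layers.Concatenate()([u, v, abs_diff, prod])

x = features
for units in (480, 240, 80):  # three fully-connected hidden layers
    x = layers.Dense(units, activation="relu",
                     kernel_regularizer=regularizers.l2(1e-4))(x)  # L2 strength assumed
    x = layers.Dropout(0.5)(x)
score = layers.Dense(1)(x)  # predicted similarity score

model = tf.keras.Model(inputs=[u, v], outputs=score)
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-4), loss="mse")

# Early stopping: stop when the validation loss has not improved for 200 epochs.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=200,
                                              restore_best_weights=True)
# model.fit([U_train, V_train], y_train,
#           validation_data=([U_val, V_val], y_val),
#           epochs=5000, callbacks=[early_stop])
```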
We further developed an ensembled (stacking) model that takes the outputs of the Random Forest and the Encoder Network as inputs. As shown in Fig. 1, it takes the predicted scores of these two models on the validation set as inputs and uses the validation set for training. For ensembling, we used a linear regression model that combines the scores from the two models via a linear function.
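A minimal sketch of this stacking step with scikit-learn is given below: the single models' predictions on the validation set become the inputs of a linear regression, which is then applied to their test-set predictions. All arrays are placeholders.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Placeholder predictions of the single models on the validation set and the gold scores.
rf_val = np.array([1.5, 3.2, 4.1, 0.8])
enc_val = np.array([2.1, 3.0, 4.4, 0.5])
y_val = np.array([2.0, 3.1, 4.5, 0.6])

# Fit the linear combination on the stacked validation predictions.
stacker = LinearRegression().fit(np.column_stack([rf_val, enc_val]), y_val)

# Apply it to (placeholder) test-set predictions to obtain the final scores.
rf_test, enc_test = np.array([2.8, 1.1]), np.array([3.3, 0.9])
print(stacker.predict(np.column_stack([rf_test, enc_test])))
```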
Table 1 summarizes the correlation of the models on the official test set. During the challenge, we made four submissions. The best model was an ensemble model using 8 models as inputs. In comparison, the performance of our models has been further improved after the challenge. The performance of the Random Forest model is improved by an absolute ~ 1.5%. The main difference is that we used the cosine similarity of sentence embeddings trained on biomedical corpora as a new feature. Likewise, the performance of the Encoder Network is also improved significantly. Originally, we used the Universal Sentence Encoder [38] and InferSent [39], sentence embeddings trained on the general domain. The performance was only 0.6949 and 0.7147, respectively, on the official test set. Therefore, we did not submit the Encoder Network as a standalone model. The current Encoder Network achieves a 0.8384 correlation on the official test set, over 10% higher (in absolute difference) than the previous version. The improvement of the single models further increases the performance of the stacking model. The latest stacking model is a regression model taking three single models as inputs: two Encoder Networks with different random seeds and one Random Forest model. It improves the state-of-the-art performance by an absolute 2%.
Table 1 Evaluation results on the official test set
Feature importance analysis
We analyze the importance of features manually engineered for the Random Forest model. Given that most of the features can directly measure the similarity between sentence pairs, we first quantify which single feature gives the best correlation (without using supervised learning methods) on the dataset. Then we also quantify which feature has the highest importance ranked by the Random Forest.
Figure 3 shows the correlation of each individual feature on the entire dataset. We exclude entity-based features since they are only designed for measuring the similarity between sentences containing entities. We measure the correlation over the entire dataset and also measure the correlation over the officially released training set (train + validation) and the test set separately. The findings are two-fold. First, character-based features achieve the best correlation on all datasets: the Q-Gram (q = 3) similarity had a remarkably high correlation of over 0.79 on the officially released training set and over 0.77 on the test set. This is consistent with existing studies on measuring the level of redundancy in clinical notes; for example, Zhang et al. [40] found that sequence alignment incorporated with the sliding window approach can effectively find highly similar sentences. The sliding window approach essentially looks at every consecutive n characters in a sequence. This is also because the sentences are collected from the same corpus, where it is possible that clinicians use controlled vocabularies, extract sentences in templates and copy-paste from existing notes [22]. For example, we observe that the snippet 'No Barriers to learning were identified' occurs frequently in the dataset.
Performance of an individual hand-crafted feature on the dataset. The y-axis stands for the Pearson correlation. The left shows the correlation over the entire set; the right shows the correlation over the training & validation set and the test set
Second, while many sentences share exact terms or snippets, the semantic-based feature, i.e., the cosine similarity generated by BioSentVec, is the most robust feature. The difference between its correlation on the training set and the test set is minimal. In contrast, although Q-Gram still achieves the highest correlation on the test set, its performance drops by an absolute 2%, from 0.793 to 0.772. This shows that character-based and token-based features have lower generalization ability. The Q-Gram recognizes sentences containing highly similar snippets, but fails to distinguish sentences having similar terms but distinct semantics, e.g., 'Negative gastrointestinal review of systems, Historian denies abdominal pain, nausea, vomiting.' and 'Negative ears, nose, throat review of systems, Historian denies otalgia, sore throat, stridor.' Almost 50% of the terms in this sentence pair are identical, but the underlying semantics are significantly different. This would be even more problematic when sentences come from heterogeneous sources. Therefore, it is vital to combine features focusing on different perspectives.
In addition, we find that sequence-based features play a vital role in the supervised setting, even though their correlation is low in the unsupervised experiment above. Table 2 shows a feature ablation study. It shows that the performance of the Random Forest model drops by an absolute 2.1% on the test set when sequence-based features are removed. In addition, Fig. 4 visualizes the important features identified by one of the trees in the Random Forest model. We randomly selected a tree and repeated the training multiple times; the highly ranked features are consistent. Q-Gram is the most important feature identified by the model, consistent with the above results. The model further ranks sequence-based features as the second most important. For example, the Needleman-Wunsch similarity measure is used to split sentence pairs into similarity categories below and above 1. Likewise, the Bag similarity measure is used to split sentence pairs into similarity categories below and above ~ 4. Therefore, while sequence-based features cannot achieve a high correlation by themselves, they play a vital role in supervised learning.
Table 2 Feature ablation study on the Random Forest model. Each set of features is removed in turn, and the difference in performance is measured
A visualization of important features ranked by a tree of the Random Forest model. We randomly picked a tree and repeated the selection multiple times; the top-ranked features are consistent. A tree makes decisions from top to bottom: the more important the feature, the higher it ranks. In this case, Q-Gram is the most important feature. From left to right, different colors represent sentence pairs with different degrees of similarity; darker means more similar
We analyze errors based on sentence similarity regions, i.e., ranges of gold-standard similarity between 0 and 5. For each similarity region, we measure the mean squared error produced by the Random Forest and the Encoder Network. Figure 5 shows the results; a higher value means the model made more errors. The results clearly show that the Encoder Network has significantly fewer errors than the Random Forest for sentence pairs with a similarity of up to 3. This is most evident for sentence pairs of lower similarity; for example, for sentence pairs with a similarity of no more than 1, the Encoder Network had almost half the mean squared error of the Random Forest. For sentence pairs with similarity over 3, the Random Forest model performed slightly better, but the difference was much smaller.
The mean squared errors made by the Random Forest and the Encoder Network when sentence pairs are categorized into different similarity regions
We further qualitatively examine three representative cases for error analysis.
Case 1: the Encoder Network makes more accurate predictions than the Random Forest
Sentence pair: 'The patient understands the information and questions answered; the patient wishes to proceed with the biopsy.' and 'The procedure, alternatives, risks, and postoperative protocol were discussed in detail with the patient.'
Gold standard score: 2.75
Prediction of the Random Forest model: 1.50
Prediction of the Encoder Network: 2.12
In this case, the sentences certainly focus on different topics, but they are somewhat related because both mention discussions between clinicians and patients. The gold standard score is between 2 and 3. The Random Forest gave a score of 1.5, which is almost half of the gold standard score. This is because there are few shared terms in the pair, so the character-based and sequence-based similarities are low. The Encoder Network focuses more on the underlying semantics, thus giving a more accurate prediction.
Case 2: the Random Forest makes more accurate predictions than the Encoder Network
Sentence pair: 'Gradual onset of symptoms, There has been no change in the patient's symptoms over time, Symptoms are constant.' and 'Sudden onset of symptoms, Date and time of onset was 1 hour ago, There has been no change in the patient's symptoms over time, are constant.'
In this case, both sentences talk about the symptoms of patients. The gold standard score is 3.95. Many terms or snippets are shared in the pair: 'onset of symptoms', 'patient's symptoms', and 'are constant'. This increases the token-based and character-based similarity measures, making the Random Forest model give a similarity score over 3. In contrast, the Encoder Network gives a score below 2.5. For this example, the Random Forest model thus has a closer score than the Encoder Network. However, we believe that this example is difficult to interpret. One may argue that these sentences are both about documenting the symptoms of patients. Another may argue that the symptoms are distinct, e.g., 'gradual onset' vs 'sudden onset', and no information about onset vs 'date and time was 1 hour ago'. Moreover, cases 1 and 2 show that token-based and character-based features act as a double-edged sword: for pairs of high similarity, the performance of the Random Forest is improved by rewarding shared terms, but these features also bring false positives when sentences have low similarity yet share some of the same terms.
Case 3: the Random Forest and the Encoder Network both give scores significantly different from the gold standard
Sentence pair: 'Patient will verbalize and demonstrate understanding of home exercise program following this therapy session.' and 'Patient stated understanding of program and was receptive to modifying activities with activities of daily living'
This is a case where both models give scores significantly different from the gold standard. Both sentences note whether patients understand a program presumably suggested by clinicians. The gold standard score is 3, but both models give a score below 2. We believe this is a challenging case: the pair shows some level of similarity, but the details are distinct, and in this specific instance the differences are not minor. The first sentence states that the therapy session will help the patient understand the home exercise program, whereas the second states that a specific patient already understands a program and is willing to modify his or her activities. Cases 2 and 3 in fact demonstrate the difficulty of the sentence similarity task: relatedness is context-dependent.
In this paper, we describe our efforts on the BioCreative/OHNLP STS task. The proposed approaches use traditional machine learning, deep learning, and an ensemble of the two. For the post-challenge phase, we employed sentence embeddings pre-trained on large-scale biomedical corpora and redesigned the models accordingly. The best model improves the state-of-the-art performance from 0.8328 to 0.8528. Future work will focus on the development of deep learning models; we plan to explore a variety of deep learning architectures and quantify their effectiveness on biomedical sentence-related tasks.
The data is publicly available via https://sites.google.com/view/ohnlp2018/home.
CLAMP:
Clinical Language Annotation, Modeling, and Processing Toolkit
CNN:
Convolutional Neural Network
CUI:
Concept Unique Identifiers
NLP:
Natural Language Processing
RNN:
Recurrent Neural Network
SemEval STS:
SemEval Semantic Textual Similarity
UMLS:
Unified Medical Language System
WMD:
Word Mover's Distance
About this supplement
This article has been published as part of BMC Medical Informatics and Decision Making Volume 20 Supplement 1, 2020: Selected Articles from the BioCreative/OHNLP Challenge 2018 – Part 2. The full contents of the supplement are available online at https://bmcmedinformdecismak.biomedcentral.com/articles/supplements/volume-20-supplement-1.
Publication of this supplement was funded by the Intramural Research Program of the NIH, National Library of Medicine. Jingcheng Du was partly supported by UTHealth Innovation for Cancer Prevention Research Training Program Pre-doctoral Fellowship (Cancer Prevention and Research Institute of Texas grant # RP160015). We also thank Yifan Peng, Aili Shen and Yuan Li for various discussions. In addition, we thank Yaoyun Zhang and Hua Xu for word embedding related input.
National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, USA
Qingyu Chen, Jingcheng Du, Sun Kim, W. John Wilbur & Zhiyong Lu
School of Biomedical Informatics, UTHealth, Houston, USA
Jingcheng Du
All the authors approved the final manuscript. QC, WJW, ZL conceived and designed the experiments. QC, JD, and SK performed the experiments and analyzed the data. QC and JD wrote the paper.
Correspondence to Zhiyong Lu.
Chen, Q., Du, J., Kim, S. et al. Deep learning with sentence embeddings pre-trained on biomedical corpora improves the performance of finding similar sentences in electronic medical records. BMC Med Inform Decis Mak 20, 73 (2020). https://doi.org/10.1186/s12911-020-1044-0
Sentence similarity
Exploring the influence of context and policy on health district productivity in Cambodia
Tim Ensor1,
Sovannarith So2 &
Sophie Witter3
Cost Effectiveness and Resource Allocation volume 14, Article number: 1 (2016) Cite this article
Cambodia has been reconstructing its economy and health sector since the end of conflict in the 1990s. There have been gains in life expectancy and increased health expenditure, but Cambodia still lags behind its neighbours. One factor which may contribute is the efficiency of public health services. This article aims to understand variations in efficiency and the extent to which changes in efficiency are associated with key health policies that have been introduced to strengthen access to health services over the past decade.
The analysis makes use of data envelopment analysis (DEA) to measure relative efficiency and changes in productivity and regression analysis to assess the association with the implementation of health policies. Data on 28 operational districts were obtained for 2008–11, focussing on the five provinces selected to represent a range of conditions in Cambodia. DEA was used to calculate efficiency scores assuming constant and variable returns to scale and Malmquist indices to measure productivity changes over time. This analysis was combined with qualitative findings from 17 key informant interviews and 19 in-depth interviews with managers and staff in the same provinces.
The DEA results suggest great variation in the efficiency scores and trends of scores of public health services in the five provinces. Starting points were significantly different, but three of the five provinces have improved efficiency considerably over the period. Higher efficiency is associated with more densely populated areas. Areas with health equity funds in Special Operating Agency (SOA) and non-SOA areas are associated with higher efficiency. The same effect is not found in areas only operating voucher schemes. We find that the efficiency score increased by 0.12 the year any of the policies was introduced.
This is the first study published on health district productivity in Cambodia. It is one of the few studies in the region to consider the impact of health policy changes on health sector efficiency. The results suggest that the recent health financing reforms have been effective, singly and in combination. This analysis could be extended nationwide and used for targeting of new initiatives. The finding of an association between recent policy interventions and improved productivity of public health services is relevant for other countries planning similar health sector reforms.
Since the end of the conflict in Cambodia, Gross Domestic Product per capita has increased from $316 US in 1996 to $946 US in 2012 [1]. Health expenditure has also risen from $23 US per capita in 1996 to $69 US in 2012 [2]. The life expectancy of Cambodians has improved, rising from 54 to 61 years for men and 58 to 64 years for women over 2000–2008 [3]. The maternal mortality ratio has fallen from 690 to 290 while the infant mortality rate has declined from 95 to 45 deaths per 1000 live births over the period of 2000–2010 [4, 5]. Despite these improvements, outcomes remain inferior to other Asian countries particularly given that health spending per capita is one of the highest in the region [6].
In an attempt to increase access to health services and improve health outcomes, the government established the operational district (OD) as a focus for health services in 1996. ODs grouped existing administrative districts to form a network of health centres and referral hospitals covering between 100,000 and 200,000 people. Services, staffing requirements and management systems were defined by operational guidelines with the aim of promoting universal coverage across the country [7, 8]. Achievement of universal health coverage is dependent on the effective performance of the OD with adequate resources [9].
To improve their effectiveness, a number of health development initiatives have been introduced over the period with the aim of increasing the motivation of health workers and the utilisation of facilities, particularly in rural areas. Demand-side schemes to remove access barriers for the poor have been introduced since the mid-1990s, including formalising charges in public facilities (1996); Community-based Health Insurance (CBHI) in 1998; Health Equity Funds (HEF), a subsidy system to increase access for poor people, in 2000 [10–12]; and the government National Midwifery Incentive Scheme (NMIS) and vouchers for deliveries, both in 2007 [13]. Income from demand-side schemes is pooled into facility revenues and largely used to provide salary supplements for staff and to cover administrative operating costs [14]. Supply-side mechanisms have also been introduced to strengthen human resource and facility management, including contracting-in and contracting-out of health management and service delivery with international NGO operators between 1999 and 2002, a hybrid contracting model implemented between 2003 and 2008, and internal contracting in the form of Special Operating Agencies (SOA) since 2009 [9].
SOAs are ODs which are assessed as having met the capacity criteria and which are therefore granted greater autonomy over financial and other resources. The Ministry of Health and Provincial Health Departments commission services from them, funded through a Service Delivery Grant (SDG) backed by donor pooled funds. The performance-based contracting is based on planned outputs and staff performance indicators. SOAs have been adopted and scaled up from 11 ODs in 2009 to 22 ODs out of a total of 77 in 2012. The SOA roll-out has been supported by pooled funds from seven donors, including the World Bank, DFID, AUSAID, UNFPA, UNICEF, ADB and BTC [15]. With their ability to scale up financial incentives and adopt results-based management, SOAs were expected to increase utilisation of public health facilities and to become more efficient. However, whether the productivity of public health services has increased, and whether OD operations have improved through adopting and taking ownership of these health development initiatives, remains very much debated. So far little attention has been paid to assessing the change in overall productivity of ODs and its determinants. That is the focus of this article.
The analysis was undertaken as part of a research programme examining health system reconstruction post-conflict, within which one component focussed specifically on the changing incentive environment for health workers, including changing productivity [16]. We use data envelopment analysis (DEA), a non-parametric technique for estimating a production frontier which has been extensively used across public and private sectors to assess relative efficiency of individual firms and industries [17]. The method has been used widely in the health sector in high and low income countries. DEA has been used to investigate the efficiency of inpatient care across a range of countries including Costa Rica, Namibia, Kenya and Zambia [18–21]. Less use has been made of the technique at the primary care level, although there have been studies in Zambia, Sierra Leone and Pakistan [22–24]. Little use has been made of the technique to analyse efficiency of services across a network of facilities and there appear to be no published studies from Cambodia. In most cases, studies focus on outputs from health care including inpatient-days, outpatients and, rarely, patients treated adjusted for diagnosis. Second stage analysis to analyse the impact of contextual variables such as facility ownership, socio-economic conditions and geography is frequently carried out. There appear to be few attempts to use DEA to understand the association between specific health reforms and changes in efficiency. Our article therefore adds both to understanding of productivity in Cambodia and also extends the use of the DEA methodology.
This article assesses the determinants of productivity of OD services in five provinces. The analysis makes use of data envelopment analysis (DEA) to measure the efficiency of each operational district in utilising resources to produce health services relative to other areas. The productivity of operational districts, defined as changes in efficiency over time, is analysed using a Malmquist total factor productivity index. Efficiency estimates are analysed by Tobit regression to assess the association with the implementation of health policies.
DEA has a number of advantages over regression-based techniques such as stochastic frontier analysis, including not having to specify a functional form for the production function and permitting the modelling of multiple outputs. The main weakness of the method is that, unlike stochastic regression methods, there is no test of significance and so no guide to the quality of results. It is suggested that it be used principally as an exploratory tool "rather than as an instrument with which to extract precise estimates of organisational efficiency" [25]. As with many quantitative measures of efficiency, the analysis makes no allowance for the quality or outcome of the outputs. It could be the case, for example, that those facilities that deliver lower value in terms of outputs per input(s) may be delivering a higher quality service that leads to better patient outcomes.
The focus of the study is the operational district over a period of 4 years from 2008 to 2011. This period was defined by data availability but includes a period when many of the reform initiatives described earlier were being introduced (see Table 1). Data on 28 operational districts were obtained, focussing on the five provinces that are representative of a range of conditions in Cambodia (geographic conditions, urban/rural populations, and also different levels of external investment). Phnom Penh was included in the wider study but removed from this analysis because of the gaps in information on staffing and limited participation in key informant interviews (Table 2).
Table 1 Coverage of health initiatives in target provinces and ODs
Table 2 Characteristics of selected areas
For the DEA, the decision-making units (DMUs) are defined as operational districts. Each DMU is observed four times, providing panel data that can be used to assess productivity across the years and understand the impact of policy. Operational districts are treated as multi-product units producing a range of health services. The main services (outputs) were specified as the numbers of inpatient-days, outpatients and deliveries. Inpatient-days and outpatients provide a general measure of overall workload, while deliveries are included because of the policy priority placed on increasing facility births. Data for outputs were drawn from national health statistics, which have been managed electronically since 2008.
Data on inputs were available on staffing numbers by type (doctors, secondary nurses, primary nurses, secondary midwives, primary midwives and other staff) and on non-staffing recurrent expenditure (Table 3). Staff numbers are collapsed into three categories: doctors, nurses and midwives, and other staff. Inputs and outputs at facility level are aggregated to operational districts, which are the unit of analysis for the quantitative analysis (a minimal illustration of this aggregation is sketched below). Expenditure data were missing for Kampot province, and we therefore estimated efficiency for the full sample (108 data points) without expenditure and for a restricted sample with expenditure (92 data points, excluding Kampot).
Table 3 Average staffing in each operational district
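As a purely illustrative sketch of the aggregation step, the snippet below rolls facility-level records up to OD-year totals. The column names and numbers are hypothetical; the study's actual data layout is not reproduced here.

```python
# Hypothetical sketch: aggregating facility-level inputs and outputs to
# operational-district (OD) level, one row per OD-year. All column names and
# values are illustrative, not the study's data.
import pandas as pd

facilities = pd.DataFrame({
    "od":              ["OD1", "OD1", "OD2", "OD2"],
    "year":            [2008, 2008, 2008, 2008],
    "inpatient_days":  [1200, 300, 800, 150],
    "outpatients":     [9000, 2500, 7000, 1800],
    "deliveries":      [210, 40, 160, 25],
    "doctors":         [4, 0, 3, 0],
    "nurses_midwives": [25, 6, 20, 5],
    "other_staff":     [10, 2, 8, 2],
})

# One row per OD and year: the unit of analysis used in the DEA
od_panel = facilities.groupby(["od", "year"], as_index=False).sum()
print(od_panel)
```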
The linear programming technique of Data Envelopment Analysis (DEA) is used to obtain output-oriented estimates of production efficiency across operational districts. For constant returns to scale, DEA solves the optimisation problem by maximising, for DMU (1), the weighted (\(W_{r}\)) sum of the \(n\) outputs (\(O_{r}\)):
$$\mathop \sum \limits_{r = 1}^{n} W_{r} O_{r1} ,\,\,\,\,\,\,{\text{where}}\,W_{r} \ge 0,$$
subject to the constraint that the weighted sum of the \(m\) inputs equals one (to avoid an infinite number of solutions):
$$\mathop \sum \limits_{i = 1}^{m} V_{i} I_{i1} = 1,$$
and ensuring that all P DMUs have efficiency indices less than or equal to one:
$$\sum_{r = 1}^{n} W_{r} O_{rj} - \sum_{i = 1}^{m} V_{i} I_{ij} \le 0, \quad j = 1, \ldots, P.$$
There is no reason to assume that all operational districts will be operating at optimal scale, and so both variable (VRS) and constant returns to scale (CRS) DEA efficiency scores are computed. While CRS assumes that an increase in inputs will increase outputs in the same proportion, VRS allows for a disproportionate change.
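For readers who prefer code to algebra, the following is a minimal sketch of the CRS (CCR, multiplier-form) score for a single DMU using a generic linear-programming solver. It is our own illustration, not the DEAP implementation used in the study, it covers only the CRS case, and the toy numbers are invented.

```python
# Sketch of the constant-returns (CCR, multiplier form) DEA score for one DMU,
# following the formulation above. Illustrative only; not the DEAP program
# actually used in the study, and the example data are invented.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(outputs, inputs, dmu):
    """outputs: (P, n) array; inputs: (P, m) array; dmu: index of the unit scored."""
    P, n = outputs.shape
    m = inputs.shape[1]
    c = np.concatenate([-outputs[dmu], np.zeros(m)])            # maximise weighted outputs of the scored DMU
    A_ub = np.hstack([outputs, -inputs])                         # weighted outputs <= weighted inputs for every DMU
    b_ub = np.zeros(P)
    A_eq = np.concatenate([np.zeros(n), inputs[dmu]])[None, :]   # weighted inputs of the scored DMU = 1
    b_eq = [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return -res.fun                                              # efficiency score in (0, 1]

# Toy example: 3 districts, 2 outputs, 2 inputs
O = np.array([[100.0, 20.0], [80.0, 25.0], [60.0, 10.0]])
I = np.array([[10.0, 5.0], [9.0, 6.0], [8.0, 4.0]])
print([round(ccr_efficiency(O, I, j), 3) for j in range(3)])
```

A VRS score can be obtained in the same framework by adding the usual unrestricted convexity term to the multiplier problem, which is what dedicated packages such as DEAP do internally.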
The data on operational districts is available for 4 years forming a balanced panel that permits consideration of the change in productivity over years. A Malmquist Productivity Index (MPI) is used for this purpose [25]. This provides a measure of the change in efficiency for each operational district from year to year. It is specified as the geometric mean of the index in each year. For DMU (1) this is:
$$MPI_{1, t + 1} = \left[ {\frac{{d_{1}^{t} \left( {I_{t + 1} ,O_{t + 1} } \right)}}{{d_{1}^{t} \left( {I_{t} ,O_{t} } \right)}} \times \frac{{d_{1}^{t + 1} \left( {I_{t + 1} ,O_{t + 1} } \right)}}{{d_{1}^{t + 1} \left( {I_{t} ,O_{t} } \right)}}} \right]^{0.5}$$
for all inputs (I) and outputs (O). A decomposition shows that the MPI is the product of changes associated with improvements in the technical efficiency of individual operational districts (getting closer to the industry frontier) and efficiency changes related to technical progress in the industry (technological efficiency resulting in a shift in the production frontier):
$$MPI_{1, t + 1} = \frac{{d_{1}^{t + 1} \left( {I_{t + 1} ,O_{t + 1} } \right)}}{{d_{1}^{t} \left( {I_{t} ,O_{t} } \right)}} \times \left[ {\frac{{d_{1}^{t} \left( {I_{t + 1} ,O_{t + 1} } \right)}}{{d_{1}^{t + 1} \left( {I_{t + 1} ,O_{t + 1} } \right)}} \times \frac{{d_{1}^{t} \left( {I_{t} ,O_{t} } \right)}}{{d_{1}^{t + 1} \left( {I_{t} ,O_{t} } \right)}}} \right]^{0.5}$$
An index (total or individual components) of more than one indicates positive growth from one year to the next; an index of less than one indicates negative growth.
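The decomposition itself is simple arithmetic once the four distance functions are available; obtaining the distances requires solving cross-period DEA problems, which DEAP handles. The sketch below, with invented values, only illustrates how the index and its two components combine.

```python
# Malmquist index and its decomposition from the four distance functions in the
# expressions above. d_a_b = distance of period-b data measured against the
# period-a frontier. The numerical values below are invented for illustration.
def malmquist(d_t_t, d_t_t1, d_t1_t, d_t1_t1):
    efficiency_change = d_t1_t1 / d_t_t                                   # catching up to the frontier
    technical_change = ((d_t_t1 / d_t1_t1) * (d_t_t / d_t1_t)) ** 0.5     # shift of the frontier itself
    return efficiency_change * technical_change, efficiency_change, technical_change

mpi, eff_ch, tech_ch = malmquist(d_t_t=0.70, d_t_t1=0.82, d_t1_t=0.68, d_t1_t1=0.78)
print(round(mpi, 3), round(eff_ch, 3), round(tech_ch, 3))   # values above 1 indicate positive growth
```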
DEA and MPI estimates were obtained using the Data Envelopment Analysis Programme (DEAP) developed by Coelli [26]. This accepts data formatted as text files with the output columns followed by the input columns. Changes in efficiency can be explored further by attempting to understand the association between the score and possible determinants such as population density and poverty levels. Associations with the quality of the output (outcomes) could also be examined if data were available, which was not the case for this dataset. The results have, however, been triangulated with the qualitative data obtained from health managers through key informant interviews (KII) and from health workers consulted during in-depth interviews (IDI), in order to contextualise and understand explanatory factors behind the quantitative results. Seventeen KIIs and 19 IDIs were conducted across the five provinces between August and December 2013, using a semi-structured interview guide. Transcripts were analysed thematically [27]. Ethical approval for the study was provided by the Liverpool School of Tropical Medicine and the National Ethical Committee for Health Research of the Ministry of Health in Cambodia in 2012. Informed consent was provided by all participating health staff and managers.
The association between the DEA productivity index (with and without expenditure) and the presence of the major health financing policies—health equity funds, vouchers and special operating agencies—was also investigated. The index is bounded between zero and one (left and right censoring), and this censoring means that OLS estimators are inconsistent. Instead a random effects Tobit model, which provides consistent regression coefficients, is used to estimate the two specifications. The first regression explores the association between policies and productivity (E) as follows:
$$E_{it} = \alpha_{0} + \alpha_{1} t + \alpha_{2} P_{sit} + \alpha_{3} D_{i} + \alpha_{4} A_{i} + u_{st} + v_{i}$$
where \(t\) is an annual dummy variable, \(P_{sit}\) is a series of dummy variables representing the main health financing policy combinations, \(D\) is the population density and \(A\) is a regional dummy variable. The random effects model leads to a composite error term in which \(u_{st}\) is a district/time effect and \(v_{i}\) an observation-specific effect. A positive association suggests that areas with higher productivity are associated with the presence of health policies. It is, however, difficult to attribute causation, since it is not possible to tell whether those areas with more productive services are more likely to be chosen to implement new financing policies. A second regression attempts to look at whether the introduction of financing policies was associated with a change in productivity, as follows:
$$E_{it} = \beta_{0} + \beta_{1} t + \beta_{2} P_{si}^{o} + \beta_{3} P_{it}^{n} + \beta_{4} D_{i} + \beta_{5} A_{i} + u_{st} + v_{i}$$
where \(P_{si}^{o}\) is a district-specific variable for the main financing policies (HEF and vouchers) and \(P_{it}^{n}\) is a time/district-specific dummy variable for the introduction of any major new financing policy. A significant association makes a causal link more likely, although it remains possible that another variable improved both productivity and the likelihood of introducing health financing policies. The productivity DEA variable is constrained to take a value between zero and one.
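As a rough illustration of the censored-regression idea, the snippet below fits a pooled two-sided Tobit (scores censored at 0 and 1) by maximum likelihood. It is a simplified sketch under our own assumptions: it omits the district-level random effect used in the paper's random effects Tobit, and the simulated data are invented.

```python
# Hedged sketch of a pooled two-sided Tobit (dependent variable censored at 0 and 1)
# fitted by maximum likelihood. Simplification: the district-level random effect used
# in the paper is not modelled here; the simulated data below are invented.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_negloglik(params, y, X, lower=0.0, upper=1.0):
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    xb = X @ beta
    ll = np.where(y <= lower, norm.logcdf((lower - xb) / sigma),          # left-censored
         np.where(y >= upper, norm.logsf((upper - xb) / sigma),           # right-censored
                  norm.logpdf((y - xb) / sigma) - np.log(sigma)))         # uncensored
    return -ll.sum()

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])                 # intercept + one covariate
y = np.clip(X @ np.array([0.6, 0.2]) + rng.normal(0, 0.15, size=200), 0.0, 1.0)
fit = minimize(tobit_negloglik, x0=np.zeros(X.shape[1] + 1), args=(y, X))
print(fit.x[:-1], np.exp(fit.x[-1]))                                      # coefficients and sigma
```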
Utilisation of public health services in each province has increased over the 4 year period, as indicated by increasing trends in outpatients and number of new cases (aggregated in Fig. 1). The DEA results suggest great variation in the efficiency scores and trends of scores of public health services across the five provinces (Figs. 2, 3; detailed OD figures are provided in Additional file 1: Web Annex S1). Starting points were significantly different, but three of the five provinces have improved efficiency considerably over the period. Stung Treng remains a low performer throughout. These results are reinforced by the MPI estimates of productivity change (Table 4). These show low positive or negative productivity growth in Stung Treng while there is a strong and consistent improvement in productivity in other provinces, notably Battambang. The decompositions suggest that changes in productivity are evenly split between technical progress across the sector and improvements in individual efficiency resulting from economies of scale and better use of inputs.
Trends in outpatient consultation and new cases by study province, 2008–2011 (Source: MoH' s annual statistic report 2008–2011)
The results of efficiency scores without expenditures, 2008–2011
The results of efficiency scores with expenditures, 2008–2011
Table 4 Malmquist index and decomposition (with and without expenditure as an input)
It is perhaps not surprising that Kampong Cham, which is the only one of the provinces in this group to have been given SOA status, was one of the most efficient performers at the start and has increased its performance over the period (though less markedly once expenditures are taken into account, reflecting perhaps the higher payments which SOA areas receive). Five of its ten ODs have been given SOA status since 2009, which reflected its existing capacity. It also implemented the full range of demand- and supply-side stimuli (Table 1). It recorded the highest productivity growth throughout the period, including between 2010 and 2011 when growth dipped for the other provinces.
Kandal province, which has only implemented user fee formalisation on the demand side in an earlier phase and the midwifery incentive scheme in this phase across its ODs, and which has had less external support as a province, nevertheless maintained efficiency comparable to Kampong Cham, especially once expenditures are included (Fig. 2). Battambang started implementing delivery vouchers for three ODs in 2008 and scaled up health equity funds, starting in 2006 and reaching full implementation in all five ODs in 2011. It shows a robust increase in OD productivity, marked by a sharp rise in both efficiency scores, with and without expenditures, over the 4-year period.
The productivity of public health services in Kampot appears to level off, and growth is negative between 2010 and 2011. The change in productivity is likely to come from the recent financial incentive schemes (vouchers, user fees, HEF/CBHI and NMIS) supported by government and donor funds to the ODs in Kampot province (Table 1).
Among the five provinces, Stung Treng province shows the poorest performance, indicated by a stagnation of efficiency scores and low productivity growth, both with and without expenditures. This is likely to result from local conditions, as the area is mountainous and access to health services is constrained by poor roads, low population density and dispersed settlements of Cambodian ethnic minorities.
The regression analysis enables investigation of the association between performance and policy implementation. Each model was estimated with the CRS and VRS efficiency estimates, with and without expenditure as an input variable. Results reported here are for the entire sample and so exclude expenditure, but similar results were found for the sub-sample that includes expenditure. The first model investigates the association between policy combinations and area covariates for both DEA indices, with and without health spending (Table 5). Efficiency appears to increase over time (compared to the base year of 2008). In the CRS model, population density is associated with higher productivity, possibly reflecting the difficulty of reaching services for more remote populations. Areas with health equity funds, in both SOA and non-SOA areas, are associated with higher productivity. The same effect is not found in areas operating only voucher schemes. The effect of the health equity funds is not found to be statistically significant in the VRS model, suggesting that lower efficiency in areas without equity funds is largely due to the facilities operating below optimum scale. The stimulus to service use from health equity funds helps facilities to achieve scale economies. The disappearance of a significant association with the most densely populated areas is likely due to a similar mechanism (with facilities more able to achieve scale economies in highly populated areas).
Table 5 Tobit regression results for OD efficiency and policy/area characteristics (CRS and VRS, without expenditure)
The second model (Table 6) attempts to disentangle the underlying association with financing schemes and any additional effect arising from the introduction of these schemes. Both CRS and VRS specifications suggest an increase in efficiency following the introduction of any or a combination of the three main policies; the efficiency score increases by between 0.11 and 0.12 the year any of the policies was introduced. It is notable that the policies have an impact in both CRS and VRS specifications. This suggests that the policies have an impact that goes beyond heavier utilisation of services resulting from the stimulus to demand.
Table 6 Tobit regressions: OD productivity and additional effects from new scheme introduction (CRS and VRS, without expenditure)
Explaining differences across ODs and provinces
It is unsurprising that population density emerges as a significant determinant of efficiency. Of the study provinces, Kandal and Kampong Cham had the highest population densities in 2008, at 302 and 164 persons per square kilometre respectively [28], and the highest population to health worker (doctor, nurse and midwife) ratio, at 1560; they started with the highest efficiency scores, at around 0.8, in 2008. By contrast, Battambang province, with 88 persons per square kilometre and 833 persons per health worker, began at a score of around 0.54 (Fig. 1). Stung Treng, with the lowest population density of 10 persons per square kilometre and 518 persons per health worker, had the lowest productivity among the five provinces.
Net immigration of population [29] seems to be associated with the rise in efficiency scores in Kampong Cham between 2008 and 2011, in Kampot over 2008–2010, and in Kandal province between 2010 and 2011. Population movement from other parts of the country is cited by the health managers of the hospitals and health centres as one factor increasing demand.
For Stung Treng, at least two additional factors besides the dispersed population and difficulty of access to health facilities may have played a part in the region's relatively low performance. First, there is less demand for health care among the indigenous population, who, according to the health managers consulted, have limited knowledge of the importance of health care and still largely use traditional midwives or healers, despite an increase in the quality of health care and increased availability of services at the health facilities. Lack of trust in the public health service is another factor which was highlighted. Stung Treng is one of the least developed provinces in Cambodia and is not favoured by health workers. If they accept a posting there, according to our interviews, they do it just to complete the probationary period, and most of them, except those who are from the province, then ask for transfers to work elsewhere, or else they remain working in the provincial town of Stung Treng [27]. The young profile of staff in the province may undermine households' confidence, and facilities such as ambulances for referral are reported in interviews as lacking.
"…It is difficult for us due to our age and experience in maternal health care. We don't get much trust from the customers or patients. Customers always complained about our HC to let young midwives with less experience to treat them, especially when the problem happens…" (STIDI1)
In some areas, like Battambang, there is a thriving private sector which draws customers away from public facilities, which are perceived to offer lower quality.
On the supply side, budget disbursement and staffing problems can adversely affect facility functioning.
In Kampot province, for example, our interviews found that budget disbursement to facilities was often delayed and reduced. Another, more general, issue was mal-distribution of staff and the fact that actual staffing numbers were lower than those officially recorded. Mal-distribution of health workers reduces efficiency. The transfer of qualified and experienced health workers from the least to the most developed areas is a widespread problem, which leaves public health facilities in rural areas with poorly qualified staff. Thus, the regular presence of health workers in public facilities may not lead to an increase in the production or utilisation of public health services, especially in rural areas. Despite receiving financial incentives in recent years, including from user fees and midwifery incentives, as well as an automatic annual salary increase of 20 % since 2010/11, deploying and retaining health workers to work in rural areas remains a critical challenge for OD managers. Field visits also suggest a lack of equipment, especially in health centres and rural areas.
Explaining the policy effects
The recent health financing reforms have aimed to stimulate supply and demand, and our results suggest that they have been effective, singly and in combination. This is compatible with the results of evaluations of specific policies. For example, Chhun et al. found that HEFs and vouchers increased health care access for the poor at both public and private facilities [30]. On the supply side, interviews with health workers and managers [27] also highlight that a large proportion of the income collected from the financial incentive schemes was used to increase the income and motivation of health workers, as well as being used by facility managers to make services at public facilities available to the population 24 hours a day.
In Kampong Cham, three factors may explain its relatively higher efficiency scores. First, it has implemented the full range of demand-side and supply-side interventions. Secondly, as an early adopter, it benefited from management capacity development in the early stages of the pilot schemes—experiences which have been shared among the facility managers through quarterly provincial health department (PHD) meetings to improve their leadership and management skills, according to key informants. Finally, NGO support since 2000 in improving health infrastructure and supply of medical equipment at facility level is thought to be in part responsible for increased capacity for service delivery.
Health worker explanatory factors
One of the factors behind low efficiency is the limited working hours of many public sector workers. From in-depth interviews with health workers, these are estimated to vary between 3 h for well-experienced specialist doctors and 8 h for a newly graduated midwife; on average, secondary-level cadres are likely to spend at least 6 h working at their public posts.
The revenue collected from the demand-side financial schemes is reportedly put into the pooled income of the facility, and 60 % of this income is then distributed among the active staff. Health workers earn a supplementary income of around 100–150 USD per month, according to their level of educational attainment and the types of tasks they accomplish, in addition to their basic salary of about 65 USD and an annual increase of 20 % of basic salary from the government over the past 5 years [31]. However, most managers and health workers argued that income from public service remains lower than the expenditure they need to support their families. Therefore, the health managers felt that:
"… I cannot force my staff to work full time, although they receive a top up average of 100-150USD per month from recent incentive based payment schemes in addition to their salary. But it cannot off set what they need to spend to support their families. Therefore, I often open one eye and close another eye when my staff come late at work or leave a bit early for making living to support their families. My task then is to ensure the quality of worker when the staff present at their post…" (KCII1)
In the health sector, the health workers often need to do dual practice to make extra income to support their family. For example, renting a house costs around 100USD per month, depending on the type of house; this is already higher than the average salary of a public health worker. The opportunity for informal practice or taking part time jobs is identified by most health managers as an important, though illegal, strategy to motivate the health workers to remain in their posts. One of the managers estimated that dual practice has reportedly contributed about 40–50 % of staff revenues and most managers agreed that:
"… if we completely stop the informal practice, I am sure they will all have gone" (KCII2).
Low salaries reduce public sector input costs, but may encourage dual practice, which tends to reduce public sector (but not necessarily overall) outputs. Unfortunately, we lack trend data on overall public pay and dual practice over the period to be able to disentangle their effects on efficiency measures. However, it is important to bear in mind that most health workers set up their own private businesses (clinics), take part-time employment at private clinics, or provide private services to their clients to generate extra income to support their families. A recent study confirmed that over 50 % of the public health workers interviewed also worked in the private sector in 2012 [32]. Overall human resource productivity will therefore be higher than reported here, but with additional costs to consumers.
In the civil code of professional conduct, public health workers are required to work for 8 h a day and are not allowed to have dual employment. In practice, managers cannot enforce punctuality or stop private practice after office hours, and they often report a sympathetic management practice to retain key health personnel, especially those with technical competency. This management practice has had both positive and negative effects on the quality of public health service delivery [27]. On the positive side, it makes health care services available 24 hours a day by rotating stand-by staff, with urgent calls to specific health professionals when necessary. The negative side of this sympathetic management practice is the diversion of clients to private clinics or individual home care, which affects the efficiency scores of most public health care services captured by this study.
Reflections on data reliability
It is important to reflect on how accurate the staffing numbers are and whether staff are actually working their official hours, both of which affect the efficiency estimates. Triangulating with qualitative information suggests that fewer health workers actually remain working in post than officially recorded by the MoH. One of the in-depth interview participants working in a remote health centre confirmed in Stung Treng that:
…There are four staffs at this HC, but only one or two are active. I am working here, there is no replacement. For my unit, there are only 3 staff and one is active while others come and go… (STIDI2).
The OD and health centre managers report that they have often received lists of the names of newly recruited staff from the Ministry of Health or Provincial Health Department in response to requests put forward in the human resource plan. While the list of personnel is officially updated, some new health workers, such as doctors, secondary nurses or midwives, have never shown up at their assigned posts, or, if they do, they do not attend regularly. With interventions from high-ranking officials and other family connections, they then ask to be transferred to work elsewhere after completing the 1-year probation period. In these cases, the names of those staff can remain in the official records for 6–12 months before they are corrected.
Some experienced health workers have taken leave without pay from the public sector to work for NGO health-related programmes or private providers, and their names may reportedly remain in the official records. Such shortfalls can be addressed in areas with internal contracting and SOA status, but in others it is harder to fill these gaps, according to health manager key informants at all levels of the health system.
It should be noted too that health workers at the OD level also take on other activities which were not reflected in our efficiency analysis (such as village outreach visits), due to lack of data on these activities.
Our analysis suggests increased efficiency in most of the selected ODs over the period, but with substantial variations across the provinces. Some important limitations are however noted, including data shortages, which reduced the number of sites and the type of analysis which could be conducted. Quality of care indicators are absent, some information (such as staffing numbers) may not be fully accurate, and important information is not included, such as working hours and public pay. Also, while it is innovative to aggregate inputs and outputs for all facilities in a health district, the results must be interpreted with care: by aggregating, efficient facilities will be fused with less efficient ones. Thus, the fact that a health district is inefficient does not mean that it has no efficient facilities. The findings do however open up a series of questions, which qualitative information complements. For example, the difficult working and access conditions in Stung Treng suggest that a different set of interventions might be needed there to boost outputs, compared to other areas. Addressing efficiency requires an understanding of area factors, organisational level factors and individual supply-and demand-side factors, as well as interaction between public, private and informal markets. Some of these are more amenable to policy levers than others.
In Cambodia, this is the first study of OD productivity, and indeed it adds to a very limited body of evidence on health district efficiency (most studies which have been done in low and middle income settings have analysed the hospital as the unit of production), and on the impact of health sector reforms on productivity. A number of policy implications can be drawn from this study. For effective resource planning and monitoring purposes, the administrative records should be improved. It would also be interesting and useful to replicate the analysis using country-wide data sets. The results of this country-wide analysis could be used for resource planning and targeting of new initiatives. The finding of an association between recent policy interventions and improved productivity of public health services will also be of interest to other countries planning similar health sector reforms.
World Bank Data. GDP per Capita (current US$). http://data.worldbank.org/indicator/NY.GDP.PCAP.CD/ (05/5/2015).
World Bank Data. Health expenditure per Capita (current US$). http://data.worldbank.org/indicator/SH.XPD.PCAP?page=3/ (05/5/2015).
WHO. Human Resources for Health Country Profiles: Cambodia. 2014. http://www.wpro.who.int/hrh/documents/publications/wpr_hrh_country_profile_cambodia_upload_ver1.dpf. Accessed 7 July 2014.
National Institute of Statistics. Cambodia demographic and health survey 2000. Phnom Penh: NIS, Ministry of Planning; 2001.
Annear PL, Grundy J, Ir P, Jacobs B, Men C, Nachtnebel M, Oum S, Bobins A, Ros EC. The Kingdom of Cambodia health system review. World Health Organisation. 2015. http://www.wpro.who.int/asia_pacific_observatory/hits/series/cambodia_health_systems_review.pdf. Accessed 4 May 2015.
World Health Organisation and Minstry of Health. Health Service Delivery Profile. 2012. http://www.wpro.who.int/health_services/service_delivery_profile_cambodia.pdf. Accessed 06 Aug 2015.
Ministry of Health. Health Strategic Plan 2008–2015. Ministry of Health. 2008. http://apps.who.int/medicinedocs/documents/s18360en/s18360en. Accessed 10 Aug 2013.
Vong S, Newlands D, Raven J. Change process in contracting management in Cambodia: in depth interviews with health care managers and providers. Research for building pro-poor health systems during recovery from conflict (REBUILD) (forthcoming).
Bigdeli M, Annear PP. Barriers to access and the purchasing function of health equality funds: lessons from Cambodia. Bull World Health Organ. 2006;87:560–4.
Annear P. A comprehensive review of the literature on health equity funds in Cambodia 2001–2010 and annotation bibliography. Health Policy and Health Finance Knowledge Hub (Working Paper 9, November 2010. 2010. http://healthmarketinnovations.org/sites/default/files/A%20comprehensive%20review%20of%20the%20literature%20on.pdf. Accessed 10 Nov 2013.
Hardeman W, Damme W, Pelt MV, Por I, Kimvan H, Meessen B. Access to health care for all? User fee plus a health equity fund in Sotnikum, Cambodia. Health Policy Plan. 2004;19(1):22–32.
Dingle A, Jackson TP, Goodman C. A decade of improvements in equity of access to reproductive and maternal health services in Cambodia, 2000–2010. Int J Equity Health. 2013;12:51.
Bandeth S, Neath N, Nonglak P, Sothea S. Understanding rural health service in Cambodia—results of a discrete choice experiment. In: Jalilian H, Sen V, editors. Singapore: Institute of Southeast Studies; 2011. p. 202–44.
Kim K, Annear LP. The transition to semi-autonomous management of district health services in Cambodia: assessing purchasing arrangements, transition costs, and operational efficiencies of the special operating agencies. In: Jalilian H, Sen V, editors. Singapore: Institute of Southeast Studies-report 2011. pp. 45–73. 2011.
Witter S, Chirwa Y, Namakula J, Samai M, Sok S. Understanding health worker incentives in post-conflict settings: study protocol. ReBuild consortium. 2012. http://www.rebuildconsortium.com/media/1209/rebuild-research-protocol-summary-health-worker-incentives.pdf.
Parker BR, Tavares G. Evaluation of research in efficiency and productivity: a survey and analysis of the first 30 years of scholarly literature in DEA. Socio-Econ Plan Sci. 2008;42(3):151–7.
Arocena P, Garcia-Prado A. Accounting for quality in the measurement of hospital performance: evidence from Costa Rica. Health Econ. 2007;16(7):667–85.
Zere E, Mbeeli T, Shangula K, Mandlhate C, Mutirua K, Tjivambi B, Kapenambili W. Technical efficiency of district hospitals: Evidence from Namibia using data envelopment analysis. Cost Effect Resour Allocat. 2006; 4.
Kirigia JM, Emrouznejad A, Sambo IG. Measurement of technical efficiency of public hospitals in Kenya: using data envelopment analysis. J Med Syst. 2002;26(1):39–45.
Masiye F. Investigating health system performance: an application of data envelopment analysis to Zambian hospitals. BMC Health Serv Res. 2007;7:58.
Masiye F, Kirigia J, Emrouznejad A, Sambo L, Mounkaila A, Chimfwembe D, Okello D. Efficient management of health centres human resources in Zambia. J Med Syst. 2006;30(6):473–81.
Renner A, Kirigia J, Zere A, Barry S, Kirigia D, Kamara C, Muthuri L. Technical efficiency of peripheral health units in Pujehun district of Sierra Leone: a DEA application. BMC Health Serv Res. 2005;5:77.
Razzaq SA, Chaudhary A, Khan A. Efficiency analysis of basic health units: a comparison of developed and deprived regions in Azad Jammu and Kashmir. Iranian J Public Health. 2013;42(11):1223–31.
Jacobs R, Smith PC, Street A. Measuring efficiency in health care: analytic techniques and health policy. Cambridge: Cambridge University Press; 2006.
Coelli TJ. A guide to DEAP version 2.1.: a data envelopment analysis (computer) program, CEPA Working Paper No. 8/96, Centre for Efficiency and Productivity Analysis, University of New England, Australia. 1996.
So S, Witter S. Policies to attract and retain health workers in rural areas—health worker perceptions and responses in post-conflict Cambodia. Report for ReBUILD. 2015.
National Institute of Statistics. General population census of Cambodia 2008. Phnom Penh: NIS, Ministry of Planning; 2008.
National Institute of Statistics. Cambodia inter-censal population survey 2013. Phnom Penh: NIS, Ministry of Planning; 2013.
Chhun C, Kimsun T, Yu G, Ensor T, McPake B. The impact of health financing policies on household spending: evidence from Cambodia socio-economic survey 2004 and 2009. Report for ReBUILD (forthcoming).
World Bank. Public service pay in Cambodia: the challenges of salary reform. Policy Note. Washington, DC: 83087. 2013.
HR Inc Cambodia. Cambodia public health compensation and HR review: a review of Cambodia public health professionals earnings composition, their motivations and HR practices. Phnom Penh: World Bank and Ministry of Health; 2013.
TE led on quantitative analysis. SS led on data collection and analysis of qualitative data. SW led the overall study design and coordinated the drafting of the article. All authors contributed to the drafting. All authors read and approved the final manuscript.
This work was carried out as part of the ReBUILD consortium grant from the UK Department for International Development (http://www.rebuildconsortium.com). The views expressed here are those of the authors alone.
International Health Systems, Leeds Institute of Health Sciences, Leeds, UK
Tim Ensor
Cambodia Development and Research Institute, Phnom Penh, Cambodia
Sovannarith So
International Health Financing and Systems, IIHD, Queen Margaret University, Edinburgh, UK
Sophie Witter
Correspondence to Sophie Witter.
Additional file 1: Web annex S1. Efficiency scores by operational district.
Ensor, T., So, S. & Witter, S. Exploring the influence of context and policy on health district productivity in Cambodia. Cost Eff Resour Alloc 14, 1 (2016) doi:10.1186/s12962-016-0051-6
Data envelopment analysis
Health districts
Health sector reform
June 2014, 19(4): 1027-1045. doi: 10.3934/dcdsb.2014.19.1027
On the stochastic beam equation driven by a Non-Gaussian Lévy process
Hongjun Gao 1, and Fei Liang 2,
Jiangsu Provincial Key Laboratory for Numerical Simulation of Large Scale Complex Systems, School of Mathematical Science, Nanjing Normal University, Nanjing 210023, China
Department of Mathematics, Northwest University, Xi An 710069, China
Received June 2012 Revised November 2013 Published April 2014
A damped stochastic beam equation driven by a Non-Gaussian Lévy process is studied. Under appropriate conditions, the existence theorem for a unique global weak solution is given. Moreover, we also show the existence of a unique invariant measure associated with the transition semigroup under mild conditions.
Keywords: stochastic beam equation, Lévy process, global weak solution, transition semigroup, invariant measure.
Mathematics Subject Classification: Primary: 35L05, 35L70; Secondary: 60H15, 36R6.
Citation: Hongjun Gao, Fei Liang. On the stochastic beam equation driven by a Non-Gaussian Lévy process. Discrete & Continuous Dynamical Systems - B, 2014, 19 (4) : 1027-1045. doi: 10.3934/dcdsb.2014.19.1027
Progress in Earth and Planetary Science
Future projection of greenhouse gas emissions due to permafrost degradation using a simple numerical scheme with a global land surface model
Tokuta Yokohata ORCID: orcid.org/0000-0001-7346-79881,
Kazuyuki Saito2,
Akihiko Ito1,
Hiroshi Ohno3,
Katsumasa Tanaka1,4,
Tomohiro Hajima2 &
Go Iwahana5
Progress in Earth and Planetary Science volume 7, Article number: 56 (2020) Cite this article
The Yedoma layer, a permafrost layer containing a massive amount of underground ice in the Arctic regions, is reported to be rapidly thawing. In this study, we develop the Permafrost Degradation and Greenhouse gasses Emission Model (PDGEM), which describes the thawing of the Arctic permafrost including the Yedoma layer due to climate change and the greenhouse gas (GHG) emissions. The PDGEM includes the processes by which high-concentration GHGs (CO2 and CH4) contained in the pores of the Yedoma layer are released directly by dynamic degradation, as well as the processes by which GHGs are released by the decomposition of organic matter in the Yedoma layer and other permafrost. Our model simulations show that the total GHG emissions from permafrost degradation in the RCP8.5 scenario was estimated to be 31-63 PgC for CO2 and 1261-2821 TgCH4 for CH4 (68th percentile of the perturbed model simulations, corresponding to a global average surface air temperature change of 0.05–0.11 °C), and 14-28 PgC for CO2 and 618-1341 TgCH4 for CH4 (0.03–0.07 °C) in the RCP2.6 scenario. GHG emissions resulting from the dynamic degradation of the Yedoma layer were estimated to be less than 1% of the total emissions from the permafrost in both scenarios, possibly because of the small area ratio of the Yedoma layer. An advantage of PDGEM is that geographical distributions of GHG emissions can be estimated by combining a state-of-the-art land surface model featuring detailed physical processes with a GHG release model using a simple scheme, enabling us to consider a broad range of uncertainty regarding model parameters. In regions with large GHG emissions due to permafrost thawing, it may be possible to help reduce GHG emissions by taking measures such as restraining land development.
"Permafrost" is the name given to areas where the ground temperature has remained below 0 °C for more than 2 years (IPCC 2013). Virtually all soil contains the bodies of dead organisms (mainly plants) in the form of organic matter (Zimov et al. 2006a; Schuur et al. 2008; Brown 2013). When the soil is not frozen, the organic matter is decomposed by microorganisms and released from the surface to the atmosphere in the form of carbon dioxide or methane (Zimov et al. 2006b; Walter et al. 2007; Ciais et al. 2013). However, when the soil is frozen, the organic matter is trapped without being decomposed, as the activity of these microorganisms is suppressed (Brown 2013; Hugelius et al. 2013; Hugelius et al. 2014). It is estimated that permafrost contains roughly twice the amount of carbon as the air and approximately three times as much as land plants (Prentice et al. 2001; Ping et al. 2008; Tarnocai et al. 2009; Dlugokencky and Tans 2013). As the Earth's surface temperature rises due to climate change, the frozen soil in the polar region will thaw, thereby releasing in the form of greenhouse gases (GHGs) the organic substances contained in the frozen soil (Collins et al. 2013; Koven et al. 2013; Schuur et al. 2015). These GHGs will further accelerate global warming (Lenton 2012; Köhler et al. 2014; Schuur et al. 2015). Given the large amount of carbon contained in the permafrost, positive feedback from permafrost thawing is very likely to accelerate changes in the climate system (Schaefer et al. 2014; Koven et al. 2015; MacDougall et al. 2015; Schneider von Deimling et al. 2015; MacDougall and Knutti 2016; Steffen et al. 2018; Gasser et al. 2018; McGuire et al. 2018; Kawamiya et al. 2020).
Still, there is a great deal of uncertainty regarding the process of GHG emissions from permafrost thawing (Schaefer et al. 2014). This is partly due to the lack of observational knowledge of basic permafrost processes (Schuur et al. 2015). Although permafrost exists in various forms depending on its formation factors, what has been attracting attention in recent years is the thawing of the Yedoma layer, a permafrost layer containing a large mass of ground ice, mostly found in Alaska and Siberia (Strauss et al. 2013; Strauss et al. 2017). It has long been known that the Yedoma layer exists in permafrost zones (Brouchkov and Fukuda 2002; Schirrmeister et al. 2011; Kanevskiy et al. 2011), but it has only recently been noted that this huge underground layer is thawing rapidly (Vonk et al. 2012; Ulrich et al. 2014; Strauss et al. 2017). Analysis of satellite observations suggests that subsidence of the ground occurred at sites where tundra fires have caused the heat insulation effect of vegetation on the surface to disappear (Iwahana et al. 2016) and that frozen soil and ground ice are being degraded by erosion from rivers and ocean waves (Günther et al. 2013; Jones et al. 2011; Kanevskiy et al. 2016). Previous studies have reported that the ground ice and frozen soils in the Yedoma layer contain high concentrations of carbon dioxide, methane, and organic carbon (e.g., Saito et al. 2017; Strauss et al. 2017). To date, however, the impact on the climate system of the dynamic degradation of the Yedoma layer associated with ground subsidence has not been sufficiently evaluated, partly due to the difficulty of modeling it in global climate models (Schneider von Deimling et al. 2015).
In this study, we developed a simple scheme to describe the thawing process of the Yedoma layer accompanied by vertical mechanical collapse due to ground subsidence (hereinafter called "dynamic degradation") based on in-situ observations conducted in Alaska and Siberia. Using this model, we estimate the GHG emissions due to the future degradation of the Yedoma layers. We consider two pathways for GHG emissions due to permafrost degradation: the process of releasing GHGs (CO2 and CH4) trapped in the frozen soil (referred to as "direct emissions") and the process of releasing GHG emissions produced by the decomposition of organic matter contained in the frozen soil ("secondary emissions") caused by the thawing of the permafrost. In addition to the dynamic degradation of the Yedoma layers, we also estimate the GHG emissions due to the thermodynamic degradation of the permafrost owing to the increase in ground temperature. Finally, in the course of our study, we estimate the global mean temperature response caused by the GHG emissions due to permafrost degradations using the simple climate model ACC2 (Tanaka and O'Neill 2018).
Permafrost Degradation and Greenhouse gasses Emission Model (PDGEM) evaluates the GHG emissions due to the degradation of the permafrost layer. PDGEM describes the processes of dynamic (Section 2.1.1) and thermodynamic (Section 2.1.2) permafrost degradation with a simple formulation and calculates the GHG emissions globally with a resolution of 1 degree. The parameters used in the formulation are varied (Table 1) in order to describe the future possible behavior of permafrost degradations. Details of the model formulation and experimental settings for the future projections are summarized in the sections that follow.
Table 1 Model parameters for the calculation of GHG emissions due to the dynamic and thermodynamic permafrost degradations
Description of Permafrost Degradation and GHG Emission Model
Dynamic degradation of the Yedoma Layer
FDy [kg year−1], the GHG emissions due to the dynamic degradation of the Yedoma layer, is defined as
$$ {F}_{Dy}={F}_{Dy, dir}+{F}_{Dy,\sec }. $$
FDy, dir: GHG emissions due to the release of gases trapped in the frozen soil [kg year−1]
FDy, sec: GHG emissions due to the decomposition of organic matter [kg year−1]
The first term in Eq. (1) corresponds to direct emissions, while the second term represents secondary emissions due to dynamic degradation. The direct emissions are formulated as follows:
$$ {F}_{Dy, dir}=\Delta {V}_{Dy}\times {X}_{\mathrm{GHG}}. $$
∆VDy: Volume of thawed permafrost due to dynamic degradation [m3 year−1]
XGHG: GHG mass in thawed permafrost [kg m−3]
Observational studies have measured the settling velocity of the ground surface due to permafrost thawing in the area where fire has occurred (e.g., Iwahana et al. 2016). In this study, the volume of dynamic permafrost thawing is formulated based on this observational knowledge as follows:
$$ \Delta {V}_{Dy}={P}_{dstrb}\times {A}_{ydm}\times {V}_{dstrb}. $$
Pdstrb: Probability of occurrence of fire
Aydm: Area of Yedoma layer in a 1-degree grid cell [m2]
Vdstrb: Settling velocity of the ground due to permafrost thawing [m year−1]
Equation (3) describes the processes of permafrost thawing with land subsidence owing to the occurrence of fire. We determine the fire area in the Yedoma layer with the first and second terms (Pdstrb × Aydm) in Eq. (3). The probability of fire, Pdstrb, is given as a function of meteorological data based on the observed relationship between past occurrences of fires and meteorological conditions. Veraverbeke et al. 2017 showed high correlations between fire occurrence and temperature, total precipitation, and convective precipitation in the Northwest Territories (NT) and Alaska (AK) from 2001 to 2015. In this study, the future fire area ratio, Pdstrb, is estimated using future meteorological data and the relationship shown below:
$$ {P}_{dstrb}=a+b\times {T}_{air}+c\times {P}_{total}+d\times {P}_{conv} $$
where Tair is surface air temperature [K], Ptotal is total precipitation [kg/m2/s], and Pconv is convective precipitation [kg/m2/s], and the coefficients are a = − 0.495, b = 0.00179, c = − 343.6, d = 204.4. The coefficients in Eq. (4) are obtained using multiple regression of the fire area ratios for 2001-2015 in NT and AK, from Veraverbeke et al. 2017, and NCEP reanalysis data (Kalnay et al. 1996) for the same regions. To estimate the future fire area ratio, the bias of the global climate models (GCMs, details of which are explained later) is corrected with NCEP reanalysis data (Kalnay et al. 1996) by subtracting the climatological error (the difference between model results and the reanalysis data using 1980–2000 average). As a result of this bias correction, the estimated fire area ratio based on Eq. (4) is consistent with past observations (Veraverbeke et al. 2017). Given that Veraverbeke et al. 2017 found correlation based on the NT and AK regions, we estimate the future fire area ratio by averaging the climate model data at 10-degree resolution. We also confirmed that the difference between the estimated value of the fire area obtained by Eq. (4) and the observed value (Veraverbeke et al. 2017) has a normal distribution (not shown, with standard deviation = 0.00229). Considering that fires generally occur stochastically, a normal distribution with the above standard deviation, corresponding to the difference between the estimated and observed fire area ratio, was used to randomly assign values to each 1-degree grid.
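As a concrete illustration, the following Python sketch evaluates the fire area ratio of Eq. (4) from bias-corrected, 10-degree-averaged forcing and adds the normally distributed grid-level scatter described above. The regression coefficients and the standard deviation of 0.00229 are taken directly from the text; the clipping to the interval [0, 1] and the sample forcing values in the last line are illustrative assumptions rather than part of PDGEM itself.

```python
import numpy as np

# Coefficients of Eq. (4), fitted to 2001-2015 fire and NCEP reanalysis data
A, B, C, D = -0.495, 0.00179, -343.6, 204.4
SIGMA_FIRE = 0.00229  # standard deviation of (estimated - observed) fire area ratio

def fire_probability(t_air, p_total, p_conv, rng):
    """Fire area ratio Pdstrb per Eq. (4).

    t_air   : surface air temperature [K]
    p_total : total precipitation [kg m-2 s-1]
    p_conv  : convective precipitation [kg m-2 s-1]
    rng     : numpy random Generator for the stochastic per-grid-cell term
    """
    p = A + B * np.asarray(t_air) + C * np.asarray(p_total) + D * np.asarray(p_conv)
    p = p + rng.normal(0.0, SIGMA_FIRE, size=np.shape(p))  # random scatter per 1-degree cell
    return np.clip(p, 0.0, 1.0)  # a fire area ratio cannot be negative or exceed one

rng = np.random.default_rng(seed=0)
p_fire = fire_probability(t_air=285.0, p_total=2.0e-5, p_conv=5.0e-6, rng=rng)
```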
With respect to the area of the Yedoma layer, Aydm, we use the results of Saito et al. (2020) regarding the behavior of soil moisture and organic carbon from the last interglacial period (approximately 120,000 years ago) to the present with 20 km resolution. Since the Yedoma layer is considered to be a region where soil frozen water and soil organic carbon are particularly concentrated (e.g., Strauss et al. 2017), in this study, we defined the Yedoma layer by using a threshold value for soil frozen water and soil organic carbon as calculated in Saito et al. 2020. We based our threshold value on the "vulnerability" measure defined in Saito et al. 2020 as (ICE/max(ICE) × SOC/max(SOC)), where ICE and SOC are soil frozen water and soil organic carbon, respectively, and "max" denotes the maximum value across the spatial dimension. According to Strauss et al. 2017, the soil organic carbon in the Yedoma layer is estimated to be 83–129 GtC. In this study, the threshold of vulnerability was chosen so that the soil organic matter of the Yedoma layer falls within the range of Strauss et al. 2017 (Table 1).
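A minimal sketch of how such a Yedoma mask could be constructed from the Saito et al. (2020) fields is shown below; the threshold value is not stated in the text and is treated here as a tunable assumption, to be chosen so that the resulting Yedoma soil organic carbon total falls within the 83–129 GtC range.

```python
import numpy as np

def yedoma_mask(ice, soc, threshold):
    """Grid cells treated as Yedoma: (ICE/max(ICE)) * (SOC/max(SOC)) above a threshold.

    ice, soc  : arrays of soil frozen water and soil organic carbon (Saito et al. 2020)
    threshold : tuned so that the Yedoma soil organic carbon total is 83-129 GtC
    """
    vulnerability = (ice / np.nanmax(ice)) * (soc / np.nanmax(soc))
    return vulnerability > threshold
```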
The settling velocity, Vdstrb, in Eq. (3) is defined based on observational studies. Table 1 of Iwahana et al. 2016 synthesized the annual ground subsidence rates at various fire-burnt sites. In this study, the average value from Iwahana et al. 2016 is used; the range of the subsidence velocity over the fire-burnt region is 2.4 ± 2.1 cm/year, as shown in Table 1.
The GHG concentration, XGHG, in Eq. (2) can be expressed by the following equation:
$$ {X}_{GHG}={R}_{pore}\times {C}_{GHG}\times {\rho}_{GHG}. $$
Rpore: Volume fraction of bubbles in the permafrost [ratio]
CGHG: GHG concentration in the permafrost pores [ratio]
ρGHG: Mass density of GHG [kg m−3]
In this study, we consider CO2 and CH4 as the GHG emissions and use data obtained by field observation in the Yedoma layer in Alaska and Siberia (Saito et al. 2017) to set the values of Rpore and CGHG. As reported in Saito et al. 2017, the Rpore and CGHG values obtained by field observation have very large variation. Table 2 shows the standard deviation of the observed values in Saito et al. 2017. In calculating the dynamic degradation, the average value for the ground ice and frozen soil is used for the calculation of XGHG. It is reported that the ice content in the Yedoma layer (rice) is approximately 0.64 (Strauss et al. 2017). Accordingly, the ratios of ground ice and frozen soil (rice and 1 − rice, respectively) are used as multipliers for Rpore and CGHG. Field observations revealed that the layer with high GHG concentration (Table 2) was above (approximately) 5 m in the soil column and that the lower layer had very low GHG concentration. In this study, therefore, we assume that the GHG concentration, as shown in Table 2, is zero below 5 m.
Table 2 Volume fraction of air bubbles in the ground ice and frozen soil (Rpore) and the concentration of GHGs in the air bubbles (CGHG)
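The direct-emission pathway of Eqs. (2) and (3), combined with the relation for XGHG above, then reduces to a few multiplications per grid cell and per gas, as the following sketch for CH4 illustrates. Only the ice content of 0.64 and the 2.4 cm/year subsidence rate are taken from the text; the pore fractions, pore gas concentrations, and cell area in the usage example are illustrative placeholders rather than the observed values of Table 2.

```python
def direct_emission(p_fire, a_yedoma, v_subsidence, r_pore, c_ghg, rho_ghg, r_ice):
    """F_Dy,dir [kg/year] for one grid cell and one gas (CO2 or CH4)."""
    dv_dy = p_fire * a_yedoma * v_subsidence                        # Eq. (3) [m3/year]
    # pore fraction and gas concentration weighted by the ice/soil ratio (r_ice, 1 - r_ice)
    r_pore_eff = r_ice * r_pore["ice"] + (1.0 - r_ice) * r_pore["soil"]
    c_ghg_eff = r_ice * c_ghg["ice"] + (1.0 - r_ice) * c_ghg["soil"]
    x_ghg = r_pore_eff * c_ghg_eff * rho_ghg                        # gas mass per m3 of thawed ground
    return dv_dy * x_ghg                                            # Eq. (2)

f_dir_ch4 = direct_emission(
    p_fire=0.01,                         # 1% of the cell burnt in a given year
    a_yedoma=5.0e8,                      # Yedoma area in the cell [m2] (illustrative)
    v_subsidence=0.024,                  # 2.4 cm/year subsidence (Iwahana et al. 2016)
    r_pore={"ice": 0.03, "soil": 0.05},  # illustrative pore volume fractions
    c_ghg={"ice": 0.01, "soil": 0.005},  # illustrative CH4 fractions in the pore gas
    rho_ghg=0.716,                       # CH4 density near 0 degC and 1 atm [kg/m3]
    r_ice=0.64)                          # ice content of the Yedoma layer (Strauss et al. 2017)
```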
In order to estimate the GHG emissions associated with the decomposition of soil organic carbon due to the dynamic degradation of permafrost (FDy, sec in Eq.1), this study considers four types of decomposition, following Schneider von Deimling et al. 2015 and Gasser et al. 2018. Specifically, we differentiate decomposition types based on two types of organic matter quality (fast or slow) and two types of soil moisture conditions (aerobic or anaerobic). The following equations for the decomposition of thawed permafrost carbon are solved with a global resolution of 1 degree:
$$ \frac{d{C}_{\mathrm{thaw}}^{i,j}}{dt}={\pi}^{i,j}{F}_{\mathrm{thaw}}-\frac{R^j}{\tau^i}{C}_{\mathrm{thaw}}^{i,j} $$
i: index for the quality of soil organic matter (fast or slow decomposition)
j: index for the soil moisture state (aerobic and anaerobic decomposition)
\( {C}_{\mathrm{thaw}}^{i,j} \): soil organic carbon content in the thawed permafrost [kg]
Fthaw: flux of soil organic carbon due to permafrost thawing [kg/year]
πi, j: fraction of flux for the corresponding types
τi: turnover time of soil organic carbon [year]
Rj: changes in soil organic carbon decomposition rate due to temperature rise
The model parameter, πi, j, i.e., the fraction of thawed soil organic carbon, depends on the quality of organic matter (i = 1: fast, i = 2: slow decomposition) and soil water content (j = 1: aerobic, j = 2, anaerobic). The quality of organic matter is an important determinant for the timescale of the carbon release (Strauss et al. 2015). We subdivide the thawed permafrost carbon into a fast and slow decomposing fraction with annual and decadal timescales (τi) based on the literature of soil organic quality, as shown in Table 1 (Sitch et al. 2003; Dutta et al. 2006; Koven et al. 2011; Burke et al. 2012; Schädel et al. 2014).
The soil water content is also a key determinant in the decomposition of soil organic carbon. In this study, the fraction of thawed permafrost carbon under the aerobic or anaerobic condition is determined by the wetland fraction, rwtlnd, obtained from the Global Lakes and Wetland Database (Lehner and Döll 2004). The original wetland fraction map is interpolated into 1-degree grid cells. The fraction of soil organic carbon for aerobic decomposition is 1 − rwtlnd, while that for anaerobic decomposition is rwtlnd in each grid cell. In the future simulations, extensions of the wetland area are represented as a function of surface air temperature rise, with reference to Schneider von Deimling et al. 2015. Specifically, we describe the increase in rwtlnd by linear scaling with the surface air temperature anomaly, ∆ Ta (the anomaly is calculated as the difference from the first 20-years average). The wetland fraction reaches its maximum extent (∆rwtlnd, max) for a warming ∆ Ta of 10 K. For further warming, the wetland fraction is kept constant at the maximum extent. The uncertainty range of ∆rwtlnd, max is shown in Table 1.
The flux of soil organic carbon due to permafrost thawing is formulated as
$$ {F}_{\mathrm{thaw}}=\Delta {V}_{Dy}\times {\rho}_{SOC} $$
Here, ρSOC is the density of soil organic carbon, calculated as \( {\rho}_{SOC}=\frac{\sigma_{SOC}}{d_{SOC}} \), where σSOC is the soil organic carbon from Saito et al. 2020, and dSOC is the depth of the soil organic carbon. dSOC is a model parameter in the range shown in Table 1. The changes in soil organic carbon decomposition rate due to temperature change are formulated with reference to Schneider von Deimling et al. 2015 as follows:
$$ {R}^j=Q{10}^{j\ \left({T}_g-10\right)/10} $$
Q10j: temperature sensitivity parameter
Tg: soil temperature [°C]
Q10j is the temperature sensitivity of carbon decomposition, reflecting the increase in microbial soil activity that accompanies rising soil temperature. The Q10j parameter depends on the aerobic or anaerobic conditions; the parameter ranges are given based on the literature, as shown in Table 1 (Walter and Heimann 2000; Schädel et al. 2013; Schneider von Deimling et al. 2015). For Tg, we use monthly mean soil temperature (averaged over the top 4 m), calculated by land-surface model simulations with 1-degree resolution (Yokohata et al. 2020a, 2020b). The details of this are explained in Section 2.2.
The GHG emissions due to the decomposition of soil organic carbon, FDy, sec, can be calculated by solving \( {C}_{\mathrm{thaw}}^{i,j} \) in Eq. (5) as follows:
$$ {F}_{Dy,\sec }={\sum}_{i,j}\left(\frac{d{C}_{\mathrm{thaw}}^{i,j}}{dt}-{\pi}^{i,j}{F}_{\mathrm{thaw}}\right)\times {r}_{\mathrm{gas}}^{i,j}\left(1-{oxd}^j\right) $$
\( {r}_{gas}^{i,j} \): production ratio of GHG (CO2 or CH4) due to soil organic matter decomposition
oxdj: oxidation rate of CH4
The production rate of CO2 and CH4, \( {r}_{\mathrm{gas}}^{i,j} \), is dependent on the soil organic quality and aerobic or anaerobic conditions. The ranges of the parameter values for \( {r}_{\mathrm{gas}}^{i,j} \) are determined based on incubation studies under various conditions (Segers 1998; Schuur et al. 2008; Lee et al. 2012; Walter Anthony et al. 2014; Schneider von Deimling et al. 2015). The oxdj term corresponds to the fraction of released carbon that is oxidized (thus, oxdj = 0 for CO2), the range of which is determined from the literature (Burke et al. 2012; Schneider von Deimling et al. 2015).
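Taken together, Eqs. (5)–(8) amount to simple bookkeeping over the four pools (fast/slow organic matter quality × aerobic/anaerobic conditions), which can be integrated with an explicit yearly step as sketched below. The turnover times, Q10 values, CH4 production ratios, oxidation fraction, and the even split of freshly thawed carbon between the fast and slow pools are placeholder assumptions standing in for the calibrated ranges of Table 1; the released carbon is taken as the decomposed amount (Rj/τi)·Cthaw of Eq. (5), partitioned into CO2 and CH4.

```python
POOLS = [("fast", "aerobic"), ("fast", "anaerobic"),
         ("slow", "aerobic"), ("slow", "anaerobic")]
TAU = {"fast": 1.0, "slow": 20.0}            # turnover times tau_i [years] (placeholder)
Q10 = {"aerobic": 2.5, "anaerobic": 3.0}     # temperature sensitivities (placeholder)
R_CH4 = {"aerobic": 0.02, "anaerobic": 0.5}  # CH4 fraction of decomposed carbon (placeholder)
OXD_CH4 = 0.3                                # fraction of produced CH4 oxidized (placeholder)
FAST_FRACTION = 0.5                          # assumed split of thawed carbon into the fast pool

def step_pools(c_thaw, f_thaw, t_soil, r_wetland, dt=1.0):
    """Advance the four thawed-carbon pools by one year (Eqs. 5 and 7) and
    return the CO2 and CH4 released during the step (Eq. 8); units follow f_thaw."""
    co2, ch4 = 0.0, 0.0
    for quality, moisture in POOLS:
        # pi_ij: fraction of freshly thawed carbon entering this pool
        pi = FAST_FRACTION if quality == "fast" else 1.0 - FAST_FRACTION
        pi *= r_wetland if moisture == "anaerobic" else 1.0 - r_wetland
        # R^j / tau_i with the Q10 scaling of Eq. (7)
        rate = Q10[moisture] ** ((t_soil - 10.0) / 10.0) / TAU[quality]
        decomposed = rate * c_thaw[(quality, moisture)] * dt
        c_thaw[(quality, moisture)] += pi * f_thaw * dt - decomposed  # Eq. (5)
        ch4 += decomposed * R_CH4[moisture] * (1.0 - OXD_CH4)         # Eq. (8), CH4
        co2 += decomposed * (1.0 - R_CH4[moisture])                   # Eq. (8), CO2 (oxd = 0)
    return co2, ch4

pools = {key: 0.0 for key in POOLS}
for year in range(10):  # ten years of a constant thaw flux of 1e6 kgC/year (illustrative)
    co2_out, ch4_out = step_pools(pools, f_thaw=1.0e6, t_soil=2.0, r_wetland=0.2)
```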
Thermodynamic degradation of the permafrost layer
In this section, the thermodynamic degradation of the permafrost (i.e., the thickening of the active layer) due to the rise in soil temperature under future climate change is formulated. In the formulation for the dynamic degradation of the Yedoma layer (Eq. 1), direct emissions are considered because of the presence of high-concentration GHGs (Table 2). In the thermodynamic degradation, however, direct emissions are not considered, since high-concentration GHGs are not expected to be present in the permafrost outside the Yedoma layer. For the thermodynamic degradation, Eqs. (5)–(8) are again used for the formulation of the secondary GHG release. To establish the flux of soil organic carbon due to the permafrost thawing associated with thermodynamic degradation, Eq. (6) is replaced by
$$ {F}_{\mathrm{thaw}}=\Delta {V}_{Th}\times {\rho}_{SOC} $$
∆VTh: Volume of thawed permafrost due to thermodynamic degradation [m3 year−1]
Here, we use the same ρSOC as described in Eq. (6). For the volume of thawed permafrost due to thermodynamic degradation, numerical simulations using a global land-surface model (Yokohata et al. 2020a, 2020b) are used. The formulation for ∆VTh [m3 year−1] in the tth year is as follows:
$$ \Delta {V}_{Th}=\left[ ALT(t)-\operatorname{MAX}\left( ALT\left({t}_0\right),\kern0.5em {t}_0=0,\dots t-1\right)\right]\times {A}_{grid}. $$
Here, ALT(t) is the active layer thickness [m] (ALT, the annual maximum thaw depth) in the tth year. The active layer is defined as the region where the ground temperature exceeds 0 °C in summer seasons. Eq. (10) is formulated in order to avoid counting the thawed region multiple times due to the annual variability of ALT. Agrid in Eq. (10) is the grid area of the global climate model used for the simulation (1-degree latitude and longitude) as described in the next section. If Eq. (10) produces a negative value, ∆VTh is set to zero.
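The running-maximum bookkeeping of Eq. (10) can be stated compactly as in the sketch below; the short ALT series and grid area in the usage line are illustrative values only.

```python
import numpy as np

def thermodynamic_thaw_volume(alt_series, grid_area):
    """Yearly thawed volume per Eq. (10): only deepening beyond the historical maximum
    active layer thickness counts, so interannual ALT variability is not double-counted.

    alt_series : annual maximum thaw depth ALT(t) [m], one value per year
    grid_area  : grid-cell area A_grid [m2]
    returns    : array of Delta V_Th(t) [m3/year], zero in the first year
    """
    alt_series = np.asarray(alt_series, dtype=float)
    dv = np.zeros_like(alt_series)
    for t in range(1, len(alt_series)):
        deepening = alt_series[t] - np.max(alt_series[:t])
        dv[t] = max(deepening, 0.0) * grid_area  # negative values are set to zero
    return dv

dv_th = thermodynamic_thaw_volume([0.8, 0.9, 0.85, 1.1], grid_area=1.0e10)
# -> [0, 1e9, 0, 2e9] m3/year: only new record thaw depths contribute
```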
Experimental setting
Table 1 shows the standard values and uncertainty ranges for all parameters given in this study, as explained in the previous sections. Each parameter was randomly selected from the uniform distribution with the uncertainty range shown in Table 1. In all, 500 simulations were performed using the randomly selected parameters.
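A minimal sketch of this perturbed-parameter sampling is given below. The parameter names and ranges listed are placeholders standing in for the full set of Table 1 entries (only the 2.4 ± 2.1 cm/year subsidence range is taken from the text), and the random seed is arbitrary.

```python
import numpy as np

PARAM_RANGES = {
    "v_subsidence": (0.003, 0.045),  # settling velocity [m/year], i.e. 2.4 +/- 2.1 cm/year
    "q10_aerobic": (1.5, 3.5),       # placeholder range
    "q10_anaerobic": (1.5, 6.0),     # placeholder range
    "d_soc": (1.0, 3.0),             # depth of soil organic carbon [m], placeholder range
}

def sample_parameters(n_members, rng):
    """Return n_members parameter sets, each drawn uniformly from its range."""
    return [
        {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}
        for _ in range(n_members)
    ]

rng = np.random.default_rng(42)
ensemble = sample_parameters(500, rng)  # 500 perturbed-parameter members, one PDGEM run each
```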
In addition to the parameters shown in Table 1, one of the physical variables used in the Permafrost Degradation and Greenhouse gasses Emission Model (PDGEM) is the soil temperature, Tg, which is used for changes in the soil decomposition rate (Eq. 7) and the volume of the thermodynamic permafrost thawing (Eq. 9). Tg is calculated by the global land-surface climate model MIROC-INTEG-LAND (MIROC INTEGrated LAND surface model, Yokohata et al. 2020a, 2020b), which is based on the land surface components of the global climate model MIROC (Model for Interdisciplinary Research on Climate; Watanabe et al. 2010; Takata et al. 2003). The results of multi-GCM simulations provided by the Inter-Sectoral Impact Model Inter-comparison Project phase 1 (ISIMIP1, Hempel et al. 2013) were used as the atmospheric forcings to drive this land surface model. Using atmospheric forcings generated by the five GCMs (GFDL-ESM2M, Dunne et al. 2012; HadGEM2-ES, Jones et al. 2011; IPSL-CM5A-LR, Dufresne et al. 2013; NorESM, Bentsen et al. 2013; MIROC-ESM-CHEM, Watanabe et al. 2011) of ISIMIP1, we performed historical simulations (1951–2005) and future simulations (2006–2100) based on representative concentration pathways (RCP, van Vuuren et al. 2011) RCP2.6 and RCP8.5. The resolution of the land surface model was 1 degree (Nitta et al. 2014). In this study, a model version with improved permafrost processes (Yokohata et al. 2020b) was used.
Another physical variable given to the PDGEM model was the future temperature change, ∆Ta, which is used for the future extent of the wetland area. We use the future projections of the five ISIMIP1 GCMs (Hempel et al. 2013) noted above, under the RCP2.6 and RCP8.5 scenarios for ∆Ta.
The GHG emissions due to the dynamic and thermodynamic permafrost thawing are calculated with the model parameters shown in Table 1. The GHG emissions are then integrated globally and given to a simplified climate model, ACC2 (Tanaka and O'Neill 2018), which calculates the global mean surface air temperature response to GHG emissions. By calculating the global mean surface air temperature response with and without the permafrost GHG emissions under RCP 8.5, the impact of permafrost thawing on the climate system can be examined.
Figure 1 shows the area ratio of the Yedoma layer and the distribution of soil organic carbon used to calculate the dynamic degradation. As described in the previous section, this study defines the Yedoma layer as permafrost having a particular abundance of soil organic carbon and soil frozen water, based on data from Saito et al. (2020). The total soil organic carbon in the Yedoma layer, as shown in Fig. 1, is consistent with the estimates of Strauss et al. 2017 (106 GtC, the middle of uncertainty range 83-129 GtC). Soil organic carbon in the Arctic has accumulated in cold and humid environments where soil degradation is slow. It is distributed in eastern Siberia and Alaska, found mostly in coastal areas and near river basins (Saito et al. 2020). These areas are characterized by extremely low temperatures (Yokohata et al. 2020b).
(Left) Yedoma area used in the model simulation (unit = ratio to grid area); (right) soil organic carbon used in the model simulation (unit = kg/m2)
Table 3 shows the cumulative emissions of CO2 and CH4 due to the dynamic and thermodynamic degradation of the permafrost in the RCP8.5 scenario. Before conducting our future experiments, we confirmed that the average value of CH4 emission (3.9 TgCH4) for the 5-year period from the start year (2006) of the calculation is close to the estimate of present CH4 emission (~ 4 TgCH4, Walter Anthony et al. 2016; ~ 1 TgCH4, Saunois et al. 2020). As indicated in Table 3, the CO2 and CH4 releases due to the dynamic degradation (direct plus secondary emissions) of the Yedoma layer are approximately 0.1 PgC and 5 TgCH4, respectively. In each case, this is less than 1% of the total release due to dynamic and thermodynamic degradation (47 PgC and 2067 TgCH4, respectively). Comparing the direct release of GHGs trapped in the ground ice and frozen soil and the secondary release of GHG due to the decomposition of soil organic carbon, the latter is an order of magnitude larger than the former (Table 3). Even though very high concentrations of CO2 and CH4 are contained in the ground ice and frozen soil of the Yedoma layer (Saito et al. 2017), their impact on the climate is quite small when they are released into the atmosphere by the degradation of the permafrost. In the present formulation and over the study period (up to 2100), the dynamic degradation of the Yedoma layer does not significantly affect the carbon cycle feedback.
Table 3 Future predictions of GHG emissions from permafrost degradation in the RCP8.5 scenario
As shown in Table 3, the cumulative CO2 and CH4 emissions (the emissions due to dynamic and thermodynamic degradation) in the RCP8.5 scenario estimated in the present study are 47 PgC (31–63 PgC, 68% range) and 2067 (1261–2821) TgCH4, respectively. For comparison, Table 3 also shows the amount of GHG gas emissions estimated in various previous studies. As can be seen in the table, these estimated emissions cover a wide range. Notably, the GHG emissions for the RCP8.5 scenario estimated in the present study are within the indicated range of uncertainty. As shown in the table, the aggregated carbon content of CO2 plus CH4 emissions due to permafrost degradation in the present study is 48 (32–66) PgC, and the increase in surface air temperature due to permafrost degradation is 0.08 (0.05–0.11) °C. Other studies (e.g., Schaefer et al. 2014; Schneider von Deimling et al. 2015; Koven et al. 2015; Gasser et al. 2018) have reported similar values. One multi-model study featuring state-of-the-art process models reported that in some of the models, atmospheric carbon may actually be absorbed due to permafrost degradation owing to the effect of potential plant growth after thawing (McGuire et al. 2018). The spread in estimated GHG emissions in McGuire et al. 2018 is larger than in other studies, ranging from a carbon sink of 41 PgC to a carbon source of 140 PgC at the end of the twenty-first century. On the other hand, the amount of CH4 released in the RCP8.5 scenario in the present study is larger than the 1474 TgCH4 reported by Schneider von Deimling et al. (2015).
Table 4 shows estimates of GHG emissions in the RCP2.6 scenario. Even here, the dynamic degradation of the Yedoma layer contributes less than 1% to total GHG emissions, and the direct release of dynamic degradation is an order of magnitude smaller than the secondary release. As in the RCP8.5 scenario, the cumulative emissions of CO2 and CH4 resulting from the combined effect of dynamic and thermodynamic degradation are similar to those in previous studies. In the present study, the combined carbon content of CO2 and CH4 emissions is 22 (15–29) PgC, which is similar to the total of 27 PgC reported by Gasser et al. (2018). On the other hand, the amount of released CH4 is 986 (618–1341) TgCH4, which is larger than the 446 TgCH4 estimated in Schneider von Deimling et al. (2015). The increase in surface air temperature due to permafrost degradation is 0.05 (0.03–0.07) °C, which is similar to the 0.06 (0.03–0.10) °C estimated in Schneider von Deimling et al. (2015).
Table 4 Same as Table 3, but for the RCP2.6 scenario
Figure 2 shows the cumulative GHG release due to dynamic permafrost degradation. In the formulation of dynamic degradation, GHG emissions depend on the probability of fire (Pdstrb in Eq. 4) and the subsidence velocity of the land surface (Vdstrb), both of which are based on present observations (Section 2.1.1). Since we use the same Vdstrb for the RCP8.5 and RCP2.6 scenarios, the difference between the scenarios in Fig. 2 can be attributed to the difference in Pdstrb. In our study, the probability of fire increases mainly due to temperature rise, as shown in Fig. 3, since Pdstrb is estimated as a function of meteorological data (Eq. 4) based on the relationship established from historical data (Veraverbeke et al. 2017).
Cumulative GHG flux due to dynamic degradation of the Yedoma layer for CO2 (left, unit = PgC) and CH4 (right, unit = TgCH4). The simulations under the RCP2.6 (blue) and RCP8.5 (red) scenarios are shown. The width of the colored area represents the 68 percentiles of the simulated results forced by five GCMs, each of which involved 300 simulations with different model parameters sampled from the uncertainty ranges shown in Table 1. The average value of the model simulations is represented by the bold blue (RCP2.6) and red (RCP8.5) lines
Time sequence for the probability of fire in the RCP8.5 (red) and RCP2.6 (blue) scenarios. Unit is the ratio (%) to the total land area above 50° N. The width of the colored area represents the 68 percentiles of the 300 simulations with different model parameters, as explained in Section 2.1.1. The average value of the model simulations is represented by the bold blue (RCP2.6) and red (RCP8.5) lines
Figure 4 shows the results of the cumulative release of GHG from the combination of dynamic and thermodynamic degradation. As described above, since the contribution of dynamic degradation of the Yedoma layer is less than 1% of the total, the cumulative emission is essentially determined by thermodynamic degradation (Section 2.1.2). This thermodynamic degradation is obtained by solving the equation of secondary release shown in Eq. (8), based on Eqs. (9)–(10). Here, the change in active layer thickness (ALT) simulated by the global land surface model (Yokohata et al. 2020a, 2020b) is used for the calculation of permafrost degradation. As shown in Fig. 4, the cumulative release of CO2 from permafrost degradation increases almost linearly in RCP2.6, but the rate of increase rises in RCP8.5 in the latter half of the twenty-first century. This is due to the fact that the permafrost area rapidly decreases in RCP8.5 in the latter half of the century in these simulations (the details of the land surface model simulation results are provided in Yokohata et al. 2020b).
Same as Fig. 2 but for the cumulative GHG flux due to the total (dynamic plus thermodynamic) degradation of permafrost for CO2 and CH4, respectively
Figure 5 shows the CO2 and CH4 emissions at the end of the twenty-first century in the RCP8.5 scenario. We found that CO2 emissions are more widespread compared to the confined emissions of CH4. This is related to the fact that CH4 emissions can be larger in a wetland region, and the regions with a high wetland ratio are limited. The important factors that determine thermodynamic degradation are changes in the active layer thickness (Eq. 10) and the rise of soil temperature (Eq. 7). In order to interpret the results in Fig. 5, the changes in the active layer thickness, permafrost area, and wetland fraction are shown in Fig. 6. As indicated in the figure, the changes in active layer thickness are large in western and eastern Siberia, and in the North America coastal regions of the Arctic Ocean. This distribution roughly corresponds to that of CO2 emissions (Fig. 5). In western and eastern Siberia, and the northern part of North America, the amount of CH4 emission is large in regions with a large wetland fraction (Fig. 6).
The cumulative CO2 flux [TgC] (bottom, left) and CH4 flux [TgCH4] (bottom, right) due to the permafrost degradation (dynamic + thermodynamic) at the end of the twenty-first century in the RCP8.5 scenario. The average of all ensemble members is shown
Total change in active layer thickness [m] (top left), decrease in the permafrost area (top right), the wetland fraction (bottom) at the end of the twenty-first century in the RCP8.5 scenario. The average of all ensemble members is shown
Figure 6 also shows the changes in the permafrost area, which corresponds to the region where the ground temperature remains below 0 °C throughout the year. Figure 6 indicates that the permafrost area decreases significantly in the western and southern part of eastern Siberia, while permafrost remains in a wide region from the center to eastern Siberia. In other words, at the end of the twenty-first century, permafrost will remain in the cold regions, with the expectation that thawing will progress in the twenty-second century. Previous studies have reported that the impact of permafrost degradation on the climate will be greater after the end of the twenty-first century (e.g., McGuire et al. 2018), which is consistent with our result.
In this study, we developed PDGEM, a model for estimating GHG emissions due to permafrost degradation. Using the model, we produced future projections of the following three processes:
Direct release of GHGs due to the dynamic degradation of the Yedoma layer: The process in which high concentrations of CO2 and CH4 trapped in the ground ice and frozen soil of the Yedoma layer are released due to dynamic degradation.
Secondary release of GHGs due to the dynamic degradation of the Yedoma layer: The process by which organic matter trapped in the Yedoma layer is newly decomposed by the thawing of the permafrost to release CO2 and CH4.
Secondary release of GHGs due to the thermodynamic degradation of permafrost: The process by which organic matter trapped in the permafrost is newly decomposed by the thawing of the permafrost to release CO2 and CH4.
In the RCP8.5 and RCP2.6 scenarios, numerical simulations through the twenty-first century showed that the combination of (a) plus (b) contributed less than 1% of the total emissions resulting from (a) + (b) + (c). It was also found that the contribution of (a) is an order of magnitude smaller than that of (b). The cumulative release of CO2 plus CH4 produced by (a) + (b) + (c) was 48 (32–66) PgC for RCP8.5, and 22 (15–29) PgC for RCP2.6. This is consistent with a recent multi-model study (− 41 to 95 GtC, McGuire et al. 2018) which reported that in one of the ESMs, the land becomes a carbon sink owing to the effect of plant growth after thawing.
In this study, dynamic degradation of the Yedoma layer (defined as the location of high soil organic carbon and soil frozen water) is formulated by the possibility of fire (Pdstrb) and the present land surface subsidence velocity (Vdstrb) as shown in Eq. (3). The contribution of dynamic degradation ((a) + (b) above) is small since the area ratio of the Yedoma layer (Aydm) is very small. The contribution of dynamic degradation will be large if the dynamic degradation (i.e., the subsidence of surface due to dynamic collapse) occurs outside the Yedoma layer, or if the subsidence velocity is higher than it is currently. To estimate the probability of fire, the relationship between the occurrence of fire and meteorological conditions (Eq. 4) constructed from observation data is used; however, if the relationship described in Eq. (4) is different in the future, the frequency of fires will also change.
With PDGEM, the global distribution of GHG emissions can be estimated (e.g., Fig. 5) by using the thawing process of permafrost obtained from a state-of-the-art land surface model (Yokohata et al. 2020a, 2020b), taking into account the substantial uncertainties associated with the model's parameters (Table 1) and future atmospheric changes. This represents a significant advantage when compared to previous related studies (e.g., Schneider von Deimling et al. 2015; Gasser et al. 2018; McGuire et al. 2018). The models of permafrost degradation in previous studies were unable to predict the geographic distribution of GHG emissions due to their simplification of physical processes (Schneider von Deimling et al. 2015; Gasser et al. 2018). On the other hand, for state-of-the-art earth system models that incorporate advanced physical and carbon cycle processes (McGuire et al. 2018), it is difficult to fully consider the uncertainties in model prediction such as the uncertainties in future atmospheric responses. In this study, combining a simple scheme of carbon cycle processes with the results of the latest land surface model makes it possible to project the geographical distribution of future GHG emissions due to permafrost degradation (Fig. 5) by considering, across a very broad range, the uncertainties associated with the various model parameters and future atmospheric responses.
In the previous studies (e.g., Gasser et al. 2018), it has been shown that GHG emissions caused by the thawing of permafrost can be an obstacle to achieving the climate stabilization called for in the Paris Agreement. In addition, as described in Fig. 6, substantial permafrost remains unthawed at the end of the twenty-first century, and thus the impact of GHG gas emissions from permafrost thawing on the climate system is expected to increase markedly after that time (McGuire et al. 2018). As discussed in Section 4, the geographical distributions of GHG emissions (Fig. 5) are connected to changes in ground temperature, soil moisture status and wetland distribution, and the soil carbon accumulated over time scales of past glacial cycles. The hotspots with particularly large GHG emissions shown in Fig. 5 are determined by the interactions between these factors investigated in this study. In the regions of GHG emission hotspots shown in Fig. 5, it may be possible to reduce GHG emissions by taking measures such as restricting land development.
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study. Please contact the authors for data requests.
ALT:
Active layer thickness
CALM:
Circumpolar active layer monitoring
CMIP:
Coupled Model Inter-comparison Project
GCM:
Global climate model
GHGs:
Greenhouse gases
IPA:
International Permafrost Association
ISIMIP:
Inter-Sectoral Impact Model Inter-comparison Project
MATSIRO:
Minimal advanced treatments of surface interaction and runoff
MIROC:
Model for Interdisciplinary Research on Climate
PDGEM:
Permafrost Degradation and Greenhouse gases Emission Model
RCP:
Representative concentration pathways
Anthony KMW, Zimov SA, Grosse G et al (2014) A shift of thermokarst lakes from carbon sources to sinks during the Holocene epoch. Nature 511(7510):452–456. https://doi.org/10.1038/nature13560
Bentsen M, Bethke I, Debernard JB et al (2013) The Norwegian Earth system model, NorESM1-M—part 1: description and basic evaluation of the physical climate. Geosci Model Dev 6(3):687–720. https://doi.org/10.5194/gmd-6-687-2013
Brouchkov A, Fukuda M (2002) Preliminary measurements on methane content in permafrost, Central Yakutia, and some experimental data. Permafrost and Periglacial Processes 13(3):187–197. https://doi.org/10.1002/ppp.422
Brown A (2013) Pandora's freezer? Nature Climate Change 3(5):442–442. https://doi.org/10.1038/nclimate1896
Burke EJ, Hartley IP, Jones CD (2012) Uncertainties in the global temperature change caused by carbon release from permafrost thawing. The Cryosphere 6(5):1063–1076. https://doi.org/10.5194/tc-6-1063-2012
Ciais P, Sabine C, Bala G, Bopp L, Brovkin V, Canadell J et al (2013) Carbon and other biogeochemical cycles. In: Climate Change 2013: The physical science basis contribution of working group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge Univ. Press, Cambridge, United Kingdom and New York, NY, USA, pp 465–570
Collins M, Knutti R, Arblaster J, Dufresne J‐L, Fichefet T, Friedlingstein P, et al. (2013) Long‐term climate change: Projections, commitments and irreversibility. In Stocker TF, et al. (Eds.), Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. New York NY USA: Cambridge University Press (pp. 1029–1136)
Dlugokencky E, Tans PP (2013) Globally averaged marine surface annual mean data, NOAA/ESRL. In. www.esrl.noaa.gov/gmd/ccgg/trends/, Accessed 01-02-2013 2013
Dufresne J-L, Foujols M-A, Denvil S et al (2013) Climate change projections using the IPSL-CM5 Earth System Model: from CMIP3 to CMIP5. Climate Dynamics 40(9):2123–2165. https://doi.org/10.1007/s00382-012-1636-1
Dunne JP, John JG, Adcroft AJ et al (2012) GFDL's ESM2 global coupled climate-carbon Earth System Models Part I: Physical formulation and baseline simulation characteristics. Journal of Climate 25(19):6646–6665. https://doi.org/10.1175/JCLI-D-11-00560.1
Dutta K, Schuur EAG, Neff JC, Zimov SA (2006) Potential carbon release from permafrost soils of Northeastern Siberia. Global Change Biology 12(12):2336–2351. https://doi.org/10.1111/j.1365-2486.2006.01259.x
Gasser T, Kechiar M, Ciais P et al (2018) Path-dependent reductions in CO2 emission budgets caused by permafrost carbon release. Nature Geoscience 11(11):830–835. https://doi.org/10.1038/s41561-018-0227-0
Günther F, Overduin PP, Sandakov AV, Grosse G, Grigoriev MN (2013) Short- and long-term thermo-erosion of ice-rich permafrost coasts in the Laptev Sea region. Biogeosciences 10(6):4297–4318. https://doi.org/10.5194/bg-10-4297-2013
Hempel S, Frieler K, Warszawski L, Schewe J, Piontek F (2013) A trend-preserving bias correction – the ISI-MIP approach. Earth System Dynamics 4(2):219–236. https://doi.org/10.5194/esd-4-219-2013
We gratefully acknowledge the helpful discussions with Hideo Shiogama, Tomoo Ogura, Nagio Hirota, Kaoru Tachiiri, and Michio Kawamiya. The authors are much indebted to Keita Matsumoto, Kuniyasu Hamada, Kenryou Kataumi, Eiichi Hirohashi, Futoshi Takeuchi, Nobuaki Morita, and Kenji Yoshimura at NEC Corporation for their support in model development. Model simulations were performed on the SGI UV20 at the National Institute for Environmental Studies. NCEP Reanalysis Derived data provided by the NOAA/OAR/ESRL PSL, Boulder, Colorado, USA, from their Web site at https://psl.noaa.gov/.
This study was conducted as part of the Environment Research and Technology Development Fund project (2-1605, JPMEERF20162005) "Assessing and Projecting Greenhouse Gas Release from Large-scale Permafrost Degradation" supported by the Ministry of Environment and the Environmental Restoration and Conservation Agency. Our research is also supported by the "Integrated Research Program for Advancing Climate Models (TOUGOU Program)" sponsored by the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan. This study was partly supported by the NASA ABoVE (Arctic-Boreal Vulnerability Experiment) project (grant no. NNX17AC57A). KT benefited from State assistance managed by the National Research Agency in France under the "Programme d'Investissements d'Avenir" under the reference "ANR-19-MPGA-0008".
Center for Global Environmental Research, National Institute for Environmental Studies, 16-2 Onogawa, Tsukuba, 305-8506, Japan
Tokuta Yokohata, Akihiko Ito & Katsumasa Tanaka
Research Center for Environmental Modeling and Application, Japan Agency for Marine-Earth Science and Technology, 3173-25 Showamachi, Kanazawaku, Yokohama, 236-0001, Japan
Kazuyuki Saito & Tomohiro Hajima
School of Earth, Energy and Environmental Engineering, Kitami Institute of Technology, 165 Koen-cho, Kitami, 090-8507, Japan
Hiroshi Ohno
Laboratoire des Sciences du Climat et de l'Environnement (LSCE), Commissariat à l'énergie atomique et aux énergies alternatives (CEA), Gif-sur-Yvette, France
Katsumasa Tanaka
International Arctic Research Center, University of Alaska Fairbanks, 2160 Koyukuk Dr, Fairbanks, AK 99775-7340, USA
Go Iwahana
TY and KS proposed the topic and conceived and designed the study. TY, KS, and AI contributed to the formulation of the numerical model. TY and KT carried out the experimental study and analyzed the results of the numerical simulations. HO and GI provided the observational data. All authors have read and approved the final manuscript.
Correspondence to Tokuta Yokohata.
The authors declare that they have no competing interest.
Yokohata, T., Saito, K., Ito, A. et al. Future projection of greenhouse gas emissions due to permafrost degradation using a simple numerical scheme with a global land surface model. Prog Earth Planet Sci 7, 56 (2020). https://doi.org/10.1186/s40645-020-00366-8
Permafrost degradation
Carbon cycle feedback
Atmospheric and hydrospheric sciences
Projection and impact assessment of global change
On the distribution of the number of internal equilibria in random evolutionary games
Manh Hong Duong ORCID: orcid.org/0000-0002-4361-07951,
Hoang Minh Tran2 &
The Anh Han3
Journal of Mathematical Biology volume 78, pages 331–371 (2019)
The analysis of equilibrium points is of great importance in evolutionary game theory with numerous practical ramifications in ecology, population genetics, social sciences, economics and computer science. In contrast to previous analytical approaches which primarily focus on computing the expected number of internal equilibria, in this paper we study the distribution of the number of internal equilibria in a multi-player two-strategy random evolutionary game. We derive for the first time a closed formula for the probability that the game has a certain number of internal equilibria, for both normal and uniform distributions of the game payoff entries. In addition, using Descartes' rule of signs and combinatorial methods, we provide several universal upper and lower bound estimates for this probability, which are independent of the underlying payoff distribution. We also compare our analytical results with those obtained from extensive numerical simulations. Many results of this paper are applicable to a wider class of random polynomials that are not necessarily from evolutionary games.
Evolutionary Game Theory (EGT) (Maynard Smith and Price 1973) has become one of the most diverse and far reaching theories in biology finding its applications in a plethora of disciplines such as ecology, population genetics, social sciences, economics and computer science (Maynard Smith 1982; Axelrod 1984; Hofbauer and Sigmund 1998; Nowak 2006; Broom and Rychtář 2013; Perc and Szolnoki 2010; Sandholm 2010; Han et al. 2017), see also recent reviews (Wang et al. 2016; Perc et al. 2017). For example, in economics, EGT has been employed to make predictions in situations where traditional assumptions about agents' rationality and knowledge may not be justified (Friedman 1998; Sandholm 2010). In computer science, EGT has been used extensively to model dynamics and emergent behaviour in multiagent systems (Helbing et al. 2015; Tuyls and Parsons 2007; Han 2013). Furthermore, EGT has provided explanations for the emergence and stability of cooperative behaviours which is one of the most well-studied and challenging interdisciplinary problems in science (Pennisi 2005; Hofbauer and Sigmund 1998; Nowak 2006). A particularly important subclass in EGT is random evolutionary games in which the payoff entries are random variables. They are useful to model social and biological systems in which very limited information is available, or where the environment changes so rapidly and frequently that one cannot describe the payoffs of their inhabitants' interactions (May 2001; Fudenberg and Harris 1992; Han et al. 2012; Gross et al. 2009; Galla and Farmer 2013).
Similar to the foundational concept of Nash equilibrium in classical game theory (Nash 1950), the analysis of equilibrium points is of great importance in EGT. It provides essential understanding of complexity in a dynamical system, such as its behavioural, cultural or biological diversity (Haigh 1988, 1990; Broom et al. 1997; Broom 2003; Gokhale and Traulsen 2010, 2014; Han et al. 2012; Duong and Han 2015, 2016; Broom and Rychtář 2016). A large body of literature has analysed the number of equilibria, their stability and attainability in concrete strategic scenarios such as the public goods game and its variants, see for example Broom et al. (1997), Broom (2000), Pacheco et al. (2009), Souza et al. (2009), Peña (2012), Peña et al. (2014) and Sasaki et al. (2015). However, despite their importance, equilibrium properties in random games are far less understood with, to the best of our knowledge, only a few recent efforts (Gokhale and Traulsen 2010, 2014; Han et al. 2012; Galla and Farmer 2013; Duong and Han 2015, 2016; Broom and Rychtář 2016). One of the most challenging problems in the study of equilibrium properties in random games is to characterise the distribution of the number of equilibria (Gokhale and Traulsen 2010; Han et al. 2012):
What is the distribution of the number of (internal) equilibria in a d-player random evolutionary game and how can we compute it?
This question has been studied in the literature to some extent. For example, in Gokhale and Traulsen (2010, 2014) and Han et al. (2012), the authors studied this question with a small number of players (\(d\le 4\)) and only focused on the probability of attaining the maximal number of equilibrium points, i.e. \(p_{d-1}\), where \(p_m\) (\(0\le m\le d-1\)) is the probability that a d-player game with two strategies has exactly m internal equilibria. These works use a direct approach by analytically solving a polynomial equation, expressing the positivity of its zeros as domains of conditions for the coefficients and then integrating over these domains to obtain the corresponding probabilities. However, it is impossible to extend this approach to games with a large or arbitrary number of players as, in general, a polynomial of degree five or higher is not analytically solvable (Abel 1824). In more recent works (Duong and Han 2015, 2016; Duong et al. 2017), we have established the links between random evolutionary games, random polynomial theory (Edelman and Kostlan 1995) and classical polynomial theory (particularly Legendre polynomials), employing techniques from the latter to study the expected number of internal equilibria, E. More specifically, we provided closed-form formulas for E, characterised its asymptotic limits as the number of players in the game tends to infinity and investigated the effect of correlation in the case of correlated payoff entries. On the one hand, E offers useful information regarding the macroscopic, average behaviour of the number of internal equilibria a dynamical system might have. On the other hand, E cannot provide the level of complexity or the number of different states of biodiversity that will occur in the system. In these situations, details about how the number of internal equilibrium points is distributed are required. Furthermore, as E can actually be derived from \(p_m\) using the formula \(E = \sum ^{d-1}_{m=0} m p_m\), a closed-form formula for \(p_m\) would make it possible to compute E for any d, hence filling in the gap in the literature on computing E for large d (\(d\ge 5\)). Therefore, it is necessary to estimate \(p_m\).
Summary of main results
In this paper, we address the above question by providing a closed-form formula for the probability \(p_m\) (\(0\le m\le d-1\)). Our approach is based on the links between random polynomial theory and random evolutionary game theory established in our previous work (Duong and Han 2015, 2016). That is, an internal equilibrium in a d-player game with two strategies can be found by solving the following polynomial equation (detailed derivation in Sect. 2),
$$\begin{aligned} \sum \limits _{k=0}^{d-1}\beta _k\begin{pmatrix} d-1\\ k \end{pmatrix} y^k=0, \end{aligned}$$
where \(\beta _k=A_k-B_k\), with \(A_k\) and \(B_k\) being random variables representing the entries of the game payoff matrix. We now summarise the main results of this paper. Detailed derivations and proofs will be given in subsequent sections. The first main result is an explicit formula for the probability distribution of the number of internal equilibria.
Theorem 1
(The distribution of the number of internal equilibria in a d-player two-strategy random evolutionary game) Suppose that the coefficients \(\{\beta _k\}\) in (1) are either normally distributed, uniformly distributed or the difference of uniformly distributed random variables. The probability that a d-player two-strategy random evolutionary game has m, \(0\le m\le d-1\), internal equilibria, is given by
$$\begin{aligned} p_{m}=\sum _{k=0}^{\lfloor \frac{d-1-m}{2}\rfloor }p_{m,2k,d-1-m-2k}, \end{aligned}$$
where \(p_{m,2k,d-1-m-2k}\) are given in (13), (14) and (15), respectively.
This theorem, which is stated in detail in Theorem 4 in Sect. 3, is derived from a more general theorem, Theorem 3, where we provide explicit formulas for the probability \(p_{m,2k,n-m-2k}\) that a random polynomial of degree n has m (\(0\le m\le n\)) positive, 2k (\(0\le k\le \lfloor \frac{n-m}{2}\rfloor \)) complex and \(n-m-2k\) negative roots. Note that results from Theorem 3 are applicable to a wider class of general random polynomials, i.e. beyond those derived from the random evolutionary games considered in this work.
Theorem 1 is theoretically interesting and can be used to compute \(p_m\), \(0\le m\le d-1\), for small d. We use it to compute all the probabilities \(p_m\), \(0\le m\le d-1\), for d up to 5, and compare the results with those obtained through extensive numerical simulations (for validation). However, when d is larger it becomes computationally expensive to compute these probabilities using formula (2) because one needs to calculate all the probabilities \(p_{m,2k,d-1-m-2k}\), \( 0\le k\le \lfloor \frac{d-1-m}{2}\rfloor \), which are complex multiple integrals. To overcome this issue, in Sect. 4, we develop our second main result, Theorem 2 below, which offers simpler explicit estimates of \(p_m\) in terms of d and m. The main idea in developing this result is employing the symmetry of the coefficients \(\beta _k\). Specifically, we consider two cases
$$\begin{aligned}&\text {Case 1:}\quad \mathbf {P}(\beta _k>0)=\mathbf {P}(\beta _k<0)=\frac{1}{2},\\&\text {Case 2:}\quad \mathbf {P}(\beta _k>0)=\alpha \quad \text {and}\quad \mathbf {P}(\beta _k<0)=1-\alpha , \end{aligned}$$
for all \(k=0,\ldots , d-1\) and for some \(0\le \alpha \le 1\). Note here that Case 1 is an instance of Case 2 when \(\alpha =\frac{1}{2}\) and can be satisfied when \(A_k\) and \(B_k\) are exchangeable (see Lemma 1 below). Interestingly, the symmetry of \(\beta _k\) allows us to obtain a much simpler treatment. The general case allows us to move beyond the exchangeability condition, capturing the fact that different strategies might have different payoff properties.
Theorem 2
We have the following upper-bound estimate for \(p_m\)
$$\begin{aligned} p_m\le \sum _{\begin{array}{c} k\ge m\\ k-m~\text {even} \end{array}} p_{k,d-1}, \end{aligned}$$
where \(p_{k,d-1}=\frac{1}{2^{d-1}}\begin{pmatrix} d-1\\ k \end{pmatrix}\) if \(\alpha =\frac{1}{2}\); in this case the sum on the right-hand side of (3) can be computed explicitly in terms of m and d. For the general case, it can be computed explicitly according to Theorem 7. The estimate (3) has several useful implications, leading to explicit bounds for \(p_{d-2}\) and \(p_{d-1}\) as well as the following assertions:
For \(d=2\): \(p_0=\alpha ^2+(1-\alpha )^2\) and \(p_1=2\alpha (1-\alpha )\);
For \(d=3\): \(p_1=2\alpha (1-\alpha )\).
This theorem is a summary of Theorems 6, 7 and 8 in Sect. 4 that are derived using Descartes' rule of signs and combinatorial methods. We note that results of the aforementioned theorems are applicable to a wider class of random polynomials that are not necessarily from random games.
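For the symmetric case \(\alpha =\frac{1}{2}\), the sum on the right-hand side of (3) is elementary to evaluate. The following minimal Python sketch (added here purely for illustration; the function name is ours) computes the bound for a few values of d and m and can be compared with the exact values derived later.

```python
from math import comb

def upper_bound_symmetric(m, d):
    """Upper bound on p_m from estimate (3) with alpha = 1/2:
    sum over k >= m with k - m even of C(d-1, k) / 2^(d-1)."""
    n = d - 1
    return sum(comb(n, k) for k in range(m, n + 1) if (k - m) % 2 == 0) / 2 ** n

if __name__ == "__main__":
    for d in (3, 4, 5):
        bounds = [round(upper_bound_symmetric(m, d), 4) for m in range(d)]
        print(f"d = {d}: bounds on (p_0, ..., p_{d-1}) =", bounds)
    # For d = 3 and m = 1 the bound equals 0.5, matching the exact value
    # p_1 = 2*alpha*(1-alpha) = 1/2 stated above for alpha = 1/2.
```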
Organisation of the paper
The rest of the paper is organised as follows. In Sect. 2, we recall and summarise the replicator dynamics for multi-player two-strategy games. The main contributions of this paper and the detailed analysis of the main results described above will be presented in subsequent sections. Section 3 is devoted to the proof of Theorem 1 on the probability distribution. The proof of Theorem 2 will be given in Sect. 4. In Sect. 5 we show some numerical simulations to demonstrate analytical results. In Sect. 6, further discussions are given. Finally, Appendix 1 contains proofs of technical results from previous sections.
Replicator dynamics
A fundamental model of evolutionary game theory is replicator dynamics (Taylor and Jonker 1978; Zeeman 1980; Hofbauer and Sigmund 1998; Schuster and Sigmund 1983; Nowak 2006), describing that whenever a strategy has a fitness larger than the average fitness of the population, it is expected to spread. For the sake of completeness, below we derive the replicator dynamics for multi-player two-strategy games.
Consider an infinitely large population with two strategies, A and B. Let x, \(0 \le x \le 1\), be the frequency of strategy A. The frequency of strategy B is thus \((1-x)\). The interaction of the individuals in the population is in randomly selected groups of d participants, that is, they play and obtain their fitness from d-player games. The game is defined through a \((d-1)\)-dimensional payoff matrix (Gokhale and Traulsen 2010), as follows. Let \(A_k\) (respectively, \(B_k\)) be the payoff that an A-strategist (respectively, a B-strategist) obtained when playing with a group of \(d-1\) players that consists of k A-strategists. In this paper, we consider symmetric games where the payoffs do not depend on the ordering of the players. Asymmetric games will be studied in a forthcoming paper. In the symmetric case, the probability that an A strategist interacts with k other A strategists in a group of size \(d-1\) is
$$\begin{aligned} \begin{pmatrix} d-1\\ k \end{pmatrix}x^k (1-x)^{d-1-k}. \end{aligned}$$
Thus, the average payoffs of A and B are, respectively
$$\begin{aligned} \pi _A= \sum \limits _{k=0}^{d-1}A_k\begin{pmatrix} d-1\\ k \end{pmatrix}x^k (1-x)^{d-1-k}, \quad \pi _B = \sum \limits _{k=0}^{d-1}B_k\begin{pmatrix} d-1\\ k \end{pmatrix}x^k (1-x)^{d-1-k}. \end{aligned}$$
The replicator equation of a d-player two-strategy game is given by (Hofbauer and Sigmund 1998; Sigmund 2010; Gokhale and Traulsen 2010)
$$\begin{aligned} {\dot{x}}=x(1-x)\big (\pi _A-\pi _B\big ). \end{aligned}$$
Since \(x=0\) and \(x=1\) are two trivial equilibrium points, we focus only on internal ones, i.e. \(0< x < 1\). They satisfy the condition that the fitnesses of both strategies are the same, i.e. \(\pi _A=\pi _B\), which gives rise to
$$\begin{aligned} \sum \limits _{k=0}^{d-1}\beta _k \begin{pmatrix} d-1\\ k \end{pmatrix}x^k (1-x)^{d-1-k} = 0, \end{aligned}$$
where \(\beta _k = A_k - B_k\). Using the transformation \(y= \frac{x}{1-x}\), with \(0< y < +\infty \), dividing the left hand side of the above equation by \((1-x)^{d-1}\) we obtain the following polynomial equation for y
$$\begin{aligned} P(y):=\sum \limits _{k=0}^{d-1}\beta _k\begin{pmatrix} d-1\\ k \end{pmatrix}y^k=0. \end{aligned}$$
Note that this equation can also be derived from the definition of an evolutionarily stable strategy (ESS), an important concept in EGT (Maynard Smith 1982), see e.g., Broom et al. (1997). Note however that, when moving to random evolutionary games with more than two strategies, the conditions for ESS are not the same as for those of stable equilibrium points of replicator dynamics. As in Gokhale and Traulsen (2010), Duong and Han (2015, 2016), we are interested in random games where \(A_k\) and \(B_k\) (thus \(\beta _k\)), for \(0\le k\le d-1 \), are random variables.
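To make the role of Eq. (5) concrete, the following short Python sketch (an illustration added here, using NumPy, with standard normal payoff differences as an assumption) samples \(\beta _k\), forms P(y), and recovers internal equilibria \(x^*=y^*/(1+y^*)\) from the positive real roots.

```python
import numpy as np
from math import comb

def internal_equilibria(beta, tol=1e-9):
    """Internal equilibria of a d-player two-strategy game with payoff
    differences beta = (beta_0, ..., beta_{d-1}).  They are the positive
    real roots y of P(y) = sum_k beta_k * C(d-1, k) * y^k, mapped back to
    strategy frequencies via x = y / (1 + y)."""
    d = len(beta)
    # numpy.roots expects coefficients ordered from the highest degree down
    coeffs = [beta[k] * comb(d - 1, k) for k in range(d - 1, -1, -1)]
    roots = np.roots(coeffs)
    ys = [r.real for r in roots if abs(r.imag) < tol and r.real > tol]
    return sorted(y / (1.0 + y) for y in ys)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    d = 4
    beta = rng.standard_normal(d)   # beta_k = A_k - B_k, sampled for illustration
    print("beta =", np.round(beta, 3))
    print("internal equilibria x* =", internal_equilibria(beta))
```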
In Sect. 4, where we provide estimates for the number of internal equilibria in a d-player two-strategy game, we will use the information on the symmetry of \(\beta _k\). The following lemma gives a sufficient condition for the difference of two random variables to be symmetrically distributed.
Lemma 1
(Duong et al. 2017, Lemma 3.5) Let X and Y be two exchangeable random variables, i.e. their joint probability distribution \(f_{X,Y}(x,y)\) is symmetric, \(f_{X,Y}(x,y)=f_{X,Y}(y,x)\). Then \(Z=X-Y\) is symmetrically distributed about 0, i.e., its probability distribution satisfies \(f_Z(z)=f_Z(-z)\). In addition, if X and Y are i.i.d. then they are exchangeable.
For the sake of completeness, the proof of this Lemma is provided in Appendix 1.
The distribution of the number of positive zeros of random polynomials and applications to EGT
This section focuses on deriving the distribution of the number of internal equilibria of a d-player two-strategy random evolutionary game. We recall that an internal equilibrium is a real and positive zero of the polynomial P(y) in (5). We denote by \(\kappa \) the number of positive zeros of this polynomial. For a given m, \(0\le m\le d-1\), we need to compute the probability \(p_m\) that \(\kappa =m\). To this end, we first adapt a method introduced in Zaporozhets (2006) (see also Butez and Zeitouni 2017; Götze et al. 2017 for its applications to other problems) to establish a formula to compute the probability that a general random polynomial has a given number of real and positive zeros. Then we apply the general theory to the polynomial P.
The distribution of the number of positive zeros of a random polynomial
Consider a general random polynomial
$$\begin{aligned} \mathbf {P}(t)=\xi _0 t^n+\xi _1t^{n-1}+\cdots +\xi _{n-1}t+\xi _n. \end{aligned}$$
We use the following notations for the elementary symmetric polynomials
$$\begin{aligned} \sigma _0(y_1,\ldots ,y_n)&=1,\nonumber \\ \sigma _1(y_1,\ldots ,y_n)&=y_1+\cdots +y_n,\nonumber \\ \sigma _2(y_1,\ldots ,y_n)&=y_1y_2+\cdots +y_{n-1}y_n,\nonumber \\&\vdots \nonumber \\ \sigma _{n-1}(y_1,\ldots ,y_n)&=y_1y_2\ldots y_{n-1}+\cdots +y_2y_3\ldots y_n,\nonumber \\ \sigma _{n}(y_1,\ldots ,y_n)&=y_1\ldots y_n, \end{aligned}$$
and denote by
$$\begin{aligned} \varDelta (y_1,\ldots ,y_n)=\prod _{1\le i<j\le n}|y_i-y_j| \end{aligned}$$
the Vandermonde determinant.
Theorem 3
Assume that the random variables \(\xi _0,\xi _1,\ldots , \xi _n\) have a joint density \(p(a_0,\ldots ,a_n)\). Let \(0\le m\le n\) and \(0\le k\le \lfloor \frac{n-m}{2}\rfloor \). The probability \(p_{m,2k,n-m-2k}\) that \(\mathbf {P}\) has m positive, 2k complex and \(n-m-2k\) negative zeros is given by
$$\begin{aligned}&p_{m,2k,n-m-2k}=\frac{2^{k}}{m! k! (n-m-2k)!}\int _{\mathbf { R}_+^m}\int _{\mathbf { R}_-^{n-m-2k}} \int _{\mathbf { R}_+^k}\int _{[0,\pi ]^k}\int _{\mathbf { R}}\nonumber \\&\quad r_1\ldots r_k p(a\sigma _0,\ldots ,a\sigma _{n}) |a^{n}\varDelta |\, da\,d\alpha _1\ldots d\alpha _k dr_1\ldots dr_k dx_1\ldots dx_{n-2k},\qquad \end{aligned}$$
$$\begin{aligned}&\sigma _j=\sigma _j\left( x_1,\ldots ,x_{n-2k}, r_1e^{i\alpha _1}, r_1e^{-i\alpha _1},\ldots ,r_k e^{i\alpha _k}, r_k e^{-i \alpha _k}\right) , \end{aligned}$$
$$\begin{aligned}&\varDelta =\varDelta \left( x_1,\ldots ,x_{n-2k}, r_1e^{i\alpha _1}, r_1e^{-i\alpha _1},\ldots ,r_k e^{i\alpha _k}, r_k e^{-i \alpha _k}\right) . \end{aligned}$$
As consequences,
The probability that \(\mathbf {P}\) has m positive zeros is
$$\begin{aligned} p_{m}=\sum _{k=0}^{\lfloor \frac{n-m}{2}\rfloor }p_{m,2k,n-m-2k}. \end{aligned}$$
In particular, the probability that \(\mathbf {P}\) has the maximal number of positive zeros is
$$\begin{aligned} p_{n}=\frac{1}{n!}\int _{\mathbf { R}_+^{n}}\int _{\mathbf { R}}p(a\sigma _0,\ldots ,a\sigma _{n})\, |a^{n}\,\varDelta |\, da\,dx_1\ldots dx_{n}, \end{aligned}$$
$$\begin{aligned} \sigma _j=\sigma _j(x_1,\ldots ,x_{n}),\quad \varDelta =\varDelta (x_1,\ldots ,x_{n}). \end{aligned}$$
Proof
The reference (Zaporozhets 2006, Theorem 1) provides a formula to compute the probability that the polynomial \(\mathbf {P}\) has \(n-2k\) real and 2k complex roots. In the present paper, we need to distinguish between positive and negative real zeros. We now sketch and adapt the proof of Theorem 1 of Zaporozhets (2006) to obtain the formula (9) for the probability that the polynomial \(\mathbf {P}\) has m positive, 2k complex and \(n-m-2k\) negative roots. Consider a \((n+1)\)-dimensional vector space \(\mathbf {V}\) of polynomials of the form
$$\begin{aligned} Q(t)=a_0t^n+a_1t^{n-1}+\cdots +a_{n-1}t+a_n, \end{aligned}$$
and a measure \(\mu \) on this space defined as the integral of the differential form
$$\begin{aligned} dQ=p(a_0,\ldots ,a_n)\,da_0\wedge \cdots \wedge da_n. \end{aligned}$$
Our goal is to find \(\mu (V_{m,2k})\) where \(V_{m,2k}\) is the set of polynomials having m positive, 2k complex and \(n-m-2k\) negative roots. Let \(Q\in V_{m,2k}\). Denote all zeros of Q as
$$\begin{aligned}&z_1=x_1,\ldots ,z_{n-2k}=x_{n-2k},\quad z_{n-2k+1}=r_1 e^{i \alpha _1},\quad z_{n-2k+2}=r_1 e^{-i \alpha _1},\ldots ,\\&\quad z_{n-1}=r_k e^{i \alpha _k},\quad z_{n}=r_k e^{-i \alpha _k}, \end{aligned}$$
$$\begin{aligned}&0<x_1,\ldots , x_m<\infty ;\quad -\infty<x_{m+1},\ldots , x_{n-2k}<0; \quad 0<r_1,\ldots ,r_k<\infty ;\\&0<\alpha _1,\ldots ,\alpha _k<\pi . \end{aligned}$$
To find \(\mu (V_{m,2k})\) we need to integrate the differential form (12) over the set \(V_{m,2k}\). The key idea in the proof of Theorem 1 of Zaporozhets (2006) is to make a change of coordinates \((a_0,\ldots , a_n)\mapsto (a,x_1,\ldots ,x_{n-2k}, r_1,\ldots , r_k, \alpha _1,\ldots , \alpha _k)\), with \(a=a_0\), and find dQ in the new coordinates. The derivation of the following formula is carried out in detail in Zaporozhets (2006):
$$\begin{aligned} dQ= & {} 2^k r_1\ldots r_k\, p\left( a,a\sigma _1\left( x_1,\ldots ,x_{n-2k},r_1e^{i\alpha _1},r_1e^{-i\alpha _1},\ldots ,r_ke^{i\alpha _k}, r_ke^{-i\alpha _k}\right) ,\right. \\&\left. \ldots a\sigma _n\left( x_1,\ldots ,x_{n-2k},r_1e^{i\alpha _1},r_1e^{-i\alpha _1},\ldots ,r_ke^{i\alpha _k}, r_ke^{-i\alpha _k}\right) \right) \\&\times \left| a^n \varDelta \left( \left( x_1,\ldots ,x_{n-2k},r_1e^{i\alpha _1},r_1e^{-i\alpha _1},\ldots ,r_ke^{i\alpha _k}, r_ke^{-i\alpha _k}\right) \right) \right| \\&\times \, dx_1\wedge \cdots \wedge dx_{n-2k}\wedge dr_1\wedge \cdots \wedge dr_k\wedge d\alpha _1\wedge \cdots \wedge d\alpha _k\wedge da. \end{aligned}$$
Now we integrate this equation over all polynomials Q that have m positive zeros, \(n-m-2k\) negative zeros and k complex zeros in the upper half-plane. Since there are m! permutations of the positive zeros, \((n-m-2k)!\) permutations of the negative zeros, and k! permutations of the complex zeros, after integrating, each polynomial on the left-hand side will occur \(m!k!(n-m-2k)!\) times. Hence the integral of the left-hand side is equal to \(m!k!(n-m-2k)! \, p_{m,2k,n-m-2k}\). The integral on the right-hand side equals
$$\begin{aligned}&2^k\int _{\mathbf { R}_+^m}\int _{\mathbf { R}_-^{n-m-2k}} \int _{\mathbf { R}_+^k}\int _{[0,\pi ]^k}\int _{\mathbf { R}} r_1\ldots r_k p(a\sigma _0,\ldots ,a\sigma _{n}) |a^{n}\varDelta |\, da\,d\alpha _1\ldots d\alpha _k\\&\quad dr_1\ldots dr_k dx_1\ldots dx_{n-2k}, \end{aligned}$$
hence the assertion (9) follows. \(\square \)
The distribution of the number of internal equilibria
Next we apply Theorem 3 to compute the probability that a random evolutionary game has m, \(0\le m\le d-1\), internal equilibria. We derive formulas for the three most common cases (Han et al. 2012):
(C1)
\(\{\beta _j,0\le j\le d-1\}\) are i.i.d. standard normally distributed,
(C2)
\(\{\beta _j\}\) are i.i.d. uniformly distributed with the common distribution \(f_j(x)=\frac{1}{2} \mathbb {1}_{[-1,1]}(x)\),
(C3)
\(\{A_k\}\) and \(\{B_k\}\) are i.i.d. uniformly distributed with the common distribution \(f_j(x)=\frac{1}{2} \mathbb {1}_{[-1,1]}(x)\).
The main result of this section is the following theorem (cf. Theorem 1).
Theorem 4
The probability that a d-player two-strategy random evolutionary game has m (\(0\le m\le d-1\)) internal equilibria is
where \(p_{m,2k,d-1-m-2k}\) is given below for each of the cases above:
– For the case (C1)
$$\begin{aligned}&p_{m,2k,d-1-m-2k}\nonumber \\&\quad =\frac{2^{k}}{m! k! (d-1-m-2k)!}~\frac{ \varGamma \Big (\frac{d}{2}\Big ) }{(\pi )^{\frac{d}{2}}\prod \nolimits _{i=0}^{d-1}\delta _i} \int _{\mathbf { R}_+^m}\int _{\mathbf { R}_-^{d-1-2k-m}} \int _{\mathbf { R}_+^k}\int _{[0,\pi ]^k}\, r_1\ldots r_k\nonumber \\&\quad \quad \left( \sum \limits _{i=0}^{d-1}\frac{\sigma _i^2}{\delta _i^2}\right) ^{-\frac{d}{2}} \varDelta \,\, d\alpha _1\ldots d\alpha _k dr_1\ldots dr_k dx_1\ldots dx_{d-1-2k}, \end{aligned}$$
where \(\sigma _i\), for \(i=0,\ldots ,d-1\), and \(\varDelta \) are given in (10)–(11) and \(\delta _i=\begin{pmatrix} d-1\\ i \end{pmatrix}\).
– For the case (C2)
$$\begin{aligned}&p_{m,2k,d-1-m-2k}=\frac{2^{k+1-d}}{d \, m!\, k! \,(d-1-m-2k)!\prod \nolimits _{i=0}^{d-1} \delta _i}\int _{\mathbf { R}_+^m}\int _{\mathbf { R}_-^{d-1-2k-m}} \int _{\mathbf { R}_+^k}\int _{[0,\pi ]^k}\nonumber \\&\quad r_1\ldots r_k\,\Big (\min \big \{|\delta _i/\sigma _i|\big \}\Big )^{d} \varDelta \,\,d\alpha _1\ldots d\alpha _k dr_1\ldots dr_k dx_1\ldots dx_{d-1-2k}. \end{aligned}$$
– For the case (C3)
$$\begin{aligned}&p_{m,2k,d-1-m-2k}=\frac{2^{k+1}(-1)^d}{m! k! (d-1-m-2k)!\prod \nolimits _{j=0}^{d-1}\delta _j^2}\int _{\mathbf { R}_+^m}\int _{\mathbf { R}_-^{d-1-2k-m}} \int _{\mathbf { R}_+^k}\int _{[0,\pi ]^k}\nonumber \\&\quad r_1\ldots r_k\, \prod _{j=0}^{d-1}|\sigma _j|\sum _{i=0}^{d}(-1)^i \frac{K_i}{2d-i} \Big (\min \big \{|\delta _i/\sigma _i|\big \}\Big )^{2d-i}\varDelta \,d\alpha _1\ldots d\alpha _k dr_1\ldots dr_k \nonumber \\&\quad dx_1\ldots dx_{d-1-2k}. \end{aligned}$$
where \(K_i=\sigma _i\big (\delta _0/|\sigma _0|,\ldots ,\delta _{d-1}/|\sigma _{d-1}|\big )\) for \(i=0,\ldots , d\).
In particular, the probability that a d-player two-strategy random evolutionary game has the maximal number of internal equilibria is:
for the case (C1)
$$\begin{aligned} p_{d-1}=\frac{1}{(d-1)!}~\frac{\varGamma \Big (\frac{d}{2}\Big ) }{(\pi )^\frac{d}{2} \prod \nolimits _{i=0}^{d-1}\delta _i}~\int _{\mathbf { R}_+^{d-1}} q(\sigma _0,\ldots ,\sigma _{d-1})\,dx_1\ldots dx_{d-1};\nonumber \\ \end{aligned}$$
for the case (C2)
$$\begin{aligned} p_{d-1}=\frac{2^{1-d}}{d! \prod _{i=0}^{d-1} \delta _i}~\int _{\mathbf { R}_+^{d-1}}\Big (\min \big \{|\delta _i/\sigma _i|\big \}\Big )^{d} \varDelta \,dx_1\ldots dx_{d-1}; \end{aligned}$$
for the case (C3)
$$\begin{aligned}&p_{d-1}=\frac{2(-1)^d}{(d-1)!\prod _{j=0}^{d-1}\delta _j^2}\int _{\mathbf { R}_+^{d-1}}\prod _{j=0}^{d-1}|\sigma _j|\sum _{i=0}^{d}(-1)^i \frac{K_i}{2d-i} \Big (\min \big \{|\delta _i/\sigma _i|\big \}\Big )^{2d-i}\varDelta \nonumber \\&\,dx_1\ldots dx_{d-1}. \end{aligned}$$
Note that in formulas (16)–(18) above, \(q(\sigma _0,\ldots ,\sigma _{d-1}):=\Big (\sum \nolimits _{i=0}^{d-1}\frac{\sigma _i^2}{\delta _i^2}\Big )^{-\frac{d}{2}}\varDelta \) denotes the integrand appearing in (13), and
$$\begin{aligned} \sigma _j=\sigma _j(x_1,\ldots ,x_{d-1}),\quad \varDelta =\varDelta (x_1,\ldots ,x_{d-1}). \end{aligned}$$
Proof
(1) Since \(\{\beta _j,0\le j\le d-1\}\) are i.i.d. standard normally distributed, the joint distribution \(p(y_0,\ldots ,y_{d-1})\) of \(\left\{ \begin{pmatrix} d-1\\ j \end{pmatrix}\beta _j,0\le j\le d-1\right\} \) is given by
$$\begin{aligned} p(y_0,\ldots ,y_{d-1})&=\frac{1}{(2\pi )^{\frac{d}{2}} \prod _{i=0}^{d-1}\begin{pmatrix} d-1\\ i \end{pmatrix}}\exp \left[ -\frac{1}{2}\sum _{i=0}^{d-1}\frac{y_i^2}{\begin{pmatrix} d-1\\ i \end{pmatrix}^2}\right] \\&=\frac{1}{(2\pi )^{\frac{d}{2}}|\mathcal { C}|^\frac{1}{2}}\exp \left[ -\frac{1}{2}\mathbf {y}^T\mathcal { C}^{-1}\mathbf {y}\right] , \end{aligned}$$
where \(\mathbf {y}=[y_0~~y_1~~\ldots ~~y_{d-1}]^T\) and \(\mathcal { C}\) is the covariance matrix
$$\begin{aligned} \mathcal { C}_{ij}=\begin{pmatrix} d-1\\ i \end{pmatrix}\begin{pmatrix} d-1\\ j \end{pmatrix}\delta _{ij}. \end{aligned}$$
$$\begin{aligned} p(a\sigma _0,\ldots , a \sigma _{d-1})=\frac{1}{(2\pi )^\frac{d}{2} |\mathcal { C}|^\frac{1}{2}}\exp \Bigg (-\frac{a^2}{2}\varvec{\sigma }^T\,\mathcal { C}^{-1}\,\varvec{\sigma }\Bigg ), \end{aligned}$$
where \(\varvec{\sigma }=[\sigma _0~\sigma _1~\ldots ~\sigma _{d-1}]^T\). Using the following formula for moments of a normal distribution,
$$\begin{aligned} \int _{\mathbf { R}}|x|^n\exp \big (-\alpha x^2\big )\,dx=\frac{\varGamma \big (\frac{n+1}{2}\big )}{\alpha ^\frac{n+1}{2}}, \end{aligned}$$
we compute
$$\begin{aligned} \int _{\mathbf { R}}|a|^{d-1}\exp \Bigg (-\frac{a^2}{2}\varvec{\sigma }^T\,\mathcal { C}^{-1}\,\varvec{\sigma }\Bigg )\,da=\frac{\varGamma \Big (\frac{d}{2}\Big )}{\Big (\frac{\varvec{\sigma }^T\mathcal { C}^{-1} \varvec{\sigma }}{2}\Big )^{\frac{d}{2}}}=\frac{2^\frac{d}{2}\varGamma \Big (\frac{d}{2}\Big )}{ \big (\varvec{\sigma }^T\mathcal { C}^{-1}\varvec{\sigma }\big )^{\frac{d}{2}}}. \end{aligned}$$
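As a quick numerical sanity check of the moment identity used here, the following sketch (added for illustration only, using SciPy) compares the integral with \(\varGamma \big (\frac{n+1}{2}\big )/\alpha ^{\frac{n+1}{2}}\) for a few values of n and \(\alpha \).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def moment_numeric(n, alpha):
    # numerical value of the integral of |x|^n * exp(-alpha * x^2) over R
    val, _ = quad(lambda x: abs(x) ** n * np.exp(-alpha * x ** 2),
                  -np.inf, np.inf)
    return val

def moment_closed_form(n, alpha):
    return gamma((n + 1) / 2) / alpha ** ((n + 1) / 2)

if __name__ == "__main__":
    for n, alpha in [(1, 0.5), (2, 1.0), (3, 2.0), (4, 1.3)]:
        print(n, alpha, round(moment_numeric(n, alpha), 6),
              round(moment_closed_form(n, alpha), 6))
```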
Applying Theorem 3 to the polynomial P given in (5) and using the above identity we obtain
$$\begin{aligned}&p_{m,2k,d-1-m-2k}\\&\quad =\frac{2^{k}}{m! k! (d-1-m-2k)!}\int _{\mathbf { R}_+^m}\int _{\mathbf { R}_-^{d-1-2k-m}} \int _{\mathbf { R}_+^k}\int _{[0,\pi ]^k}\int _{\mathbf { R}}\\&\qquad \quad r_1\ldots r_k\, p(a\sigma _0,\ldots ,a\sigma _{d-1}) |a|^{d-1}\varDelta \, da\,d\alpha _1\ldots d\alpha _k dr_1\ldots dr_k dx_1\ldots dx_{d-1-2k}\\&\quad =\frac{2^{k}}{m! k! (d-1-m-2k)!}~\frac{1}{(2\pi )^\frac{d}{2} |\mathcal { C}|^\frac{1}{2}}~ 2^{\frac{d}{2}}\varGamma \Big (\frac{d}{2}\Big ) ~\int _{\mathbf { R}_+^m}\int _{\mathbf { R}_-^{d-1-2k-m}} \int _{\mathbf { R}_+^k}\int _{[0,\pi ]^k}\\&\qquad \quad r_1\ldots r_k\, \big (\varvec{\sigma }^T\mathcal { C}^{-1}\varvec{\sigma }\big )^{-\frac{d}{2}}~\varDelta \,d\alpha _1\ldots d\alpha _k dr_1\ldots dr_k dx_1\ldots dx_{d-1-2k}\\&\quad =\frac{2^{k}}{m! k! (d-1-m-2k)!}~\frac{\varGamma \Big (\frac{d}{2}\Big ) }{(\pi )^\frac{d}{2} |\mathcal { C}|^\frac{1}{2}}~\int _{\mathbf { R}_+^m}\int _{\mathbf { R}_-^{d-1-2k-m}} \int _{\mathbf { R}_+^k}\int _{[0,\pi ]^k}\\&\qquad \quad r_1\ldots r_k\, \big (\varvec{\sigma }^T\mathcal { C}^{-1}\varvec{\sigma }\big )^{-\frac{d}{2}}~\varDelta \,d\alpha _1\ldots d\alpha _k dr_1\ldots dr_k dx_1\ldots dx_{d-1-2k}, \end{aligned}$$
which is the desired equality (13) by definition of \(\mathcal { C}\) and \(\varvec{\sigma }\).
(2) Now since \(\{\beta _j\}\) are i.i.d. uniformly distributed with the common distribution \(f_j(x)=\frac{1}{2} \mathbb {1}_{[-1,1]}(x)\), the joint distribution \(p(y_0,\ldots ,y_{d-1})\) of
$$\begin{aligned} \left\{ \begin{pmatrix} d-1\\ j \end{pmatrix}\beta _j,0\le j\le d-1\right\} \end{aligned}$$
is given by
$$\begin{aligned} p(y_0,\ldots ,y_{d-1})=\frac{1}{2^{d}\prod _{i=0}^{d-1} \delta _i}\mathbb {1}_{\times _{i=0}^{d-1}[-\delta _i,\delta _i]}(y_0,\ldots , y_{d-1}) \quad \text {where } \delta _{i}= \begin{pmatrix} d-1\\ i \end{pmatrix}. \end{aligned}$$
$$\begin{aligned} p(a\sigma _0,\ldots ,a \sigma _{d-1})=\frac{1}{2^{d}\prod _{i=0}^{d-1} \delta _i}\mathbb {1}_{\times _{i=0}^{d-1}[-\delta _i,\delta _i]}(a\sigma _0,\ldots , a \sigma _{d-1}). \end{aligned}$$
Since \(\mathbb {1}_{\times _{i=0}^{d-1}[-\delta _i,\delta _i]}(a\sigma _0,\ldots , a \sigma _{d-1})=1\) if and only if \(a\sigma _i\in [-\delta _i,\delta _i]\) for all \(i=0,\ldots , d-1\), i.e., if and only if
$$\begin{aligned} a\in \bigcap \limits _{i=0}^{d-1} \big [-|\delta _i/\sigma _i|,|\delta _i/\sigma _i|\big ]=\left[ -\min \limits _{i \in \{0,\ldots , d-1\}}\big \{|\delta _i/\sigma _i|\big \},\min \limits _{i \in \{0,\ldots , d-1\}}\big \{|\delta _i/\sigma _i|\big \}\right] , \end{aligned}$$
we have (for simplicity of notation, in the subsequent computations we shorten \(\min \nolimits _{i \in \{0,\ldots , d-1\}}\) by \(\min \))
$$\begin{aligned} p(a\sigma _0,\ldots ,a \sigma _{d-1})= {\left\{ \begin{array}{ll}\frac{1}{2^{d}\prod _{i=0}^{d-1} \delta _i},&{}\text {if } a \in \big [-\min \big \{|\delta _i/\sigma _i|\big \},\min \big \{|\delta _i/\sigma _i|\big \}\big ],\\ 0, &{}\text {otherwise}. \end{array}\right. } \end{aligned}$$
$$\begin{aligned} \int _{\mathbf { R}}|a|^{d-1}p(a\sigma _0,\ldots ,a \sigma _{d-1})\,da&=\frac{1}{2^{d}\prod _{i=0}^{d-1} \delta _i} \int _{-\min \big \{|\delta _i/\sigma _i|\big \}}^{\min \big \{|\delta _i/\sigma _i| \big \}}|a|^{d-1}\,da\\&=\frac{1}{d\, 2^{d-1}\prod _{i=0}^{d-1} \delta _i} \Big (\min \big \{|\delta _i/ \sigma _i|\big \}\Big )^{d}. \end{aligned}$$
Similarly as in the normal case, using this identity and applying Theorem 3 we obtain
$$\begin{aligned}&p_{m,2k,d-1-m-2k}\\&\quad =\frac{2^{k}}{m! k! (d-1-m-2k)!}\int _{\mathbf { R}_+^m}\int _{\mathbf { R}_-^{d-1-2k-m}} \int _{\mathbf { R}_+^k}\int _{[0,\pi ]^k}\int _{\mathbf { R}}\\&\qquad \quad r_1\ldots r_k\, p(a\sigma _0,\ldots ,a\sigma _{d-1}) |a|^{d-1}\varDelta \, da\,d\alpha _1\ldots d\alpha _k dr_1\ldots dr_k dx_1\ldots dx_{d-1-2k}\\&\quad =\frac{2^{k+1-d}}{d \, m!\, k! \,(d-1-m-2k)! \prod _{i=0}^{d-1} \delta _i}\int _{\mathbf { R}_+^m}\int _{\mathbf { R}_-^{d-1-2k-m}} \int _{\mathbf { R}_+^k}\int _{[0,\pi ]^k}\\&\qquad \quad r_1\ldots r_k\, \Big (\min \big \{|\delta _i/\sigma _i|\big \}\Big )^{d} \varDelta \, da\,d\alpha _1\ldots d\alpha _k dr_1\ldots dr_k dx_1\ldots dx_{d-1-2k}. \end{aligned}$$
(3) Now we assume that \(A_j\) and \(B_j\) are i.i.d. uniformly distributed with the common distribution \(\gamma (x)=\frac{1}{2} \mathbb {1}_{[-1,1]}(x)\). Since \(\beta _j=A_j-B_j\), its probability density is given by
$$\begin{aligned} \gamma _{\beta }(x)=\int _{-\infty }^{+\infty }f(y)f(x+y)\,dy=(1-|x|) \mathbb {1}_{[-1,1]}(x). \end{aligned}$$
The probability density of \(\delta _j\beta _j\) is
$$\begin{aligned} \gamma _{j}(x)=\frac{1}{\delta _j}\left( 1-\frac{|x|}{\delta _j}\right) \mathbb {1}_{[-1,1]}(x/\delta _j)=\frac{\delta _j-|x|}{\delta _j^2}\mathbb {1}_{[- \delta _j,\delta _j]}(x), \end{aligned}$$
and the joint distribution \(p(y_0,\ldots ,y_{d-1})\) of \(\left\{ \delta _j\beta _j,0\le j\le d-1\right\} \) is given by
$$\begin{aligned} p(y_0,\ldots ,y_{d-1})=\prod _{j=0}^{d-1}\frac{\delta _j-|y_j|}{\delta _j^2} \mathbb {1}_{\times _{i=0}^{d-1}[-\delta _i,\delta _i]}(y_0,\ldots , y_{d-1}). \end{aligned}$$
$$\begin{aligned} p(a\sigma _0,\ldots ,a \sigma _{d-1})=\prod \limits _{j=0}^{d-1}\frac{\delta _j-|a \sigma _j|}{\delta _j^2}\mathbb {1}_{\times _{i=0}^{d-1}[-\delta _i,\delta _i]}( a \sigma _0,\ldots ,a \sigma _{d-1}). \end{aligned}$$
$$\begin{aligned}&\int _{\mathbf { R}}|a|^{d-1}p(a\sigma _0,\ldots ,a \sigma _{d-1})\,da\\&\quad =\frac{1}{\prod _{j=0}^{d-1}\delta _j^2}\int _{-\min \big \{|\delta _i/\sigma _i|\big \}}^{\min \big \{|\delta _i/\sigma _i|\big \}}|a|^{d-1}\prod _{j=0}^{d-1}(\delta _j-|a\sigma _j|)\,da\\&\quad =\frac{2}{\prod _{j=0}^{d-1}\delta _j^2}\int _{0}^{\min \big \{|\delta _i/\sigma _i|\big \}}a^{d-1}\prod _{j=0}^{d-1}(\delta _j-a|\sigma _j|)\,da\\&\quad =2 (-1)^d \prod _{j=0}^{d-1}\frac{|\sigma _j|}{\delta _j^2}\int _{0}^{\min \big \{|\delta _i/\sigma _i|\big \}}a^{d-1}\prod _{j=0}^{d-1}\left( a-\frac{\delta _j}{|\sigma _j|}\right) \,da\\&\quad =2 (-1)^d \prod _{j=0}^{d-1}\frac{|\sigma _j|}{\delta _j^2}\sum _{i=0}^{d}(-1)^i K_i\int _{0}^{\min \big \{|\delta _i/\sigma _i|\big \}}a^{2d-1-i}\,da\\&\quad =2 (-1)^d \prod _{j=0}^{d-1}\frac{|\sigma _j|}{\delta _j^2}\sum _{i=0}^{d}(-1)^i \frac{K_i}{2d-i} \Big (\min \big \{|\delta _i/\sigma _i|\big \}\Big )^{2d-i}, \end{aligned}$$
where \(K_i=\sigma _i(\delta _0/|\sigma _0|,\ldots , \delta _{d-1}/|\sigma _{d-1}|)\) for \(i=0,\ldots , d\).
$$\begin{aligned}&p_{m,2k,d-1-m-2k}\\&\quad =\frac{2^{k}}{m! k! (d-1-m-2k)!}\int _{\mathbf { R}_+^m}\int _{\mathbf { R}_-^{d-1-2k-m}} \int _{\mathbf { R}_+^k}\int _{[0,\pi ]^k}\int _{\mathbf { R}}\\&\qquad \quad r_1\ldots r_k\, p(a\sigma _0,\ldots ,a\sigma _{d-1}) |a|^{d-1}\varDelta \, da\,d\alpha _1\ldots d\alpha _k dr_1\ldots dr_k dx_1\ldots dx_{d-1-2k}\\&\quad =\frac{2^{k+1}(-1)^d}{m! k! (d-1-m-2k)!\prod _{j=0}^{d-1}\delta _j^2}\int _{\mathbf { R}_+^m}\int _{\mathbf { R}_-^{d-1-2k-m}}\int _{\mathbf { R}_+^k}\int _{[0,\pi ]^k}\\&\qquad \quad r_1\ldots r_k\, \prod _{j=0}^{d-1}|\sigma _j|\sum _{i=0}^{d}(-1)^i \frac{K_i}{2d-i} \Big (\min \big \{|\delta _i/\sigma _i|\big \}\Big )^{2d-i}\varDelta \,d\alpha _1\ldots d\alpha _k dr_1\ldots dr_k\\&\qquad \quad dx_1\ldots dx_{d-1-2k}. \end{aligned}$$
\(\square \)
Corollary 1
The expected numbers of internal equilibria and stable internal equilibria, E(d) and SE(d), respectively, of a d-player two-strategy game, are given by
$$\begin{aligned} E(d)=\sum _{m=0}^{d-1} m p_m, \quad \quad SE(d)=\frac{1}{2}\sum _{m=0}^{d-1} m p_m. \end{aligned}$$
Note that this formula for E(d) is applicable for non-normal distributions, which is in contrast to the method used in previous works (Duong and Han 2015, 2016) that can only be used for normal distributions. The second part, i.e. the formula for the expected number of stable equilibrium points, was obtained based on the following property of stable equilibria in multi-player two-strategy evolutionary games, as shown in Han et al. (2012, Theorem 3): \(SE(d) = \frac{1}{2}E(d)\).
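A minimal Python sketch of this corollary (added for illustration; the probability vector may come from Theorem 4, from the examples below, or from simulation):

```python
def expected_equilibria(p):
    """E(d) = sum_m m * p_m and SE(d) = E(d) / 2, given the probability
    vector p = (p_0, ..., p_{d-1}) of the number of internal equilibria."""
    E = sum(m * pm for m, pm in enumerate(p))
    return E, E / 2

if __name__ == "__main__":
    # probabilities for d = 3 under case (C1), taken from the examples below
    p3 = [0.365852, 0.5, 0.134148]
    print(expected_equilibria(p3))   # roughly (0.768, 0.384)
```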
In Theorem 4 for the case (C1), the assumption that \(\beta _k\)'s are standard normal distributions, i.e. having variance 1, is just for simplicity. Suppose that \(\beta _k\)'s are normal distributions with mean 0 and variance \(\eta ^2\). We show that the probability \(p_{m}\), for \(0\le m\le d-1\), does not depend on \(\eta \). In this case, the formula for p is given by (19) but with \(\mathcal { C}\) being replaced by \(\eta ^2 \mathcal { C}\). To indicate its dependence on \(\eta \), we write \(p_\eta \). We use a change of variable \(a=\eta {\tilde{a}}\). Then
$$\begin{aligned}&a^{d-1}p_\eta (a\sigma _0,\ldots , a\sigma _{d-1})\,da\\&\quad =\eta ^{d-1}{\tilde{a}}^{d-1}\frac{1}{(\sqrt{2\pi }\eta )^{d}\prod _{j=0}^{d-1} \begin{pmatrix} d-1\\ j \end{pmatrix}}\exp \left[ -\frac{{\tilde{a}}^2}{2}\sum _{j=0}^{d-1}\frac{\sigma _j^2}{\begin{pmatrix} d-1\\ j \end{pmatrix}^2}\right] \eta \, d{\tilde{a}}\\&\quad ={\tilde{a}}^{d-1}\frac{1}{(\sqrt{2\pi })^{d}\prod _{j=0}^{d-1} \begin{pmatrix} d-1\\ j \end{pmatrix}}\exp \left[ -\frac{{\tilde{a}}^2}{2}\sum _{j=0}^{d-1}\frac{\sigma _j^2}{\begin{pmatrix} d-1\\ j \end{pmatrix}^2}\right] \, d{\tilde{a}}\\&\quad ={\tilde{a}}^{d-1}p_1({\tilde{a}}\sigma _0,\ldots , {\tilde{a}}\sigma _{d-1}), \end{aligned}$$
from which we deduce that \(p_m\) does not depend on \(\eta \). Similarly, in the other cases the uniform distribution can be taken on \([-\alpha ,\alpha ]\) (with density \(\frac{1}{2\alpha }\)) for some \(\alpha >0\) without changing \(p_m\).
For illustration of the application of Theorem 4, the following examples show explicit calculations for \(d=3\) and 4 for the case of normal distributions, i.e. (C1). Further numerical results for \(d = 5\) and also for other distributions, i.e. (C2) and (C3), are provided in Fig. 1. The integrals in these examples were computed using Mathematica.
Examples for \(d=3,4\)
(Three-player two-strategy games: \(d = 3\)) (1) One internal equilibrium: \(p_1=p_{1,0,1}\). We have
$$\begin{aligned}&m = 1, \quad k = 0,\quad \sigma _0=1, \quad \sigma _1=x_1+x_2,\quad \sigma _2= x_1x_2,\quad \varDelta =|x_2-x_1|,\\&q(\sigma _0,\sigma _1,\sigma _2)=\frac{1}{\left( 1+x_1^2 x_2^2+\frac{1}{4} \left( x_1+x_2\right) {}^2\right) {}^{3/2}} |x_2-x_1|. \end{aligned}$$
Substituting these values into (13) we obtain the probability that a three-player two-strategy evolutionary game has one internal equilibrium
$$\begin{aligned} p_{1}=\frac{1}{4 \pi }\int _{\mathbf { R}_+}\int _{\mathbf { R}_-}\frac{1}{\left( 1+x_1^2 x_2^2+\frac{1}{4} \left( x_1+x_2\right) {}^2\right) {}^{3/2}} |x_2-x_1| \,dx_1\,dx_2 = 0.5. \end{aligned}$$
(2) Two internal equilibria: \(p_2=p_{2,0,0}\). We have
$$\begin{aligned}&m = 2, \quad k = 0,\quad \sigma _0=1,\quad \quad \sigma _1=x_1+x_2,\quad \sigma _2= x_1x_2,\quad \varDelta =|x_2-x_1|,\\&q(\sigma _0,\sigma _1,\sigma _2)=\frac{1}{\left( 1+x_1^2 x_2^2+\frac{1}{4} \left( x_1+x_2\right) {}^2\right) {}^{3/2}}|x_2-x_1|. \end{aligned}$$
The probability that a three-player two-strategy evolutionary game has 2 internal equilibria is
$$\begin{aligned} p_2=\frac{1}{8\pi }\int _{\mathbf { R}_+^2}\frac{1}{\left( 1+x_1^2 x_2^2+\frac{1}{4} \left( x_1+x_2\right) {}^2\right) {}^{3/2}} |x_2-x_1| \,dx_1\,dx_2 \ \approx 0.134148.\nonumber \\ \end{aligned}$$
(3) No internal equilibria: the probability that a three-player two-strategy evolutionary game has no internal equilibria is \(p_0=1-p_1-p_2 \ \approx 1 - 0.5 - 0.134148 = 0.365852.\)
(Four-player two-strategy games: \(d=4\))
(1) One internal equilibrium: \(p_{1}=p_{1,0,2}+p_{1,2,0}\).
We first compute \(p_{1,0,2}\). In this case,
$$\begin{aligned}&m=1,\quad k=0,\quad \sigma _0=1, \quad \sigma _1=x_1+x_2+x_3,\quad \sigma _2= x_1x_2+x_1x_3+x_2x_3,\\&\varDelta =|x_2-x_1|\, |x_3-x_1|\,|x_3-x_2|. \end{aligned}$$
Substituting these into (13) we get
$$\begin{aligned} p_{1,0,2}= & {} \frac{1}{18\pi ^2}\int _{\mathbf { R}_-}\int _{\mathbf { R}_-}\int _{\mathbf { R}_+} \left( 1+\frac{(x_1+x_2+x_3)^2}{9}+\frac{(x_1x_2+x_1x_3+x_2x_3)^2}{9}+(x_1x_2x_3)^2 \right) ^{-2}\\&\times |x_2-x_1|\,|x_3-x_1|\, |x_3-x_2|\,dx_1\,dx_2\,dx_3 \ \approx 0.223128. \end{aligned}$$
Next we compute \(p_{1,2,0}\). In this case,
$$\begin{aligned}&m=1, \quad k=1,\quad \sigma _0=1,\\&\sigma _1=\sigma _1\left( x_1,r_1e^{i\alpha _1}, r_1 e^{-i\alpha _1}\right) =x_1+r_1e^{i\alpha _1}+ r_1 e^{-i\alpha _1}=x_1+2r_1\cos \left( \alpha _1\right) , \\&\sigma _2=\sigma _2\left( x_1,r_1e^{i\alpha _1}, r_1 e^{-i\alpha _1}\right) =x_1\left( r_1e^{i\alpha _1}+r_1e^{-i\alpha _1}\right) +r_1^2=2x_1r_1\cos \left( \alpha _1\right) +r_1^2,\\&\sigma _3=\sigma _3\left( x_1,r_1e^{i\alpha _1}, r_1 e^{-i\alpha _1}\right) =x_1r_1^2,\\&\varDelta =\varDelta \left( x_1,r_1e^{i\alpha _1}, r_1 e^{-i\alpha _1}\right) =\left| r_1e^{i\alpha _1}-x_1\right| \left| r_1e^{-i\alpha _1}-x_1\right| \left| r_1e^{i\alpha _1}-r_1e^{-i\alpha _1}\right| \\&~~~=\left| r_1^2-2x_1r_1\cos \left( \alpha _1\right) +x_1^2\right| \left| 2r_1\sin \left( \alpha _1\right) \right| . \end{aligned}$$
Substituting these into (13) yields
$$\begin{aligned} p_{1,2,0}= & {} \frac{2}{9\pi ^2} \int _{\mathbf { R}_+}\int _{[0,\pi ]}\int _{\mathbf { R}_+}r_1\, \left( 1+\frac{(x_1+2r_1\cos (\alpha _1))^2}{9}+\frac{(2x_1r_1\cos (\alpha _1)+r_1^2)^2}{9}+(x_1r_1^2)^2 \right) ^{-2}\\&\times \, |r_1^2-2x_1r_1\cos (\alpha _1)+x_1^2||2r_1\sin (\alpha _1)|\,dx_1dr_1d\alpha _1da \ \approx 0.260348. \end{aligned}$$
Therefore, we obtain that
$$\begin{aligned} p_{1}=p_{1,0,2}+p_{1,2,0}\ \approx 0.223128 +0.260348 = 0.483476. \end{aligned}$$
(2) Two internal equilibria: \(p_2=p_{2,0,1}\)
$$\begin{aligned}&m=2, \quad k=0, \quad \sigma _0=1,\quad \sigma _1=x_1+x_2+x_3, \quad \sigma _2=x_1x_2+x_1x_3+x_2x_3,\\&\quad \sigma _3=x_1x_2x_3, \varDelta =|x_2-x_1|\,|x_3-x_1|\, |x_3-x_2|. \end{aligned}$$
The probability that a four-player two-strategy evolutionary game has 2 internal equilibria is
$$\begin{aligned} p_2= & {} \frac{1}{18\pi ^2}\int _{\mathbf { R}_+}\int _{\mathbf { R}_+}\int _{\mathbf { R}_-} \left( 1+\frac{(x_1+x_2+x_3)^2}{9}+\frac{(x_1x_2+x_1x_3+x_2x_3)^2}{9}+(x_1x_2x_3)^2\right) ^{-2}\nonumber \\&\times \, |x_2-x_1|\,|x_3-x_1|\, |x_3-x_2|\,dx_1\,dx_2\,dx_3 \ \approx 0.223128. \end{aligned}$$
(3) Three internal equilibria: \(p_3=p_{3,0,0}\)
$$\begin{aligned}&m=3,\quad k=0, \quad \sigma _0=1,\quad \sigma _1=x_1+x_2+x_3, \quad \sigma _2=x_1x_2+x_1x_3+x_2x_3,\\&\quad \sigma _3=x_1x_2x_3,\quad \varDelta =|x_2-x_1|\,|x_3-x_1|\, |x_3-x_2|. \end{aligned}$$
$$\begin{aligned} p_3&=\frac{1}{54\pi ^2}\int _{\mathbf { R}_+^3} \left( 1+\frac{(x_1+x_2+x_3)^2}{9}+\frac{(x_1x_2+x_1x_3+x_2x_3)^2}{9}+(x_1x_2x_3)^2\right) ^{-2}\\&\quad \times |x_2-x_1|\,|x_3-x_1|\, |x_3-x_2|\,dx_1\,dx_2\,dx_3 \ \approx 0.0165236. \end{aligned}$$
(4) No internal equilibria: the probability that a four-player two-strategy evolutionary game has no internal equilibria is: \(p_0=1-p_1-p_2-p_3 \ \approx 1 - 0.483476 - 0.223128 - 0.0165236 = 0.276872\).
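These exact values can be cross-checked by direct simulation. The sketch below (our illustrative Monte Carlo for case (C1); it is not the code used to produce Fig. 1) samples \(\beta _k\sim \mathcal {N}(0,1)\), counts the positive real roots of P(y) in (5), and estimates \(p_m\) for \(d=3\) and \(d=4\).

```python
import numpy as np
from math import comb

def estimate_pm(d, n_samples=100_000, tol=1e-9, seed=0):
    """Monte Carlo estimate of (p_0, ..., p_{d-1}) for case (C1):
    beta_k i.i.d. standard normal, P(y) as in (5)."""
    rng = np.random.default_rng(seed)
    binom = np.array([comb(d - 1, k) for k in range(d)], dtype=float)
    counts = np.zeros(d, dtype=int)
    for _ in range(n_samples):
        beta = rng.standard_normal(d)
        coeffs = (beta * binom)[::-1]          # highest degree first
        roots = np.roots(coeffs)
        m = sum(1 for r in roots if abs(r.imag) < tol and r.real > tol)
        counts[m] += 1
    return counts / n_samples

if __name__ == "__main__":
    for d in (3, 4):
        print(f"d = {d}:", np.round(estimate_pm(d), 4))
    # Compare with the values above: d = 3 gives roughly (0.366, 0.5, 0.134)
    # and d = 4 roughly (0.277, 0.483, 0.223, 0.017), up to sampling error.
```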
Universal estimates for \(p_m\)
In Sect. 3, we have derived closed-form formulas for the probability distributions \(p_m \ (0\le m\le d-1)\) of the number of internal equilibria. However, it is computationally expensive to compute these probabilities since they involve complex multi-dimensional integrals. In this section, using Descartes' rule of signs and combinatorial techniques, we provide universal estimates for \(p_m\). Descartes' rule of signs is a technique for determining an upper bound on the number of positive real roots of a polynomial in terms of the number of sign changes in the sequence formed by its coefficients. This rule has been applied to random polynomials before in the literature (Bloch and Pólya 1932); however, that work only obtained estimates for the expected number of zeros of a random polynomial.
Theorem 5
(Descartes' rule of signs, see e.g., Curtiss 1918) Consider a polynomial of degree n, \(p(x)=a_nx^n+\cdots +a_0\) with \(a_n\ne 0\). Let v be the number of variations in the sign of the coefficients \(a_n,a_{n-1},\ldots ,a_0\) and \(n_p\) be the number of real positive zeros. Then \((v-n_p)\) is an even non-negative integer.
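The content of this rule is easy to verify numerically. The following Python sketch (added for illustration) compares the number of sign variations v with the number of positive real roots \(n_p\) for randomly sampled polynomials and counts violations of the statement, which should be zero.

```python
import numpy as np

def sign_changes(coeffs):
    """Number of sign variations in a coefficient sequence (zeros skipped);
    the count is the same whether the sequence is read from a_n to a_0 or
    the other way around."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def positive_roots(coeffs, tol=1e-9):
    """Number of positive real roots; coeffs ordered from highest degree down."""
    roots = np.roots(coeffs)
    return sum(1 for r in roots if abs(r.imag) < tol and r.real > tol)

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    violations = 0
    for _ in range(1000):
        coeffs = rng.standard_normal(6)        # a random degree-5 polynomial
        v = sign_changes(coeffs)
        n_p = positive_roots(coeffs)
        if n_p > v or (v - n_p) % 2 != 0:
            violations += 1
    print("violations of Descartes' rule (should be 0):", violations)
```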
We recall that an internal equilibrium of a d-player two-strategy game is a positive root of the polynomial P given in (5). We will apply Descartes' rule of signs to find an upper bound for the probability that a random polynomial has a certain number of positive roots. This is a problem that is of interest in its own right and may have applications elsewhere; therefore we will first study this problem for a general random polynomial of the form
$$\begin{aligned} p(y):=\sum _{k=0}^n a_k y^k, \end{aligned}$$
and then apply it to the polynomial P. It turns out that the symmetry of \(\{a_k\}\) will be the key: the asymmetric case requires completely different treatment from the symmetric one.
Estimates of \(p_m\): symmetric case
We first consider the case where the coefficients \(\{a_k\}\) in (22) are symmetrically distributed. The main result of this section will be Theorem 6, which provides several upper and lower bounds for the probability that a d-player two-strategy game has m internal equilibria. Before stating Theorem 6, we need the following auxiliary lemmas.
Proposition 1
Suppose that the coefficients \(a_k, 0\le k\le n\) in the polynomial (22) are i.i.d. and symmetrically distributed. Let \(p_{k,n}, 0\le k\le n\), be the probability that the sequence of coefficients \((a_0,\ldots ,a_{n})\) has k changes of signs. Then
$$\begin{aligned} p_{k,n}=\frac{1}{2^{n}}\begin{pmatrix} n\\ k \end{pmatrix}. \end{aligned}$$
See Appendix 2. \(\square \)
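As a quick illustration (not part of the paper), the binomial law in Proposition 1 is easy to confirm by simulation: draw i.i.d. symmetric coefficients and count sign changes. The following minimal sketch assumes NumPy is available.

```python
import numpy as np
from math import comb

# Monte Carlo check of Proposition 1: for i.i.d. symmetrically distributed
# coefficients, the number of sign changes in (a_0, ..., a_n) is Binomial(n, 1/2).
rng = np.random.default_rng(1)
n, trials = 6, 200_000
a = rng.standard_normal((trials, n + 1))    # any symmetric distribution works here
changes = np.sum(np.diff(np.sign(a), axis=1) != 0, axis=1)
for k in range(n + 1):
    print(f"k={k}: empirical {np.mean(changes == k):.4f}   C(n,k)/2^n = {comb(n, k) / 2**n:.4f}")
```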
The next two lemmas on the sum of binomial coefficients will be used later on.
Lemma 2
Let \(0\le k \le n\) be positive integers. Then it holds that
$$\begin{aligned}&\sum _{\begin{array}{c} j=k\\ j:\text {even} \end{array}}^{n}\begin{pmatrix} n\\ j \end{pmatrix}=\frac{1}{2}\left[ \sum _{j=0}^{n-k}\begin{pmatrix} n\\ j \end{pmatrix}+(-1)^{k}\begin{pmatrix} n-1\\ k-1 \end{pmatrix}\right] , \\&\sum _{\begin{array}{c} j=k\\ j:\text {odd} \end{array}}^{n}\begin{pmatrix} n\\ j \end{pmatrix}=\frac{1}{2}\left[ \sum _{j=0}^{n-k}\begin{pmatrix} n\\ j \end{pmatrix}-(-1)^{k}\begin{pmatrix} n-1\\ k-1 \end{pmatrix}\right] , \end{aligned}$$
where it is understood that \(\begin{pmatrix} n\\ j \end{pmatrix}=0\) if \(j<0\). In particular, for \(k=0\), we get
$$\begin{aligned} \sum _{\begin{array}{c} j=0\\ j:\text {even} \end{array}}^{n}\begin{pmatrix} n\\ j \end{pmatrix}=\sum _{\begin{array}{c} j=0\\ j:\text {odd} \end{array}}^{n}\begin{pmatrix} n\\ j \end{pmatrix}=2^{n-1}. \end{aligned}$$
The following lemma provides estimates on the sum of the first k binomial coefficients.
Lemma 3
Let n and \(0\le k\le n\) be positive integers. We have the following estimates (MacWilliams and Sloane 1977, Lemma 8 and Corollary 9, Chapter 10; Gottlieb et al. 2012)
$$\begin{aligned}&\frac{2^{nH\big (\frac{k}{n}\big )}}{\sqrt{8k\big (1-\frac{k}{n}\big )}}\le \sum _{j=0}^k\begin{pmatrix} n\\ j \end{pmatrix}\le \delta 2^{nH\big (\frac{k}{n}\big )}\quad \text {if }0\le k< \frac{n}{2},\quad {and} \end{aligned}$$
$$\begin{aligned}&2^n-\delta 2^{nH\big (\frac{k}{n}\big )}\le \sum _{j=0}^k\begin{pmatrix} n\\ j \end{pmatrix}\le 2^n-\frac{2^{nH\big (\frac{k}{n}\big )}}{\sqrt{8k\big (1-\frac{k}{n} \big )}}\quad \text {if } \frac{n}{2}\le k\le n, \end{aligned}$$
where \(\delta =0.98\) and H is the binary entropy function
$$\begin{aligned} H(x)=-x\log _2(x)-(1-x)\log _2(1-x), \end{aligned}$$
where \(0\log _2 0\) is taken to be 0. In addition, if \(n=2n'\) is even and \(0\le k\le n'\), we also have the following estimate (Lovász et al. 2003, Lemma 3.8.2)
$$\begin{aligned} \sum _{j=0}^{k-1}\begin{pmatrix} 2n'\\ j \end{pmatrix}\le 2^{2n'-1}\begin{pmatrix} 2n'\\ k \end{pmatrix}\Big /\begin{pmatrix} 2n'\\ n' \end{pmatrix}. \end{aligned}$$
We now apply Proposition 1 and Lemmas 2 and 3 to derive estimates for the probability that a d-player two-strategy evolutionary game has a certain number of internal equilibria. The main theorem of this section is the following.
Theorem 6
Suppose that the coefficients \(\{\beta _k\}\) in (5) are symmetrically distributed. Let \(p_m, 0\le m\le d-1,\) be the probability that the d-player two-strategy random game has m internal equilibria. Then the following assertions hold
Upper-bound for \(p_m\), for all \(0\le m\le d-1\),
$$\begin{aligned} p_m&\le \frac{1}{2^{d-1}}\sum \limits _{\begin{array}{c} j: j\ge m \\ j-m~\text {even} \end{array}}\begin{pmatrix} d-1\\ j \end{pmatrix}=\frac{1}{2^d}\left[ \sum _{j=0}^{d-1-m}\begin{pmatrix} d-1\\ j \end{pmatrix}+\begin{pmatrix} d-2\\ m-1 \end{pmatrix}\right] \end{aligned}$$
$$\begin{aligned}&\le {\left\{ \begin{array}{ll} \frac{1}{2^d}\left[ \delta 2^{(d-1)H\big (\frac{m}{d-1}\big )}+\begin{pmatrix} d-2\\ m-1 \end{pmatrix}\right] &{}\text {if}~~\frac{d-1}{2}< m\le d-1,\\ \frac{1}{2^d}\Bigg [2^{d-1}-\frac{2^{(d-1)H\big (\frac{m}{d-1}\big )}}{8m \big (1-\frac{m}{d-1}\big )} +\begin{pmatrix} d-2\\ m-1 \end{pmatrix}\Bigg ]&\text {if}~~0\le m\le \frac{d-1}{2}. \end{array}\right. } \end{aligned}$$
As consequences, \(0\le p_m\le \frac{1}{2}\) for all \(0\le m\le d-1\), \(p_{d-1}\le \frac{1}{2^{d-1}}\), \(p_{d-2}\le \frac{d-1}{2^{d-1}}\) and \(\lim \nolimits _{d\rightarrow \infty }p_{d-1}=\lim \nolimits _{d\rightarrow \infty }p_{d-2}=0\). In addition, if \(d-1=2 d'\) is even and \(0\le m\le d'\) then
$$\begin{aligned} p_m\le \frac{1}{2^d}\left[ 2^{d-2}\begin{pmatrix} d-1\\ m-1 \end{pmatrix}\Big /\begin{pmatrix} d-1\\ d' \end{pmatrix}+\begin{pmatrix} d-2\\ m-1 \end{pmatrix}\right] . \end{aligned}$$
Lower-bound for \(p_0\) and \(p_1\):
$$\begin{aligned} p_0\ge \frac{1}{2^{d-1}}\quad {and}\quad p_1\ge \frac{d-1}{2^{d-1}}. \end{aligned}$$
For \(d=2\): \(p_0=p_1=\frac{1}{2}\).
For \(d=3\): \(p_1=\frac{1}{2}\).
(a) This part is a combination of Descartes' rule of signs, Proposition 1 and Lemmas 2 and 3. In fact, as a consequence of this rule and by Proposition 1, we have
$$\begin{aligned} p_m\le \sum _{\begin{array}{c} j: j\ge m\\ j-m:~\text {even} \end{array}}p_{j,d-1}=\frac{1}{2^{d-1}}\sum _{\begin{array}{c} j: j\ge m\\ j-m:~\text {even} \end{array}}\begin{pmatrix} d-1\\ j \end{pmatrix}, \end{aligned}$$
which is the inequality part in (29). Next, applying Lemma 2 for \(k=m\) and \(n=d-1\) and then Lemma 3, we obtain
$$\begin{aligned}&\frac{1}{2^{d-1}}\sum _{\begin{array}{c} k: k\ge m\\ k-m:~ \text {even} \end{array}}\begin{pmatrix} d-1\\ k \end{pmatrix}\\&\quad = {\left\{ \begin{array}{ll}\frac{1}{2^{d}}\Bigg [ \sum \nolimits _{j=0}^{d-1-m}\begin{pmatrix} d-1\\ j \end{pmatrix}+(-1)^m\begin{pmatrix} d-2\\ m-1 \end{pmatrix}\Bigg ]&{}\text {if}~m~\text { is even} \\ \frac{1}{2^{d}}\Bigg [ \sum \nolimits _{j=0}^{d-1-m}\begin{pmatrix} d-1\\ j \end{pmatrix}-(-1)^m\begin{pmatrix} d-2\\ m-1 \end{pmatrix}\Bigg ]&\text {if}~m~\text { is odd} \end{array}\right. }\\&\quad =\frac{1}{2^d}\Bigg [ \sum \nolimits _{j=0}^{d-1-m}\begin{pmatrix} d-1\\ j \end{pmatrix}+\begin{pmatrix} d-2\\ m-1 \end{pmatrix}\Bigg ]\\&\quad \le {\left\{ \begin{array}{ll} \frac{1}{2^d}\Bigg [ \delta 2^{(d-1)H\big (\frac{m}{d-1}\big )}+\begin{pmatrix} d-2\\ m-1 \end{pmatrix}\Bigg ]&{}\text {if}~~\frac{d-1}{2} < m\le d-1,\\ \frac{1}{2^d}\Bigg [2^{d-1}-\frac{2^{(d-1)H\big (\frac{m}{d-1}\big )}}{8m \big (1-\frac{m}{d-1}\big )} +\begin{pmatrix} d-2\\ m-1 \end{pmatrix}\Bigg ]&\text {if}~~0\le m\le \frac{d-1}{2}. \end{array}\right. } \end{aligned}$$
This proves the equality part in (29) and (30). As a result, the estimate \(p_m\le \frac{1}{2}\) for all \(0\le m\le d-1\) follows from (29) and (24); the estimates \(p_{d-1}\le \frac{1}{2^{d-1}}\) and \(p_{d-2}\le \frac{d-1}{2^{d-1}}\) are special cases of (29) for \(m=d-1\) and \(m=d-2\), respectively.
Finally, the estimate (31) is a consequence of (29) and (28).
(b) It follows from Descartes' rule of signs and Proposition 1 that
$$\begin{aligned} p_0\ge p_{0,d-1}=\frac{1}{2^{d-1}}\quad \text {and}\quad p_{1}\ge p_{1,d-1}=\frac{d-1}{2^{d-1}}. \end{aligned}$$
(c) For \(d=2\): from parts (a) and (b) we have
$$\begin{aligned} \frac{1}{2}\le p_0,\quad p_1\le \frac{1}{2}, \end{aligned}$$
which implies that \(p_0=p_1=\frac{1}{2}\) as claimed.
(d) Finally, for \(d=3\): also from parts (a) and (b) we get
$$\begin{aligned} \frac{1}{2}\le p_1\le \frac{1}{2}, \end{aligned}$$
so \(p_1=\frac{1}{2}\). This finishes the proof of Theorem 6. \(\square \)
Note that in Theorem 6 we only assume that \(\beta _k\) are symmetrically distributed but do not require that they are normal distributions. When \(\{\beta _k\}\) are normal distributions, we have derived (Duong and Han 2015, 2016) a closed formula for the expected number E(d) of internal equilibria, which can be computed efficiently for large d. Since \(E(d)=\sum _{m=0}^{d-1}m p_m\), we have \(p_m\le E(d)/m\) for all \(1\le m\le d-1\). Therefore, when \(\{\beta _k\}\) are normal, we obtain an upper bound for \(p_m\) as the minimum between E(d) / m and the bound obtained in Theorem 6. The comparison of the new bounds with E(d) / m in Fig. 2 shows that the new ones do better for m closer to 0 or \(d-1\) but worse for intermediate m (i.e. closer to \((d-1)/2\)).
Estimates of \(p_m\): general case
In the proof of Proposition 1 the assumption that \(\{a_k\}\) are symmetrically distributed is crucial. In that case, all the \(2^n\) binary sequences constructed are equally distributed, resulting in a compact formula for \(p_{k,n}\). However, when \(\{a_k\}\) are not symmetrically distributed, those binary sequences are no longer equally distributed. Thus computing \(p_{k,n}\) becomes much more intricate. We now consider the general case where
$$\begin{aligned} \mathbf {P}(a_i>0)=\alpha ,~~\mathbf {P}(a_i<0)=1-\alpha \quad \text {for all}~ i=0,\ldots ,n. \end{aligned}$$
Note that the general case allows us to move beyond the usual assumption in the analysis of random evolutionary games that all payoff entries \(a_k\) and \(b_k\) have the same probability distribution, which results in \(\alpha = 1/2\) (see Lemma 1). The general case only requires that all \(a_k\) have the same distribution and all \(b_k\) have the same distribution, capturing the fact that different strategies, i.e. A and B in Sect. 2, might have different payoff properties (e.g., defectors always have a larger payoff than cooperators in a public goods game).
The main results of this section are Theorem 7 and Theorem 8. The former provides explicit formulas for \(p_{k,n}\) while the latter consists of several upper and lower bounds for \(p_m\). We will need several technical auxiliary lemmas whose proofs are given in Appendix 1. We start with the following proposition, which provides explicit formulas for \(p_{k,n}\) for \(k\in \{0,1,n-1,n\}\).
Proposition 2
The following formulas hold:
$$\begin{aligned}&\bullet \quad p_{0,n}=\alpha ^{n+1}+(1-\alpha )^{n+1},\quad p_{1,n}={\left\{ \begin{array}{ll} \frac{n}{2^n}&{}\text {if}~\alpha =\frac{1}{2},\\ 2\alpha (1-\alpha )\frac{(1-\alpha )^n-\alpha ^n}{1-2\alpha }&{}\text {if}~\alpha \ne \frac{1}{2}; \end{array}\right. } \\&\bullet \quad p_{n-1,n}={\left\{ \begin{array}{ll} n \alpha ^\frac{n}{2}(1-\alpha )^\frac{n}{2}&{}\text {if } n \text { even},\\ \alpha ^\frac{n+1}{2}(1-\alpha )^\frac{n+1}{2}\bigg [\frac{n+1}{2}\Big (\frac{\alpha }{1- \alpha }+\frac{1-\alpha }{\alpha }\Big )+(n-1)\bigg ]&{}\text {if }n \text { odd}; \end{array}\right. } \\&\bullet \quad p_{n,n}={\left\{ \begin{array}{ll} \alpha ^{\frac{n}{2}}(1-\alpha )^{\frac{n}{2}}&{}\text {if } n \text { is even},\\ 2 \alpha ^\frac{n+1}{2}(1-\alpha )^\frac{n+1}{2}&{}\text {if } n \text { is odd}. \end{array}\right. } \end{aligned}$$
In particular, if \(\alpha =\frac{1}{2}\), then \(p_{0,n}=p_{n,n}=\frac{1}{2^n}\text { and } p_{1,n}=p_{n-1,n}=\frac{n}{2^n}\).
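These extreme-case formulas are also easy to probe by simulation. The sketch below (an illustration only, assuming NumPy) draws signs with \(\mathbf {P}(a_i>0)=\alpha \) and compares the empirical frequencies of 0 and n sign changes with \(p_{0,n}\) and \(p_{n,n}\).

```python
import numpy as np

# Monte Carlo check of p_{0,n} and p_{n,n} for a biased sign distribution.
rng = np.random.default_rng(2)
alpha, n, trials = 0.3, 5, 200_000
signs = np.where(rng.random((trials, n + 1)) < alpha, 1.0, -1.0)   # P(a_i > 0) = alpha
changes = np.sum(np.diff(signs, axis=1) != 0, axis=1)
print(np.mean(changes == 0), alpha**(n + 1) + (1 - alpha)**(n + 1))                    # p_{0,n}
print(np.mean(changes == n), 2 * alpha**((n + 1) // 2) * (1 - alpha)**((n + 1) // 2))  # p_{n,n} (n odd)
```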
The computations of \(p_{k,n}\) for other k are more involved. We will employ combinatorial techniques and derive recursive formulas for \(p_{k,n}\). We define
$$\begin{aligned}&u_{k,n}=\mathbf {P}(\text {there are } k \text { variations of signs in}~\{a_0,\ldots ,a_n\}\big \vert a_{n}>0), \\&v_{k,n}=\mathbf {P}(\text {there are } k \text { variations of signs in}~\{a_0,\ldots ,a_n\}\big \vert a_{n}<0). \end{aligned}$$
We have the following lemma.
Lemma 4
The following recursive relations hold:
$$\begin{aligned} u_{k,n}=\alpha u_{k,n-1}+(1-\alpha )v_{k-1,n-1} \quad \text {and}\quad v_{k,n}=\alpha u_{k-1,n-1}+(1-\alpha )v_{k,n-1}.\qquad \end{aligned}$$
We can decouple the recursive relations in Lemma 4 to obtain recursive relations for \(\{u_{k,n}\}\) and \(v_{k,n}\) separately as follows:
Lemma 5
The following recursive relations hold
$$\begin{aligned}&u_{k,n} =\alpha (1-\alpha )(u_{k-2,n-2}-u_{k,n-2})+u_{k,n-1},\\&v_{k,n} =\alpha (1-\alpha )(v_{k-2,n-2}-v_{k,n-2})+v_{k,n-1}. \end{aligned}$$
Using the recursive equations for \(u_{k,n}\) and \(v_{k,n}\) we can also derive a recursive relation for \(p_{k,n}\).
Proposition 3
\(\{p_{k,n}\}\) satisfies the following recursive relation.
$$\begin{aligned} p_{k,n}=\alpha (1-\alpha )(p_{k-2,n-2}-p_{k,n-2})+p_{k,n-1}. \end{aligned}$$
Proposition 3 provides a second-order recursive relation for the probabilities \(\{p_{k,n}\}\). This relation resembles the well-known Chu–Vandermonde identity for binomial coefficients, \(\Big \{b_{k,n}:=\begin{pmatrix} n\\ k \end{pmatrix}\Big \}\), which states that, for \(0<m<n\),
$$\begin{aligned} b_{k,n}=\sum \limits _{j=0}^k \begin{pmatrix} m\\ j \end{pmatrix}b_{k-j,n-m}. \end{aligned}$$
Particularly for \(m=2\) we obtain
$$\begin{aligned} b_{k,n}&=b_{k,n-2}+2b_{k-1,n-2}+b_{k-2,n-2}\\&=b_{k-2,n-2}-b_{k,n-2}+2(b_{k,n-2}+b_{k-1,n-2})\\&=b_{k-2,n-2}-b_{k,n-2}+2b_{k,n-1}, \end{aligned}$$
where the last identity is Pascal's rule for binomial coefficients.
On the other hand, the recursive formula \(p_{k,n}\) for \(\alpha =\frac{1}{2}\) becomes
$$\begin{aligned} p_{k,n}=\frac{1}{4}(p_{k-2,n-2}-p_{k,n-2})+p_{k,n-1}. \end{aligned}$$
Using the transformation \(a_{k,n}:=2^n p_{k,n}\), as in the proof of Theorem 7 (where \(A=2\) when \(\alpha =\frac{1}{2}\)), this becomes
$$\begin{aligned} a_{k,n}=a_{k-2,n-2}-a_{k,n-2}+2a_{k,n-1}, \end{aligned}$$
which is exactly the Chu–Vandermonde identity for \(m=2\) above. Then it is no surprise that in Theorem 7 we obtain that \(a_{k,n}\) is exactly the same as the binomial coefficient \(a_{k,n}=\begin{pmatrix} n\\ k \end{pmatrix}\).
In the next main theorem we will find explicit formulas for \(\{p_{k,n}\}\) from the recursive formula in the previous lemma using the method of generating functions. The case \(\alpha =\frac{1}{2}\) will be a special one.
Theorem 7
\(p_{k,n}\) is given explicitly by: for \(\alpha =\frac{1}{2}\),
$$\begin{aligned} p_{k,n}=\frac{1}{2^n}\begin{pmatrix} n\\ k \end{pmatrix}. \end{aligned}$$
For \(\alpha \ne \frac{1}{2}\):
(i) if k is even, \(k=2k'\), then
$$\begin{aligned} p_{k,n}={\left\{ \begin{array}{ll} \sum \nolimits _{m=\lceil \frac{n}{2}\rceil }^n \frac{n-k+1}{2m-n+1}\begin{pmatrix} m\\ k',n-k'-m,2m-n \end{pmatrix}(-1)^{n-k'-m}(\alpha (1-\alpha ))^{n-m}&{}\\ &{}\text {if } n \text { even},\\ \sum \nolimits _{m=\lceil \frac{n}{2}\rceil }^n \frac{n-k+1}{2m-n+1}\begin{pmatrix} m\\ k',n-k'-m,2m-n \end{pmatrix}(-1)^{n-k'-m}(\alpha (1-\alpha ))^{n-m}&{}\\ \quad +\,2\begin{pmatrix} \lceil \frac{n-1}{2}\rceil \\ k' \end{pmatrix} (-1)^{\lceil \frac{n-1}{2}\rceil -k'+1} (\alpha (1-\alpha ))^{\frac{n+1}{2}}&\text {if } n \text { odd}; \end{array}\right. } \end{aligned}$$
(ii) if k is odd, \(k=2k'+1\), then
$$\begin{aligned} p_{k,n}=2\,\sum _{m=\lceil \frac{n-1}{2}\rceil }^n\begin{pmatrix} m\\ k', n-k'-m-1,2m-n+1 \end{pmatrix}(-1)^{n-k'-m-1} (\alpha (1-\alpha ))^{n-m}. \end{aligned}$$
Below we provide explicit formulas for \(\{p_{k,n}\}\) for \(0\le k\le n\le 4\):
$$\begin{aligned} \bullet \quad n=1{:}&\quad p_{0,1}=\alpha ^2+(1-\alpha )^2; \quad p_{1,1}=2\alpha (1-\alpha );\\ \bullet \quad n=2{:}&\quad p_{0,2}=\alpha ^3+(1-\alpha )^3, \quad p_{1,2}=2\alpha (1-\alpha ),\quad p_{2,2}=\alpha (1-\alpha );\\ \bullet \quad n=3{:}&\quad p_{0,3}=\alpha ^4+(1-\alpha )^4,~~p_{1,3}=2\alpha (1-\alpha )(\alpha ^2-\alpha +1),\\&\quad p_{2,3}=2\alpha (1-\alpha )(\alpha ^2-\alpha +1), \quad p_{3,3}=2\alpha ^2(1-\alpha )^2;\\ \bullet \quad n=4{:}&\quad p_{0,4}=\alpha ^5+(1-\alpha )^5,~~p_{1,4}=2\alpha (1-\alpha )(2\alpha ^2-2\alpha +1),\\&\quad p_{2,4}=3\alpha (1-\alpha )(2\alpha ^2-2\alpha +1),\quad p_{3,4}=4\alpha ^2(1-\alpha )^2,~~p_{4,4}=\alpha ^2(1-\alpha )^2. \end{aligned}$$
Direct computations verify the recursive formula for \(k=2,n=4\)
$$\begin{aligned} p_{2,4}=\alpha (1-\alpha )(p_{0,2}-p_{2,2})+p_{2,3}. \end{aligned}$$
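The recursion, together with the \(n=0\) and \(n=1\) rows, already determines every \(p_{k,n}\). The short Python sketch below (an illustration only; the base case \(p_{0,0}=1\) is Proposition 2 with \(n=0\)) reproduces, for instance, \(p_{2,4}\) from the list above.

```python
def p(k, n, alpha):
    """p_{k,n} from the recursion of Proposition 3 with the n = 0, 1 rows as base cases."""
    if k < 0 or k > n:
        return 0.0
    if n == 0:
        return 1.0                                # a single coefficient: no sign change
    if n == 1:
        return 2 * alpha * (1 - alpha) if k == 1 else alpha**2 + (1 - alpha)**2
    return alpha * (1 - alpha) * (p(k - 2, n - 2, alpha) - p(k, n - 2, alpha)) + p(k, n - 1, alpha)

alpha = 0.3
print(p(2, 4, alpha), 3 * alpha * (1 - alpha) * (2 * alpha**2 - 2 * alpha + 1))  # both ~ 0.3654
```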
We now apply Theorem 7 to the polynomial P in (5) to obtain estimates for \(p_m, 0\le m\le d-1\), which is the probability that a d-player two-strategy random evolutionary game has m internal equilibria. This theorem extends Theorem 6 for \(\alpha =1/2\) to the general case although we do not achieve an explicit upper bound in terms of d as in Theorem 6.
Theorem 8
The following assertions hold
Upper-bound for \(p_m\):
$$\begin{aligned} p_m\le \sum _{\begin{array}{c} j: j\ge m\\ j-m:~\text {even} \end{array}}p_{j,d-1}, \end{aligned}$$
where \(p_{k,d-1}\) can be computed explicitly according to Theorem 7 with n replaced by \(d-1\).
Lower-bound for \(p_0\): \(p_0\ge \alpha ^{d}+(1-\alpha )^{d}\ge \frac{1}{2^{d-1}}\).
Lower-bound for \(p_1\): \(p_1\ge {\left\{ \begin{array}{ll} \frac{d-1}{2^{d-1}}&{}\text {if}~\alpha =\frac{1}{2},\\ 2\alpha (1-\alpha )\frac{(1-\alpha )^{d-1}-\alpha ^{d-1}}{1-2\alpha } &{}\text {if}~\alpha \ne \frac{1}{2}. \end{array}\right. }\)
Upper-bound for \(p_{d-2}\):
$$\begin{aligned} p_{d-2}&\le {\left\{ \begin{array}{ll} (d-1) \alpha ^\frac{d-1}{2}(1-\alpha )^\frac{d-1}{2}&{}\text {if } d \text { odd},\\ \alpha ^\frac{d}{2}(1-\alpha )^\frac{d}{2}\bigg [\frac{d}{2}\Big (\frac{\alpha }{1-\alpha }+ \frac{1-\alpha }{\alpha }\Big )+(d-2)\bigg ]&{}\text {if } d \text { even},\end{array}\right. }\nonumber \\&\le \frac{d-1}{2^{d-1}}\quad \text {when}~d\ge 3. \end{aligned}$$
Upper-bound for \(p_{d-1}\):
$$\begin{aligned} p_{d-1}&\le {\left\{ \begin{array}{ll} \alpha ^{\frac{d-1}{2}}(1-\alpha )^{\frac{d-1}{2}}&{}\text {if } d \text { is odd},\\ 2 \alpha ^\frac{d}{2}(1-\alpha )^\frac{d}{2}&{}\text {if } d \text { is even}, \end{array}\right. }\\&\le \frac{1}{2^{d-1}}. \end{aligned}$$
As consequences:
For \(d=2\): \(p_0=\alpha ^2+(1-\alpha )^2\) and \(p_1=2\alpha (1-\alpha )\).
For \(d=3\), \(p_1=2\alpha (1-\alpha )\).
We will apply Descartes' rule of signs, Proposition 2 and Theorem 7 to the random polynomial (5). It follows from Descartes' rule of signs that
$$\begin{aligned} p_m\le \sum _{\begin{array}{c} j: j\ge m\\ j-m:~\text {even} \end{array}}p_{j,d-1}, \end{aligned}$$
where \(p_{k,d-1}\) is given explicitly in Theorem 7 with n replaced by \(d-1\). This proves the first statement. In addition, we can also deduce from Descartes' rule of signs and Proposition 2 the following estimates for special cases \(m\in \{0,1,d-2,d-1\}\):
$$\begin{aligned}&\bullet ~~p_0\ge p_{0,d-1}=\alpha ^d+(1-\alpha )^d\ge \min _{0\le \alpha \le 1}[\alpha ^d+(1-\alpha )^d]=\frac{1}{2^{d-1}};\\&\bullet ~~ p_1\ge p_{1,d-1}={\left\{ \begin{array}{ll} \frac{d-1}{2^{d-1}}&{}\text {if}~\alpha =\frac{1}{2},\\ 2\alpha (1-\alpha )\frac{(1-\alpha )^{d-1}-\alpha ^{d-1}}{1-2\alpha }&{} \text {if}~\alpha \ne \frac{1}{2}; \end{array}\right. }\\&\bullet ~~ p_{d-2}\le p_{d-2,d-1}={\left\{ \begin{array}{ll} (d-1) \alpha ^\frac{d-1}{2}(1-\alpha )^\frac{d-1}{2}&{}\text {if } d \text { odd},\\ \alpha ^\frac{d}{2}(1-\alpha )^\frac{d}{2}\bigg [\frac{d}{2}\Big (\frac{\alpha }{ 1-\alpha }+\frac{1-\alpha }{\alpha }\Big )+(d-2)\bigg ]&{}\text {if } d \text { even}, \end{array}\right. }\\&\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad = {\left\{ \begin{array}{ll} (d-1) (\alpha (1-\alpha ))^\frac{d-1}{2}&{}\text {if } d \text { odd},\\ \frac{d}{2}(\alpha (1-\alpha ))^{d/2-1}-2(\alpha (1-\alpha ))^{d/2}&{}\text {if } d \text { even}, \end{array}\right. } \\&\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \le {\left\{ \begin{array}{ll} (d-1)(1/4)^\frac{d-1}{2}=\frac{d-1}{2^{d-1}}&{}\text {if } d \text { odd},\\ \max _{0\le \beta \le \frac{1}{4}} f(\beta )=\frac{d-1}{2^{d-1}}&{}\text {if } d\ge 3 \text { even}; \end{array}\right. }\\&\text {where}, \beta \!:=\!\alpha (1-\alpha ),~~ f(\beta ):=\frac{d}{2}\beta ^{d/2-1}-2\beta ^{d/2}, \text { and to obtain the last inequality}\\&\text {we have used the fact that}~~0\le \beta =\alpha (1-\alpha )\le \frac{1}{4}~\text {and}\\&f'(\beta )=d\beta ^{d/2-2}\Big (\frac{d}{4}-\frac{1}{2}-\beta \Big )\ge 0~~\text {when}~~0\le \beta \le \frac{1}{4}~~\text {and}~~ d\ge 3.\\&\bullet ~~ p_{d-1}\le p_{d-1,d-1}={\left\{ \begin{array}{ll} \alpha ^{\frac{d-1}{2}}(1-\alpha )^{\frac{d-1}{2}}&{}\text {if } d \text { is odd},\\ 2 \alpha ^\frac{d}{2}(1-\alpha )^\frac{d}{2}&{}\text {if } d \text { is even}, \end{array}\right. }\\&\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \le {\left\{ \begin{array}{ll} (1/4)^\frac{d-1}{2}=\frac{1}{2^{d-1}}&{}\text {if } d \text { is odd},\\ 2(1/4)^\frac{d}{2}=\frac{1}{2^{d-1}} &{}\text {if } d \text { is even}. \end{array}\right. } \end{aligned}$$
These computations establish the estimates (ii)–(v) of the theorem. For the consequences, consider first \(d=2\): the above estimates (ii)–(v) respectively become
$$\begin{aligned}&p_0\ge \alpha ^2+(1-\alpha )^2, \quad p_1\ge {\left\{ \begin{array}{ll} \frac{1}{2}&{}\text {if}~ \alpha =\frac{1}{2},\\ 2\alpha (1-\alpha )&{}\text {if}~ \alpha \ne \frac{1}{2} \end{array}\right. }=2\alpha (1-\alpha ),\quad \text {and}\\&p_0\le \alpha (1-\alpha )\Big [\frac{\alpha }{1-\alpha }+\frac{1-\alpha }{\alpha }\Big ]=\alpha ^2+(1-\alpha )^2, \quad p_1\le 2\alpha (1-\alpha ), \end{aligned}$$
which imply that \(p_0=\alpha ^2+(1-\alpha )^2,\quad p_1=2\alpha (1-\alpha )\).
Similarly for \(d=3\), estimates (ii) and (iii) respectively become
$$\begin{aligned} p_1\ge {\left\{ \begin{array}{ll} \frac{1}{2}\quad \text {if}~~\alpha =\frac{1}{2},\\ 2\alpha (1-\alpha )\text {if}~~\alpha \ne \frac{1}{2} \end{array}\right. }=2\alpha (1-\alpha ),\quad \text {and}\quad p_1\le 2\alpha (1-\alpha ), \end{aligned}$$
from which we deduce that \(p_1=2\alpha (1-\alpha )\). \(\square \)
Numerical simulations
In this section, we perform several numerical (sampling) simulations and calculations to illustrate the analytical results obtained in previous sections. Figure 1 shows the values of \(\{p_m\}\) for \(d\in \{3,4,5\}\), for the three cases studied in Theorem 4, i.e., when \(\beta _k\) are i.i.d. standard normally distributed (GD), uniformly distributed (UD1) and when \(\beta _k=a_k-b_k\) with \(a_k\) and \(b_k\) being uniformly distributed (UD2). We compare results obtained from the analytical formulas in Theorem 4 with those obtained from samplings. The figure shows that they are in accordance with each other, agreeing to at least two decimal places. Figure 2 compares the new upper bound obtained in Theorem 6 with that of E(d) / m. The comparison indicates which formula should be used to obtain a stricter upper bound of \(p_m\).
Numerical versus simulation calculations of the probability of having a concrete number (m) of internal equilibria, \(p_m\), for different values of d. The payoff entries \(a_k\) and \(b_k\) were drawn from a normal distribution with variance 1 and mean 0 (GD) and from a standard uniform distribution (UD2). We also study the case where \(\beta _k = a_k - b_k\) itself is drawn from a standard uniform distribution (UD1). Results are obtained from analytical formulas (Theorem 2) (a) and are based on sampling \(10^6\) payoff matrices (b) where payoff entries are drawn from the corresponding distributions. Analytical and simulations results are in accordance with each other. All results are obtained using Mathematica
Comparison of the new upper bounds of \(p_m\) derived in Theorem 6 with that of E(d) / m: a for the bound in (36) and b for the bound in (37). Black areas indicate when the former ones are better and the grey areas otherwise. Clearly the bound in (a) is stricter/better than that of (b). For small d, the new bounds are better. When d is sufficiently large, we observe that for any d, the new bounds are worse than E(d) / m when m is intermediate while better otherwise. Overall, this comparison indicates which formulas should be used to obtain a stricter upper bound of \(p_m\)
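The sampling column of Figure 1 can be reproduced along the following lines. The sketch below is an illustration only: it assumes, following the authors' earlier work, that the polynomial \(P\) in (5) has the form \(P(y)=\sum _{k=0}^{d-1}\beta _k\begin{pmatrix} d-1\\ k \end{pmatrix}y^k\) (consistent with the integrand used in Sect. 3 for \(d=4\)), draws \(\beta _k\) as standard normals (the GD case), and counts positive real roots numerically with NumPy.

```python
import numpy as np
from math import comb

def sample_pm(d, trials=20_000, seed=3):
    """Estimate (p_0, ..., p_{d-1}) by sampling random games and counting positive roots."""
    rng = np.random.default_rng(seed)
    binoms = np.array([comb(d - 1, k) for k in range(d)])
    counts = np.zeros(d, dtype=int)
    for _ in range(trials):
        beta = rng.standard_normal(d)                 # GD case: beta_k ~ N(0, 1)
        coeffs = (beta * binoms)[::-1]                # np.roots expects highest degree first
        roots = np.roots(coeffs)
        n_pos = np.sum((np.abs(roots.imag) < 1e-8) & (roots.real > 0))
        counts[n_pos] += 1
    return counts / trials

# For d = 4 the estimates should be close to the values computed in Sect. 3:
# (p_0, p_1, p_2, p_3) ~ (0.277, 0.483, 0.223, 0.017).
print(sample_pm(4))
```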
Further discussions and future research
In this paper, we have provided closed-form formulas and universal estimates for the probability distribution of the number of internal equilibria in a d-player two-strategy random evolutionary game. We have explored further connections between evolutionary game theory and random polynomial theory as discovered in our previous works (Duong and Han 2015, 2016; Duong et al. 2017). We believe that the results reported in the present work open up a new exciting avenue of research in the study of equilibrium properties of random evolutionary games. We now provide further discussions on these issues and possible directions for future research.
Computations of probabilities \(\{p_m\}\). Although we have found analytical formulas for \(p_m\), it is computationally challenging to deal with them because of their complexity. Obtaining an effective computational method for \(\{p_m\}\) would be an interesting problem for future investigation.
Quantification of errors in the mean-field approximation theory (Schehr and Majumdar 2008). Consider a general polynomial \(\mathbf {P}\) as given in (6) with dependent coefficients, and let \(P_m([a,b],n)\) be the probability that \(\mathbf {P}\) has m real roots in the interval [a, b] (recall that n is the degree of the polynomial, which is equal to \(d -1\) in Equation (1)). The mean-field theory (Schehr and Majumdar 2008) neglects the correlations between the real roots and simply considers that these roots are randomly and independently distributed on the real axis with some local density f(t) at point t, with f(t) being the density that can be computed from the Edelman–Kostlan theorem (Edelman and Kostlan 1995). Within this approximation in the large n limit, the probability \(P_m([a, b],n)\) is given by a non-homogeneous Poisson distribution, see Schehr and Majumdar (2008, Section 3.2.2 and Equation (70)). By applying the mean-field theory one can approximate the probability \(p_m\) that a random d-player two-strategy evolutionary game has m internal equilibria by a simpler and computationally feasible formula. However, it is unclear to us how to quantify the errors of approximation. We leave this topic for future research.
Extensions to multi-strategy games. We have focused in this paper on random games with two strategies (with an arbitrary number of players). The analysis of games with more than two strategies is much more intricate since in this case one needs to deal with systems of multi-variate random polynomials. We have provided (Duong and Han 2015, 2016) a closed formula for the expected number of internal equilibria in multi-player multi-strategy games for the case of normal payoff entries. We aim to extend the present work to the general case in future publications. In particular, Descartes' rule of signs for multi-variate polynomials (Itenberg and Roy 1996) might be used to obtain universal estimates, regardless of the underlying payoff distribution.
Abel NH (1824) Mémoire sur les équations algébriques, où l'on démontre l'impossibilité de la résolution de l'équation générale du cinquiéme degré. Abel Ouvres 1:28–33
Axelrod R (1984) The evolution of cooperation. Basic Books, New York
Bloch A, Pólya G (1932) On the roots of certain algebraic equations. Proc Lond Math Soc S2–33(1):102
Broom M (2000) Bounds on the number of ESSs of a matrix game. Math Biosci 167(2):163–175
Broom M (2003) The use of multiplayer game theory in the modeling of biological populations. Comments Theor Biol 8:103–123
Broom M, Rychtář J (2013) Game-theoretical models in biology. CRC Press, Boca Raton
Broom M, Rychtář J (2016) Nonlinear and multiplayer evolutionary games. Springer, Cham, pp 95–115
Broom M, Cannings C, Vickers G (1997) Multi-player matrix games. Bull Math Biol 59(5):931–952
Butez R, Zeitouni O (2017) Universal large deviations for Kac polynomials. Electron Commun Probab 22, paper no. 6
Curtiss DR (1918) Recent extensions of Descartes' rule of signs. Ann Math 19(4):251–278
Duong MH, Han TA (2015) On the expected number of equilibria in a multi-player multi-strategy evolutionary game. Dyn Games Appl 6(3):324–346
Duong MH, Han TA (2016) Analysis of the expected density of internal equilibria in random evolutionary multi-player multi-strategy games. J Math Biol 73(6):1727–1760
Duong MH, Tran HM (2018) On the fundamental solution and a variational formulation for a degenerate diffusion of Kolmogorov type. Discrete Continuous Dyn Syst A 38:3407–3438
Duong M.H, Tran HM, Han TA (2017) On the expected number of internal equilibria in random evolutionary games with correlated payoff matrix. arXiv:1708.01672
Edelman A, Kostlan E (1995) How many zeros of a random polynomial are real? Bull Am Math Soc (NS) 32(1):1–37
Friedman D (1998) On economic applications of evolutionary game theory. J Evol Econ 8(1):15–43
Fudenberg D, Harris C (1992) Evolutionary dynamics with aggregate shocks. J Econ Theory 57(2):420–441
Galla T, Farmer JD (2013) Complex dynamics in learning complicated games. Proc Natl Acad Sci 110(4):1232–1236
Gokhale CS, Traulsen A (2010) Evolutionary games in the multiverse. Proc Natl Acad Sci USA 107(12):5500–5504
Gokhale CS, Traulsen A (2014) Evolutionary multiplayer games. Dyn Games Appl 4(4):468–488
Gottlieb L-A, Kontorovich A, Mossel E (2012) VC bounds on the cardinality of nearly orthogonal function classes. Discrete Math 312(10):1766–1775
Götze F, Koleda D, Zaporozhets D (2017) Joint distribution of conjugate algebraic numbers: a random polynomial approach. arXiv:1703.02289
Gross T, Rudolf L, Levin SA, Dieckmann U (2009) Generalized models reveal stabilizing factors in food webs. Science 325(5941):747–750
Haigh J (1988) The distribution of evolutionarily stable strategies. J Appl Probab 25(2):233–246
Haigh J (1990) Random polymorphisms and random evolutionarily stable strategies: a comparison. J Appl Probab 27(4):737–755
Han TA (2013) Intention recognition, commitments and their roles in the evolution of cooperation: from artificial intelligence techniques to evolutionary game theory models. Springer SAPERE series, vol 9. Springer, Berlin
Han TA, Traulsen A, Gokhale CS (2012) On equilibrium properties of evolutionary multi-player games with random payoff matrices. Theor Popul Biol 81(4):264–272
Han T, Pereira LM, Lenaerts T (2017) Evolution of commitment and level of participation in public goods games. Auton Agent Multi Agent Syst 31(3):561–583
Helbing D, Brockmann D, Chadefaux T, Donnay K, Blanke U, Woolley-Meza O, Moussaid M, Johansson A, Krause J, Schutte S et al (2015) Saving human lives: what complexity science and information systems can contribute. J Stat Phys 158(3):735–781
Hofbauer J, Sigmund K (1998) Evolutionary games and population dynamics. Cambridge University Press, Cambridge
Itenberg I, Roy M-F (1996) Multivariate Descartes' rule. Beitr Algebra Geom 37(2):337–346
Lovász L, Pelikán J, Vesztergombi K (2003) Discrete mathematics: elementary and beyond. Undergraduate texts in mathematics. Springer, New York
MacWilliams F, Sloane N (1977) The theory of error-correcting codes, North-Holland Mathematical Library. North-Holland, Amsterdam
May RM (2001) Stability and complexity in model ecosystems, vol 6. Princeton University Press, Princeton
Maynard Smith J (1982) Evolution and the theory of games. Cambridge University Press, Cambridge
Maynard Smith J, Price GR (1973) The logic of animal conflict. Nature 246:15–18
Nash JF (1950) Equilibrium points in n-person games. Proc Natl Acad Sci USA 36:48–49
Nowak MA (2006) Evolutionary dynamics. Harvard University Press, Cambridge
Pacheco JM, Santos FC, Souza MO, Skyrms B (2009) Evolutionary dynamics of collective action in n-person stag hunt dilemmas. Proc R Soc Lond B Biol Sci 276(1655):315–321
Peña J (2012) Group-size diversity in public goods games. Evolution 66(3):623–636
Peña J, Lehmann L, Nöldeke G (2014) Gains from switching and evolutionary stability in multi-player matrix games. J Theor Biol 346:23–33
Pennisi E (2005) How did cooperative behavior evolve? Science 309(5731):93–93
Perc M, Jordan JJ, Rand DG, Wang Z, Boccaletti S, Szolnoki A (2017) Statistical physics of human cooperation. Phys Rep 687:1–51
Perc M, Szolnoki A (2010) Coevolutionary games—a mini review. Biosystems 99(2):109–125
Sandholm WH (2010) Population games and evolutionary dynamics. MIT Press, Cambridge
Sasaki T, Chen X, Perc M (2015) Evolution of public cooperation in a monitored society with implicated punishment and within-group enforcement. Sci Rep 5:112
Schehr G, Majumdar S (2008) Real roots of random polynomials and zero crossing properties of diffusion equation. J Stat Phys 132(2):235–273
Schuster P, Sigmund K (1983) Replicator dynamics. J Theor Biol 100:533–538
Sigmund K (2010) The calculus of selfishness. Princeton University Press, Princeton
Souza MO, Pacheco JM, Santos FC (2009) Evolution of cooperation under n-person snowdrift games. J Theor Biol 260(4):581–588
Taylor PD, Jonker L (1978) Evolutionary stable strategies and game dynamics. Math Biosci 40:145–156
Tuyls K, Parsons S (2007) What evolutionary game theory tells us about multiagent learning. Artif Intell 171(7):406–416
Wang Z, Bauch CT, Bhattacharyya S, d'Onofrio A, Manfredi P, Perc M, Perra N, Salathé M, Zhao D (2016) Statistical physics of vaccination. Phys Rep 664:1–113
Zaporozhets DN (2006) On the distribution of the number of real zeros of a random polynomial. J Math Sci 137(1):4525–4530
Zeeman EC (1980) Population dynamics from game theory. In: Lecture Notes in Mathematics, vol 819, pp 471–497
This paper was written partly when M. H. Duong was at the Mathematics Institute, University of Warwick and was supported by ERC Starting Grant 335120. M. H. Duong and T. A. Han acknowledge Research in Pairs Grant (No. 41606) by the London Mathematical Society to support their collaborative research. We would like to thank Dr. Dmitry Zaporozhets for his useful discussions on Zaporozhets (2006) and Götze et al. (2017).
School of Mathematics, University of Birmingham, Birmingham, B15 2TT, UK
Manh Hong Duong
Data Analytics Department, Esmart Systems, 1783, Halden, Norway
Hoang Minh Tran
School of Computing, Media & the Arts, Teesside University, Middlesbrough, TS1 3BX, UK
The Anh Han
Correspondence to Manh Hong Duong.
In this appendix, we present proofs of technical results in previous sections.
Proof of Lemma 1
The probability distribution, \(f_Z\), of \(Z=X-Y\) can be found via the joint probability distribution \(f_{X,Y}\) as
$$\begin{aligned} f_{Z}(z)=\int _{-\infty }^{\infty } f_{X,Y}(x,x-z)\,dx=\int _{-\infty }^{\infty } f_{X,Y}(y+z,y)\,dy. \end{aligned}$$
Therefore, using the symmetry of \(f_{X,Y}\) we get
$$\begin{aligned} f_Z(-z)=\int _{-\infty }^{\infty } f_{X,Y}(x,x+z)\,dx=\int _{-\infty }^{\infty } f_{X,Y}(x+z,x)\,dx=f_Z(z). \end{aligned}$$
If X and Y are i.i.d with the common probability distribution f then
$$\begin{aligned} f_{X,Y}(x,y)=f(x)f(y), \end{aligned}$$
which is symmetric with respect to x and y, i.e., X and Y are exchangeable.
Proof of Proposition 1
We take the sequence of coefficients \((a_0,\ldots , a_{n})\) and move from the left starting from \(a_0\) to the right ending at \(a_{n}\). When there is a change of sign, we write a 1 and write a 0 when there is not. Then the changes of signs form a binary sequence of length n. There are \(2^{n}\) of them in total. Thereby \(p_{k,n}\) is the probability that there are exactly k 1s in the binary sequence. There are \(\begin{pmatrix} n\\ k \end{pmatrix}\) such sequences. Since \(\{a_k\}\) are independent and symmetrically distributed, each sequence has a probability \(\frac{1}{2^{n}}\) of occurring. From this we deduce (23).
Since \(\sum \nolimits _{j=0}^{n}\begin{pmatrix} n\\ j \end{pmatrix}(-1)^{j} =(1+(-1))^{n}=0\), we have
$$\begin{aligned} \sum _{j=k}^{n}\begin{pmatrix} n\\ j \end{pmatrix}(-1)^{j} =-\sum _{j=0}^{k-1}\begin{pmatrix} n\\ j \end{pmatrix}(-1)^{j}. \end{aligned}$$
According to Duong and Tran (2018, Lemma 5.4)
$$\begin{aligned} \sum _{j=0}^{k-1}\begin{pmatrix} n\\ j \end{pmatrix} (-1)^{j}=(-1)^{k-1}\begin{pmatrix} n-1\\ k-1 \end{pmatrix}. \end{aligned}$$
$$\begin{aligned} \sum _{j=k}^{n}\begin{pmatrix} n\\ j \end{pmatrix}(-1)^{j}=(-1)^{k}\begin{pmatrix} n-1\\ k-1 \end{pmatrix}, \end{aligned}$$
or equivalently:
$$\begin{aligned} \sum _{\begin{array}{c} j=k\\ j:~\text {even} \end{array}}^{n}\begin{pmatrix} n\\ j \end{pmatrix}-\sum _{\begin{array}{c} j=k\\ j:~ \text {odd} \end{array}}^{n}\begin{pmatrix} n\\ j \end{pmatrix} =(-1)^{k}\begin{pmatrix} n-1\\ k-1 \end{pmatrix}. \end{aligned}$$
Define \({\bar{S}}_{k,n}:=\sum \nolimits _{j=k}^{n}\begin{pmatrix} n\\ j \end{pmatrix}\) and \(S_{k,n}:=\sum \nolimits _{j=0}^{k}\begin{pmatrix} n\\ j \end{pmatrix}\). Then using the property that \(\begin{pmatrix} n\\ j \end{pmatrix}=\begin{pmatrix} n\\ n-j \end{pmatrix}\) we get \({\bar{S}}_{k,n}=S_{n-k,n}\) and
$$\begin{aligned}&\sum _{\begin{array}{c} j=k\\ j:\text {even} \end{array}}^{n}\begin{pmatrix} n\\ j \end{pmatrix}=\frac{1}{2}\left[ {\bar{S}}_{k,n}+(-1)^{k}\begin{pmatrix} n-1\\ k-1 \end{pmatrix}\right] =\frac{1}{2}\left[ S_{n-k,n}+(-1)^{k}\begin{pmatrix} n-1\\ k-1 \end{pmatrix}\right] ,\\&\sum _{\begin{array}{c} j=k\\ j: \text {odd} \end{array}}^{n}\begin{pmatrix} n\\ j \end{pmatrix}=\frac{1}{2}\left[ {\bar{S}}_{k,n}-(-1)^{k}\begin{pmatrix} n-1\\ k-1 \end{pmatrix}\right] =\frac{1}{2}\left[ S_{n-k,n}-(-1)^{k}\begin{pmatrix} n-1\\ k-1 \end{pmatrix}\right] . \end{aligned}$$
This finishes the proof of this lemma.
The four extreme cases \(k\in \{0,1,n-1,n\}\) are special because we can characterise explicitly the events that the sequence \(\{a_0,\ldots , a_n\}\) has k changes of signs. We have
$$\begin{aligned}&p_{0,n}=\mathbf {P}\Big \{a_0>0,\ldots ,a_n>0\} + \mathbf {P}\{a_0<0,\ldots , a_n<0)\Big \}\\&\qquad =\alpha ^{n+1}+(1-\alpha )^{n+1}.\\&p_{1,n}=\mathbf {P}\Big \{\cup _{k=0}^{n-1}\{a_0>0,\ldots a_k>0, a_{k+1}<0,\ldots , a_n<0\}\\&\qquad \qquad \cup \{a_0<0,\ldots a_k<0, a_{k+1}>0,\ldots , a_n>0\}\Big \}\\&\qquad =\sum _{k=0}^{n-1}\left( \alpha ^{k+1}(1-\alpha )^{n-k}+(1-\alpha )^{k+1}\alpha ^{n-k}\right) \\&\qquad =\alpha (1-\alpha )^n\sum _{k=0}^{n-1}\left( \frac{\alpha }{1-\alpha }\right) ^k+\alpha ^n(1-\alpha )\sum _{k=0}^{n-1}\left( \frac{1-\alpha }{\alpha }\right) ^k\\&\qquad ={\left\{ \begin{array}{ll} \frac{n}{2^n}&{}\text {if}~\alpha =\frac{1}{2},\\ \alpha (1-\alpha )^n\frac{1-\Big (\frac{\alpha }{1-\alpha }\Big )^n}{1-\frac{\alpha }{1-\alpha }}+\alpha ^n(1-\alpha )\frac{1-\Big (\frac{1-\alpha }{\alpha }\Big )^n}{1- \frac{1-\alpha }{\alpha }}&{}\text {if}~\alpha \ne \frac{1}{2} \end{array}\right. }\\&\qquad ={\left\{ \begin{array}{ll} \frac{n}{2^n}&{}\text {if}~\alpha =\frac{1}{2},\\ 2\alpha (1-\alpha )\frac{(1-\alpha )^n-\alpha ^n}{1-2\alpha }&{} \text {if}~\alpha \ne \frac{1}{2}. \end{array}\right. }\\&p_{n,n}=\mathbf {P}\Big \{\{a_0>0,a_1<0,\ldots , (-1)^n a_n>0\}\cup \{a_0<0,a_1>0,\ldots , (-1)^n a_n<0\} \Big \}\\&\qquad ={\left\{ \begin{array}{ll} \alpha ^{\frac{n+2}{2}}(1-\alpha )^{\frac{n}{2}}+(1-\alpha )^{\frac{n+2}{2}}\alpha ^{ \frac{n}{2}}&{}\text {if } n \text { is even},\\ 2 \alpha ^\frac{n+1}{2}(1-\alpha )^\frac{n+1}{2}&{}\text {if } n \text { is odd} \end{array}\right. }\\&\qquad ={\left\{ \begin{array}{ll} \alpha ^{\frac{n}{2}}(1-\alpha )^{\frac{n}{2}}&{}\text {if } n \text { is even},\\ 2 \alpha ^\frac{n+1}{2}(1-\alpha )^\frac{n+1}{2}&{}\text {if } n \text { is odd}. \end{array}\right. } \end{aligned}$$
It remains to compute \(p_{n-1,n}\).
$$\begin{aligned} p_{n-1,n}&=\sum _{k=0}^{n-1}\mathbf {P}\Big \{a_k~\text {and}~a_{k+1}~\text {have the same signs}~\text {and there are } n-1 \text { changes of signs in}\\&\quad ~ (a_0,\ldots ,a_k,a_{k+1},\ldots , a_n)\Big \}\\&=:\sum _{k=0}^{n-1}\gamma _{k}. \end{aligned}$$
We now compute \(\gamma _k\). This depends on the parity of n and k. If both n and k are even, then
$$\begin{aligned} \gamma _k&=\mathbf {P}\Big (a_0>0,a_1<0,\ldots , a_k>0, a_{k+1}>0,\ldots a_n<0\Big )\\&\quad +\mathbf {P}\Big (a_0<0,a_1>0,\ldots , a_k<0, a_{k+1}<0,\ldots a_n>0\Big )\\&=(1-\alpha )^\frac{n}{2}\alpha ^\frac{n+2}{2}+(1-\alpha )^{\frac{n+2}{2}}\alpha ^\frac{n}{2}. \end{aligned}$$
If n is even and k is odd, then
$$\begin{aligned} \gamma _k&=\mathbf {P}\Big (a_0>0,a_1<0,\ldots , a_k<0, a_{k+1}<0,\ldots a_n<0\Big )\\&\quad +\mathbf {P}\Big (a_0<0,a_1>0,\ldots , a_k>0, a_{k+1}>0,\ldots a_n>0\Big )\\&=\alpha ^\frac{n+2}{2}(1-\alpha )^\frac{n}{2}+(1-\alpha )^\frac{n+2}{2}\alpha ^\frac{n}{2}. \end{aligned}$$
Therefore, for either parity of k, when n is even we get
$$\begin{aligned} \gamma _k=\alpha ^\frac{n}{2}(1-\alpha )^\frac{n}{2}. \end{aligned}$$
From this we deduce \(p_{n-1,n}= n \alpha ^\frac{n}{2}(1-\alpha )^\frac{n}{2}\). Similarly if n is odd and k is even
$$\begin{aligned} \gamma _k&=\mathbf {P}\Big (a_0>0,a_1<0,\ldots , a_k>0, a_{k+1}>0,\ldots a_n>0\Big )\\&\quad +\mathbf {P}\Big (a_0<0,a_1>0,\ldots , a_k<0, a_{k+1}<0,\ldots a_n<0\Big )\\&=(1-\alpha )^{\frac{n+3}{2}}\alpha ^\frac{n-1}{2}+(1-\alpha )^\frac{n-1}{2}\alpha ^\frac{n+3}{2}. \end{aligned}$$
If both n and k are odd
$$\begin{aligned} \gamma _k&=\mathbf {P}\Big (a_0>0,a_1<0,\ldots , a_k<0, a_{k+1}<0,\ldots a_n>0\Big )\\&\quad +\mathbf {P}\Big (a_0<0,a_1>0,\ldots , a_k>0, a_{k+1}>0,\ldots a_n<0\Big )\\&=\alpha ^\frac{n+1}{2}(1-\alpha )^\frac{n+1}{2}+(1-\alpha )^\frac{n+1}{2}\alpha ^\frac{n+1}{2}. \end{aligned}$$
Then when n is odd, we obtain
$$\begin{aligned} p_{n-1,n}&=\frac{n+1}{2}\Big [(1-\alpha )^{\frac{n+3}{2}}\alpha ^\frac{n-1}{2}+(1-\alpha )^\frac{n-1}{2}\alpha ^\frac{n+3}{2}\Big ]+(n-1)\alpha ^\frac{n+1}{2}(1-\alpha )^\frac{n+1}{2}\\&=\alpha ^\frac{n+1}{2}(1-\alpha )^\frac{n+1}{2}\left[ \frac{n+1}{2}\left( \frac{\alpha }{1-\alpha }+\frac{1-\alpha }{\alpha }\right) +(n-1)\right] . \end{aligned}$$
$$\begin{aligned} p_{n-1,n}={\left\{ \begin{array}{ll} n \alpha ^\frac{n}{2}(1-\alpha )^\frac{n}{2}&{}\text {if } n \text { even},\\ \alpha ^\frac{n+1}{2}(1-\alpha )^\frac{n+1}{2}\bigg [\frac{n+1}{2}\Big ( \frac{\alpha }{1-\alpha }+\frac{1-\alpha }{\alpha }\Big )+(n-1)\bigg ]&{}\text {if } n \text { odd}. \end{array}\right. } \end{aligned}$$
Applying the law of total probability
$$\begin{aligned} \mathbf {P}(A|B)=\mathbf {P}(A|B,C)\mathbf {P}(C|B)+\mathbf {P}(A|B,\bar{C})\mathbf {P}(\bar{C}|B), \end{aligned}$$
$$\begin{aligned}&\mathbf {P}\Big (k~\text {sign switches in } \{a_0,\ldots ,a_n\}\big \vert a_{n}>0\Big )\\&\quad =\mathbf {P}\Big (k~\text { sign switches in}~\{a_0,\ldots , a_n\}\big \vert a_{n}>0, a_{n-1}>0)\mathbf {P}(a_{n-1}>0|a_{n}>0\Big )\\&\qquad +\mathbf {P}\Big (k~\text {sign switches in }\{a_0,\ldots ,a_n\}\big \vert a_{n}>0,a_{n-1}<0)\mathbf {P}(a_{n-1}<0|a_{n}>0\Big ). \end{aligned}$$
Since \(a_{n-1}\) and \(a_{n}\) are independent, we have \(\mathbf {P}(a_{n-1}>0\big \vert a_{n}>0)=\mathbf {P}(a_{n-1}>0)\) and \(\mathbf {P}(a_{n-1}<0\big \vert a_{n}>0)=\mathbf {P}(a_{n-1}<0)\). Therefore,
$$\begin{aligned}&\mathbf {P}\Big (k~\text { sign switches in}~\{a_0,\ldots ,a_n\}\big \vert a_{n}>0\Big )\\&\qquad = \mathbf {P}\Big (k~\text {sign switches in}~\{a_0,\ldots ,a_n\}\big \vert a_{n}>0,a_{n-1}>0\Big )\mathbf {P}(a_{n-1}>0)\\&\qquad \quad +\mathbf {P}\Big (k\text { sign switches in}~\{a_0,\ldots ,a_n\}\big \vert a_{n}>0,a_{n-1}<0\Big )\mathbf {P}(a_{n-1}<0)\\&\qquad =\mathbf {P}\Big (k~\text { sign switches in}~\{a_0,\ldots ,a_{n-1}\}\big \vert a_{n-1}>0\Big )\mathbf {P}(a_{n-1}>0)\\&\qquad \quad +\mathbf {P}\Big (k-1~\text {sign switches in}~\{a_0,\ldots ,a_{n-1}\}\big \vert a_{n-1}<0\Big )\mathbf {P}(a_{n-1}<0). \end{aligned}$$
Therefore we obtain the first relationship in (33). The second one is proved similarly.
From (33), it follows that
$$\begin{aligned} v_{k-1,n-1}=\frac{u_{k,n}-\alpha u_{k,n-1}}{1-\alpha },\quad v_{k,n-1}=\frac{u_{k+1,n}-\alpha u_{k+1,n-1}}{1-\alpha }. \end{aligned}$$
Substituting (35) into (33) we obtain
$$\begin{aligned} \frac{u_{k+1,n+1}-\alpha u_{k+1,n}}{1-\alpha }=\alpha u_{k-1,n-1}+(1-\alpha )\frac{u_{k+1,n}-\alpha u_{k+1,n-1}}{1-\alpha }, \end{aligned}$$
which implies that
$$\begin{aligned} u_{k+1,n+1}&=(1-\alpha )\alpha u_{k-1,n-1}+(1-\alpha )(u_{k+1,n}-\alpha u_{k+1,n-1})+\alpha u_{k+1,n} \\&=(1-\alpha )\alpha u_{k-1,n-1}-\alpha (1-\alpha )u_{k+1,n-1}+u_{k+1,n}. \end{aligned}$$
Re-indexing we get \( u_{k,n} =(1-\alpha )\alpha (u_{k-2,n-2}-u_{k,n-2})+u_{k,n-1}\). Similarly we obtain the recursive formula for \(v_{k,n}\).
From Lemmas 4 and 5 we have
$$\begin{aligned} p_{k,n}&=\alpha u_{k,n}+(1-\alpha )v_{k,n}\\&= \alpha [\alpha (1-\alpha )(u_{k-2,n-2}-u_{k,n-2})+u_{k,n-1}]\\&\quad +(1-\alpha )[\alpha (1-\alpha )(v_{k-2,n-2}-v_{k,n-2})+v_{k,n-1}]\\&= \alpha (1-\alpha )[\alpha (u_{k-2,n-2}-u_{k,n-2})+(1-\alpha )(v_{k-2,n-2}-v_{k,n-2})]\\&\quad +\alpha u_{k,n-1}+(1-\alpha )v_{k,n-1}\\&= \alpha (1-\alpha )(p_{k-2,n-2}-p_{k,n-2})+p_{k,n-1}. \end{aligned}$$
This finishes the proof.
Proof of Theorem 7
Set \(1/A^2:=\alpha (1-\alpha )\). By the Cauchy–Schwarz inequality \(\alpha (1-\alpha )\le \frac{(\alpha +1-\alpha )^2}{4}=\frac{1}{4}\), it follows that \(A^2\ge 4\). Define \(a_{k,n}:=A^n p_{k,n}\). Substituting this relation into (34) we get the following recursive formula for \(a_{k,n}\)
$$\begin{aligned} a_{k,n}=a_{k-2,n-2}-a_{k,n-2}+A a_{k,n-1}. \end{aligned}$$
According to Proposition 2
$$\begin{aligned} a_{0,n}&=A^n p_{0,n}=A^n\Big (\alpha ^{n+1}+(1-\alpha )^{n+1}\Big )=\alpha \left( \frac{\alpha }{1-\alpha }\right) ^\frac{n}{2}+(1-\alpha )\left( \frac{1- \alpha }{\alpha }\right) ^\frac{n}{2}, \end{aligned}$$
$$\begin{aligned} a_{1,n}&=A^n p_{1,n}={\left\{ \begin{array}{ll} n&{}\text {if}~~\alpha =\frac{1}{2},\\ \frac{2\alpha (1-\alpha )}{1-2\alpha }\Big [\big (\frac{1-\alpha }{ \alpha }\big )^\frac{n}{2}-\big (\frac{\alpha }{1-\alpha }\big )^\frac{n}{2}\Big ].&{} \end{array}\right. } \end{aligned}$$
Also \(a_{k,n}=0\) for \(k>n\). Let F(x, y) be the generating function of \(a_{k,n}\), that is
$$\begin{aligned} F(x,y):=\sum _{k=0}^\infty \sum _{n=0}^\infty a_{k,n}x^k y^n. \end{aligned}$$
Let \(g(x,y)\) denote the part of \(F\) collecting the \(k=0\) and \(k=1\) terms, that is
$$\begin{aligned} g(x,y)=\sum _{n=0}^\infty a_{0,n} y^n+\sum _{n=0}^\infty a_{1,n} x y^n. \end{aligned}$$
From (36) and (37) we have: for \(\alpha =\frac{1}{2}\)
$$\begin{aligned} g(x,y)=\sum _{n=0}^\infty y^n+xy\sum _{n=0}^\infty ny^{n-1}=\frac{1}{1-y}+xy\frac{d}{dy}\left( \frac{1}{1-y}\right) =\frac{1-y+xy}{(1-y)^2}, \end{aligned}$$
and for \(\alpha \ne \frac{1}{2}\)
$$\begin{aligned}&g(x,y)\\&\quad =\sum _{n=0}^\infty \left[ \alpha \left( \frac{\alpha }{1-\alpha }\right) ^\frac{n}{2}+(1-\alpha )\left( \frac{1-\alpha }{\alpha }\right) ^\frac{n}{2}\right] y^n+\frac{2\alpha (1-\alpha )x}{1-2\alpha } \sum _{n=1}^\infty \left[ \left( \frac{1-\alpha }{\alpha }\right) ^\frac{n}{2}-\left( \frac{\alpha }{1-\alpha }\right) ^\frac{n}{2}\right] \, y^n\\&\quad =\left[ \alpha -\frac{2\alpha (1-\alpha )x}{1-2\alpha }\right] \sum _{n=0}^\infty \left( \frac{\alpha }{1-\alpha }\right) ^\frac{n}{2}\,y^n+\left[ 1-\alpha +\frac{2\alpha (1-\alpha )x}{1-2\alpha }\right] \sum _{n=0}^\infty \left( \frac{1-\alpha }{\alpha }\right) ^\frac{n}{2}\, y^n\\&\quad =\left[ \alpha -\frac{2\alpha (1-\alpha )x}{1-2\alpha }\right] \sum _{n=0}^\infty (\alpha A)^n y^n+\left[ 1-\alpha +\frac{2\alpha (1-\alpha )x}{1-2\alpha }\right] \sum _{n=0}^\infty ((1-\alpha )A)^n y^n\\&\quad =\left[ \alpha -\frac{2\alpha (1-\alpha )x}{1-2\alpha }\right] \frac{1}{1-\alpha A y}+\left[ 1-\alpha +\frac{2\alpha (1-\alpha )x}{1-2\alpha }\right] \frac{1}{1-(1-\alpha )A y}\\&\quad =\frac{\left( \alpha (1-2\alpha )-2\alpha (1-\alpha )x\right) \left( 1-(1-\alpha )Ay\right) +\left( (1-\alpha )(1-2\alpha )+2\alpha (1-\alpha )x\right) \left( 1-\alpha Ay\right) }{(1-2\alpha )(1-\alpha y)(1-(1-\alpha )Ay)}\\&\quad =\frac{1-\frac{2y}{A}+\frac{2xy}{A}}{1-Ay+y^2}. \end{aligned}$$
Note that in the above computations we have the following identities
$$\begin{aligned} \frac{1}{A^2}\!=\!\alpha (1-\alpha ),\quad \frac{\alpha }{1-\alpha }\!=\!(\alpha A)^2,\quad \frac{1-\alpha }{\alpha }\!=\!(1-\alpha )^2A^2,\quad (1-\alpha Ay)(1-(1-\alpha )Ay)\!=\!1-Ay+y^2. \end{aligned}$$
Now we have
$$\begin{aligned} F(x,y)&=\sum _{k=0}^\infty \sum _{n=0}^\infty a_{k,n}x^k y^n\nonumber \\&=g(x,y)+\sum _{k=2}^\infty \sum _{n=2}^\infty (a_{k-2,n-2}-a_{k,n-2}+A a_{k,n-1})x^k y^n\nonumber \\&=g(x,y)+\sum _{k=2}^\infty \sum _{n=2}^\infty a_{k-2,n-2} x^k y^n-\sum _{k=2}^\infty \sum _{n=2}^\infty a_{k,n-2} x^k y^n+ A \sum _{k=2}^\infty \sum _{n=2}^\infty a_{k,n-1}x^k y^n \end{aligned}$$
$$\begin{aligned}&=g(x,y)+(I)+(II)+(III). \end{aligned}$$
We rewrite the sums (I), (II) and (III) as follow. For the first sum
$$\begin{aligned} (I)=\sum _{k=2}^\infty \sum _{n=2}^\infty a_{k-2,n-2} x^k y^n=x^2y^2\sum _{k=0}^\infty \sum _{n=0}^\infty a_{k,n} x^k y^n=x^2y^2 F(x,y). \end{aligned}$$
For the second sum
$$\begin{aligned} (II)=\sum _{k=2}^\infty \sum _{n=2}^\infty a_{k,n-2} x^k y^n&=\sum _{k=0}^\infty \sum _{n=2}^\infty a_{k,n-2} x^k y^n-\sum _{n=2}^\infty a_{0,n-2}y^n-\sum _{n=2}^\infty a_{1,n-2} x y^n\\&=y^2 \sum _{k=0}^\infty \sum _{n=0}^\infty a_{k,n} x^k y^n-y^2\sum _{n=0}^\infty a_{0,n}y^n-y^2\sum _{n=1}^\infty a_{1,n} xy^n\\&=y^2 (F(x,y)-g(x,y)). \end{aligned}$$
And finally for the last sum
$$\begin{aligned} (III)=\sum _{k=2}^\infty \sum _{n=2}^\infty a_{k,n-1} x^k y^n&=y(F(x,y)-g(x,y)). \end{aligned}$$
Substituting these sums back into (39) we get
$$\begin{aligned} F(x,y)\!=\!g(x,y)+x^2y^2 F(x,y)\!-\!y^2 (F(x,y)-g(x,y))\!+\!A y(F(x,y)-g(x,y)), \end{aligned}$$
$$\begin{aligned} F(x,y)=\frac{g(x,y)(1-Ay+y^2)}{(1-Ay+y^2-x^2y^2)}. \end{aligned}$$
For \(\alpha =\frac{1}{2}\), we get
$$\begin{aligned} F(x,y)&=\frac{1-y+xy}{(1-y)^2}\frac{(1-y)^2}{(1-y)^2-x^2y^2}=\frac{1}{1-y-xy}\\&=\sum _{n=0}^\infty (1+x)^n y^n\\&=\sum _{n=0}^\infty \sum _{k=0}^n \begin{pmatrix} n\\ k \end{pmatrix} x^k y^n, \end{aligned}$$
which implies that \(a_{k,n}=\begin{pmatrix} n\\ k \end{pmatrix}\). Hence for the case \(\alpha =\frac{1}{2}\), we obtain \(p_{k,n}=\frac{1}{2^n}\begin{pmatrix} n\\ k \end{pmatrix}\).
For the case \(\alpha \ne \frac{1}{2}\) we obtain
$$\begin{aligned} F(x,y)=\frac{1-\frac{2y}{A}+\frac{2xy}{A}}{1-Ay+y^2}\frac{1-Ay+y^2}{1-Ay+y^2-x^2y^2}=\frac{1-\frac{2y}{A}+\frac{2xy}{A}}{1-Ay+y^2-x^2y^2}. \end{aligned}$$
Finding the series expansion for this case is much more involved than the previous one. Using the multinomial theorem we have
$$\begin{aligned} \frac{1}{1-Ay+y^2-x^2y^2}&=\sum _{m=0}^\infty (x^2y^2-y^2+Ay)^m\\&=\sum _{m=0}^\infty ~\sum _{\begin{array}{c} 0\le i,j,l\le m\\ i+j+l=m \end{array}}\begin{pmatrix} m\\ i,j,l \end{pmatrix}(x^2y^2)^i(-y^2)^j(Ay)^l \\&=\sum _{m=0}^\infty ~\sum _{\begin{array}{c} 0\le i,j,l\le m\\ i+j+l=m \end{array}}\begin{pmatrix} m\\ i,j,l \end{pmatrix}(-1)^jA^l x^{2i}y^{2i+2j+l} \\&=\sum _{m=0}^\infty ~\sum _{\begin{array}{c} 0\le i,l\le m\\ i+l\le m \end{array}}\begin{pmatrix} m\\ i,m-i-l,l \end{pmatrix}(-1)^{m-i-l}A^l x^{2i}y^{2m-l}. \end{aligned}$$
$$\begin{aligned} F(x,y)&=\frac{1}{A}(A-2y+2xy)\sum _{m=0}^\infty ~\sum _{\begin{array}{c} 0\le i,l\le m\\ i+l\le m \end{array}}\begin{pmatrix} m\\ i,m-i-l,l \end{pmatrix}(-1)^{m-i-l}A^l x^{2i}y^{2m-l}\nonumber \\&=\sum _{m=0}^\infty ~\sum _{\begin{array}{c} 0\le i,l\le m\\ i+l\le m \end{array}}\begin{pmatrix} m\\ i,m-i-l,l \end{pmatrix}(-1)^{m-i-l}A^{l-1} \Big ( A x^{2i}y^{2m-l}-2x^{2i}y^{2m-l+1}\nonumber \\&\quad +2x^{2i+1}y^{2m-l+1}\Big ). \end{aligned}$$
From this we deduce that:
If k is even, \(k=2k'\), then to obtain the coefficient of \(x^ky^n\) on the right-hand side of (40), we select (i, m, l) such that
$$ \begin{aligned} (i=k'~ \& ~ 2m-l=n ~ \& ~0\le i, l\le m)\quad \text {or}\quad (i=k'~ \& ~ 2m-l+1=n~ \& ~0\le i, l\le m). \end{aligned}$$
Then we obtain
$$\begin{aligned} a_{k,n}&=\sum _{m=\lceil \frac{n}{2}\rceil }^n\begin{pmatrix} m\\ k',m-k'-(2m-n),2m-n \end{pmatrix}(-1)^{m-k'-(2m-n)}A^{2m-n} \\&\quad + 2\,\sum _{m=\lceil \frac{n-1}{2}\rceil }^n\begin{pmatrix} m\\ k',m-k'-(2m-n+1),2m-n+1 \end{pmatrix}(-1)^{m-k'-(2m-n+1)+1} A^{2m-n} \\&=\sum _{m=\lceil \frac{n}{2}\rceil }^n\begin{pmatrix} m\\ k',n-k'-m,2m-n \end{pmatrix}(-1)^{n-k'-m}A^{2m-n} \\&\qquad + 2\,\sum _{m=\lceil \frac{n-1}{2}\rceil }^n\begin{pmatrix} m\\ k', n-k'-m-1,2m-n+1 \end{pmatrix}(-1)^{n-k'-m} A^{2m-n} \\&={\left\{ \begin{array}{ll} \sum \nolimits _{m=\lceil \frac{n}{2}\rceil }^n\left[ \begin{pmatrix} m\\ k',n-k'-m,2m-n \end{pmatrix}+2\begin{pmatrix} m\\ k', n-k'-m-1,2m-n+1 \end{pmatrix}\right] \\ \quad \times (-1)^{n-k'-m}A^{2m-n}&{}\text {if } n \text { even},\\ \sum \nolimits _{m=\lceil \frac{n}{2}\rceil }^n\left[ \begin{pmatrix} m\\ k',n-k'-m,2m-n \end{pmatrix}+2\begin{pmatrix} m\\ k', n-k'-m-1,2m-n+1 \end{pmatrix}\right] \\ \qquad \times (-1)^{n-k'-m}A^{2m-n}+2\begin{pmatrix} \lceil \frac{n-1}{2}\rceil \\ k' \end{pmatrix}(-1)^{\lceil \frac{n-1}{2}\rceil -k'+1} A^{-1}&\text {if } n \text { odd} \end{array}\right. } \\&={\left\{ \begin{array}{ll} \sum \nolimits _{m=\lceil \frac{n}{2}\rceil }^n \frac{n-k+1}{2m-n+1}\begin{pmatrix} m\\ k',n-k'-m,2m-n \end{pmatrix}(-1)^{n-k'-m}A^{2m-n}&{}\text {if } n \text { even},\\ \sum \nolimits _{m=\lceil \frac{n}{2}\rceil }^n \frac{n-k+1}{2m-n+1}\begin{pmatrix} m\\ k',n-k'-m,2m-n \end{pmatrix}(-1)^{n-k'-m}A^{2m-n}\\ \quad +\,2\begin{pmatrix} \lceil \frac{n-1}{2}\rceil \\ k' \end{pmatrix}(-1)^{\lceil \frac{n-1}{2}\rceil -k'+1} A^{-1}&\text {if } n \text { odd}. \end{array}\right. } \end{aligned}$$
Similarly, if k is odd, \(k=2k'+1\), then to obtain the coefficient of \(x^ky^n\) on the right-hand side of (40), we select (i, m, l) such that
$$ \begin{aligned} (i=k'~ \& ~ 2m-l+1=n~ \& ~0\le i, l\le m), \end{aligned}$$
and obtain
$$\begin{aligned} a_{k,n}=2\,\sum _{m=\lceil \frac{n-1}{2}\rceil }^n\begin{pmatrix} m\\ k', n-k'-m-1,2m-n+1 \end{pmatrix}(-1)^{n-k'-m-1} A^{2m-n}. \end{aligned}$$
From \(a_{k,n}\) we compute \(p_{k,n}\) using the relations \(p_{k,n}=\frac{a_{k,n}}{A^n}\) and \(A^2=\frac{1}{\alpha (1-\alpha )}\) and obtain the claimed formulas. This finishes the proof of this theorem.
We can find \(a_{k,n}\) by establishing a recursive relation. We have
$$\begin{aligned} \frac{1}{F(x,y)}&=\frac{1-Ay+y^2-x^2y^2}{1-\frac{2y}{A}+\frac{2xy}{A}}\\&=-\frac{Axy}{2}-\frac{Ay}{2}+\frac{A^2}{4}+\frac{1-A^2/4}{1-\frac{2y}{A}+\frac{2xy}{A}}\\&=-\frac{Axy}{2}-\frac{Ay}{2}+\frac{A^2}{4}+(1-A^2/4)\sum _{n=0}^\infty \left( \frac{2y}{A}(1-x)\right) ^n\\&=-\frac{Axy}{2}-\frac{Ay}{2}+\frac{A^2}{4}+(1-A^2/4)\sum _{n=0}^\infty \left( \frac{2}{A}\right) ^n (1-x)^ny^n\\&=-\frac{Axy}{2}-\frac{Ay}{2}+\frac{A^2}{4}+(1-A^2/4)\sum _{n=0}^\infty \sum _{k=0}^n (-1)^k C_{k,n}\left( \frac{2}{A}\right) ^n x^ky^n\\&=1+\left( \frac{2}{A}-A\right) y-\frac{2}{A}xy+(1-A^2/4)\sum _{n=2}^\infty \sum _{k=0}^n (-1)^k C_{k,n}\left( \frac{2}{A}\right) ^n x^ky^n\\&=:\sum _{n=0}^\infty \sum _{k=0}^n b_{k,n}x^k y^n:=B(x,y). \end{aligned}$$
$$\begin{aligned}&b_{0,0}=1, \quad b_{0,1}=\frac{2}{A}-A, \quad b_{1,1}=-\frac{2}{A}\quad \text {and} \\&b_{k,n}=(1-A^2/4)(-1)^k C_{k,n}\Big (\frac{2}{A}\Big )^n \quad \text {for}~~0\le k \le n, n\ge 2. \end{aligned}$$
Using the relation that
$$\begin{aligned} F(x,y)B(x,y)=\left( \sum _{n=0}^\infty \sum _{k=0}^\infty a_{k,n} x^k y^n\right) \left( \sum _{n'=0}^\infty \sum _{k'=0}^\infty b_{k'n'} x^{k'} y^{n'}\right) =1, \end{aligned}$$
we get the following recursive formula to determine \(a_{K,N}\)
$$\begin{aligned} a_{0,0}=\frac{1}{b_{0,0}}=1, \quad a_{0,N}=-\sum _{n=0}^{N-1}a_{0,n}b_{0,N-n} , \quad a_{K,N}=-\sum _{k=0}^{K-1}\sum _{n=0}^{N-1} a_{k,n}b_{K-k,N-n}. \end{aligned}$$
It is not trivial to obtain an explicit formula from this recursive relation. However, it is easily implemented using computational software such as Mathematica or Matlab.
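For instance, the Python sketch below (an illustration only) inverts the product \(F(x,y)B(x,y)=1\) term by term, which is equivalent to the recursion above because \(b_{0,0}=1\) and \(b_{k,0}=0\) for \(k\ge 1\); it recovers the binomial law at \(\alpha =1/2\) and the small-n formulas listed in the main text.

```python
import numpy as np
from math import comb

def p_table(alpha, nmax):
    """p_{k,n} (row n, column k) obtained from the coefficients b_{k,n} of 1/F(x,y)."""
    A = 1.0 / np.sqrt(alpha * (1 - alpha))
    b = np.zeros((nmax + 1, nmax + 1))                # b[n, k]; zero whenever k > n
    b[0, 0] = 1.0
    if nmax >= 1:
        b[1, 0], b[1, 1] = 2 / A - A, -2 / A
    for n in range(2, nmax + 1):
        for k in range(n + 1):
            b[n, k] = (1 - A**2 / 4) * (-1)**k * comb(n, k) * (2 / A)**n
    a = np.zeros((nmax + 1, nmax + 1))                # a[n, k] = A^n p_{k,n}
    a[0, 0] = 1.0
    for N in range(nmax + 1):
        for K in range(N + 1):
            if (K, N) == (0, 0):
                continue
            s = sum(a[n, k] * b[N - n, K - k]
                    for n in range(N + 1) for k in range(K + 1) if (k, n) != (K, N))
            a[N, K] = -s                              # coefficient of x^K y^N in F*B must vanish
    return np.array([[a[n, k] / A**n for k in range(nmax + 1)] for n in range(nmax + 1)])

print(p_table(0.5, 4)[4])   # binomial row: [0.0625, 0.25, 0.375, 0.25, 0.0625]
print(p_table(0.3, 2)[2])   # [p_{0,2}, p_{1,2}, p_{2,2}] = [0.37, 0.42, 0.21]
```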
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Duong, M.H., Tran, H.M. & Han, T.A. On the distribution of the number of internal equilibria in random evolutionary games. J. Math. Biol. 78, 331–371 (2019). https://doi.org/10.1007/s00285-018-1276-0
Revised: 12 July 2018
Issue Date: 15 January 2019
Evolutionary game theory
Multi-player games
Random polynomials
Distributions of equilibria
1D-spiral
By Jens D.M. Rademacher
The picture is a space-time plot (time running downward) of a chemical concentration in a homogeneous one-dimensional medium. The arising pattern has been called a 'one-dimensional spiral': a self-organized source (in the homogeneous medium!) sends out pulses to left and right in an alternating fashion.
Parameters: \(a=0.84, b=0.15, 1/\epsilon=10.8\), domain length 400, Neumann boundary conditions. The numerics are first-order finite differences in space with explicit Euler time stepping.
The underlying equation is the two-component reaction-diffusion system
\begin{eqnarray} u_t &=& \frac{1}{\epsilon}\, u (u-1) \left(u- \frac{b+v}{a}\right) + u_{xx},\\ v_t &=& f(u) - v, \end{eqnarray}
$$f(u) = \begin{cases} 0, & 0 \leq u < 1/3,\\ 1-6.75\, u (u-1)^2, & 1/3 \leq u \leq 1,\\ 1, & 1 < u, \end{cases}$$
which has been derived to model aspects of CO-oxidation on a platinum surface, see [M. Bär et al, J. Chem. Phys. 100 (1994) 1202]. In the picture, red corresponds to low and blue to high concentration of the v-component.
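A minimal simulation sketch of the scheme described above (explicit Euler in time, the standard three-point Laplacian in space, zero-flux boundaries) is given below. It is an illustration only: the grid resolution, time step, initial data and run length are assumptions, NumPy is assumed, and the reaction term is written in the excitable form \(-\frac{1}{\epsilon}u(u-1)\big(u-\frac{b+v}{a}\big)\) (so that \(u=0\) is stable), the usual sign convention of the Bär model; reproducing the alternating source itself may require tuning the initial data.

```python
import numpy as np

a, b, inv_eps = 0.84, 0.15, 10.8          # parameters quoted above
L, N = 400.0, 800                         # domain length; grid size is an assumption
dx = L / N
dt = 0.05                                 # explicit Euler: keep dt < dx^2/2 and dt*inv_eps small

def f(u):                                 # the piecewise production term defined above
    out = np.ones_like(u)
    mid = (u >= 1/3) & (u <= 1.0)
    out[mid] = 1.0 - 6.75 * u[mid] * (u[mid] - 1.0)**2
    out[u < 1/3] = 0.0
    return out

def lap(u):                               # three-point Laplacian with zero-flux (Neumann) ends
    up = np.concatenate(([u[1]], u, [u[-2]]))
    return (up[2:] - 2.0 * up[1:-1] + up[:-2]) / dx**2

u = np.zeros(N); u[:20] = 1.0             # a localized excitation to launch a pulse (assumption)
v = np.zeros(N)
rows = []                                 # one stored row of v per plotting interval
for step in range(120_000):
    du = -inv_eps * u * (u - 1.0) * (u - (b + v) / a) + lap(u)
    dv = f(u) - v
    u, v = u + dt * du, v + dt * dv
    if step % 400 == 0:
        rows.append(v.copy())             # stack rows to obtain the space-time plot
```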
Author Institutional Affiliation Centre for Mathematics and Computer Science (CWI), Dept. Modelling Analysis and Simulation (MAS)
Author Postal Mail Kruislaan 413, 1098 SJ Amsterdam, the Netherlands
This simulation result has not been published. Similar results can be found in the work of M. Bär et al in the 90s.
Keywords reaction-diffusion system, one-dimensional spiral, self-replication
Categories: Media Gallery, Patterns and Simulations
Some Methods of Gradient Estimation Are Better Than Others
I want to share a plot which took me by surprise. In computer vision, a useful tool for estimating motion is image alignment. Commonly, one may wish to estimate the affine warp matrix which minimizes an error measure between the pixel values of two images. Below are two aligned images from the KITTI data set, which are demonstrative of forward motion.
An alternative, which I believe was pioneered by Semi-Direct Visual Odometry, is to align two images by estimating the six-parameter transform between the cameras. Usually these are the parameters of $(\mathfrak{se}(3)$), but they could also be three Euler angles plus an $(\langle x, y, z\rangle$) translation. Because this method essentially reprojects the first image into the second, the depth field also needs to be known a priori.
The mathematics involved is surprisingly simple. For a feature based system, several points of known depth will be chosen from the first image $(\mathrm{I}_1$). Around each point, a small rectangular patch $(\mathrm{P}$) of pixels will be extracted and then compared to the second image $(\mathrm{I}_2$) at an offset of $(\langle x, y\rangle$). The goal is for each patch to find an offset which minimizes the below sum.
\text{obj}(x, y) = \sum_{i,j}^{\text{patch}} (\mathrm{P}[i, j] - \mathrm{I}_2[i+x, j+y])^2
Since the objective function is the square of residuals, Gauss-Newton provides a formulation for an iterative update scheme
\mathbf{x}^{i+1} = \mathbf{x}^{i} - (\mathbf{J}^T \mathbf{J})^{-1}\mathbf{J}^T\mathbf{r}
where $(\mathbf{r}$) are the stacked pixel residuals and $(\mathbf{J} = \partial/\partial_{x, y}\mathbf{ r}$). The derivative of a residual,
\mathrm{P}[i, j] - \mathrm{I}_2[i+x, j+y]
simplifies to the derivative of the only part that depends on $(x, y$), namely $(\partial/\partial_{x, y}\mathrm{ I}_2$). This can be approximated by applying an edge operator, e.g. Sobel, to $(\mathrm{I}_2$) in each direction.
\begin{aligned}
\mathbf{r} &= \begin{pmatrix} \text{flatten}(\mathrm{P}[i, j] - \mathrm{I}_2[i+x, j+y]) \end{pmatrix}^T \\
\mathbf{J} &= \begin{pmatrix} -\text{flatten}(\partial/\partial_x \mathrm{I}_2) \\ -\text{flatten}(\partial/\partial_y \mathrm{I}_2) \end{pmatrix}^T
\end{aligned}
For the sake of completeness, it is worth mentioning that in practice $(\partial/\partial_{x, y}\mathrm{ P}$) is used in place of $(\partial/\partial_{x, y}\mathrm{ I}_2$). Although the maths is technically incorrect, since $(\mathrm{P}$) does not depend on $(x, y$), it turns out that locally during each iteration the role of the patch and the image can be reversed. The trick is to then do the opposite of whatever the Gauss-Newton update step would suggest. The advantages are that $(\partial/\partial_{x, y}\mathrm{ P}$) need only be computed once, and that using a fixed Jacobian seems to make the algorithm more stable.
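To make this concrete, here is a rough Python sketch of the single-patch case, using the fixed patch Jacobian and the reversed update just described. The function and variable names are placeholders of my own, $(x_0, y_0$) is taken to be the patch's top-left corner in $(\mathrm{I}_2$), and sub-pixel interpolation, bounds checks and convergence tests are all omitted.

```python
import numpy as np
from scipy import ndimage

def align_patch(P, I2, x0, y0, iters=20):
    # Fixed Jacobian from Sobel derivatives of the patch, computed once,
    # with P standing in for I2 as discussed above.
    h, w = P.shape
    Jx = -ndimage.sobel(P.astype(float), axis=1).ravel()
    Jy = -ndimage.sobel(P.astype(float), axis=0).ravel()
    J = np.stack([Jx, Jy], axis=1)                # shape (h*w, 2)
    pinv_J = np.linalg.pinv(J)                    # (J^T J)^{-1} J^T
    x, y = float(x0), float(y0)
    for _ in range(iters):
        xi, yi = int(round(x)), int(round(y))     # nearest-pixel sampling for brevity
        window = I2[yi:yi + h, xi:xi + w].astype(float)
        r = (P.astype(float) - window).ravel()
        dx, dy = pinv_J @ r
        x, y = x + dx, y + dy                     # reversed sign, per the fixed-Jacobian trick
    return x, y
```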
At any rate, the previous formulation is for a single patch. All patches can be combined into a single iteration as below.
\begin{bmatrix}
\mathbf{J}_1 & & & \\
\vdots & & & \\
& \mathbf{J}_2 & & \\
& \vdots & & \\
& & \ddots & \\
& & & \mathbf{J}_n \\
& & & \vdots
\end{bmatrix}
\begin{bmatrix} \Delta x_1 \\ \Delta y_1 \\ \vdots \\ \Delta x_n \\ \Delta y_n \end{bmatrix}
= \begin{bmatrix} \mathbf{r}_1 \\ \vdots \\ \mathbf{r}_n \end{bmatrix}
To convert from this form, which individually aligns each patch, to one which globally aligns all patches with respect to a transformation $(\mathbf{T}$), we apply the chain rule.
\mathbf{J}_\mathbf{T} = \frac{\partial\mathrm{I}_2}{\partial\mathbf{T}} = \frac{\partial\mathrm{I}_2}{\partial \langle x,y \rangle}\frac{\partial \langle x,y \rangle}{\partial\mathbf{T}}
Computing this is an exercise in projective geometry, but since our two images are separated predominantly by forward motion, let us further simplify things by instead only computing the image Jacobian with respect to this motion, which we will call $(\mathbf{J}_z$). We now have a function of a single variable, $(\text{ obj}(z)$), which can be plotted.
There is one last implementation detail, which is that the alignment algorithm is performed at multiple image scales. The smoother, lower curves show $(\text{obj}(z)$) plotted at reduced image sizes, where noise is spread across adjacent pixels. The least squares update, as computed using Gauss-Newton, is shown at the bottom.
Interestingly, despite multiple local minima, each estimated gradient crosses zero exactly once. This concludes the portion of the blog where I claim to know exactly what's going on.
I believe the key is that $(\mathbf{J}_z$) has been built using only estimates of image gradients, which have all been averaged together to compute $(\partial/\partial_z\text{ obj}(z)$). This has produced a gradient function which has been smoothed in a non-trivial way. The take-away is that this method of estimation seems substantially more robust than numerical differentiation.
It is important to be aware of this phenomenon because of how many root finding algorithms work. Some, for example Levenberg-Marquardt, will reject updates which worsen the objective, effectively becoming stuck in a local minimum despite a strong gradient away from the current value of $(z$).
Other algorithms will opt to numerically differentiate with respect to a subset of variables being optimized, even if a method of computing the Jacobian is provided by the user. The rationale is that numeric differentiation can be cheaper than computing the full Jacobian; however, this post has shown that in some cases the Jacobian may provide a better estimate of the gradient.
I was unable to get any off-the-shelf least squares solvers to find the correct minimum. I've currently settled for vanilla Gauss-Newton with a termination criterion for when the update direction reverses.
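The stopping rule itself is only a couple of lines. Here is a one-dimensional sketch; residuals and jacobian are placeholder callables returning numpy vectors, not code from the actual pipeline.

```python
import numpy as np

def gauss_newton_1d(residuals, jacobian, z0, max_iters=50):
    # residuals(z) returns the stacked pixel residuals as a numpy vector,
    # jacobian(z) returns d(residuals)/dz as a vector of the same length.
    z, prev_step = float(z0), None
    for _ in range(max_iters):
        r = np.asarray(residuals(z))
        J = np.asarray(jacobian(z))
        step = (J @ r) / (J @ J)            # scalar Gauss-Newton step
        if prev_step is not None and step * prev_step < 0:
            break                           # update direction reversed: terminate
        z -= step
        prev_step = step
    return z
```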
A New Class of Revealed Comparative Advantage Indexes
Jenny P. Danna-Buitrago ORCID: orcid.org/0000-0003-0241-94811 &
Rémi Stellian ORCID: orcid.org/0000-0002-1143-33762
Open Economies Review volume 33, pages 477–503 (2022)
This paper draws upon a critical analysis of the three RCA indexes in Vollrath (1991) to propose a new class of RCA indexes. The baseline RCA index in this new class rests on the overall structure of trade, is symmetric, avoids size bias and is compatible with the Kunimoto-Vollrath principle. Possible modifications of the baseline RCA index are subsequently suggested to take into account GDP per capita data and to use adjusted trade data with the aim of better measuring comparative advantages. These modified versions together with the baseline RCA index give rise to a whole new class of RCA indexes. An application to the Euro area indicates that this new class is able to rank countries according to their respective levels of comparative advantages in a more consistent way than alternative RCA indexes. Furthermore, the new class of RCA indexes provides second-best solutions for time stationarity and the desirable distributional characteristics of an RCA index.
The concept of comparative advantage is a cornerstone of economic theory. Since the seminal paper of Balassa (1965), comparative advantages have usually been measured by Revealed Comparative Advantage (RCA) indexesFootnote 1. RCA indexes are computed on the basis of trade data and provide synthetic measures of comparative advantages (Danna-Buitrago 2017). According to (French 2017) p.83 "the concept is simple but powerful: if, according to Ricardian trade theory, differences in relative productivity determine the pattern of trade, then the (observable) pattern of trade can be used to infer (unobservable) differences in relative productivity". However, the appropriate way to use trade data to compute an RCA index is still under debate (Liu and Gao 2019).
In this regard, here a new class of RCA indexes is proposed with the aim of improving the measurement of comparative advantages. Our starting point is a critical analysis of the three RCA indexes proposed by Vollrath (1991), which is a reference point in the literature on RCA indexes (among the most recent citations of Vollrath (1991), see for example:Jambor and Babu (2016); Benesova et al. (2017); Brakman and Van Marrewijk (2017); Deb and Hauk (2017); French (2017); Sawyer et al. (2017); Seleka and Kebakile (2017); Algieri et al. (2018); Cai et al. (2018); Grundke and Moser (2019); Liu and Gao (2019); Saki et al. (2019); Yazdani and Pirpour (2020)). We then suggest an RCA index that overcomes drawbacks identified in Thomas Vollrath's RCA indexes. Thereafter, we propose two modifications of the new RCA index that take into account GDP per capita data in addition to trade data or use adjusted trade data instead of "raw" trade data. Therefore, the new RCA index is a baseline index to which different modifications can be applied, giving rise to a new class of RCA indexes. Furthermore, the new class is applied to the nineteen countries that form the Euro area to evaluate whether it provides better measures of comparative advantages than alternative RCA indexes in a given empirical case.
The remainder of this paper is organized as follows. Section 1 presents Thomas Vollrath's RCA indexes. Section 2 points out drawbacks of these RCA indexes and elaborates the aforementioned new RCA index to provide solutions to these drawbacks. Section 3 describes possible modifications of this RCA index in relation to GDP per capita data and adjusted trade data. Section 4 provides the empirical evaluation in the case of the Euro area. Concluding remarks are given in Sect. 5.
An Overview of Thomas Vollrath's RCA Indexes
Vollrath (1991) conceptualizes three RCA indexes: the Relative Trade Advantage (RTA) index, the Relative Export Advantage (REA) index and the Revealed Competitiveness (RC) index (see also Vollrath (1987; 1989)). Let J be a set of countries (the "trade area", i.e. the world or the members of some regional trade agreement), K a set of commodities, and T a set of time periods. \(X_{ikt}\) denotes the exports of commodity \(k\in K\) by country \(i\in J\) toward the other countries in J in time period t. Thereafter:
\(X_{i\mathcal {K}t}\) denotes the exports of all commodities except k by i in t; that is, \(X_{i\mathcal {K}t}=\sum _{l\in \mathcal {K}}X_{ilt}\), where \(\mathcal {K}=K\setminus \{k\}\).
\(X_{\mathcal {J}kt}\) represents the exports of k by all countries except i in t; that is, \(X_{\mathcal {J}kt}=\sum _{j\in \mathcal {J}}X_{jkt}\), where \(\mathcal {J}=J\setminus \{i\}\).
Lastly, we write as \(X_{\mathcal {J}\mathcal {K}t}\) the exports of all commodities except k by all countries except i in t; that is, \(X_{\mathcal {J}\mathcal {K}t}=\sum _{j\in \mathcal {J}}\sum _{l\in \mathcal {K}}X_{jlt}\).
In addition, let \(M_{ikt}\), \(M_{i\mathcal {K}t}\), \(M_{\mathcal {J}kt}\) and \(M_{\mathcal {J}\mathcal {K}t}\) be the same types of variables defined for imports. Lastly, \(\text {RTA}_{ikt}\), \(\text {REA}_{ikt}\) and \(\text {RC}_{ikt}\) denote the RTA, REA and RC indexes associated with (i, k, t), respectivelyFootnote 2. Thereafter:
$$\begin{aligned} \left\{ \begin{array}{l} \text {RTA}_{ikt}=\text {RXA}_{ikt}-\text {RMA}_{ikt} \\ \text {with } \text {RXA}_{ikt}=\dfrac{X_{ikt}/X_{i\mathcal {K}t}}{X_{\mathcal {J}kt}/X_{\mathcal {J}\mathcal {K}t}} \text { and } \text {RMA}_{ikt}=\dfrac{M_{ikt}/M_{i\mathcal {K}t}}{M_{\mathcal {J}kt}/M_{\mathcal {J}\mathcal {K}t}}\\ \text {REA}_{ikt}=\ln \left( \text {RXA}_{ikt}\right) \\ \text {RC}_{ikt}=\ln \left( \text {RXA}_{ikt}\right) -\ln \left( \text {RMA}_{ikt}\right) \end{array}\right. \end{aligned}$$
The RTA index computes the value of \(X_{ikt}\) normalized by \(X_{i\mathcal {K}t}\), which is the exports of k by i normalized by the exports of products other than k by i. Similarly, the RTA index computes the value of \(X_{\mathcal {J}kt}\) normalized by \(X_{\mathcal {J}\mathcal {K}t}\), which is the exports of k by the countries other than i normalized by the exports of products other than k by the countries other than i. The normalized values of \(M_{ikt}\) and \(M_{\mathcal {J}kt}\) are calculated in the same way. If the normalized value of \(X_{ikt}\) is greater than the normalized value of \(X_{\mathcal {J}kt}\), then i has a higher propensity to export k than the other countries. This could be seen as the consequence of comparative advantages. Therefore, the ratio of \(X_{ikt}/X_{i\mathcal {K}t}\) to \(X_{\mathcal {J}kt}/X_{\mathcal {J}\mathcal {K}t}\), which is named the ratio of relative export advantage (RXA), is greater than 1. However, the normalized value of \(M_{ikt}\) may be greater than the normalized value of \(M_{\mathcal {J}kt}\). Furthermore, the difference between the normalized value of \(M_{ikt}\) and the normalized value of \(M_{\mathcal {J}kt}\) may be greater than the corresponding difference in exports. If so, the ratio of \(M_{ikt}/M_{i\mathcal {K}t}\) to \(M_{\mathcal {J}kt}/M_{\mathcal {J}\mathcal {K}t}\), which is named the ratio of relative import advantage (RMA), will be greater than the RXA ratio, and there should not exist comparative advantages for i even if \(\text {RXA}_{ikt}>1\).
Following the logic of the RTA index, i has comparative advantages for k in t if \(\text {RXA}_{ikt}>\text {RMA}_{ikt}\). Eventually, the RTA index is calculated as the difference between the RXA ratio and the RMA ratio, so the inequality \(\mathrm{RTA}_{ikt}>0\) reveals comparative advantages, whereas the inequality \(\mathrm{RTA}_{ikt}<0\) reveals comparative disadvantages.
Note that the inequality \(\text {RTA}_{ikt}>0\) may be implied not only by \(\text {RXA}_{ikt}>\text {RMA}_{ikt}>1\) (as mentioned before) but also by \(1>\text {RXA}_{ikt}>\text {RMA}_{ikt}\). The RTA index may reveal comparative advantages even if the normalized value of exports of k by i is smaller than the normalized value of exports of k by the countries different from i, provided that the corresponding RXA ratio is greater than the RMA ratio. Each ratio separately suggests the existence of comparative advantages or disadvantages through the comparison with their "neutral" value, which is equal to 1. An RXA ratio greater (less) than 1 suggests the existence of comparative advantages (disadvantages), whereas an RMA ratio greater (less) than 1 suggests the existence of comparative disadvantages (advantages). However, calculating the RXA and RMA ratios is only the first step. The second step is to compare the two ratios. If the RXA ratio is greater than 1, the RTA index implies the existence of comparative advantages only if the RMA ratio is smaller than the RXA ratio. Similarly, if the RXA ratio is less than 1, the RTA index implies the existence of comparative disadvantages only if the RXA ratio is smaller than the RMA ratio. The RTA index implies the existence of comparative advantage on the basis of the RXA ratio relative to the RMA ratio. The RXA and RMA ratios have their own neutral values, i.e. 1. Thus for the RTA index, which calculates the difference between the two ratios, this neutral value becomes zero.
The RC index calculates the difference between the respective logarithms of each ratio, and the REA index is the log of the first ratio. According to Vollrath (1989), the use of logarithms is intended to ease the interpretation of the RXA and RMA ratios. Before comparison with the RMA ratio, the RXA ratio suggests the existence of comparative advantages if its value is greater than 1 and comparative disadvantages if its value belongs to the interval [0, 1). Conversely, the RMA ratio suggests the existence of comparative advantages if its value belongs to the interval [0, 1) and comparative disadvantages if its value belongs to the interval \((1,+\infty )\) (before being compared with the RXA ratio). Therefore, the interval associated with comparative advantages does not have the same length as the interval associated with comparative disadvantages. Using logarithms is a solution to this "asymmetry" because the interval [0, 1) is converted into \((-\infty ,0)\) and the interval \((1,+\infty )\) is converted into \((0,+\infty )\). As a result, the RXA and RMA ratios are "symmetric" around zero. Eventually, as for the RTA index, a positive value of the RC/REA index reveals comparative advantages, and a negative value reveals comparative disadvantages.
Drawbacks of Thomas Vollrath's RCA Indexes and Their Solutions
The three RCA indexes suffer from some drawbacks. First, the REA index ignores imports even though, like the RTA and RC indexes, using both export and import data makes it possible to "embody both the relative demand and relative supply dimensions \((\cdots )\)" of comparative advantages and therefore remain "consistent with the real world phenomenon of two-way trade" (Vollrath 1991, p. 276; see also Giraldo and Jaramillo 2018). According to Vollrath (1987), ignoring imports might be necessary because of the "noncomparability between import and export data which arises because the former contains certain handling, transportation, and spoilage costs not embedded into the latter" (p. 20). However, given that the exports of some countries are the imports of other countries, it is possible to deduce import data from export data or vice versa, so that exports and imports can be expressed in a homogeneous way. Furthermore, Vollrath (1987) suggests that "handling, transportation, and spoilage costs are small relative to the value of traded commodities" (p. 20), so the corresponding bias is unlikely to be significant.
Consequently, the RTA and RC indexes should be preferred to the REA index. Nevertheless, the RTA and RC indexes face numeric exceptions. The first numeric exception is division by zero, which occurs if \(X_{i\mathcal {K}t}=0\) or \(M_{i\mathcal {K}t}=0\), i.e. the countries other than i do not export or import k. As a result, it is impossible to calculate \(X_{ikt}/X_{i\mathcal {K}t}\) or \(M_{ikt}/M_{i\mathcal {K}t}\), and the RTA and RC indexes are left undefined even though there should be a measure of comparative advantages if i is the sole exporter/importer of k. With a lower commodity aggregation, commodities are more specific, so the likelihood of \(X_{i\mathcal {K}t}=0\) or \(M_{i\mathcal {K}t}=0\) is higher. Similarly, a smaller trade area implies a higher likelihood of \(X_{i\mathcal {K}t}=0\) or \(M_{i\mathcal {K}t}=0\). According to Vollrath (1991), the interest in removing exports and imports associated with i and/or k is to "make clear distinctions between a specific commodity and all other commodities and between a specific country and the rest of the world, eliminating country and commodity double counting in world trade" (p. 276). Nonetheless, this may prevent the calculation of the RTA and RC indexes.
In the case of the RC index, another numeric exception is the log of zero. Even if \(X_{i\mathcal {K}t}\ne 0\) and \(M_{i\mathcal {K}t}\ne 0\), \(X_{ikt}=0\) or \(M_{ikt}=0\) is possible, which means that i does not export or import k. Consequently, the log of the RXA ratio or the RMA ratio cannot be calculated, and once again, the RC index is left undefined. This also applies to the REA index, which is the log of the RXA ratio. In addition, Vollrath (1991) notes that the log implies that the RC and REA indexes are characterized by an "extreme sensitivity to small values of exports or imports of the specified commodity" (p. 277). Indeed, small values of \(X_{ikt}\) and \(M_{ikt}\) lead to small values of \(\text {RXA}_{ikt}\) and \(\text {RMA}_{ikt}\), respectively. In turn, these small values of \(\text {RXA}_{ikt}\) and \(\text {RMA}_{ikt}\) lead to large negative values of \(\ln \left( \text {RXA}_{ikt}\right)\) and \(\ln \left( \text {RMA}_{ikt}\right)\), which might distort the measurement of comparative advantages.
To overcome the aforementioned drawbacks, we first suggest preserving the exports/imports associated with k and/or i when exports/imports are aggregated across products and/or countriesFootnote 3. Put differently, exports/imports are added up across K instead of \(\mathcal {K}\) and/or J instead of \(\mathcal {J}\). Consequently:
\(X_{iKt}=\sum _{l \in K}X_{ilt}\) substitutes for \(X_{i\mathcal {K}t}\) (where \(\mathcal {K}=K\setminus \{k\}\));
\(X_{Jkt}=\sum _{j \in J}X_{jkt}\) substitutes for \(X_{\mathcal {J}kt}\) (where \(\mathcal {J}=J\setminus \{i\}\));
\(X_{JKt}=\sum _{j \in J}\sum _{l \in K}X_{jlt}\) substitutes for \(X_{\mathcal {J}\mathcal {K}t}\);
The same substitutions apply to import data.
Second, we suggest using \((x-1)/(x+1)\) as the approximation of \(\ln (x)\) around 1 (see Fig. 1) because this approximation is defined even if \(x=0\) and maintains the symmetry around zero (Dalum et al. 1998; Laursen 2015). In addition, this approximation is lower bounded by -1 and therefore avoids large negative values of the log. The approximation \((x-1)/(x+1)\) implies that the interval revealing comparative advantages is (0, 1] instead of \((0,+\infty )\) for the RXA ratio and that the interval revealing comparative disadvantages is \([-1,0)\) instead of \((-\infty ,0)\); the converse is true for the RMA ratio. Consequently, \(\text {RTA}'\), \(\text {REA}'\) and \(\text {RC}'\) are the modified versions of RTA, REA and RC, respectively:
$$\begin{aligned} \left\{ \begin{array}{l} \text {RTA}'_{ikt}=\text {BX}_{ikt}-\text {BM}_{ikt} \\ \text {with } \text {BX}_{ikt}=\dfrac{X_{ikt}/X_{iKt}}{X_{Jkt}/X_{JKt}} \text { and } \text {BM}_{ikt}=\dfrac{M_{ikt}/M_{iKt}}{M_{Jkt}/M_{JKt}}\\ \text {REA}'_{ikt}=\dfrac{\text {BX}_{ikt}-1}{\text {BX}_{ikt}+1}\\ \text {RC}'_{ikt}=\dfrac{\text {BX}_{ikt}-1}{\text {BX}_{ikt}+1}-\dfrac{\text {BM}_{ikt}-1}{\text {BM}_{ikt}+1} \end{array}\right. \end{aligned}$$
Fig. 1: \(\ln (x)\) and \((x-1)/(x+1)\)
Each index embodies the ratio of \(X_{ikt}/X_{iKt}\) to \(X_{Jkt}/X_{JKt}\), which is the standard RCA index à la Balassa (1965), hereafter referred to as the BX ratio. In addition, the \(\text {REA}'\) index corresponds to the "symmetric" version of the BX index elaborated by Dalum et al. (1998). The ratio of \(M_{ikt}/M_{iKt}\) to \(M_{Jkt}/M_{JKt}\) is the import-equivalent of the BX ratio and is referred to as the BM ratio. The \(\text {RC}'\) index applies the symmetric transformation suggested by Dalum et al. (1998) to both the BX and BM ratios. The \(\text {RTA}'\) index ranges from \(-\infty\) to \(+\infty\), the \(\text {REA}'\) index ranges from -1 to 1, and the \(\text {RC}'\) index ranges from -2 to 2. For the three indexes, zero is the neutral value that reveals the absence of comparative advantages and disadvantages.
By using J instead of \(\mathcal {J}\) and K instead of \(\mathcal {K}\), the measurement of comparative advantages is no longer based on a comparison of the exports/imports of k by i normalized by the exports/imports of products other than k by i with the exports/imports of k by the countries other than i normalized by the exports/imports of products other than k by the countries other than i. Rather, BX and BM measure comparative advantages by comparing the share of k in i's exports/imports in t with the same share at the level of J.
It is possible that \(X_{Jkt}=0\), which is equivalent to \(M_{Jkt}=0\) and indicates that no country exports k and logically no country imports k. In this case, the BX and BM ratios cannot be calculated due to the division by zero. Nonetheless, this numeric exception can be solved. Indeed, if no country exports/imports k, then no country should have comparative advantages or disadvantages. Consequently, the BX and BM ratios should be set to 1, which is their neutral value, without any further calculation. Ultimately, \(\text {BX}_{ikt}=\text {BM}_{ikt}=1\) implies that the three indexes are equal to their neutral value, which is zero.
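For concreteness, the \(\text {RC}'\) index of Eq. 2, including the neutral-value convention just described, can be computed for a single period from two matrices of exports and imports. The following Python sketch uses an illustrative array layout (one row per country in J, one column per product in K) and illustrative function names; it also assumes that every country has nonzero total exports and imports.

```python
import numpy as np

def rc_prime(X, M):
    # X and M: (#countries, #products) arrays of exports and imports within J
    # for one period.
    def symmetric_ratio(F):
        country_tot = F.sum(axis=1, keepdims=True)      # F_iKt
        product_tot = F.sum(axis=0, keepdims=True)      # F_Jkt
        grand_tot = F.sum()                             # F_JKt
        with np.errstate(divide="ignore", invalid="ignore"):
            ratio = (F / country_tot) / (product_tot / grand_tot)
        ratio = np.where(product_tot == 0, 1.0, ratio)  # nobody trades k: neutral value 1
        return (ratio - 1.0) / (ratio + 1.0)            # symmetric transform à la Dalum et al.
    return symmetric_ratio(X) - symmetric_ratio(M)
```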
The literature has emphasized the size bias that affects the BX ratio: small values of \(X_{iKt}\) lead to large values of the BX ratio. Put differently, small exports of i (which can be seen as a proxy of i's size) lead the BX ratio to reveal strong comparative advantages, which can be considered a contradictionFootnote 4 (De Benedictis and Tamberi 2004). Similarly, small values of \(M_{iKt}\) lead to large values of the BM ratio. Therefore, small imports of i paradoxically lead the BM ratio to reveal strong comparative disadvantages. This is the reason why the \(\text {RTA}'\) index may yield misleading measures of comparative advantages. This is not the case for the \(\text {REA}'\) and \(\text {RC}'\) indexes because their log-approximation implies an upper bound, i.e. 1, which prevents these indexes from taking abnormally large values. However, the \(\text {REA}'\) index still suffers from the same drawback as the REA index; that is, imports are not taken into account.
Ultimately, to overcome the drawbacks of the RCA indexes suggested by Vollrath (1991) and the other drawbacks arising from the proposed transformations of these indexes, the \(\text {RC}'\) index warrants consideration as an alternative RCA index. The \(\text {RC}'\) index arises from a specific combination of the BX and BM ratios into a formula that measures comparative advantages:
The BX and BM ratios replace the RXA and RMA ratios to avoid the unsolvable numeric exceptions that affect the RXA and RMA ratios. These numeric exceptions arise when comparative advantages are measured for single exporter/importer countries in the trade area under consideration.
Instead of using the BX ratio alone to measure comparative advantages, as in the case of the standard RCA index à la Balassa (1965), the BX and BM ratios are combined together in a formula that captures both the supply and demand dimensions of comparative advantages.
The BX and BM ratios are transformed according to the log-approximation of Dalum et al. (1998) to make them symmetric and avoid size bias. In addition, contrary to the log itself (which is applied by Vollrath (1991) to the RXA and RMA ratios to calculate the REA and RC indexes), the approximation of the log is defined even if \(\text {BX}=0\) or \(\text {BM}=0\).
Finally, the \(\text {RC}'\) index is the difference between \((\text {BX}-1)/(\text {BX}+1)\) and \((\text {BM}-1)/(\text {BM}+1)\) and replaces the difference between RXA and RMA (namely the RTA index) and the difference between the log of RXA and the log of RMA (namely the RC index).
The \(\text {RC}'\) index can be conceptualized as an "additive" extension of the standard RCA index à la Balassa (1965) to imports with the symmetric transformation à la Dalum et al. (1998). The word "additive" emphasizes that the \(\text {RC}'\) index is computed as the difference between the symmetric transformation of the BX ratio and the symmetric transformation of the BM ratioFootnote 5.
Further Improvements
The \(\text {RC}'\) index can be modified to make the measurement of comparative advantages more robust from a theoretical standpoint. We propose three modifications. Each modification gives rise to a variant form of the \(\text {RC}'\) index. The first modification aims to take into account the GDP per capita of all countries in J for the measurement of comparative advantages. Indeed, if a country i has a higher GDP per capita than another country j, this can be interpreted as the existence of higher factor endowments for i than for j, which gives i greater potential to have higher comparative advantages than jFootnote 6 (Jambor 2014). Consequently, if despite higher factor endowments i reaches the same value of the \(\text {RC}'\) index as j for a given product-period pair, then i should logically have lower comparative advantages than j (if \(\text {RC}'_{ikt}=\text {RC}'_{jkt}>0\)) or higher comparative disadvantages (if \(\text {RC}'_{ikt}=\text {RC}'_{jkt}<0\)). In this regard, the first modification is to weight \(\text {RC}'_{ikt}\) by a number given by a continuous function \(f_i\) whose domain is the J-dimensional vector of GDP per capita in t for each country in J, that is, \(y_t:=\left\langle y_{jt}\right\rangle _{j\in J}\). This number captures the effect of GDP per capita structure on the comparative advantages of i. To the best of our knowledge, no other RCA index available in the literature does so. Consequently, we define the \(\text {RC}^{y}\) index calculated for a given (i, k, t) as the \(\text {RC}'\) index adjusted by \(f_i(y_t)\):
$$\begin{aligned} \text {RC}^{y}_{ikt}=\text {RC}'_{ikt}\times f_i(y_t) \end{aligned}$$
The function \(f_i\) should have the following five properties:
The values of \(f_i(y_t)\) cannot be negative. A negative value would change the sign of the \(\text {RC}'\) index and therefore convert comparative advantages into comparative disadvantages and vice versa. To avoid this inconsistency, zero must be the minimum of \(f_i\).
\(f_i\) has a (global) maximum. This captures the fact that the differences in GDP per capita should generate limited differences in comparative advantages.
\(\partial f_i/\partial y_{it}<0\): If the GDP per capita of i is higher, then \(f_i(y_{t})\) is smaller, leading to a decrease in \(\text {RC}'_{ikt}>0\) or an increase in \(\text {RC}'_{ikt}<0\). Because \(f_i(y_t)\ge 0\), a higher value of \(y_{it}\) gives rise to a value of the \(\text {RC}'\) index closer to zero.
\(\partial f_i/\partial y_{jt}>0\) \(\forall j \ne i\): If the GDP per capita of a country different from i is higher, then \(f_i(y_{t})\) is larger, leading to an increase in \(\text{RC}'_{ikt}>0\) or a decrease in \(\text {RC}'_{ikt}<0\). As there exists a maximum value of \(f_i(y_t)\), the increase in \(\text{RC}'_{ikt}\) cannot generate a value of \(\text {RC}^{y}\) greater than this maximum.
\(f_{i}(y_{t})=1\) if \(y_{it}=\hat{y}_{t}\), where \(\hat{y}_{t}\) is a representative measure of \(y_{t}\). If the GDP per capita of i GDP is equal to the GDP per capita of a "typical" country among J, then weighting \(\text {RC}'_{ikt}\) by \(f_i(y_t)\) should not modify \(\text {RC}'_{ikt}\). Ultimately, the equality \(y_{it}=\hat{y}_{t}\) leads \(f_i(y_t)\) to be equal to 1.
Incorporating GDP per capita structure into the computation of an RCA index under the aforementioned five properties of \(f_i\) is an alternative to understanding comparative advantages through a regression in which the independent variable is GDP per capita and the dependent variable is an RCA index that rests solely upon trade flows. Weighting \(\text {RC}'\) by \(f_i(y_t)\) instead of using \(\text {RC}'\) per se is intended to provide a more relevant measure of comparative advantages without relying on a subsequent regression technique.
We suggest using the following form of \(f_i\):
$$\begin{aligned} f_i(y_t)=\exp \left( 1-\dfrac{y_{it}}{\frac{1}{\# J}\sum _{j \in J}y_{jt}}\right) \end{aligned}$$
This conceptualization of \(f_i\) is compatible with the aforementioned list of properties that \(f_i\) should have. In particular, the maximum of \(f_i\) is the value of e (second property). In addition, the representative value of \(y_t\) is \(\frac{1}{\# J}\sum _{j \in J}y_{jt}\), i.e. the mean of \(y_t\). If \(y_{it}\) is equal to the mean of \(y_t\), then \(f_i(y_t)=1\) because \(f_i\) calculates the value of e to the power of zero. This is consistent with the fifth property. Equation 4 is a starting point, and further research should study other conceptualizations of \(f_i\).
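A sketch of the weighting function of Eq. 4, together with how it would multiply the \(\text {RC}'\) values of the previous sketch to obtain the \(\text {RC}^{y}\) index of Eq. 3, is given below (again with illustrative names).

```python
import numpy as np

def gdp_weight(y):
    # y: vector of GDP per capita over the countries in J for one period (Eq. 4).
    # Equals 1 for a country at the mean, is bounded above by e, and decreases
    # as the country's own GDP per capita rises.
    y = np.asarray(y, dtype=float)
    return np.exp(1.0 - y / y.mean())

# RC^y for one period (Eq. 3), with country weights broadcast across products:
# RC_y = rc_prime(X, M) * gdp_weight(y)[:, None]
```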
The second modification arises from the RCA indexes in terms of contribution to the trade balance (CTB); see below. Before calculating a CTB index, De Saint Vaulry (2008) suggests adjusting trade flows so that the share of k in total trade among J is the same for all periods in T and equal to the share associated with the period considered as a reference. This adjustment is assumed to eliminate short-term fluctuations in trade flows and therefore improve the ability of trade flows to reveal comparative advantages (Stellian and Danna-Buitrago 2019). Let \(r\in T\) be the reference period. The share of k in total trade among J in t is calculated as \((X_{Jkt}+M_{Jkt})/(X_{JKt}+M_{JKt})\). To make \((X_{Jkt}+M_{Jkt})/(X_{JKt}+M_{JKt})\) equal to \((X_{Jkr}+M_{Jkr})/(X_{JKr}+M_{JKr})\), every \(X_{ikt}\) and \(M_{ikt}\) must be scaled by the ratio of \((X_{Jkr}+M_{Jkr})/(X_{JKr}+M_{JKr})\) to \((X_{Jkt}+M_{Jkt})/(X_{JKt}+M_{JKt})\). Let \(v_{kt}^{r}\) be this kind of ratio associated with (k, t, r). The adjusted values of \(X_{ikt}\) and \(M_{ikt}\), denoted as \(X^{r}_{ikt}\) and \(M^{r}_{ikt}\), are therefore calculated as follows:
$$\begin{aligned} \left\{ \begin{array}{l} X^{r}_{ikt}=X_{ikt}\times v^{r}_{kt} \\ M^{r}_{ikt}=M_{ikt}\times v^{r}_{kt} \\ \text {with } v^{r}_{kt} =\dfrac{(X_{Jkr}+M_{Jkr})/(X_{JKr}+M_{JKr})}{(X_{Jkt}+M_{Jkt})/(X_{JKt}+M_{JKt})} \end{array}\right. \end{aligned}$$
The second modification of the \(\text {RC}'\) index is to calculate the \(\text {RC}'\) index with the adjusted values of trade flows. Indeed, the adjustment of trade flows in Eq. 5 can be applied to RCA indexes beyond the CTB indexes. Consequently, to calculate the \(\text {RC}'\) index with adjusted trade flows:
\(X^{r}_{iKt}=\sum _{l\in K}X^{r}_{ilt}\) substitutes for \(X_{iKt}\) (defined as \(\sum _{l\in K}X_{ilt}\));
\(X^{r}_{Jkt}=\sum _{j\in J}X^{r}_{jkt}\) substitutes for \(X_{Jkt}\) (defined as \(\sum _{j\in J}X_{jkt}\));
\(X^{r}_{JKt}=\sum _{j\in J}\sum _{l\in K}X^{r}_{jlt}\) substitutes for \(X_{JKt}\) (defined as \(\sum _{j\in J}\sum _{l\in K}X_{jlt}\));
Let \(\text {RC}^{r}_{ikt}\) be the \(\text {RC}'\) index calculated with adjusted trade flows. The \(\text {RC}^{r}\) index is calculated as follows:
$$\begin{aligned} \left\{ \begin{array}{l}\text {RC}^{r}_{ikt}=\dfrac{\text {BX}^{r}_{ikt}-1}{\text {BX}^{r}_{ikt}+1}-\dfrac{\text {BM}^{r}_{ikt}-1}{\text {BM}^{r}_{ikt}+1} \\ \text { with } \text {BX}^{r}_{ikt}=\dfrac{X^r_{ikt}/X^{r}_{iKt}}{X^{r}_{Jkt}/X^{r}_{JKt}} \text { and } \text {BM}^{r}_{ikt}=\dfrac{M^r_{ikt}/M^{r}_{iKt}}{M^{r}_{Jkt}/M^{r}_{JKt}} \end{array}\right. \end{aligned}$$
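The adjustment of Eq. 5 and the resulting \(\text {RC}^{r}\) index of Eq. 6 can be sketched as follows, reusing the rc_prime function introduced earlier; trade flows are again assumed to be stored as (#countries, #products) arrays, one pair of arrays per period.

```python
def adjust_flows(X_t, M_t, X_r, M_r):
    # Eq. 5: rescale period-t flows so that each product's share of total trade
    # among J equals its share in the reference period r. Assumes every product
    # is traded in period t (otherwise the ratio below is undefined).
    share_t = (X_t.sum(axis=0) + M_t.sum(axis=0)) / (X_t.sum() + M_t.sum())
    share_r = (X_r.sum(axis=0) + M_r.sum(axis=0)) / (X_r.sum() + M_r.sum())
    v = share_r / share_t                   # one adjustment factor per product k
    return X_t * v, M_t * v

# RC^r (Eq. 6) then reuses the earlier sketch on the adjusted flows:
# X_adj, M_adj = adjust_flows(X_t, M_t, X_r, M_r)
# RC_r = rc_prime(X_adj, M_adj)
```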
The third modification combines the two previous modifications; that is, the \(\text {RC}'\) index is calculated with both adjusted trade flows and GDP per capita. We denote as \(\text {RC}^{yr}_{ikt}\) this third modification of the \(\text {RC}'\) index:
$$\begin{aligned} \text {RC}^{yr}_{ikt}=\text {RC}^{r}_{ikt}\times f_i(y_t) \end{aligned}$$
Table 1 recapitulates the four RCA indexes suggested in the present paper. These RCA indexes possess valuable features from a theoretical standpoint. First, they calculate comparative advantages on the basis of both exports and imports, which better captures the supply and demand dimensions of comparative advantagesFootnote 7 (Vollrath 1991). Second, they calculate comparative advantages for a given country-product pair on the basis of all trade flows across both countries (J) and products (K). This is consistent with the relative nature of comparative advantages; that is, comparative advantages associated with any country-product pair depend on the overall structure of trade flows across J and across K (Yu et al. 2009). If only the trade flows associated with (i, k) in t are used to calculate an RCA index for (i, k, t), namely \(X_{ikt}\) and \(M_{ikt}\), the measure of comparative advantages may be inconsistent. Similarly, calculating an RCA for (i, k, t) on the basis of trade flows associated with i only – \(\{X_{ilt},M_{ilt}\}_{k\in K}\) – or on the basis of trade flows associated with k only – \(\{X_{jkt},M_{jkt}\}_{j\in J}\) – would not entirely reflect the relative nature of comparative advantages.
Third, the \(\text {RC}'\), \(\text {RC}^y\), \(\text {RC}^r\) and \(\text {RC}^{yr}\) indexes are consistent with the interpretation by Vollrath (1991) of the principle enunciated by Kunimoto (1977). According to that interpretation, an RCA index should compare the actual value of exports associated with (i, k, t), given by \(X_{ikt}\), with a theoretical "expected" value that reveals the absence of comparative advantages and disadvantages. i has a comparative advantage for k in t if the value of \(X_{ikt}\) is greater than the corresponding theoretical value. Conversely, \(X_{ikt}\) smaller than the theoretical value of \(X_{ikt}\) reveals comparative disadvantages. The theoretical value is calculated as total exports of i weighted by the share of k in total exports of J in t. Hence the theoretical value of \(X_{ikt}\) is \((X_{Jkt}/X_{JKt})\times X_{iKt}\). Consequently, the BX ratio is equal to the ratio of \(X_{ikt}\) to its theoretical value because \((X_{ikt}/X_{iKt})/(X_{Jkt}/X_{JKt})=X_{ikt}/((X_{Jkt}/X_{JKt})\times X_{iKt})\). A BX ratio greater than 1 suggests the existence of comparative advantages and simultaneously indicates that the actual value of \(X_{ikt}\) is greater than its theoretical value. Ultimately, the BX ratio is consistent with the Kunimoto-Vollrath principle. Such consistency also applies to the BM ratio, as the theoretical value of \(M_{ikt}\) is calculated as \((M_{Jkt}/M_{JKt})\times M_{iKt}\). Ultimately, the \(\text {RC}'\) and \(\text {RC}^y\) indexes are consistent with the Kunimoto-Vollrath principle because they are based on the BX and BM ratios, as are the \(\text {RC}^r\) and \(\text {RC}^{yr}\) indexes, with the sole difference that these two last indexes are based on adjusted trade flowsFootnote 8.
Table 1 The \(\text {RC}'\) index and its modifications
Now the question is "to what extent do the \(\text {RC}'\), \(\text {RC}^y\), \(\text {RC}^r\) and \(\text {RC}^{yr}\) indexes give consistent measures of comparative advantages for a given empirical case?". The following section addresses this point.
An Empirical Evaluation
Assume that an RCA index is applied to a given configuration of \(J\times K\times T\). This application gives a set of \(\#J\times \#K\times \#T\) values of the RCA index under consideration. It is possible to evaluate the quality of this set according to three criteria (Stellian and Danna-Buitrago 2019):
Time stationarity: The values of an RCA index computed for \(J\times K\times T\) should have low volatility over time due to the ex ante nature of comparative advantages.
Shape: The distribution of the values of an RCA index computed for \(J\times K\times T\) should be symmetric to capture the fact that, by construction, comparative disadvantages counterbalance comparative advantages. In addition, such a distribution should have thin tails because strong comparative (dis)advantages are relatively rare from an empirical standpoint.
Ordinal ranking bias: The values of an RCA index computed for \(J\times K\times T\) should rank countries in a consistent way.
In this section, we evaluate the \(\text {RC}'\), \(\text {RC}^y\), \(\text {RC}^r\) and \(\text {RC}^{yr}\) indexes according to these three criteria. The evaluation must compare the quality of the comparative advantage measurements of these four RCA indexes relative not only to one another but also to other RCA indexes. Sect. 4.1 presents the alternative RCA indexes considered in the present paper. Then, Sect. 4.2 describes the empirical case used for the evaluation and the corresponding methodology. Last, Sect. 4.3 presents and discusses the subsequent results.
Alternative RCA Indexes
There are many RCA indexes in the literatureFootnote 9. For instance, the RCA index à la Balassa (1965), identified as the BX ratio in the present paper, is still the reference in the literature (French 2017). However, only the aforementioned CTB indexes share the same valuable features as the \(\text {RC}'\), \(\text {RC}^y\), \(\text {RC}^r\) and \(\text {RC}^{yr}\) indexes:
The CTB indexes are export/import RCA indexes.
They measure comparative advantages of a given country-product pair on the basis on the overall structure of trade flows.
They are consistent with the Kunimoto-Vollrath principle.
The basic CTB index (Lafay 1987; 1992) compares the trade balance associated with (i, k, t), i.e. \(X_{ikt}-M_{ikt}\), with a theoretical value of \(X_{ikt}-M_{ikt}\) that would reveal the absence of comparative advantages or disadvantages. The Kunimoto-Vollrath principle is thus extended to trade balance. For this purpose, the basic CTB index starts from the principle that i would have neither comparative advantages nor comparative disadvantages in t if the total trade balance of i in t, i.e. \(X_{iKt}-M_{iKt}\), is distributed according to the share of each product in the total trade between all countries in J. Consequently, the theoretical value of \(X_{ikt}-M_{ikt}\) is calculated as the product of \(X_{iKt}-M_{iKt}\) and the ratio of \(X_{Jkt}+M_{Jkt}\) to \(X_{JKt}+M_{JKt}\). This ratio corresponds to the share of k in total trade among J in t. Ultimately, the theoretical value of \(X_{iKt}-M_{iKt}\) is calculated as \(\left( (X_{Jkt}+M_{Jkt})/(X_{JKt}+M_{JKt})\right) \times (X_{iKt}-M_{iKt})\). The basic CTB index is computed as the difference between the actual trade balance and the corresponding theoretical value before normalization by total trade by all countries in J for all products in K (in t), i.e. \(X_{JKt}+M_{JKt}\):
$$\begin{aligned} \text {CTB}_{ikt}=\dfrac{1}{X_{JKt}+M_{JKt}}\left( X_{ikt}-M_{ikt}-\dfrac{X_{Jkt}+M_{Jkt}}{X_{JKt}+M_{JKt}}\left( X_{iKt}-M_{iKt} \right) \right) \end{aligned}$$
A variant form of the basic CTB index uses the GDP of i as the normalization variable (De Saint Vaulry 2008; Stellian and Danna-Buitrago 2017):
$$\begin{aligned} \text {CTB}^{Y}_{ikt}=\dfrac{1}{Y_{it}}\left( X_{ikt}-M_{ikt}-\dfrac{X_{Jkt}+M_{Jkt}}{X_{JKt}+M_{JKt}}\left( X_{iKt}-M_{iKt} \right) \right) \end{aligned}$$
where \(Y_{it}\) denotes the GDP of i in t and the superscript Y in \(\text {CTB}^{Y}_{ikt}\) refers to this alternative normalization. In addition, the \(\text {CTB}^{Y}\) index can be calculated with adjusted trade flows, giving rise to the CTB index referred to as the \(\text {CTB}^{Yr}\) index (De Saint Vaulry 2008; Stellian and Danna-Buitrago 2019):
$$\begin{aligned} \text {CTB}^{Yr}_{ikt}=\dfrac{1}{Y_{it}}\left( X^r_{ikt}-M^r_{ikt}-\dfrac{X_{Jkt}+M_{Jkt}}{X_{JKt}+M_{JKt}}\left( X^r_{iKt}-M^r_{iKt} \right) \right) \end{aligned}$$
Similar to the new class of RCA indexes, CTB indexes are by construction symmetric and avoid size bias.
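In the same illustrative array layout, the basic CTB index of Eq. 8 and its GDP-normalized variant of Eq. 9 can be sketched as follows; the \(\text {CTB}^{Yr}\) index of Eq. 10 would additionally substitute the adjusted flows of Eq. 5 into the trade-balance terms.

```python
import numpy as np

def ctb(X, M, Y=None):
    # Basic CTB index of Eq. 8 for one period; pass the vector of GDPs Y to use
    # the GDP normalization of Eq. 9 instead of total trade among J.
    total_trade = X.sum() + M.sum()                              # X_JKt + M_JKt
    product_share = (X.sum(axis=0) + M.sum(axis=0)) / total_trade
    balance = X - M                                              # X_ikt - M_ikt
    overall_balance = X.sum(axis=1, keepdims=True) - M.sum(axis=1, keepdims=True)
    theoretical = product_share * overall_balance                # expected trade balance
    norm = total_trade if Y is None else np.asarray(Y, float)[:, None]
    return (balance - theoretical) / norm
```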
Most of the other RCA indexes available in the literature are modifications of the standard BX ratio; specifically, the log-approximation of the BX ratio by Dalum et al. (1998) is defined as \((\text {BX}-1)/(\text {BX}+1)\). Another RCA index calculates the difference between \(X_{ikt}/X_{iKt}\) and \(X_{Jkt}/X_{JKt}\) instead of dividing the first term by the latter term (Hoen and Oosterhaven 2006). This additive version of the BX ratio can be written as \((X_{ikt}-(X_{Jkt}/X_{JKt})\times X_{iKt})/X_{iKt}\) and therefore reads as the difference between exports and its expected value –in accordance with the Kunimoto-Vollrath principle– before normalization by a country's exports. Another additive version consists of substituting \(X_{JKt}\) for \(X_{iKt}\) as the normalization variable, namely total exports in the trade area under consideration (Yu et al. 2009). In addition, normalization of the BX ratio by the across-product mean for a given country (Proudman and Redding 1998; Proudman and Redding 2000) or the across-country mean for a given product (Amador et al. 2011) has been suggested.
These RCA indexes address some shortcomings of the BX ratio; specifically, the log-approximation of the BX ratio and the additive versions of that ratio restore symmetry (Yu et al. 2009). Furthermore, as explained previously, the log-approximation of the BX ratio eliminates the size bias thanks to its upper bound. The additive versions of the BX ratio similarly avoid size bias thanks to their upper bounds (1 and 1/4, respectively; see Yu et al. 2009). Normalization of the BX ratio by the across-product/country mean does not restore symmetry but at least attenuates the size bias, provided that the corresponding mean is greater than one to reduce the values taken by the BX ratio, including abnormal large values implied by the size bias.
However, \(\text {RC}'\), its variants and CTB indexes not only avoid the same type of shortcomings but also are export-import RCA indexes and therefore are able to capture both the supply-side and demand-side of comparative advantages. The modifications of the BX ratio remain based on export data only and are not able to represent comparative advantages beyond their traditional conceptualization according to Ricardian theoryFootnote 10.
Export-import RCA indexes other than the new class of RCA indexes and the CTB indexes also exist. The RCA index from Michaely (1962) consists of the difference between \(X_{ikt}/X_{iKt}\) and \(M_{ikt}/M_{iKt}\). Balassa (1986) proposes the calculation of \((X_{ikt}-M_{ikt})/(X_{ikt}+M_{ikt})\), and Donges and Riedel (1977) suggests normalizing \((X_{ikt}-M_{ikt})/(X_{ikt}+M_{ikt})\) by the same ratio calculated for all products throughout K before subtracting 1 and multiplying the subsequent difference by -1 or 1 depending on the sign of the trade balance of i (in t). The main weakness of these RCA indexes is that they are not based on the overall structure of trade flows. Only the trade flows associated with a given country are employed to measure comparative advantages. Consequently, it is not possible to make a consistent connection with the relative nature of comparative advantages.
Another RCA index that warrants consideration is the recent regression-based RCA index from Leromain and Orefice (2014), here referred to as the Z index. This index is of interest because it is based on the Ricardian model of Costinot et al. (2012), which combines heterogeneity in productivity across varieties of the same product with the features of the standard Ricardian model of international trade (constant returns to scale, perfect competition, labor as the unique factor of production, and equilibrium, among other features). In addition, it is the sole RCA index computed from disaggregated trade data. Denote \(x_{ijkt}\) as the trade flow of k from i to another country j in t (hence \(X_{ikt}=\sum _{j \in J}x_{ijkt}\) and \(M_{ikt}=\sum _{j \in J}x_{jikt}\)). The Z index starts from the OLS estimation of the following equation:
$$\begin{aligned} \ln (x_{ijkt})=\delta _{ijt}+\delta _{ikt}+\delta _{jkt}+\varepsilon _{ijkt} \end{aligned}$$
that is, the log of \(x_{ijkt}\) is decomposed additively into an exporter-importer fixed effect (\(\delta _{ijt}\)), an exporter-product fixed effect (\(\delta _{ikt}\)) and an importer-product fixed effect (\(\delta _{jkt}\)). \(\epsilon _{ijkt}\) is the residual term specific to (i, j, k, t). Comparative advantages are assumed to determine the exporter-product fixed effect. In this regard, \(z_{ikt}\) is defined as a proxy for the Ricardian fundamental productivity level of i with respect to k in t. After estimating \(\delta _{ikt}\), \(z_{ikt}\) is computed as \(\exp (\delta _{ikt}/\theta )\) where \(\theta\) captures heterogeneity in productivity across varieties of the same product k. The Z index is based on \(z_{ikt}\) and the following variables: \(\bar{z}_{it}=(^{1}/_{\#K})\sum _{l\in K}z_{ilt}\) is the average productivity of i across products in t; \(\bar{z}_{kt}=(^{1}/_{\#J})\sum _{j\in J}z_{jkt}\) is the average productivity for k across countries in t; and \(\bar{z}_{t}=(^{1}/_{\#J}{_{\times \#K}}){\sum _{j\in J}}\sum _{l\in K}z_{jlt}\) is the average productivity across countries and products in t. The Z index is the ratio of \(z_{ikt}/\bar{z}_{it}\) to \(\bar{z}_{kt}/\bar{z}_{t}\):
$$\begin{aligned} Z_{ikt}=\dfrac{z_{ikt}/\bar{z}_{it}}{\bar{z}_{kt}/\bar{z}_{t}} \text { with } z_{ikt}=\exp \left( \frac{\delta _{ikt}}{\theta }\right) \end{aligned}$$
The numerator is the value of \(z_{ikt}\) normalized by the average productivity of i in t, and the denominator is the same value at the level of J. Therefore, if the Z index is greater than 1, i has higher productivity for k than the other countries on average, which echoes the traditional definition of comparative advantages à la Ricardo. Note that, however, the Z index cannot capture "qualitative" comparative advantages arising from product differentiation, specifically quality (Stellian and Danna-Buitrago 2019).
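Once the exporter-product fixed effects of Eq. 11 have been estimated (for instance by OLS with dummy variables) and mapped into a productivity matrix through \(\exp (\delta _{ikt}/\theta )\), the double ratio of Eq. 12 reduces to a few lines; the sketch below assumes that this matrix is stored as a (#countries, #products) array for one period.

```python
def z_index(z):
    # Eq. 12 applied to an estimated productivity matrix z of shape
    # (#countries, #products) for one period.
    z_i = z.mean(axis=1, keepdims=True)     # \bar z_{it}: country averages
    z_k = z.mean(axis=0, keepdims=True)     # \bar z_{kt}: product averages
    z_bar = z.mean()                        # \bar z_{t}: overall average
    return (z / z_i) / (z_k / z_bar)
```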
In summary, the most robust RCA indexes from a theoretical standpoint–that is, robustness before any consideration of a specific case of comparative advantages–are the \(\text {RC}'\) index and its modifications, as well as the CTB indexes and the Z index. For this reason, our empirical evaluation will focus on these RCA indexes.
Data and Methodology
Our empirical case corresponds to the nineteen countries in the Euro area. Therefore, J comprises Austria, Belgium, Cyprus, Estonia, Finland, France, Germany, Greece, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Portugal, Slovakia, Slovenia and Spain. Concerning K, we use the 3-digit Standard International Trade Classification, which mainly comprises 255 product categories distributed among food, live animals, beverages, tobacco, crude materials, oils/fats/waxes, chemicals and related products, manufactured goods, machinery and transport equipment, and miscellaneous manufactured articles. Concerning T, we calculate RCA indexes for each year from 1995 to 2018 using trade data from UNCTADstat. GDP and GDP per capita data are taken from World Bank national accounts data.
Concerning the Z index, the value of \(\theta\) in Eq. 12 is set to 6.534 (Costinot et al. 2012; Leromain and Orefice 2014). For the adjustment of trade flows in the \(\text {RC}^r\), \(\text {RC}^{yr}\) and \(\text {CTB}^{Yr}\) indexes, we use three alternative reference years (r). We use the first (1995) and last (2018) available years to make a "forward-looking" adjustment of trade flows and a "backward-looking" adjustment of trade flows, respectively (Stellian and Danna-Buitrago 2017). We also use 1999 as a reference year because 1999 was the year of introduction of the euro. Ultimately, comparative advantages are calculated for \(19\times 255 \times 24 =116280\) combinations of countries, products and periods, and these calculations are performed according to fourteen RCA indexes:
\(\text {RC}'\) and \(\text {RC}^{y}\);
\(\text {RC}^{r}\) and \(\text {RC}^{yr}\) with \(r\in \{1995,1999,2018\}\);
CTB and \(\text {CTB}^{Y}\);
\(\text {CTB}^{Yr}\) with \(r\in \{1995,1999,2018\}\); and
Z.
Table 2 presents descriptive statistics for each index. An online appendix contains bar charts representing the frequency distributions of each RCA index and Excel worksheets containing all calculations.
Table 2 RCA indexes of the Euro area: descriptive statistics
The quality of the empirical values of comparative advantages in the universe \(J\times K \times T\) described previously is evaluated following the path suggested by Leromain and Orefice (2014) and Stellian and Danna-Buitrago (2019). In what follows, we describe the tools employed for each criterion used to assess the empirical accuracy of the RCA indexes (time stationarity, shape and ordinal ranking bias).
Time stationarity The first way to check for time stationarity is the Harris-Tzavalis panel-data unit-root test. The null hypothesis is \(\rho =1\) in the following AR(1) process:
$$\begin{aligned} \text {RCA}_{ikt}=\rho \cdot \text {RCA}_{ikt-1}+\gamma _{ik}+\varepsilon _{ikt} \end{aligned}$$
where \(\text {RCA}_{ikt}\) is the value of an RCA index associated with (i, k, t), \(\gamma _{ik}\) is an intercept specific to each country-product pair (the panels) and \(\varepsilon _{ikt}\) is the residual term associated with each country-product-period triplet. If the null hypothesis is rejected, namely \(|\rho |< 1\), the RCA index exhibits short-term deviations and finite variance around a time-constant mean for the universe \(J\times K \times T\) under consideration, leading to time stationarity of the RCA index.
The Harris-Tzavalis panel-data unit-root test is a preliminary step because this test verifies whether time stationarity of an RCA index exists. If the null hypothesis is rejected, then additional measures describe the magnitude of time stationarity. The first measure arises from standard deviation. It is possible to calculate the across-time standard deviation of an RCA index for a given country-product pair. Time stationarity is higher if this standard deviation is closer to zero for the country-product pair under consideration. From the set of \(\# J\times \# K\) measures of standard deviation associated with \(J\times K\), we compute the across-product average of that set for each country. We then rank the RCA indexes according to the distances of their respective averages from zero. This gives rise to \(\#J\) rankings. Ultimately, we calculate the across-country mean rank for each RCA index. This mean rank measures the score of each RCA index from the vantage point of standard deviation. A smaller mean rank implies a better score.
Two other measures of time stationarity arise from the OLS estimation of the following equation:
$$\begin{aligned} \text {RCA}_{ikt_1}=\alpha _{0i}+\alpha _{1i}\text {RCA}_{ikt_0}+\varepsilon _{ik} \end{aligned}$$
This regression is based on \(\#K\) observations for a given country. Each observation corresponds to a product. The dependent variable is the value of the RCA index calculated for (i, k) in the final period in T, which is written as \(t_1\) (2018 in our case), and the independent variable is the value of the RCA index calculated for (i, k) in the initial period in T, which is written as \(t_0\) (1995). Time stationarity is higher if the distance of \(\alpha _{1i}\) from 1 is smaller and the distance of \(\alpha _{0i}\) from zero is smaller. Indeed, if \(\alpha _{1i}=1\) and \(\alpha _{0i}=0\), then \(\text {RCA}_{ikt_1}=\text {RCA}_{ikt_0}+\varepsilon _{ik}\), which means that for country i the values of the RCA index in \(t_1\) deviate from the values of the RCA index in \(t_0\) only by the residual term (\(\varepsilon _{ik}\)).
For each country, we rank the RCA indexes according to the distances of their respective values of \(\alpha _{1i}\) from 1, and we calculate the across-country mean rank for each RCA index. Similarly, we rank the RCA indexes according to the distance of their respective values of \(\alpha _{0i}\) from 0 and calculate the across-country mean rank for each RCA index.
Lastly, three additional measures of time stationarity arise from the OLS estimation of the following equation:
$$\begin{aligned} \text {RCA}_{ikt_1}=\alpha _{0}+\alpha _{1}\text {RCA}_{ikt_0}+\gamma _i+\varepsilon _{ik} \end{aligned}$$
This regression is based on \(\#J\times \#K\) observations throughout countries and products. The regression differs from the former equation in two ways: \(\alpha\)-like coefficients are calculated for the whole trade area instead of a single country (hence there is no subscript i), and \(\gamma _i\) is a fixed effect that implies a specific intercept for each country, which is useful to control for country heterogeneity in the estimation. As for Eq. 14, time stationarity is higher if the distance of \(\alpha _{1}\) from 1 is smaller and the distance of \(\alpha _{0}\) from zero is smaller. In addition, time stationarity is higher if the distance of \(\gamma _{i}\) from 0 is smaller. We rank the RCA indexes according to the distances of their respective values of \(\alpha _{1}\) from 1 and the distances of their respective values of \(\alpha _0\) from 0. Ultimately, for each country we rank the RCA indexes according to the distances of their respective values of \(\gamma _{i}\) from 0 (excluding the country whose corresponding value of \(\gamma _i\) must be set to zero for the estimation), and we calculate the across-country mean rank for each RCA index.
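As an illustration, the regressions of Eqs. 14 and 15 can be estimated with the statsmodels formula interface along the following lines; the data frame layout and the column names 'country', 'rca_t0' and 'rca_t1' are assumptions made for the sketch.

```python
import statsmodels.formula.api as smf

def stationarity_regressions(df):
    # df: one row per (country, product) with columns 'country', 'rca_t0' (1995)
    # and 'rca_t1' (2018). Eq. 14: one regression per country; a slope close to 1
    # and an intercept close to 0 indicate time stationarity.
    per_country = {c: smf.ols("rca_t1 ~ rca_t0", data=g).fit().params
                   for c, g in df.groupby("country")}
    # Eq. 15: pooled regression with country fixed effects.
    pooled = smf.ols("rca_t1 ~ rca_t0 + C(country)", data=df).fit()
    return per_country, pooled
```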
Shape Stellian and Danna-Buitrago (2019) use skewness and mean minus median to measure the symmetry of an RCA index, and kurtosis to measure tail thinness. Symmetry is higher if both statistics are closer to zero, and tail thinness is higher if kurtosis is higher. We suggest dividing mean minus median by standard deviation to obtain a dimensionless unit of symmetryFootnote 11, just as skewness is the third central moment normalized by the cube of the standard deviation. A dimensionless unit enables more consistent comparisons between RCA indexes with different scales, like those in the present paper. In addition, we suggest using a measure of tail thinness other than kurtosis. This statistic is usually viewed as a measure of the concentration of a distribution about its mean, such that higher kurtosis implies higher concentration and therefore increases the likelihood of thinner tails. However, the correspondence between kurtosis and concentration is not true in general (Westfall 2014). Consequently, to avoid misleading interpretations of kurtosis, we suggest replacing kurtosis with another measure, namely the number of values beyond one standard deviation of the mean. A smaller number of "outliers" implies thinner tails.
Ultimately, from the set of \(\# J\times \# T\) measures of skewness associated with \(J\times T\), we compute the across-time average for each country, and we rank the RCA indexes according to the distance of their respective averages from zero. Then, we calculate the across-country mean rank for each RCA index. The same process is applied to the normalized mean minus median and to the mean number of outliers.
Ordinal ranking bias For each country i and period t, it is possible to calculate a pair of \(\# K\) integers. The first integer is the across-product rank of k for i in t. The second integer is the across-country rank of i in t with respect to k. For each country, we compute the correlation coefficient throughout the \(\#K \times \# T\) pairs of integers, which gives the Spearman's rank order coefficient. If this coefficient is close to 1, the products for which i has the highest values of the RCA index compared to the other products tend to be the products for which i has the highest values of the RCA index compared to the other countries. On the contrary, the products for which i has the lowest values of the RCA index compared to the other products tend to be the products for which i has the lowest values of the RCA index compared to the other countries. The same applies to intermediate ranks. Ultimately, a Spearman's rank order coefficient close to 1 suggests a correspondence between the intra-country ranks and the inter-country ranks determined by an RCA index and hence a lower ordinal ranking bias. In this regard, for each country, we rank the RCA indexes according to the distances of their respective Spearman's rank order coefficients from 1. This enables the calculation of the across-country mean rank for each RCA index.
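The sketch below (illustrative data and names, not the authors' code) computes this Spearman coefficient for one country in a single period; in the actual evaluation the pairs are pooled over the \(\#K \times \# T\) observations of each country:

# Spearman correlation between intra-country and inter-country ranks.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
countries = list("ABCDE")
products = ["p%d" % k for k in range(20)]
rca = pd.DataFrame(rng.normal(size=(len(countries), len(products))),
                   index=countries, columns=products)      # one RCA index, one period

i = "A"
intra = rca.loc[i].rank(ascending=False)                   # rank of each product within country i
inter = rca.rank(axis=0, ascending=False).loc[i]           # rank of country i for each product
rho, _ = spearmanr(intra, inter)
print("Spearman coefficient for country", i, "=", round(float(rho), 3))

The RCA indexes are then ranked, country by country, on the distance of this coefficient from 1.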
The second measure of the ordinal ranking bias is suggested by Stellian and Danna-Buitrago (2019). For each country and period, it is possible to distribute the values of an RCA index – one value per product – between \(\# J\) subsets. The first subset comprises the values that rank i first compared with the other countries. The second subset comprises the values that rank i second (compared with the other countries), and so on, until the last subset, which comprises the values that rank i last. Then, we calculate the mean value of each subset. Thereafter:
We count how many values that are not included in the subset associated with the first rank are greater than the mean value of the subset associated with the first rank. For example, if i ranks first with a mean value equal to 1.5 but second or lower with a value equal to 2 (which does not belong to the subset associated with rank 1), then this amounts to an inconsistency in the country ranking by the RCA index under consideration.
We count how many values that are not included in the subset associated with the last rank (i.e. rank \(\# J\)) are lower than the mean value of the subset associated with the last rank. For example, if i ranks last with a mean value equal to 0.25 but penultimate or higher with a value equal to 0.10 (which does not belong to the subset associated with rank \(\# J\)), then this amounts to an inconsistency in the country ranking by the RCA index under consideration.
For the intermediate ranks, the same logic applies. First, we count how many values associated with every rank lower than x (i.e. ranks \(x+1, x+2, \cdots , \# J\)) are higher than the mean value of the subset associated with rank x. Then, we count how many values associated with every rank greater than x (i.e. ranks \(1,2,\cdots , x-1\)) are lower than the mean value of the subset associated with rank x.
We compute the number of such inconsistencies for each country and each period, as sketched below. Then, for each country, we calculate the across-time average number of inconsistencies, we rank the RCA indexes according to this average, and we calculate the across-country mean rank for each RCA index.
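A minimal sketch of this inconsistency count for one country and one period is given below (our own illustration; values holds country i's RCA values across products and ranks the corresponding across-country ranks of i, with 1 the best rank):

# Counting ranking inconsistencies for one country and one period (illustrative).
import numpy as np

def count_inconsistencies(values, ranks):
    values = np.asarray(values, dtype=float)
    ranks = np.asarray(ranks, dtype=int)
    total = 0
    for x in np.unique(ranks):
        mean_x = values[ranks == x].mean()
        total += int(np.sum((ranks > x) & (values > mean_x)))   # worse-ranked products with larger values
        total += int(np.sum((ranks < x) & (values < mean_x)))   # better-ranked products with smaller values
    return total

# Reproduces the two examples from the text: a value of 2 at rank 2 exceeds the
# rank-1 mean of 1.5, and a value of 0.10 at rank 2 lies below the rank-3 mean of 0.25.
print(count_inconsistencies([1.5, 2.0, 0.10, 0.25], [1, 2, 2, 3]))   # -> 2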
Table 3 presents the Harris-Tzavalis unit root tests checking for time stationarity. All RCA indexes lead to rejection of the null hypothesis, so all RCA indexes can be considered stationary over time. However, the magnitude of time stationarity differs from one RCA index to another. Figure 2 shows the corresponding ranking according to standard deviation, \(\alpha _{1i}\), \(\alpha _{0i}\), \(\alpha _{0}\), \(\alpha _{1}\) and \(\gamma _{i}\); the intermediate computations and estimations are available in the online appendix. For standard deviation, \(\alpha _{1i}\), \(\alpha _{0i}\) and \(\gamma _{i}\), each graph comprises 14 lines in polar coordinates. Each line represents an RCA index and contains 19 points along the radial axis. Each point represents a country placed in alphabetical order, and the color of a point gives the rank of the corresponding RCA index for the country under consideration. Colors range from green for rank 1 to red for rank 14, with evenly spaced colors for intermediate ranks. For example, in the case of standard deviation, the predominance of green for CTB (C1 in the graph) indicates that this RCA index tends to have the lowest standard deviation for almost all countries. Similarly, in the case of \(\gamma _i\), the predominance of red for Z indicates that this RCA index tends to have the greatest distances from zero regarding country-specific effects (Eq. 15).
Table 3 Harris-Tzavalis unit root test: estimation of \(\rho\) in Eq. 13
Rankings of RCA indexes according to time stationarity
Ranks concerning shape and ordinal ranking bias are shown according to the same logic of visualization in Fig. 3. Ultimately, the across-country mean ranks are presented in Table 4, which gathers all the scores obtained through the different measures of time stationarity, shape and ordinal ranking bias. For each criterion, the final score achieved by an RCA index is calculated as the mean of the scores obtained for the measures associated with that criterion.
Rankings of RCA indexes according to shape (S) and ordinal ranking bias (O)
Table 4 suggests that no RCA index has the best score for all criteria. The best score regarding time stationarity is achieved by the CTB index, the \(\text {CTB}^{y,2018}\) index gives the best score concerning shape, and the ordinal ranking bias is minimized by the \(\text {RC}'\) index. Generally speaking, the scores show that the whole class of CTB indexes gives the best performances in terms of both time stationarity and shape. Nevertheless, the \(\text {RC}'\) index and the modified versions of this index show the best scores concerning the ordinal ranking bias (except \(\text {RC}^{y,2018}\), whose score is lower than that of the Z index), whereas the CTB indexes give the poorest performance. In addition, the \(\text {RC}'\) index and the modified versions of this index give good second-best performances in terms of both time stationarity and shape.
Table 4 Time stationarity, shape and ordinal ranking bias: final scores
Consequently, our empirical example shows that the new class of RCA indexes suggested in the present paper is able to give good measures of comparative advantages in the Euro area and may usefully complement the measurements given by the CTB indexes, particularly concerning the ordinal ranking bias. On the one hand, the criteria of time stationarity and shape assess the consistency of the empirical measures of comparative advantages by an RCA index with theory and stylized facts. For example, the time stationarity of RCA indexes is evaluated because theory suggests that comparative advantages are sticky over time. Similarly, the mean number of outliers is calculated because stylized facts suggest that countries tend to exhibit a low frequency of strong comparative advantages or disadvantages. On the other hand, ordinal ranking bias concerns the informational content provided by an RCA index about intra- and inter-country rankings independently of the consistency of the empirical values of an RCA index with desirable features arising from theory or stylized facts. In this regard, the new class of RCA indexes achieves a well-balanced compromise between informational content and desirable features regarding time stationarity and shape. The CTB indexes show better performance concerning the aforesaid desirable features but their informational content is of lower quality; and the Z index matches neither the informational content of the new class of RCA indexes (except one) nor the consistency of the CTB indexes with time stationarity and shape.
Other results arise from Table 4. First, \(\text {RC}'\) has a better score than \(\text {RC}^y\) for ordinal ranking bias but not for shape. Consequently, the way GDP per capita is taken into account is able to enhance shape but not the ordinal ranking bias. Second, \(\text {RC}^{1995}\) provides better scores than \(\text {RC}^{1999}\) and \(\text {RC}^{2018}\) for all criteria. Consequently, when the \(\text {RC}'\) index is calculated with adjusted trade flows, better measures of comparative advantages are obtained with an adjustment on the basis of the first available year (1995). However, these scores are lower than the score obtained by \(\text {RC}'\) regarding ordinal ranking bias, namely without adjusting trade flows. The scores are roughly the same for shape. In addition, \(\text {RC}^{1995}\) is associated with a better score than \(\text {RC}'\) for time stationarity, and the score obtained by \(\text {RC}^{1999}\) is close to the score obtained by \(\text {RC}'\). Consequently, adjusting trade flows does not always provide better measures of comparative advantages. The same conclusion arises from a comparison of the scores obtained by \(\text {RC}^y\), \(\text {RC}^{y,1995}\), \(\text {RC}^{y,1999}\) and \(\text {RC}^{y,2018}\). This conclusion does not question the idea of adjusting trade flows. Rather, it calls for the development of other methods to calculate adjusted trade flows (see Eq. 5). Following the same logic, it is possible to inquire into other specifications of the function \(f_i\) that modify the computation of \(\text {RC}'\) (see Eq. 4). The aim is to obtain better empirical scores compared not only to \(\text {RC}'\) but also to the class of CTB indexes and the Z index.
This paper revises the widely cited Revealed Comparative Advantage (RCA) indexes from Vollrath (1991) to propose a new RCA index that combines an additive extension of the standard RCA index à la Balassa (1965) to imports with the symmetric transformation à la Dalum et al. (1998). This new RCA index can be modified to take into account GDP per capita, which is a proxy for factor endowments, with the aim of better measuring comparative advantages. In addition, we apply the adjustment process of trade flows initially used for RCA indexes in terms of Contribution to the Trade Balance (CTB). These modifications of the new RCA index give rise to a whole class of new RCA indexes. The quality of comparative advantage measurements of eight RCA indexes of this class is evaluated against five CTB indexes and the regression-based RCA index from Leromain and Orefice (2014) in the case of the Euro area. The eight new RCA indexes under consideration arise from taking into account GDP per capita or adjusting trade flows according to three different reference years (the first available year, 1995, the last available year, 2018, and the year the Euro area was created, 1999). These fourteen RCA indexes have consistent theoretical foundations, and their evaluation is based on three criteria: the ability of an RCA index to be stationary over time, a symmetric distribution with thin tails ("shape"), and the relative absence of ordinal ranking bias. The score obtained by each RCA index regarding each criterion is computed according to the tools elaborated in Stellian and Danna-Buitrago (2019). These tools comprise unit-root panel data tests, dispersion and shape statistics, regressions, Spearman's rank order coefficient and another non-parametric analysis of ordinal ranking bias.
All but one of the new RCA indexes are better able to avoid ordinal ranking bias, and although they are not associated with the best scores regarding time stationarity and shape, they are second-best solutions for these two criteria. By "second-best", we mean that the scores are lower than the scores obtained by the CTB indexes but higher than the scores of the index from Leromain and Orefice (2014). The new class of RCA indexes thus can usefully complement the CTB indexes, which have already proved accurate from an empirical standpoint in measuring comparative advantages (Danna-Buitrago 2017; Stellian and Danna-Buitrago 2019).
Similar empirical evaluations of the suggested new class of RCA indexes should be made for trade areas other than the Euro area to obtain a broader view of the quality of comparative advantage measurements. In addition, as already suggested at the end of Sect. 4, it is possible to inquire into different ways of taking into account GDP per capita and adjusting trade flows. This opens avenues for further investigation with the same objective as the present paper: to improve the measurement of comparative advantages by RCA indexes. Furthermore, although our method of empirical evaluation rests upon a comprehensive set of tools, there is room for enhancement. Two points are worth mentioning. First, Eqs. 14 and 15, which give various measures of time stationarity, do not take into account the values taken by an RCA index throughout the whole set of periods but only the initial and last periods. It would be useful to inquire into other equations whose estimates rest upon the whole set of periods, for example dynamic panel data models. Second, the final scores are calculated on the basis of simple arithmetic mean values across countries for a given variable (e.g. skewness), across variables for a given criterion (e.g. shape), and ultimately across criteria. Computing simple arithmetic mean values can be considered the standard technique to generate synthetic scores of empirical accuracy of RCA indexes. Nevertheless, other techniques may deserve attention, for example arithmetic mean values with specific weights for each country and/or each variable associated with a given criterion and/or each criterion.
Ultimately, this paper supports the application of the new class of RCA indexes in international economics. Specifically, empirical patterns of international specialization can be studied. For a given country-product pair, if \(\text {RC}'\) or another RCA index conceptualized in this paper is greater than a given positive value over several successive years, this can be seen as a signal of international specialization of that country for that product (Footnote 12) (Stellian and Danna-Buitrago 2017). Instead of using an absolute value, the determination of which should be further discussed, international specialization can be associated with the countries with the highest RCA metric each year in the time span under consideration (Footnote 13) (Stellian and Danna-Buitrago 2019). In turn, these insights about international specialization can be helpful for economic policy (Footnote 14).
Another way to measure comparative advantages is the Domestic Resource Cost (DRC) method (Cai et al. 2009), which is beyond the scope of this paper.
Time periods are not explicitly mentioned in the notation used by Vollrath (1987; 1989; 1991) as well as many other works on comparative advantages, for example Leromain and Orefice (2014) and French (2017). However, to remain consistent with other RCA indexes presented below whose calculation depends on trade flows from two different periods, we prefer to include periods in our notation.
Some works, for example Liu and Gao (2019), state that Vollrath (1991) elaborates the RTA, REA and RC indexes with J instead of \(\mathcal {J}\) and with K instead of \(\mathcal {K}\). However, this is not faithful to Thomas Vollrath's original work.
This drawback is avoided by the original indexes, as they are based on \(\mathcal {J}\) instead of J. However, as explained before, substituting \(\mathcal {J}\) for J imposes another drawback (index possibly undefined due to division by zero).
Algieri et al. (2018) proposes an "extended Balassa index", which consists of calculating the ratio of BX to BM instead of the difference between BX and BM. Nonetheless, unlike the \(\text {RTA}'\) index, the extended Balassa index is not symmetric and does not avoid the size bias.
In accordance with the Heckscher-Ohlin theory, factor endowments contribute to the determination of comparative advantages in relation to the relative abundance of different factors and their relative intensiveness in different techniques of production. Here, the link between factor endowments and comparative advantages places greater emphasis on the fact that higher factor endowments imply more available resources for improving productivity and differentiating products. For example, higher factor endowments may imply more knowledge and skills for elaborating high-quality varieties of some products and ultimately creating comparative advantages for these products.
This point has already been made in Section 3 and was the motive for rejecting the calculation of an RCA index solely on the basis of the RXA ratio (REA and \(\text {REA}'\)).
Note that the Kunimoto-Vollrath principle calculates theoretical values of exports and imports on the basis of the overall structure of trade flows. Consequently, being consistent with this principle logically implies being consistent with the relative nature of comparative advantages.
A survey of representative RCA indexes can be found in Liu and Gao (2019) and Stellian and Danna-Buitrago (2019).
Yu et al. (2009) show that the additive version of the BX ratio possesses additivity across products: if k is divided into two sub-products \(k_1\) and \(k_2\), \(\text {RCA}_{ik_1t}+\text {RCA}_{ik_2t}=\text {RCA}_{ikt}\). Furthermore, the additive version of the BX ratio normalized by \(X_{JKt}\) possesses additivity across countries: if two countries \(i_1\) and \(i_2\) are taken together as a single country, \(\text {RCA}_{i_1kt}+\text {RCA}_{i_2kt}=\text {RCA}_{ikt}\). Additivity makes an RCA index insensitive to the classification of commodities and countries. Nonetheless, it is possible to show that the basic CTB index possesses additivity. In addition, the other CTB indexes and the new class of RCA indexes do not possess full additivity but compensate for this deficiency by using both export and import data, GDP-scaled measures and adjusted trade flows. Consequently, additivity is not sufficient to include some variants of BX in the empirical analysis.
This statistic is close to Pearson's second coefficient of skewness. The difference lies in the multiplicative factor of 3 applied to (mean minus median)/\(\sigma\). Nevertheless, because the same multiplicative factor is uniformly applied to every normalized mean minus median, using this factor simply implies a monotonic transformation. Consequently, the multiplicative factor of 3 does not make a difference when the normalized mean minus median is used to rank RCA indexes, and we can ignore it.
In this regard, let \(Q\subseteq \mathbb {R}_{+}\) be the interval whose values are those that reveal comparative advantages according to a given RCA index (e.g. \(Q=(0,2]\) for \(\text {RC}'\)). Comparative advantages can be defined as q-sustainable for (i, k) over time span \(U\subseteq T\) with \(q\in Q\) if \(\text {RCA}_{ikt}>q\) \(\forall t\in U\) (Stellian and Danna-Buitrago 2017).
Comparative advantages can be defined as sustainable for (i, k) over time span \(U\subseteq T\) in trade area J if \(\text {RCA}_{ikt}>\text {RCA}_{jkt}\) \(\forall (j,t)\in (J\setminus \{i\})\times U\) (Stellian and Danna-Buitrago 2019). Comparative advantage sustainability and comparative advantage q-sustainability usefully complement the methodology in terms of the Markov transition matrix from De Benedictis and Tamberi (2004) to analyze the dynamics of specialization.
https://data.mendeley.com/datasets/pdscpxjfsn/draft?a=7b1691df-6ab7-4de2-a5e1-83b26da7fd8b
Algieri B, Aquino A, Succurro M (2018) International competitive advantages in tourism: An eclectic view. Tour Manag Perspect 25:41–52
Amador J, Cabral S, Maria JR (2011) A simple cross-country index of trade specialization. Open Econ Rev 22(3):447–461
Balassa BA (1965) Trade liberalization and revealed comparative advantage. The Manchester School of Economic and Social Studies 33(2):92–123
Balassa BA (1986) Comparative advantages in manufactured goods: a reappraisal. Rev Econ Stat 68(2):315–319
Benesova I, Maitah M, Smutka L, Tomsik K, Ishchukova N (2017) Perspectives of the Russian agricultural exports in terms of comparative advantage. Agric Econ 63(7):318–330
Brakman S, Van Marrewijk C (2017) A closer look at revealed comparative advantage: Gross-versus value-added trade flows. Pap Reg Sci 96(1):61–92
Cai J, Leung P, Hishamunda N (2009) Assessment of comparative advantage in aquaculture. FAO Fisheries and Aquaculture Technical Paper 528
Cai J, Zhao H, Coyte PC (2018) The effect of intellectual property rights protection on the international competitiveness of the pharmaceutical manufacturing industry in China. Eng Econ 29(1):62–71
Costinot A, Donaldson D, Komunjer I (2012) What goods do countries trade? A quantitative exploration of Ricardo's ideas. Rev Econ Stud 79:581–608
Dalum B, Laursen K, Villumsen G (1998) Structural change in OECD export specialisation patterns: de-specialisation and 'stickiness'. Int Rev Appl Econ 12(3):423–443
Danna-Buitrago JP (2017) Alianza del Pacífico+4 y la especialización regional de Colombia: Una aproximación desde las ventajas comparativas. Cuadernos de Administración 55:39–52
De Benedictis L, Tamberi M (2004) Overall specialization empirics: Techniques and applications. Open Econ Rev 15(4):323–346
De Saint Vaulry A (2008) Base de données CHELEM – commerce international du CEPII Tech Rep 9, Paris, Centre d'études Prospectives et d'Informations Internationales
Deb K, Hauk WR (2017) RCA indices, multinational production and the Ricardian trade model. IEEP 14(1):1–25
Donges J, Riedel J (1977) The expansion of manufactured exports in developing countries: an empirical assessment of supply and demand issues. Weltwirtschaftliches Arch 113(1):58–87
French S (2017) Revealed comparative advantage: what is it good for? J Int Econ 106:83–103
Giraldo I, Jaramillo F (2018) Productivity, demand, and the home market effect. Open Econ Rev 29(3):517–545
Grundke R, Moser C (2019) Hidden protectionism? evidence from non-tariff barriers to trade in the United States. J Int Econ 117:143–157
Hoen AR, Oosterhaven J (2006) On the measurement of comparative advantage. Ann Reg Sci 40(3):677–691
Jambor A (2014) Country-specific determinants of horizontal and vertical intra-industry agri-food trade: The case of the EU new member states. J Agric Econ 65(3):663–682
Jambor A, Babu S (2016) The competitiveness of global agriculture. In Competitiveness of global agriculture. Springer pp. 99–129
Kunimoto K (1977) Typology of trade intensity indices. Hitotsubashi J Eco 17(2):15–32
Lafay G (1987) Avantage comparatif et compétitivité. Économie Prospective Internationale 29:39–52
Lafay G (1992) The measurement of revealed comparative advantages. In: Dagenais MG, Muet PA (eds) International Trade Modelling. Chapman & Hall, London, pp 209–234
Laursen K (2015) Revealed comparative advantage and the alternatives as measures of international specialization. Eurasian Bus Rev 5(1):99–115
Leromain E, Orefice G (2014) New revealed comparative advantage index: dataset and empirical distribution. Int Eco 139:48–70
Liu B, Gao J (2019) Understanding the non-Gaussian distribution of revealed comparative advantage index and its alternatives. Int Eco 158:1–11
Michaely M (1962) Concentration in International Trade. North-Holland, Amsterdam
Proudman J, Redding S (1998) Openness and Growth. Bank of England, London
Proudman J, Redding S (2000) Evolving patterns of international trade. Rev Int Econ 8(3):373–396
Saki Z, Moore M, Kandilov I, Rothenberg L, Godfrey AB (2019) Revealed comparative advantage for US textiles and apparel. Competitiveness Review: An International Business Journal 29(4):462–478
Sawyer WC, Tochkov K, Yu W (2017) Regional and sectoral patterns and determinants of comparative advantage in China. Front Econ China 12(1):7
Seleka TB, Kebakile PG (2017) Export competitiveness of Botswana's beef industry. Int Trade J 31(1):76–101
Stellian R, Danna-Buitrago JP (2017) Competitividad de los productos agropecuarios colombianos en el marco del tratado de libre comercio con Estados Unidos: análisis de las ventajas comparativas. Revista CEPAL 122:139–163
Stellian R, Danna-Buitrago JP (2019) Revealed comparative advantages and regional specialization: evidence from Colombia in the Pacific Alliance. Journal of Applied Economics 22(1):349–379
Vollrath TL (1987) Revealed competitive advantage for wheat. Economic Research Service Staff Report (US Department of Agriculture) AGES861030
Vollrath TL (1989) Competitiveness and protection in world agriculture. Agriculture Information Bulletin (US Department of Agriculture) 567
Vollrath TL (1991) A theoretical evaluation of alternative trade intensity measures of revealed comparative advantage. Weltwirtschaftliches Archiv 127(2):265–280
Westfall PH (2014) Kurtosis as peakedness, 1905-2014: R.I.P. Am Stat 68(3):191–195
Yazdani M, Pirpour H (2020) Evaluating the effect of intra-industry trade on the bilateral trade productivity for petroleum products of Iran. Energy Econ 86:103933
Yu R, Cai J, Leung PS (2009) The normalized revealed comparative advantage index. Ann Reg Sci 43(1):267–282
Financial support was received from Pontificia Universidad Javeriana.
Faculty of Economics, Management and Accounting, Los Libertadores University Institute, Bogotá, Colombia
Jenny P. Danna-Buitrago
Department of Business Administration, Pontificia Universidad Javeriana, Bogotá, Colombia
Rémi Stellian
Correspondence to Rémi Stellian.
The authors have no conflicts of interest to declare that are relevant to the content of this article.
Danna-Buitrago, J.P., Stellian, R. A New Class of Revealed Comparative Advantage Indexes. Open Econ Rev 33, 477–503 (2022). https://doi.org/10.1007/s11079-021-09636-4
Issue Date: July 2022
RCA index
Prof. Dr. Camillo De Lellis
Office: Y27K24
Research areas:
PDE, Geometric Measure Theory
Research group:
Dominik Inauen
Simone Steinbrüchel
Riccardo Tione
Cécile Haussener
One step at a time is enough for me.
HS 19
Lectures & Seminars
Colloquia
MAT070.1
Zurich Colloquium in Mathematics
KO2F150
Rémi Abgrall, Joseph Ayoub, Peter Bühlmann, Marc Burger, Camillo De Lellis, Horst Knörrer
BOOKS and LECTURE NOTES
C. De Lellis
Almgren's center manifold in a simple setting
Lectures held at Park City 9-13 July 2018 PDF
Il teorema di Schlaefli: un invito alla quarta dimensione
The paper, in Italian, first appeared in the journal "il Volterriano". The file which can be downloaded here contains minor modifications and will be published by Rivista dell'UMI. PDF
Il teorema di Liouville ovvero perche' ``non esiste'' la primitiva di exp(x^2)
The paper, in Italian, first appeared in the journal "il Volterriano". The file which can be downloaded here contains minor modifications and has been published by Rivista dell'UMI. PDF Volterriano
Comments and Errata to ``Il teorema di Liouville ovvero perche' ``non esiste'' la primitiva di exp(x^2)''
Allard's interior regularity theorem: an invitation to stationary varifolds
To appear in the Collections of the CMSA Harvard PDF
Rectifiable sets, densities and tangent measures
Zurich Lectures in Advanced Mathematics. European Mathematical Society (EMS), Zürich. PDF
EMS Publishing House
Errata to ``Lecture Notes on Rectifiable Sets, Densities, and Tangent Measures''
M. Barsanti, F. Conti, C. De Lellis, T. Franzoni
Le Olimpiadi della Matematica. Seconda Edizione
Zanichelli
GEOMETRIC MEASURE THEORY
C. De Lellis, G. De Philippis, J. Hirsch, A. Massaccesi
On the boundary behavior of mass-minimizing integral currents
Boundary regularity of mass-minimizing integral currents and a question of Almgren
C. De Lellis, A. Marchese, E. Spadaro, D. Valtorta
Rectifiability and upper Minkowski bounds for singularities of harmonic Q-valued maps
C. De Lellis, J. Ramic
Min-max theory for minimal hypersurfaces with boundary
To appear in Jour. Ann. Inst. Fourier PDF PDF
$2$-dimensional almost area minimizing currents
Boll. Unione Mat. Ital. 9 (2016), no. 1, 3–67. PDF
C. De Lellis, E. Spadaro, L. Spolaor
Regularity theory for 2-dimensional almost minimal currents III: blowup
To appear in Jour. Diff. Geom. PDF PDF
Regularity theory for 2-dimensional almost minimal currents II: branched center manifold
Ann. PDE 3 (2017), no. 2, Art. 18, 85 pp. PDF PDF
Regularity theory for 2-dimensional almost minimal currents I: Lipschitz approximation
Trans. Amer. Math. Soc. 370 (2018), no. 3, 1783–1801 PDF PDF
Uniqueness of tangent cones for 2-dimensional almost minimizing currents
Comm. Pure Appl. Math. 70, 1402-1421 PDF PDF
The size of the singular set of area-minimizing currents
Surveys in differential geometry 2016. Advances in geometry and mathematical physics, 1–83, Surv. Differ. Geom., 21, Int. Press, Somerville, MA, 2016. PDF PDF
The regularity of minimal surfaces in higher codimension
Current developments in mathematics 2014, 153–229, Int. Press, Somerville, MA, 2016. PDF PDF
C. De Lellis, F. Ghiraldin, F. Maggi
A direct approach to Plateau's problem
J. Eur. Math. Soc. (JEMS) 19 (2017), no. 8, 2219–2240 PDF
C. De Lellis, M. Focardi, B. Ruffini
A note on the Hausdorff dimension of the singular set for minimizers of the Mumford-Shah functional
Adv. Calc. Var. 7, pp. 539-545, 2014 PDF
C. De Lellis, E. Spadaro
Regularity of area-minimizing currents III: blow-up
Ann. of Math. (2) 183 (2016), no. 2, 577–617. PDF
Regularity of area-minimizing currents II: center manifold
Regularity of area-minimizing currents I: L^p gradient estimates
Geom. Funct. Anal. 24 (2014), no. 6, 1831–1884. PDF
Errata to "Regularity of area-minimizing currents I: L^p gradient estimates"
PDF PDF
Multiple valued functions and integral currents
Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 14 (2015), no. 4, 1239–1269. PDF
L. Ambrosio, C. De Lellis, T. Schmidt
Partial regularity for mass-minimizing currents in Hilbert spaces.
J. Reine Angew. Math. 734, 99-144 PDF
C. De Lellis, M. Focardi
Density lower bound estimates for local minimizers of the 2d Mumford-Shah energy
Manuscripta Math. 142 (2013), 215-232. PDF
Higher integrability of the gradient for minimizers of the 2d Mumford-Shah energy
J. Math. Pures Appl. (9) 100 (2013), 391-409. PDF
Hyperbolic equations and SBV functions
Journées équations aux dérivées partielles (2010), Exp. No. 6, 10 p. PDF
Journées équations aux dérivées partielles
Center manifold: a case study
To appear in the Proceedings for the 2009 Conference in honor of De Giorgi and Stampacchia, held in Erice
Discrete and Continuous Dynamical Systems Volume 31, Issue 4, Pages : 1249 - 1272, 2011 PDF
Discrete and Continuous Dynamical Systems
Errata to "Center manifold: a case study"
C. De Lellis, M. Focardi, E. Spadaro
Lower semicontinuous functionals for Almgren's multiple valued functions
Annales Academiae Scientiarum Fennicae vol. 36 pp. 393-410 (2011) PDF
Annales Academiae Scientiarum Fennicae
C. De Lellis, D. Tasnady
The existence of embedded minimal hypersurfaces
J. Differential Geom. 95 (2013), no. 3, 355–388. PDF
Journal of Differential Geometry
Errata to ``The existence of embedded minimal hypersurfaces''
S. Bianchini, C. De Lellis, R. Robyr
SBV regularity for Hamilton-Jacobi equations in R^n.
Arch. Ration. Mech. Anal. 200 (2011) 1003-1021 PDF
Errata to ``SBV regularity for Hamilton-Jacobi equations in R^n''
Q-valued functions revisited
Memoirs of the AMS 211 (2011), no. 991. PDF
Memoirs of the AMS
Errata to "Q-valued functions revisited"
Almgren's Q-valued functions revisited
Proceedings of the International Congress of Mathematicians Hyderabad, India, 2010 PDF
C. De Lellis, F. Pellandini
Genus bounds for minimal surfaces arising from min-max constructions.
J. Reine Angew. Math. 644 (2010), 47–99. PDF
Crelle
A note on Alberti's rank-one theorem.
Transport equations and multi-D hyperbolic conservation laws, 61–74, Lect. Notes Unione Mat. Ital., 5, Springer, Berlin, 2008. PDF
Errata to ``A note to Alberti's rank-one theorem''
C. De Lellis, C. R. Grisanti, P. Tilli
Regular selections for multiple-valued functions.
Ann. Mat. Pura Appl. (4) 183 (2004), no. 1, 79–95. PDF
Ann. Mat. Pura Appl.
T. H. Colding, C. De Lellis
The min-max construction of minimal surfaces.
Surveys in differential geometry, Vol. VIII (Boston, MA, 2002), 75–107, Int. Press, Somerville, MA, 2003. PDF
Errata to "The min-max construction of minimal surfaces"
Some fine properties of currents and applications to distributional Jacobians.
Proc. Roy. Soc. Edinburgh Sect. A 132 (2002), no. 4, 815–842. PDF
Proc. Roy. Soc. Edinburgh
Errata to "Some fine properties of currents and applications to distributional jacobians"
EULER and NAVIER-STOKES EQUATIONS
C. De Lellis, L. Székelyhidi Jr.
On turbulence and geometry: from Nash to Onsager
To appear in the Notices of the AMS PDF
The Onsager theorem
Surveys in differential geometry 2017. PDF
M. Colombo, C. De Lellis, A. Massaccesi
The generalized Caffarelli-Kohn-Nirenberg Theorem for the hyperdissipative Navier-Stokes system
M. Colombo, C. De Lellis, L. De Rosa
Ill-posedness of Leray solutions for the ipodissipative Navier--Stokes equations
T. Buckmaster, C. De Lellis, L. Székelyhidi Jr., V. Vicol
Onsager's conjecture for admissible weak solutions
To appear in CPAM PDF
High dimensionality and h-principle in PDE
Bull. Amer. Math. Soc. (N.S.) 54 (2017), no. 2, 247–282. PDF PDF
The $h$-principle and Onsager's conjecture
Eur. Math. Soc. Newsl. No. 95 (2015), 19–24. PDF
T. Buckmaster, C. De Lellis, P. Isett, L. Székelyhidi Jr.
Anomalous dissipation for 1/5-Hoelder Euler flows
Ann. of Math.
Errata to "Anomalous dissipation for 1/5-Hoelder Euler flows"
T. Buckmaster, C. De Lellis, L. Székelyhidi Jr.
Dissipative Euler flows with Onsager-critical spatial regularity
Comm. Pure Appl. Math. 69 (2016), no. 9, 1613–1670. PDF
Transporting microstructure and dissipative Euler flows
Dissipative Euler Flows and Onsager's Conjecture
J. Eur. Math. Soc. (JEMS) 16 (2014), no. 7, 1467–1505. PDF PDF
Errata to "Dissipative Euler flows and Onsager's conjecture"
A. Choffrut, C. De Lellis, L. Székelyhidi Jr.
Dissipative continuous Euler flows in two and three dimensions
Continuous dissipative Euler flows and a conjecture of Onsager.
European Congress of Mathematics, 13–29, Eur. Math. Soc., Zürich, 2013. PDF
Errata to ``Continuous dissipative Euler flows and a conjecture of Onsager''
Dissipative continuous Euler flows
Inventiones Mathematicae 193, Issue 2 (2013), Page 377-407 PDF
Inventiones
The h-principle and the equations of fluid dynamics.
Bull. Amer. Math. Soc. 49, 347-375, 2012 PDF
Bull. AMS
Errata to "The h-principle and the equations of fluid dynamics"
Y. Brenier, C. De Lellis, L. Székelyhidi Jr.
Weak-strong uniqueness for measure-valued Solutions
Comm. Math. Phys. 305 (2011), 351-361 PDF
Comm. Math. Phys.
On admissibility criteria for weak solutions of the Euler equations.
Arch. Ration. Mech. Anal. 195 (2010), no. 1, 225–260. PDF
Arch. Ration. Mech. Anal.
Errata to "On admissibility criteria for weak solutions of the Euler equations"
Le equazioni di Eulero dal punto di vista delle inclusioni differenziali (Italian)
Boll. Unione Mat. Ital. (9) 1 (2008), no. 3, 873–879 PDF
Boll. UMI
The Euler equations as a differential inclusion.
Ann. of Math. (2) 170 (2009), no. 3, 1417–1436. PDF
C. De Lellis, D. Inauen
$C^{1,\alpha}$ isometric embeddings of polar caps
A. Carlotto, C. De Lellis
Min-max embedded geodesic lines on asymptotically conical surfaces
C. De Lellis, D. Inauen, L. Székelyhidi Jr.
A Nash-Kuiper theorem for $C^{1,\sfrac{1}{5}-\delta}$ immersions of surfaces in $3$ dimensions
To appear in Revista matemática Iberoamericana PDF
C. De Lellis, P. M. Topping
Almost Schur Lemma
Calc. Var. 43 (2012) 347-354 PDF
Calc. Var.
S. Conti, C. De Lellis, L. Székelyhidi Jr.
h-principle and rigidity for C^{1,\alpha} isometric embeddings
Nonlinear Partial Differential Equations
Abel Symposia Volume 7, 2012, pp 83-116 PDF
Proceedings of the Abel Symposium 2010
T. H. Colding, C. De Lellis, W. P. Minicozzi II
Three circles theorems for Schrödinger operators on cylindrical ends and geometric applications.
Comm. Pure Appl. Math. 61 (2008), no. 11, 1540–1602. PDF
C. De Lellis, S. Müller
A $C^0$ estimate for nearly umbilical surfaces.
Calc. Var. Partial Differential Equations 26 (2006), no. 3, 283–296. PDF
Optimal rigidity estimates for nearly umbilical surfaces.
J. Differential Geom. 69 (2005), no. 1, 75–110. PDF
Singular limit laminations, Morse index, and positive scalar curvature.
Topology 44 (2005), no. 1, 25–45. PDF
TRANSPORT EQUATIONS
C. De Lellis, P. Gwiazda, A. Swierczewska-Gwiazda
Transport equation with integral terms
Calc. Var. Partial Differential Equations 55 (2016), no. 5, Paper No. 128, 17 pp. PDF
ODEs with Sobolev coefficients: the Eulerian and the Lagrangian approach.
Discrete Contin. Dyn. Syst. Ser. S 1 (2008), no. 3, 405–426. PDF
DCDS
Ordinary differential equations with rough coefficients and the renormalization theorem of Ambrosio [after Ambrosio, DiPerna, Lions].
Séminaire Bourbaki. Vol. 2006/2007. Astérisque No. 317 (2008), Exp. No. 972, viii, 175–203. PDF
Séminaire Bourbaki.
G. Crippa, C. De Lellis
Estimates and regularity results for the DiPerna-Lions flow.
Notes on hyperbolic systems of conservation laws and transport equations.
Handbook of differential equations: evolutionary equations. Vol. III, 277–382, Handb. Differ. Equ., Elsevier/North-Holland, Amsterdam, 2007.
WARNING. THIS PDF IS NOT THE MOST UPDATED VERSION: PDF
Handbook of EDE
Errata to "Notes on hyperbolic systems of conservation laws and transport equations."
L. Ambrosio, C. De Lellis, J. Maly
On the chain rule for the divergence of BV-like vector fields: applications, partial results, open problems.
Perspectives in nonlinear partial differential equations, 31–67, Contemp. Math., 446, Amer. Math. Soc., Providence, RI, 2007. PDF
The chain rule for the divergence of BV-like vector fields.
Hyperbolic problems: theory, numerics and applications. I, 105–112, Yokohama Publ., Yokohama, 2006 PDF
Oscillatory solutions to transport equations.
Indiana Univ. Math. J. 55 (2006), no. 1, 1–13. PDF
Indiana Univ. Math. J.
HYPERBOLIC CONSERVATION LAWS
C. De Lellis, Radu Ignat
A regularizing property of the 2d-eikonal equation
Comm. Partial Differential Equations 40 (2015), no. 8, 1543–1557. PDF
E. Chiodaroli, C. De Lellis, O. Kreml
Surprising solutions to the isentropic Euler system of gas dynamics
Hyperbolic problems: theory, numerics, applications, 1–10, AIMS Ser. Appl. Math., 8, Am. Inst. Math. Sci. (AIMS), Springfield, MO, 2014. PDF
Global ill-posedness of the isentropic system of gas dynamics
Ill-posedness for bounded admissible solutions of the 2-dimensional p--system.
Hyperbolic problems: theory, numerics and applications, 269–278, Proc. Sympos. Appl. Math., 67, Part 1, Amer. Math. Soc., Providence, RI, 2009. PDF
C. De Lellis, F. Golse
A quantitative compactness estimate for scalar conservation laws.
Comm. Pure Appl. Math. 58 (2005), no. 7, 989–998. PDF
Blowup of the BV norm in the multidimensional Keyfitz and Kranzer system.
Duke Math. J. 127 (2005), no. 2, 313–339. PDF
Duke Math. J.
L. Ambrosio, F. Bouchut, C. De Lellis
Well-posedness for a class of hyperbolic systems of conservation laws in several space dimensions.
Comm. Partial Differential Equations 29 (2004), no. 9-10, 1635–1651. PDF
Comm. PDE
Errata to "Well-posedness for the for a class of hyperbolic systems of conservation laws in several space dimensions."
L. Ambrosio, C. De Lellis
A note on admissible solutions of 1D scalar conservation laws and 2D Hamilton-Jacobi equations.
J. Hyperbolic Differ. Equ. 1 (2004), no. 4, 813–826. PDF
JHDE
C. De Lellis, F. Otto, M. Westdickenberg
Minimal entropy conditions for Burgers equation.
Quart. Appl. Math. 62 (2004), no. 4, 687–700. PDF
Quart. Appl. Math.
C. De Lellis, T. Rivière
The rectifiability of entropy measures in one space dimension.
J. Math. Pures Appl. (9) 82 (2003), no. 10, 1343–1367. PDF
J. Math. Pures Appl.
Errata to "The rectifiability of entropy measures in one space dimension".
Structure of entropy solutions for multi-dimensional scalar conservation laws.
Arch. Ration. Mech. Anal. 170 (2003), no. 2, 137–184 PDF
C. De Lellis, M. Westdickenberg
On the optimality of velocity averaging lemmas.
Ann. Inst. H. Poincaré Anal. Non Linéaire 20 (2003), no. 6, 1075–1085. PDF
Ann. IHP
Existence of solutions for a class of hyperbolic systems of conservation laws in several space dimensions.
Int. Math. Res. Not. 2003, no. 41, 2205–2220. PDF
CALCULUS OF VARIATIONS
C. De Lellis, F. Ghiraldin
An extension of the identity Det=det.
C. R. Math. Acad. Sci. Paris 348 (2010), no. 17-18, 973–976 PDF
C. R. Math. Acad. Sci. Paris
S. Conti, C. De Lellis
Sharp upper bounds for a variational problem with singular perturbation.
Math. Ann. 338 (2007), no. 1, 119–146. PDF
Math. Ann.
Simple proof of two-well rigidity.
C. R. Math. Acad. Sci. Paris 343 (2006), no. 5, 367–370 PDF
Some remarks on the theory of elasticity for compressible Neohookean materials.
Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 2 (2003), no. 3, 521–549. PDF
Annali SNS
Errata to "Some remarks on the theory of elasticity for compressible Neohookean materials"
S. Conti, C. De Lellis, S. Müller, M. Romeo
Polyconvexity equals rank-one convexity for connected isotropic sets in $\Bbb M^{2\times 2}$.
C. R. Math. Acad. Sci. Paris 337 (2003), no. 4, 233–238. PDF
C. De Lellis, F. Otto
Structure of entropy solutions to the eikonal equation.
J. Eur. Math. Soc. (JEMS) 5 (2003), no. 2, 107–145. PDF
Some remarks on the distributional Jacobian.
Nonlinear Anal. 53 (2003), no. 7-8, 1101–1114. PDF
Nonlinear Anal.
An example in the gradient theory of phase transitions.
ESAIM Control Optim. Calc. Var. 7 (2002), 285–289 PDF
ESAIM COCV
L. Ambrosio, C. De Lellis, C. Mantegazza
Line energies for gradient vector fields in the plane
Calc. Var. Partial Differential Equations 9 (1999), no. 4, 327–355. PDF
Fractional Sobolev regularity for the Brouwer degree
Comm. Partial Differential Equations 42 (2017), no. 10, 1510–1523. PDF
John Nash's nonlinear iteration
Forthcoming in Memorial Volume for Professor John Nash, eds. Joseph Kohn and Hong Jun, World Scientific Publishers, 2017 PDF
The masterpieces of John Forbes Nash Jr.
To appear in H. Holden and R. Piene (editors): The Abel Prize 2013–2017. Springer Verlag. PDF
C. De Lellis, R. Robyr
Hamilton-Jacobi equations with obstacles
A. Bressan, C. De Lellis
Existence of optimal strategies for a fire confinement problem.
C. De Lellis, T. Kappeler, P. Topalov
Low-regularity solutions of the periodic Camassa-Holm equation.
Comm. Partial Differential Equations 32 (2007), no. 1-3, 87–126. PDF
C. De Lellis, G. Royer-Carfagni
Interaction of fractures in tensile bars with non-local spatial dependence.
J. Elasticity 65 (2001), no. 1-3, 1–31 (2002). PDF
J of E
Guests (period of stay)
Prof. Dr. Simon Brendle, Department of Mathematics, Columbia University (New York)
Talk: A boundary value problem for minimal Lagrangian graphs
20.12.10 - 22.12.10 De Lellis, Camillo
Prof. Dr. Edriss Titi, University of California Irvine (California)
Talk: On the Question of Global Regularity for Three-dimensional Navier-Stokes Equations and Relevant Geophysical Models
Prof. Dr. Franco Maddalena, Politecnico di Bari (Bari)
Talk: Mass Transport Minimizing Branched Structures
Prof. Dr. Miles Simon, Albert-Ludwigs-Universität Freiburg (Freiburg im Breisgau)
Talk: Expanding solitons with non-negative curvature operator coming out of cones
Prof. Dr. Mario Pulvirenti, University La Sapienza (Roma)
Talk: The Cauchy Problem for the 3-D Vlasov-Poisson System with Point Charges
Prof. Dr. Pertti Mattila, University of Helsinki, Dept. of Mathematics (Helsinki)
Dr. Emanuele Spadaro, Max Planck Institute, Leipzig (Leipzig)
Prof. Dr. Iskander Taimanov, Sobolev Institute of Mathematics, Novosibirsk State University (Novosibirsk State University)
Talk: Periodic magnetic geodesics on almost every energy level via variational methods
Luca Granieri,
Prof. Dr. Giovanni Alberti, Department of Mathematics, University of Genova (Genova)
Prof. Dr. William K. Allard, (Duke University)
Talk: Total Variation Regularization for Image Denoising
Prof. Dr. Luigi Ambrosio, Scuola Normale di Pisa (Italien)
Prof. Dr. Matteo Focardi, Università di Firenze (Florenz)
Prof. Dr. Bernd Kirchheim, Universitaet Leipzig (Leipzig)
Talk: Rank-one Convexity and Ornstein's L1-Noninequalities
15.2.8 Use of the interpreter Property
All text objects—such as titles, labels, legends, and text—include the property "interpreter" that determines the manner in which special control sequences in the text are rendered.
The interpreter property can take three values: "none", "tex", "latex". If the interpreter is set to "none" then no special rendering occurs—the displayed text is a verbatim copy of the specified text. Currently, the "latex" interpreter is not implemented for on-screen display and is equivalent to "none". Note that Octave does not parse or validate the text strings when in "latex" mode—it is the responsibility of the programmer to generate valid strings which may include wrapping sections that should appear in Math mode with '$' characters.
The "tex" option implements a subset of TeX functionality when rendering text. This allows the insertion of special glyphs such as Greek characters or mathematical symbols. Special characters are inserted by using a backslash (\) character followed by a code, as shown in Table 15.1.
Besides special glyphs, the formatting of the text can be changed within the string by using the codes
\bf Bold font
\it Italic font
\sl Oblique Font
\rm Normal font
These codes may be used in conjunction with the { and } characters to limit the change to a part of the string. For example,
xlabel ('{\bf H} = a {\bf V}')
where the character 'a' will not appear in bold font. Note that to avoid having Octave interpret the backslash character in the strings, the strings themselves should be in single quotes.
It is also possible to change the fontname and size within the text
\fontname{fontname} Specify the font to use
\fontsize{size} Specify the size of the font to use
The color of the text may also be changed inline using either a string (e.g., "red") or numerically with a Red-Green-Blue (RGB) specification (e.g., [1 0 0], also red).
\color{color} Specify the color as a string
\color[rgb]{R G B} Specify the color numerically
Finally, superscripting and subscripting can be controlled with the '^' and '_' characters. If the '^' or '_' is followed by a { character, then all of the block surrounded by the { } pair is superscripted or subscripted. Without the { } pair, only the character immediately following the '^' or '_' is changed.
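The following illustrative example combines several of these codes (a Greek letter, font and size changes, color, and sub- and superscripts); the plotted data and label text are arbitrary:

x = 0:0.1:2*pi;
plot (x, sin (x));
title ('{\it Response} \Sigma_k a_k e^{-\lambda t}');
xlabel ('\fontsize{16}time (\mus)');
ylabel ('\color[rgb]{0 0 1}amplitude_{norm}');
text (pi, 0.5, 'shown \alpha verbatim', 'interpreter', 'none');

The title and axis labels use the default "tex" interpreter, so the control sequences are rendered as glyphs and formatting; the final string is displayed verbatim because its interpreter property is set to "none".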
Greek Lowercase Letters
\alpha \beta \gamma
\delta \epsilon \zeta
\eta \theta \vartheta
\iota \kappa \lambda
\mu \nu \xi
\o \pi \varpi
\rho \sigma \varsigma
\tau \upsilon \phi
\chi \psi \omega
Greek Uppercase Letters
\Gamma \Delta \Theta
\Lambda \Xi \Pi
\Sigma \Upsilon \Phi
\Psi \Omega
Misc Symbols Type Ord
\aleph \wp \Re
\Im \partial \infty
\prime \nabla \surd
\angle \forall \exists
\neg \clubsuit \diamondsuit
\heartsuit \spadesuit
"Large" Operators
\int
\pm \cdot \times
\ast \circ \bullet
\div \cap \cup
\vee \wedge \oplus
\otimes \oslash
\leq \subset \subseteq
\in \geq \supset
\supseteq \ni \mid
\equiv \sim \approx
\cong \propto \perp
\leftarrow \Leftarrow \rightarrow
\Rightarrow \leftrightarrow \uparrow
\downarrow
\lfloor \langle \lceil
\rfloor \rangle \rceil
\neq
\ldots \0 \copyright
\deg
Table 15.1: Available special characters in TeX mode
15.2.8.1 Degree Symbol
Conformance to both TeX and MATLAB with respect to the \circ symbol is impossible. While TeX translates this symbol to Unicode 2218 (U+2218), MATLAB maps this to Unicode 00B0 (U+00B0) instead. Octave has chosen to follow the TeX specification, but has added the additional symbol \deg which maps to the degree symbol (U+00B0).
Why does the variance of a sample change if the observations are duplicated?
The variance is said to be a measure of spread. So, I had thought that the variance of 3,5 is equal to the variance of 3,3,5,5 since the numbers are equally spread. But this is not the case, the variance of 3,5 is 2 while the variance of 3,3,5,5 is 1 1/3.
This puzzles me, given the explanation that variance is supposed to be a measure of spread.
So, in that context, what does measure of spread mean?
René Nyffenegger
This is a nice question but the title is not search-friendly, unless somebody happens to be searching for the same data set as you! I wonder if "Why does the variance of a sample change if the observations are duplicated?", or similar, would sum up the more general problem? – Silverfish Jun 20 '15 at 23:41
Silverfish's suggested title change is a good one; I urge you to consider using it. – Glen_b♦ Jun 21 '15 at 15:13
@Silverfish Thanks for the suggestion, I have changed the title accordingly. – René Nyffenegger Jun 22 '15 at 14:32
If you define variance as $s^2_{n}=\text{MSE}=\frac1n \sum_{i=1}^n (x_i-\bar{x})^2$ -- similar to population variance but with sample mean for $\mu$, then both your samples would have the same variance.
So the difference is purely because of Bessel's correction in the usual formula for the sample variance ($s^2_{n-1}=\frac{n}{n-1}\cdot \text{MSE}=\frac{n}{n-1}\cdot \frac1n \sum_{i=1}^n (x_i-\bar{x})^2=\frac{1}{n-1}\sum_{i=1}^n (x_i-\bar{x})^2$), which adjusts for the fact that the sample mean is closer to the data than the population mean is, in order to make it unbiased (taking the right value "on average").
The effect gradually goes away with increasing sample size, as $\frac{n-1}{n}$ goes to 1 as $n\to\infty$.
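A quick numerical check (a NumPy sketch, not part of the original thread) makes this explicit for the two samples in the question:

import numpy as np

a = np.array([3.0, 5.0])
b = np.array([3.0, 3.0, 5.0, 5.0])

print(np.var(a), np.var(b))                   # MSE-style (ddof=0): 1.0 and 1.0
print(np.var(a, ddof=1), np.var(b, ddof=1))   # Bessel-corrected:  2.0 and 1.333...

Duplicating the observations leaves the ddof=0 value at 1, while the Bessel-corrected value falls from 2 to 4/3, which is exactly the $\frac{n}{n-1}$ factor described above.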
There's no particular reason you have to use the unbiased estimator for variance, by the way -- $s^2_n$ is a perfectly valid estimator, and in some cases may arguably have advantages over the more common form (unbiasedness isn't necessarily that big a deal).
Variance itself isn't directly a measure of spread. If I double all the values in my data set, I contend they're twice as "spread". But variance increases by a factor of 4. So more usually, it is said that standard deviation, rather than variance is a measure of spread.
Of course, the same issue occurs with standard deviation (the usual $s_{n-1}$ version) as with variance -- when you double up the points the standard deviation changes, for the same reason as happens with the variance.
In small samples the Bessel correction makes standard deviation somewhat less intuitive as a measure of spread because of that effect (that duplicating the sample changes the value). But many measures of spread do retain the same value when duplicating the sample; I'll mention a few --
$s_n$ (of course)
the mean (absolute) deviation from the mean
the median (absolute) deviation from the median
the interquartile range
Glen_b♦
$\begingroup$ "There's no particular reason you have to use the unbiased estimator" -- indeed you shouldn't necessarily estimate anything. The variance of {3, 5} itself is 1, per the first formula. As you point out, the questioner has attempted to estimate the variance of a population from which this is presumed to be a sample, but who knows whether it is or not. $\endgroup$ – Steve Jessop Jun 21 '15 at 12:44
As some sort of mnemonic, $V\,X = E\,V\,X + V\,E\,X$. So the expected value of a sample's variance is too low, with the difference being the variance of the sample's mean.
The usual sample variance formula compensates for that, and the variance of the sample's mean scales inversely with sample size.
As an extreme example, taking a single sample will always show a sample variance of 0, obviously not indicating a variance of 0 for the underlying distribution.
Now for 2 and 4 evenly weighted samples, the corrective factors are $2/1$ and $4/3$, respectively. So your calculated expected variances differ by a factor of $2/3$. The variance of the sample itself is $1$ in either case. But the first case presents a weaker case for $4$ being the mean of the base distribution, and every other value would mean a larger variance.
By conflating estimators with statistics, this answer confuses, rather than clarifies, the question. Please read Glen_b's original answer in this thread. The argument in the first two paragraphs is mysterious because it seems to be irrelevant to the question. – whuber♦ Jun 20 '15 at 14:25
Time-to-infection by Plasmodium falciparum is largely determined by random factors
Mykola Pinkevych1,
Kiprotich Chelimo2,
John Vulule2,
James W Kazura3,
Ann M Moormann4 and
Miles P Davenport
BMC Medicine 2015, 13:19
© Pinkevych et al.; licensee BioMed Central. 2015
The identification of protective immune responses to P. falciparum infection is an important goal for the development of a vaccine for malaria. This requires the identification of susceptible and resistant individuals, so that their immune responses may be studied. Time-to-infection studies are one method for identifying putative susceptible individuals (infected early) versus resistant individuals (infected late). However, the timing of infection is dependent on random factors, such as whether the subject was bitten by an infected mosquito, as well as individual factors, such as their level of immunity. It is important to understand how much of the observed variation in infection is simply due to chance.
We analyse previously published data from a treatment-time-to-infection study of 201 individuals aged 0.5 to 78 years living in Western Kenya. We use a mathematical modelling approach to investigate the role of immunity versus random factors in determining time-to-infection in this cohort. We extend this analysis using a modelling approach to understand what factors might increase or decrease the utility of these studies for identifying susceptible and resistant individuals.
We find that, under most circumstances, the observed distribution of time-to-infection is consistent with this simply being a random process. We find that age, method for detection of infection (PCR versus microscopy), and underlying force of infection are all factors in determining whether time-to-infection is a useful correlate of immunity.
Many epidemiological studies of P. falciparum infection assume that the observed variation in infection outcomes, such as time-to-infection or presence or absence of infection, is determined by host resistance or susceptibility. However, under most circumstances, this distribution appears largely due to the random timing of infection, particularly in children. More direct measurements, such as parasite growth rate, may be more useful than time-to-infection in segregating patients based on their level of immunity.
Blood-stage immunity
Time-to-infection
Infection with Plasmodium falciparum (P. falciparum) causes over 1 million deaths each year [1]. The risks of death and clinical illness are highest in young children (<5 years old), whereas adults living in endemic areas show reduced prevalence of infection, reduced parasitemia, and reduced incidence of clinical illness. This resistance to infection and illness with age is often referred to as 'naturally acquired immunity', and understanding the mechanisms of this may facilitate the development of a vaccine for the control of malaria. Studies of naturally acquired immunity rely on identifying variation in susceptibility in the population, and then characterizing the differences in immune responses between susceptible and resistant individuals. If immune responses associated with resistance can be identified, these may provide useful targets in the development of vaccines.
A key feature in the study of naturally acquired immunity is the identification of individuals that are relatively protected from infection or illness. If immune responses can be characterized at baseline, and subsequent infection rates identified, then it is possible to retrospectively identify those responses most closely associated with protection. Prospective cohort studies offer the opportunity to measure immune responses at baseline, and investigate these as predictors of either infection (parasitemia) or clinical illness. Susceptibility may be measured as either the presence or absence of infection or clinical episodes in a fixed time period, the number of episodes in a period, or the time to an episode. An alternative approach to studying malaria susceptibility and resistance is through a prospective study of time-to-infection in a cohort of individuals treated to eliminate malaria, and then undergoing natural exposure in an endemic setting. By observing which baseline immunological factors predict a delay in the time-to-infection, it is hoped to detect protective immune responses. Such studies have been used to explore the relationship between antibody responses and protection from both infection and clinical episodes [2-4].
Although these are generally referred to as time-to-infection studies, very different results can be obtained depending on whether infection is detected by microscopic examination of blood, or more sensitive PCR techniques [5-7]. Since these two techniques give different times of 'infection', it is probably more accurate to discuss these studies as measuring 'time-to-detection' of infection (using a particular detection method). Thus, it is important to understand that current 'time-to-infection' studies are always measuring 'time-to-detection'. If we had a sensitive enough assay, the time of initiation of infection and time of detection would coincide. However, in the absence of this, we will use the term 'time-to-initiation' to refer to the time until initiation of blood-stage infection, and 'time-to-detection' to refer to what is usually described as 'time-to-infection'.
In time-to-infection (detection) studies a major assumption is that delayed acquisition of infection (or clinical disease) is the result of the level of immune protection. However, the timing of when infection or disease is first detected depends on two major factors. The first is the random timing of when a particular individual experiences a new infection (from an infectious bite from a mosquito). The second factor is how the immune system subsequently modifies the outcome of the bite to determine whether and when infection or clinical illness is detected. For example, liver stage immunity may reduce the probability that an infectious mosquito bite results in a blood-stage infection (and only a small fraction of infected mosquito bites are thought to reach the blood stage [8,9]). Similarly, blood-stage immunity may delay the timing of parasite detection or clinical illness after the initiation of blood-stage infection, and may reduce the peak levels of parasite or the clinical manifestations of infection [5]. It is generally assumed that immunity plays a role in determining differences in time-to-infection, and thus that time-to-infection can be used as a correlate of immunity [2-4]. The major effect of pre-erythrocytic immunity would be to delay the average time-to-initiation of infection. Blood-stage immunity would not change time-to-initiation, but would change time-to-detection, because slower parasite growth would increase the delay between initiation and detection.
We have previously analysed the mechanisms of naturally acquired immunity by studying the dynamics of infection of individuals of different ages [5]. We found that the growth rate of parasites in blood-stage infection decreased with age and that this decrease in growth rate explains the differences in time-to-detection observed in individuals of different ages [5,10]. Our modelling suggested that time-to-initiation of blood-stage infection was not significantly different between age groups, and thus found little evidence for pre-erythrocytic immunity delaying time to initiation. By contrast, we found a decreased blood-stage growth with age and that this decreased growth explained the delayed time-to-detection with age. Understanding how heterogeneity in blood-stage immunity and parasite growth rate affect time-to-infection studies is important to interpreting immune correlates arising from these studies.
Herein, we have analysed the kinetics of infection in a treatment-time-to-infection (detection) study performed in Kenya [2], in order to understand the ability of this approach to identify differences in susceptibility or resistance to infection. We argue that, in most cases, the major factor that determines the time-to-detection is simply the random timing of when infection happened to be initiated. We show that, depending on the age cohort and method used to detect infection, stratifying individuals based on time-to-detection will not be useful in identifying individuals who are more susceptible or immune to infection. As a result, the timing of infection between individuals often carries little information about the level of immunity of the individuals concerned. We illustrate how the sensitivity of the method of detection of parasites can also play an important role in determining how powerful this technique is at estimating the level of immune protection; paradoxically, the higher the sensitivity of the detection method, the lower the ability to discern differences in parasite growth rate. Overall, our analysis suggests that time-to-infection studies need to be interpreted with caution, and alternative approaches such as direct measurement of parasite growth rate may be much more sensitive at detecting differences in acquired immunity to P. falciparum infection.
Field study
We analysed the data from a field study of a cohort of 201 individuals aged 0.5 to 78 years old living in a malaria holoendemic region of western Kenya [11]. This population has a high incidence of P. falciparum infection, which we have recently estimated as a new blood-stage infection approximately every 2 weeks [10]. Subjects were treated with Coartem®, which acts against blood-stage infection but does not affect liver-stage parasites [12]. After treatment, blood smears were monitored weekly for 11 weeks for presence of P. falciparum parasites by light microscopy. Individuals were removed from the study if they were found microscopy-positive by week 2 after treatment (due to presumed treatment failure) or if weekly samples were not collected after the second week of treatment, thus leaving 197 individuals for analysis. Blood samples were also later analysed using a nested polymerase chain reaction (PCR) approach to measure low levels of infection [7]. The PCR analysis was performed post-hoc and thus did not affect the inclusion criteria for the field study. This data was previously analysed to estimate growth rates of P. falciparum in vivo [5,10].
Directly estimating the growth rate using PCR and microscopy data
We can assess the growth rate expressed as parasite multiplication rate (PMR) in individuals using the time between PCR and microscopy detection for each individual. However, we cannot estimate the growth rate precisely, since our PCR measurement shows only the presence or absence of parasites above a threshold, rather than the concentration of parasites.
In order to investigate the PMR at different times-to-detection, we estimated the minimal PMR for each individual using the PCR and microscopy data (Figure 1A and B, respectively). Briefly, we identify the time of the first positive detection of parasites by PCR (tPCR), the time of first detection by microscopy (tmicro), and the last week when the PCR was negative (tPCR − 7). We assume a parasite density of 40 parasites/μL as the microscopy detection threshold (Tmicro) and a density of 0.12 parasites/μL as the PCR detection threshold (TPCR) [5,13,14], and use the actual density of parasites at microscopy detection (Dmicro). We then estimate the parasite growth rates (PMRs), depending on the relative timing of tPCR and tmicro. If tPCR = tmicro (i.e., parasites were first detected by PCR and microscopy in the same week), then r = (Dmicro/TPCR)^(2/7) (i.e., we assume growth from the PCR threshold to the microscopy value over the week before detection). When tPCR < tmicro (as was usually the case), we know that (i) parasite density was between TPCR and Tmicro at tPCR, and (ii) parasite density was < TPCR at (tPCR − 7). Assuming the real parasite density was at the upper limit of these ranges (i.e., at Tmicro at tPCR, and at TPCR at (tPCR − 7)), we can obtain a conservative estimate of PMR, and take the larger of the two estimates. Thus,
$$ r = \max\left[{\left({D}_{\mathrm{micro}}/{T}_{\mathrm{micro}}\right)}^{2/\left({t}_{\mathrm{micro}}-{t}_{\mathrm{PCR}}\right)},\ {\left({D}_{\mathrm{micro}}/{T}_{\mathrm{PCR}}\right)}^{2/\left({t}_{\mathrm{micro}}-{t}_{\mathrm{PCR}}-7\right)}\right] $$
Treatment-time-to-infection studies. The results of a previously published treatment-time-to-infection cohort study in Kenya are shown [2]. Parasites were detected by either microscopy (A) or PCR (B). Black shapes are data (joined by dashed lines). Black squares, blue lines – children 1 to 4 years old (y.o.); circles, green line – children 5 to 9 y.o.; triangles, orange line – children 10 to 14 y.o.; diamonds, red line – adults >14 y.o. The solid blue line is the result of fitting the exponential decay model for children 1 to 4 years old.
We note that some individuals are PCR-positive but do not become microscopy-positive before the end of the study (tmax). In this case, we assume that the parasite concentration was at the microscopy detection threshold in the last week of the study and estimate a maximal PMR, r = (Tmicro/TPCR)^(2/(tmax − tPCR − 7)).
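As a concrete illustration of this calculation, the following Python sketch computes the minimal PMR for a hypothetical individual. The threshold values are those quoted above (40 and 0.12 parasites/μL), the exponent denominators follow the equations as printed, and the detection times, density, and all function and variable names are illustrative choices of ours rather than values or code from the original analysis.

```python
# Detection thresholds quoted in the text (parasites/uL); variable names are ours.
T_MICRO = 40.0   # microscopy threshold
T_PCR = 0.12     # PCR threshold

def minimal_pmr(t_pcr, t_micro, d_micro):
    """Conservative (minimal) parasite multiplication rate per 2-day cycle.

    t_pcr, t_micro : day of first PCR / microscopy detection
    d_micro        : parasite density (parasites/uL) at microscopy detection
    """
    if t_pcr == t_micro:
        # detected by PCR and microscopy in the same week: growth from the PCR
        # threshold to the observed density over the preceding week
        return (d_micro / T_PCR) ** (2.0 / 7.0)
    # two lower bounds (density <= T_MICRO at t_pcr; <= T_PCR one week earlier);
    # exponent denominators follow the equations as printed; keep the larger bound
    r1 = (d_micro / T_MICRO) ** (2.0 / (t_micro - t_pcr))
    r2 = (d_micro / T_PCR) ** (2.0 / (t_micro - t_pcr - 7.0))
    return max(r1, r2)

# Hypothetical individual: PCR-positive at day 21, microscopy-positive at day 35
print(round(minimal_pmr(t_pcr=21, t_micro=35, d_micro=500.0), 2))
```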
Modelling the infection curve
In previous studies, we showed that the distributions of time-to-detection of infection in the different age groups are consistent with the measured reduction in PMR with age, leading to a delay in the time until the detection of infection, as well as a reduced peak of parasitaemia [5,10]. We assumed that parasites grow exponentially from the time of initiation of blood-stage infection, with a life cycle of 2 days. Thus, the concentration of parasites in blood can be described by the formula:
$$ C(t)=C(0){r}^{t/2} $$
where r is the PMR, C is the concentration of parasites per μL, and t is the time (in days) from the initiation of blood-stage infection (t = 0). We note that the concentration of parasites at emergence from the liver is adjusted by the blood volume in each age group. The average blood volumes in the age groups (V1 = 1.1 × 10^6 μL, V2 = 2 × 10^6 μL, V3 = 3.3 × 10^6 μL, V4 = 5 × 10^6 μL) were taken from Chart 1 in reference [15]. The number of merozoites released from the liver for a single bite was estimated as 5.6 × 10^4 [13].
In the model, we assume that bites occur randomly with exponentially distributed times between infective bites. However, we can only detect infection after the delay θ(r) due to blood-stage parasite replication until the parasitemia reaches the detection threshold. This delay is equal to:
$$ \theta(r) = 2\,{\log}_{r}\left(T/C(0)\right) $$
where r is the PMR and T is the detection threshold (microscopy or PCR).
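To make the growth law and the delay-to-detection formula concrete, a small Python sketch is given below. The starting concentration C(0) is built from the figures quoted in the text (5.6 × 10^4 merozoites divided by the blood volume of the youngest age group); the example PMR value is arbitrary, and all names are ours.

```python
import numpy as np

# Starting blood-stage concentration (parasites/uL): merozoites released from the
# liver divided by the blood volume quoted for the youngest age group.
C0 = 5.6e4 / 1.1e6
T_PCR, T_MICRO = 0.12, 40.0

def concentration(t_days, r, c0=C0):
    """C(t) = C(0) * r**(t/2): exponential growth with a 2-day life cycle."""
    return c0 * r ** (t_days / 2.0)

def delay_to_detection(r, threshold, c0=C0):
    """theta(r) = 2 * log_r(T / C(0)): days from blood-stage initiation to detection."""
    return 2.0 * np.log(threshold / c0) / np.log(r)

r = 8.0  # example parasite multiplication rate per 2-day cycle
print(round(delay_to_detection(r, T_PCR), 1), "days to PCR detection")
print(round(delay_to_detection(r, T_MICRO), 1), "days to microscopy detection")
```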
Assuming that the PMR is the same within an age group, the fraction of individuals remaining undetected follows an exponential decay curve, with an initial plateau due to the growth of parasites up to the detection threshold.
$$ S(t)=\begin{cases} {e}^{-\lambda\left(t-2{\log}_{r}(T/C(0))-\tau\right)}, & t > 2{\log}_{r}(T/C(0))+\tau \\ 1, & t \le 2{\log}_{r}(T/C(0))+\tau \end{cases} $$
The constant τ = 7 days is the earliest day on which blood-stage infection could be initiated after treatment, owing to the pharmacodynamics of lumefantrine [12,16-22]; λ is the rate of initiation of blood-stage infection.
We also assumed that the PMR has a normal distribution within a group of people of approximately the same age, with mean m and standard deviation βm, where β is a positive constant. This normal distribution of growth rates is consistent with the observed data; however, the precise shape of the distribution is not critical to the conclusions. The functions f(r) and F(r) are the probability density function and the cumulative distribution function of this normal distribution, respectively. The constant rmax is the maximal number of newly infected red blood cells that can arise from one infected red blood cell.
The model that describes the infection curve with the delay to detection is defined by formula:
$$ S(t) = F(1) + \frac{1}{F(r_{\max})}{\int}_{1}^{r_{\max}} {e}^{-\lambda\, \max\left(t-\theta(r)-\tau,\ 0\right)}\, f(r)\, dr $$
This formula incorporates the initial delay to detection in the exponential decay function for all possible PMRs, weighted by the probability of a given PMR. The terms in front of the integral arise from the truncation of the normal distribution at rmax (we assumed a maximal PMR of 32 per cycle) and from the assumption that infections with PMR ≤ 1 would never be detected (the function tends to a plateau at F(1)).
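The sketch below evaluates this survival function numerically. The parameter values (λ, τ, the mean and relative spread of the PMR distribution, rmax, the starting concentration, and the detection threshold) are illustrative placeholders rather than the fitted values from the paper; the code simply transcribes the formula above.

```python
import numpy as np
from scipy import stats, integrate

# Illustrative (not fitted) parameters: infection rate, washout delay, truncation,
# mean PMR and relative spread, starting concentration, and detection threshold.
LAM, TAU, R_MAX = 0.066, 7.0, 32.0
M, BETA = 8.0, 0.3
C0, T_DETECT = 5.6e4 / 1.1e6, 40.0

pmr_dist = stats.norm(loc=M, scale=BETA * M)

def theta(r):
    """Days from blood-stage initiation until parasitaemia reaches T_DETECT."""
    if r <= 1.0:
        return np.inf   # a PMR <= 1 never reaches the detection threshold
    return 2.0 * np.log(T_DETECT / C0) / np.log(r)

def survival(t):
    """Fraction of individuals still undetected at day t (formula above)."""
    integrand = lambda r: np.exp(-LAM * max(t - theta(r) - TAU, 0.0)) * pmr_dist.pdf(r)
    integral, _ = integrate.quad(integrand, 1.0, R_MAX)
    return pmr_dist.cdf(1.0) + integral / pmr_dist.cdf(R_MAX)

print([round(survival(t), 3) for t in (14, 28, 56, 77)])
```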
In the current study, using the assumptions of model (1), we want to find the distribution function h(r) of the PMR for people who were detected positive by PCR or microscopy in a given time window (t1, t2). For this purpose, we multiply the distribution function of the PMR for the whole group, f(r), by the 'fraction' of people with a given PMR detected in this time window, i.e., people in whom blood-stage infection was initiated θ(r) days before detection. The distribution of the PMR, h(r), in the given time window can be found using the formula:
$$ h(r) = k\, f(r){\int}_{t_1}^{t_2} {f}_{\exp}\left(t-\theta(r)\right) dt = k\, f(r)\left({e}^{-\lambda\, \max\left({t}_1-\theta(r),\ 0\right)}-{e}^{-\lambda\, \max\left({t}_2-\theta(r),\ 0\right)}\right) $$
The function f_exp(t − θ(r)) is the exponential density that describes initiation of the blood-stage infection θ(r) days before detection. The constant k normalizes the expression so that h(r) satisfies the conditions of a probability density function (PDF).
$$ k = {\left[{\int}_{1}^{r_{\max}} f(r)\left({e}^{-\lambda\, \max\left({t}_1-\theta(r),\ 0\right)}-{e}^{-\lambda\, \max\left({t}_2-\theta(r),\ 0\right)}\right) dr\right]}^{-1} $$
The influence of the PMR on the delay to detection by PCR and microscopy, and the difference in the distributions of the PMR between individuals detected earlier and later after treatment, are described by the model above and are illustrated schematically in Figure 2.
Schematic of time-to-infection model. (A) Given a constant force of infection, the time-to-initiation of blood-stage infection is exponentially distributed. (B) After emergence from the liver, there is a distribution of parasite growth rates (shaded purple triangles). (C) Parasites grow until they reach the threshold for detection by PCR (red dots and lines) or microscopy (green dots and lines). It is then possible to compare the growth rates for individuals detected early (blue box in C, blue-shaded shape in D) or late (yellow box in C, yellow-shaded shape in D).
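A numerical version of this conditional distribution is sketched below, reusing the same illustrative parameter values as the survival-curve sketch above; it compares the mean PMR among individuals detected in an early versus a late time window, which is the comparison drawn schematically in Figure 2. The window boundaries and all constants are illustrative.

```python
import numpy as np
from scipy import stats, integrate

# Same illustrative (not fitted) parameters as in the survival-curve sketch above.
LAM, R_MAX, M, BETA = 0.066, 32.0, 8.0, 0.3
C0, T_DETECT = 5.6e4 / 1.1e6, 40.0
pmr_dist = stats.norm(loc=M, scale=BETA * M)

def theta(r):
    return np.inf if r <= 1.0 else 2.0 * np.log(T_DETECT / C0) / np.log(r)

def h_unnormalised(r, t1, t2):
    """f(r) weighted by the probability of detection between days t1 and t2."""
    w = (np.exp(-LAM * max(t1 - theta(r), 0.0)) -
         np.exp(-LAM * max(t2 - theta(r), 0.0)))
    return pmr_dist.pdf(r) * w

# Mean PMR among individuals detected early (days 14-35) versus late (days 56-77)
for t1, t2 in [(14.0, 35.0), (56.0, 77.0)]:
    k, _ = integrate.quad(lambda x: h_unnormalised(x, t1, t2), 1.0, R_MAX)
    mean_r, _ = integrate.quad(lambda x: x * h_unnormalised(x, t1, t2) / k, 1.0, R_MAX)
    print(f"window {t1:.0f}-{t2:.0f} days: mean PMR = {mean_r:.2f}")
```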
Time-to-detection in children is a random process
We first focused on the kinetics of infection of the youngest children in the cohort, aged 1 to 4 years (Figure 1A, in blue). Using detection of parasites by light microscopy, our sensitivity of detection was around 40 parasites per μL. Using this method of detection, we found that time-to-detection in these children varied from 3 weeks to 11 weeks. In order to determine whether this was a random process, we modelled time-to-detection as an exponential process using formula (3). That is, once we allow for a delay from treatment (for washout of lumefantrine, and the time taken for the parasites to grow to the level of detection), we found that the rate of detection of infection was consistent with an exponential process, with a rate of new infections (λ) of 0.066/day (95% CI, 0.056–0.076) (Figure 1). This equates to a 'half-life' (time until half the children are infected) of approximately 10 days. The exponential curve is indicative of a stochastic process, with all individuals at equal risk at all times. The close fit of the data to this model suggests that the timing of detection in these children is a random process, equivalent to radioactive decay. Thus, time-to-detection of an individual child carries essentially no information about the susceptibility of that child. The time-to-detection in these children is determined by the random time-to-initiation (of infection), and can be explained simply as a stochastic process, dependent on the time of biting.
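The rate estimate itself is straightforward to reproduce. The sketch below fits an exponential rate to hypothetical detection times with right-censoring, treating the initial delay as a single fixed offset (in the paper this delay is derived from drug washout and parasite growth); the data values and the offset are invented for illustration only.

```python
import numpy as np

# Hypothetical days to first microscopy detection; None = still undetected at the
# end of follow-up (day 77).  DELTA is a fixed stand-in for the washout-plus-growth
# delay that the paper derives from the parasite growth rate.
times = [21, 28, 28, 35, 42, 49, 63, None, None]
T_END, DELTA = 77.0, 14.0

events = np.array([t for t in times if t is not None], dtype=float)
n_censored = sum(t is None for t in times)

# Maximum-likelihood estimate of an exponential rate with right-censoring:
# number of events divided by the total at-risk time after the offset.
exposure = np.sum(events - DELTA) + n_censored * (T_END - DELTA)
lam_hat = len(events) / exposure
print(f"rate = {lam_hat:.3f} /day, 'half-life' = {np.log(2) / lam_hat:.1f} days")
```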
To confirm this, we investigated whether children infected early or late differed in their subsequent infection kinetics. The blood-stage growth rate of the parasite is one factor determining when infection will be detected. Faster growing parasites should take a shorter time from emergence in the liver to reaching the detection threshold. If growth rate were the only (or major) factor determining time of detection, then early detection should be associated with faster growth, and late detection with slow growth. However, if time-to-detection is due to the random time-to-initiation of infection (as suggested above), then blood-stage growth rate will not be correlated with time-to-detection.
One measure of blood-stage parasite growth is the time between PCR detection and microscopy detection, since this will be longer for slower growing parasites. We measured the delay between detection of parasitemia by PCR and by microscopy in children who were infected early (parasites first detected by microscopy in weeks 3 to 4 of the study) versus children infected late (weeks 5 to 9). We found no significant difference in the delay between PCR and microscopy detection in children infected early versus late (Figure 3A). We also estimated the growth rate of parasites in the same group of children (using the time of PCR detection and the level of parasitemia at microscopy detection, and equation (1)) and again found that children with different time-to-detection did not differ significantly in growth rates (Figure 3B). This suggests that blood-stage growth rate of P. falciparum is not a major determinant of time-to-detection in children or, conversely, that time-to-detection does not sort children based on the growth rate of blood-stage infection.
Time-to-microscopy-detection and parasite growth rate. Individuals were grouped according to the time at which parasites were first detected by microscopy. The time between PCR detection and microscopy detection was directly measured from the data for each patient. Parasite growth rates were estimated from the time of PCR detection to time and level of microscopy detection, using equation (1). This allowed us to compare both delays and PMR in early-detected versus late-detected groups. For children, neither the delay (A) nor the growth rate (B) was significantly different between early and late-infected groups (timing of detection shown in E). For adults, there was a significantly longer delay (C), and slower growth rate (D) in the late-infected group (timing of detection shown in F).
Time to microscopy-detectable infection in adults is associated with parasite growth rate
We have previously shown that adults within this cohort had significantly different time-to-detection compared to children [5,10]. The survival curves of adults do not conform to a simple exponential reinfection process, suggesting a greater role for possible infection-modifying immune responses. Two possible mechanisms are likely to modify the time-to-detection: differences in liver-stage immunity (reducing the rate of initiation of infections), or differences in blood-stage immunity, affecting the time from liver emergence to detection of infection. We have previously demonstrated that the survival curves can be explained by heterogeneity in the growth rates of parasites in blood. If time-to-detection were caused by differences in liver-stage immunity, then we would expect no difference in parasite growth rate according to time of infection. However, if blood-stage growth rate plays a role, then we expect that adults with a longer time-to-detection would have slower parasite growth rates. Thus, we studied individuals aged >15 years in the cohort, focusing on individuals infected early (weeks 4 to 6) or late (weeks 8 to 11). Note that the designation of 'early' and 'late' infection differed considerably between children and adults, due to the different timing of detection in the different groups.
In this adult population, we find that adults with longer time-to-detection (by microscopy) have a significantly greater delay between PCR detection and microscopy detection (median 0.5 weeks versus 4 weeks, P value = 0.0002; Figure 3C). Using the levels of parasitemia at microscopy detection and applying formula (1), we also estimated the growth rate of parasites in adults infected early and late, and found significantly slower growth in adults infected late (median 5.156 per 2-day cycle versus 1.561, P = 0.0004; Figure 3D). This indicates that time-to-detection by microscopy carries information on the kinetics of blood-stage parasite growth in this adult population.
Use of more sensitive testing reduces the ability to discriminate differences in parasite growth rate
In addition to screening samples for infection by microscopy, we also screened samples by PCR. In the case of the children's cohort, the shape of the reinfection curve remained exponential, despite an overall predisposition for infection to be detected earlier (Figure 1A,B, dark blue dashed lines). In the case of the adults, the shape of the time-to-detection curve is significantly altered when infection is detected by PCR (Figure 1A,B, red dashed lines). The more sensitive detection threshold of PCR causes the curve to become, overall, much more like an exponential curve. Using this data, we again stratified individuals based on time-to-detection among the children and adults, this time grouping early and late according to time of PCR detection (Figure 4). In both adults and children, there were sometimes long delays from PCR to microscopy detection. A large proportion of adults were PCR positive but were not detected by microscopy. In order not to bias the 'late-infected cohort' we only estimated delays/growth rates where this could be measured in both the early and late cohorts (i.e., within 3 weeks of detection by PCR in adults and 5 weeks in children).
Time-to-PCR detection and parasite growth rates. Individuals were grouped according to the time at which parasites were first detected by PCR. The time between PCR detection and microscopy detection was measured, and PMR estimated, in order to compare parasite growth rates in early-detected versus late-detected groups. For children, neither the delay (A) nor the growth rate (B) was significantly different between early- and late-infected groups (timing of detection shown in E). For adults, there was a significantly shorter delay (C) and faster growth rate (D) in the late-infected group (timing of detection shown in F). Grey symbols in panels A–D and open circles in panels E and F indicate where parasites were not detected by microscopy before the end of the study. Note that the data in panels E and F is the same data as in Figure 2E and F, but sorted according to time-of-detection by PCR.
In the children aged 1 to 5, we studied infection kinetics in individuals becoming PCR-positive in weeks 1 to 3 versus weeks 4 to 6. We observed no significant differences in either the delay between PCR positivity and microscopy positivity, or in the estimated PMR (Figure 4A,B). In adults aged >14 years we performed a similar analysis, this time comparing those who became PCR-positive in weeks 2 to 4 and in weeks 6 to 8. Here, we observed that individuals becoming PCR-positive later had a reduced time between PCR and microscopy detection (P = 0.0185) and an associated increased PMR (P = 0.0238). This is unexpected, as late-detected individuals are expected to have slower parasite growth. A confounding factor here may be the high proportion of adults who were detected as PCR-positive, but did not become microscopy-positive during the study. For these individuals we can only estimate a 'minimum delay' and 'maximum growth rate'.
Modelling time-to-detection
The analysis presented above demonstrates that the use of time-to-detection to classify individuals as susceptible or resistant to infection can be problematic in our study. This is because the random factor of when infection is initiated is often the major factor determining the time-to-detection. However, our study included a particular distribution of ages and number of individuals studied. Therefore, to illustrate the problem more generally, we used a modelling approach to look at how parasite growth rate and parasite detection method interact to determine how informative time-to-detection data is. We have previously illustrated that parasite growth is the major factor that differs between age groups in the field study. By varying only the average parasite growth rate for different age groups and assuming a normal distribution of growth rates within an age group, we found we could simultaneously fit both the PCR-determined and microscopy-determined time-to-infection curves [5,10]. These predicted differences in parasite growth rate with age were also supported by direct estimation of parasite growth rates using PCR and microscopy data for different individuals. Figure 5A,B shows the fitting of the survival curves to the microscopy and PCR detection datasets. The same model, using the predicted distribution of growth rates, was then used to understand whether time-to-detection was useful at identifying differences in parasite growth rate between 'early infected' and 'late infected' groups.
Modelling the parasite growth rates in individuals detected at different times. The time-to-detection of infection by microscopy (A, C, E) or PCR (B, D, F) was modelled assuming the same force of infection for all age groups. Parasite growth rates were assumed to follow a normal distribution, with a different mean parasite growth rate for children aged 1 to 5 (blue lines) and adults >14 years (red lines). Growth rate was estimated for individuals with infection detected in the first (14 to 35 days, light grey), second (35 to 56 days, medium grey), or last (56 to 77 days, dark grey) third of the study period. For children (Panels C and D), very little difference in parasite growth is predicted depending on when their infections were detected (curves for individuals detected early and late overlay each other in C and D). For adults, although the overall PMR is the same regardless of how infection is detected, microscopy is better at sorting patients based on differences in their PMR. Thus, when infection is detected by microscopy (detection threshold of 40 parasites/μL; Panel E), individuals infected later are predicted to have slower growth rates. When infection is detected by PCR (detection threshold of 0.12 parasites/μL; Panel F), there is a smaller difference in PMR between individuals infected at different times.
In order to illustrate the interaction of parasite growth rate and time-to-detection, we divided the youngest and oldest age cohorts (blue and red lines, respectively, in Figure 5A,B) into three groups according to time-to-detection (indicated by grey shading in Figure 5A,B). Then, we used the model to predict the expected growth rate distribution for individuals infected early (14 to 35 days), intermediate (35 to 56 days), or late (56 to 77 days) after treatment (Figure 5C–F). That is, if we model a constant force of infection and a normal distribution of growth rates within a given age group, does time-to-detection segregate individuals based on the parasite growth rates? Figure 5C shows the predicted distribution of parasite growth rates for children 1 to 4 years with infection detected by microscopy (threshold of detection = 40 parasites/μL) at different times. It is clear that, as predicted by the exponential survival curve, all groups are predicted to have very similar parasite growth rates (i.e., almost complete overlap of growth rates in Figure 5C), and the major factor determining time-to-detection is simply the random factor of when they initiated infection. By contrast, we predict that, using microscopy detection of parasites in adults, individuals detected early have a higher growth rate than individuals detected late (Figure 5E). This occurs because the much slower growth of parasites in older individuals means a much longer time between infection and detection. Thus, although the time of detection is still affected by both the random time-to-initiation (of blood-stage infection) and the subsequent growth rate, in this case, the growth rate plays a more significant role in determining time-to-detection. In our adult cohort using microscopy detection, studying early-detected and late-detected individuals allows identification of a more resistant (slower parasite growth) late-infected group.
Using the more sensitive PCR method to detect infection (threshold of detection = 0.12 parasites/μL), the situation changes. For the children, the growth rate distributions remain overlapping for all time-to-infection groups (Figure 5D). For the adults, time-to-infection is now a much less powerful discriminator of parasite growth rate (Figure 5F shows greater overlap of growth rates than Figure 5E). This occurs because the more sensitive detection reduces the time between infection and detection. Importantly, more sensitive detection reduces the delays induced by growth rate. This has more of an effect for slow-growing infections (where delays are greater) than for fast-growing infections. As the time separation between fast- and slow-growing infections narrows, the balance between the contribution of random infection time and differences in time from initiation to detection (due to blood-stage growth) shifts, and time-to-detection is less affected by parasite growth rate.
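This effect can be read directly from the delay formula θ(r). In the sketch below, an illustrative fast and slow grower are compared under the PCR and microscopy thresholds: the more sensitive assay compresses the difference in detection delay, so random infection timing dominates. The starting concentration and the two PMR values are illustrative choices, not estimates from the data.

```python
import numpy as np

# Starting concentration for an adult-sized blood volume (values quoted in the text);
# the two PMR values below are illustrative 'fast' and 'slow' growers.
C0 = 5.6e4 / 5.0e6
T_PCR, T_MICRO = 0.12, 40.0

def theta(r, threshold):
    """Days from blood-stage initiation until the detection threshold is reached."""
    return 2.0 * np.log(threshold / C0) / np.log(r)

for name, thr in [("PCR", T_PCR), ("microscopy", T_MICRO)]:
    fast, slow = theta(8.0, thr), theta(1.5, thr)
    print(f"{name:10s}: fast {fast:5.1f} d, slow {slow:5.1f} d, spread {slow - fast:5.1f} d")
```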
Effects of different infection rates
The Kenyan cohort study we have analysed was performed in an area of high transmission, with an estimated entomological inoculation rate of 0.8 per day [8], and new infection rate of 0.066 per day [10]. As a result of this high infection rate, the distribution of time-to-initiation of blood-stage infections was quite short. Since the utility of time-to-detection studies in separating individuals based on growth rate of parasites is determined by the balance between delays due to infection time and delays due to subsequent growth rate, we investigated how differences in infection rate in different cohorts would affect such studies. We used the distributions in parasite growth rates for different age groups estimated from the actual cohort and then modelled the outcome for varying infection rates. Figure 6 shows how different infection rates affect the ability of time-to-infection studies to identify putatively resistant individuals with slow parasite growth. In the baseline scenario (yellow highlighted column in Figure 6), the infection rate estimated from the cohort was used. In this baseline scenario, we were able to identify differences in growth rate in adults (Figure 6H), but not in children (Figure 6E), based on time-to-infection and microscopy detection of infection (as shown also in Figure 5).
Modelling the effects of different infection rates on time-to-infection. Using the same model as in Figure 5, and assuming detection of infection by microscopy, we investigated the effects of raising or lowering the infection rate. The centre column (highlighted; B, E, H) shows the same infection rate as in Figure 5 (average time between bites = 10 days). Panel A shows the effects of a quadrupling of the infection rate. Similarly, Panel C shows the effects of a quartering of the infection rate. For each infection rate, the predicted time-to-infection curves for adults (red) and children (blue) are shown. Solid boxes in Panels A–C and solid lines in Panels D–I indicate an 'early infection' group, and dashed boxes and dashed lines indicate a 'late infection' group. The second row (D–F) shows the predicted difference in growth rates for children infected early versus late, at the different infection rates. Although at high infection rates (D) some difference in average growth rate is predicted, this is lost at lower infection rates. For adults, there is a large difference in growth rate between early- and late-infected groups at high infection rates (G). However, this difference is lost at very low infection rates (Panel I).
Differences in rate of initiation of infection will affect the power of the time-to-infection approach to identify more resistant or more susceptible individuals. If the biting rate had been four times higher (half-life to infection of 2.5 days; Panel A), the children would have all become infected much more quickly, meaning a shorter average delay in time-to-initiation. Therefore, even given the relatively narrow spread of parasite growth rates observed in the children, separating early- versus late-infected children would have segregated groups into higher and lower average growth rates. That is, at high infection rates, delays due to growth would have had a proportionately much larger effect on time-to-detection, and time-to-detection could be used to separate children based on parasite growth rate at this high infection rate (Figure 6D). As we lower infection rate to one quarter of the baseline rate (half-life to infection = 40 days), this reduces the differences in growth rate between early- and late-infected children (Figure 6F).
In our field study (the baseline infection rate scenario), adults infected early and late are expected to have different average growth rates (when infection is detected by microscopy) (Figure 6H). At higher infection rates (Figure 6A,G), time-to-detection remains a useful discriminator. However, when we reduced the simulated infection rate to one quarter of the baseline rate (half-life to infection of 40 days, Figure 6C), this slower infection rate significantly reduces the ability of the assay to discriminate subjects based on time-to-detection (Figure 6F,I).
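A simple Monte-Carlo version of this argument is sketched below: detection time is simulated as a random initiation time plus the growth delay, and the correlation between detection time and PMR is used as a crude measure of how informative time-to-detection is. The infection rates bracket the baseline value quoted above; the PMR distribution and the other constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative constants: starting concentration, microscopy threshold, washout delay,
# and an illustrative adult PMR distribution (mean M, relative spread BETA).
C0, T_MICRO, TAU = 5.6e4 / 5.0e6, 40.0, 7.0
M, BETA = 2.5, 0.4

def simulate(lam, n=100_000):
    t_init = TAU + rng.exponential(1.0 / lam, n)           # random time-to-initiation
    r = np.clip(rng.normal(M, BETA * M, n), 1.05, 32.0)    # clipped (not truncated) PMR draws
    growth_delay = 2.0 * np.log(T_MICRO / C0) / np.log(r)
    t_detect = t_init + growth_delay
    # a strong negative correlation means time-to-detection is informative about the PMR
    return np.corrcoef(t_detect, r)[0, 1]

for lam in (0.264, 0.066, 0.0165):   # 4x baseline, baseline, 1/4 baseline infection rate
    print(f"infection rate {lam:.4f}/day: corr(detection time, PMR) = {simulate(lam):+.2f}")
```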
The ability of time-to-infection studies to establish the more susceptible fraction of individuals (of a given age) is determined by the balance of delays induced by 'waiting to be infected' and delays in subsequent parasite growth. For the children, the delays due to differences in growth rates are small, because of the high growth rate, whereas the expected delays are much higher in adults. Since changing the infection rate does not change the PMR and the same growth rates are used for all scenarios, the time delay during growth from the liver stage to detection is independent of the infection rate. Changing the infection rate only changes the expected distribution of delays due to time-to-initiation of infection. Whenever delays due to time-to-initiation are greater than delays due to growth, time-to-detection is largely random. Only when the delays induced by growth rates are similar to or greater than the random delays is time-to-detection informative. For children, this only occurs at a very high infection rate (Figure 6D). For adults, with relatively large delays due to slow parasite growth, PMR is the major determinant of delay at most infection rates shown (Figure 6G–H), but is a smaller factor at low infection rates (Figure 6I).
Identifying naturally acquired immune responses that are able to control parasite growth in the infected host and reduce the frequency of clinical malaria provides a potential avenue for the development of novel vaccination strategies. A number of investigators have studied how baseline (pre-treatment) immune responses affect time-to-infection after treatment [2,4]. This has also been studied using time-to-infection from cohorts that have naturally cleared malaria infection during the dry season in areas of seasonal transmission [23,24]. The underlying premise of such studies is that time-to-infection is determined by the level of immunity of the host. This assumption is supported by the fact that, when stratified by age, older individuals in endemic areas are consistently observed to have a longer time to infection, and this delay is thought to be due to naturally acquired immunity [5,25].
The association between age and time-to-infection suggests that this is a useful correlate of naturally acquired immunity. However, since the phenomenon of acquired resistance with age and exposure is well-known, comparing immune responses and time-to-infection in different age groups seems rather laborious if the same information can be obtained simply from date of birth. The major utility of time-to-infection studies would be in differentiating those of similar age and level of exposure, but who differ in their levels of immunity. By identifying the differences in immune responses between such 'exposure matched' individuals with different levels of protection, we should be able to identify protective responses and antigens. By comparing narrow age cohorts in localized geographical areas, we may be able to identify such responses. However, there are two major problems in this approach. First, it is also likely that such narrow cohorts may also not differ greatly in their levels of immunity, so a study design that is very sensitive to small differences in susceptibility may be required. Secondly, we are most interested in differing levels of immunity and protection in young children, as they are most at risk of clinical illness. However, since children also have the highest parasite growth rates, they are the most difficult population in which to identify differences in time-to-infection due to differences in growth rates. A major question is whether time-to-infection studies are sensitive enough to detect such differences in immunity.
Herein, we have analysed data from a time-to-infection cohort in Kenya in order to test whether such an approach is able to differentiate varying levels of protection in a group of age-matched individuals in an endemic area. We find that when infection is detected by microscopy, time-to-detection identifies adults with slower parasite growth rates; however, it does not do this well in children. When infection is detected using a more sensitive PCR approach, more adults are detected as being infected (and infection is detected earlier), but time-to-detection is less useful at identifying individuals with slower parasite growth rates. This analysis demonstrates that time-to-infection studies are very sensitive to the distribution of parasite growth rates in the group being studied, as well as the method of detection of parasites. Microscopy detection is better at segregating individuals based on parasite growth rate than PCR detection (and a detection method with an even higher threshold would discriminate better still). However, in either case, the rapid growth rates of parasites in children indicate that it is very difficult to identify differences in growth rates in children using this method.
Using a simulation approach, we investigated how different rates of infection would affect the ability of time-to-detection studies to sort individuals based on parasite growth rate. This illustrates that choosing populations with higher underlying infection rates will always lead to a greater role for parasite growth rates in determining time-to-detection and thus be more sensitive at sorting individuals based on time-to-detection. Similarly, using a parasite-detection assay with a higher detection threshold will lead to an increased effect of parasite growth on time-to-detection, and thus also be more sensitive.
The mechanisms of naturally acquired immunity are generally divided into pre-erythrocytic versus blood-stage immunity. Pre-erythrocytic immunity affects the proportion of infectious bites that initiate blood-stage infection by blocking infected bites prior to or in the liver stages – thus affecting time-to-initiation (of blood-stage infection). Blood-stage immunity affects the growth rate of parasites in blood and hence the time from infection to detection. Our modelling of time-to-infection (Figure 6) reveals an inherent limitation of using such approaches to study naturally acquired immunity. Since so much of the outcome is determined by the random time-to-infection, it is very difficult to determine immune effects on parasite growth rate unless they are large. This also has potential implications for studies using time-to-infection as a means to assess vaccine effects on blood-stage parasite growth, as these may have very limited power to detect changes in time-to-detection. For studies of liver-stage vaccines the problem is slightly different, given that it is changes in the infection rate that are the primary concern (the limitations of the statistical power of such studies has been dealt with elsewhere [26]). In our previous studies [5,10], we have found no evidence for differences in infection rate with age and have shown that measured differences in parasite growth rate with age explain the observed differences in time-to-infection for different age groups. Moreover, the good fit of an exponential model of time-to-infection in children suggests little effect of pre-erythrocytic immunity. Tran et al. [23] have recently used a similar study and PCR detection to also show no difference in time-to-detection in different age groups. We note that differences in infection rate with age may make it easier to detect differences in growth rate in groups with a higher infection rate (Figure 6). It is important to understand how time-to-infection studies can be used to understand differences in parasite growth rates both in naturally acquired immunity and in studies of vaccination.
It is important to note that many studies of vaccine efficacy rely on time-to-clinical-episode, rather than time-to-infection. Infection and parasite growth are a pre-requisite for a clinical episode, and thus time-to-infection may still confound such studies. However, one approach to reducing this effect is to restrict that analysis of clinical episodes to individuals with demonstrated infection [27]. We note that, in our study, there were too few clinical episodes in the 10-week monitoring interval to allow a separate assessment of time-to-episode.
An important question is why, given the limited power of time-to-infection studies, many studies have reported significant associations between time-to-infection and both pre-erythrocytic and blood-stage immunity? One answer to this lies in the aggregation of age groups in many studies. That is, in our analysis, we considered relatively narrowly stratified age groups. If we pool all age groups, it is relatively simple to show large differences in time-to-detection with age. Since immunity also varies with age (and exposure), it is obvious that in the cohort as a whole, time-to-detection will correlate with the accumulation of immunity with age. However, since age is such a strong confounder here, it is questionable what additional information the time-to-infection adds; one could have simply correlated immune response with age and presumably reached similar conclusions. Moreover, since immunity accumulates with age and exposure, it is not possible to disentangle which immune responses are simply the result of prolonged exposure and which may be actually playing a role in reducing parasite growth rates. Where we would ideally like to identify protective responses would be in young children with similar levels of exposure but with differences in either phenotype or specificity of their immune response and who differ in infection outcome. However, in young children, we predict that time-to-infection studies are not able to discriminate differences in blood-stage immunity and parasite growth rates unless infection rates are extremely high.
Time-to-infection studies are only one approach to identify susceptible and resistant individuals. Other approaches include observing the presence or absence of infection in a given time interval, or time to presence of clinical malaria (rather than simply infection). We note that if the underlying time to acquisition of infection is a random process, then the presence or absence of infection in a given time interval is also random. For example, if we truncated our study at day 28 (Figure 1A), we would see 41% of children aged 1 to 4 infected and 59% uninfected. However, whether children were in one group or another would be due to the random distribution of time-to-initiation. Similarly, time-to-clinical-malaria is dependent on time-to-infection, the rate of parasite growth, and the underlying sensitivity to clinical malaria. Thus, the random time-to-infection may still play a dominant role. We note that others have suggested studying the rate of clinical episodes only in individuals shown to be infected [27-29]. Interestingly, this is sometimes used as a measure to decrease heterogeneity in exposure [30]. However, we suggest that this may also have the effect of reducing the impact of the random factor of when or whether infection occurred, even in the presence of homogeneous levels of exposure. Further work is clearly required to determine the role of random factors versus host factors in studies of resistance to clinical malaria.
Detecting differences in parasite growth rate is difficult using time-to-infection studies, since the random timing of infection is often the major factor determining time-to-infection. Human challenge studies provide a much simpler approach for identifying differences in growth rate, as time-of-infection is known. Since all patients are infected synchronously, any delay between patients can be attributed to either a reduced initial burden of infection, or reduced subsequent growth rate. Thus, both time-to-detection and serial measurement of parasitemia can be used to estimate growth rates following infection [13,31,32]. Previous studies have shown major differences in parasite growth rates in naïve versus exposed populations [33], and similar studies could, in principle, be used to correlate prior immune responses with in vivo parasite growth rates following natural infection. Alternatively, the direct measurement of parasite growth rates in time-to-infection studies provides a more direct way to identify differences in blood-stage immunity than using time-to-detection in these studies. Since time-to-infection studies involve regular sampling for infection, if parasites can be detected (by PCR) in two or more sequential samples, then parasite growth rates can be directly estimated, independent of when infection was initiated. Given the significant limitations of time-to-infection studies in detecting differences in both infection rates [26] or differences in growth rates (illustrated here), we propose that direct measurement of parasite growth rates in vivo will be a much more useful correlate of immune control than time-to-infection itself.
Many studies aim to identify resistance or susceptibility to P. falciparum infection based upon the timing or number of observed infections experienced by the patient. However, depending on study design, most of the variation in the timing and number of infections between age-matched individuals may arise simply from the random timing of when infection occurs. Careful attention to study design is required to identify variation in individuals' resistance to P. falciparum infection.
P. falciparum: Plasmodium falciparum
PMR: Parasite multiplication rate
The authors wish to thank Roland Regoes for helpful discussions of his work on time-to-infection in a simian immunodeficiency virus-challenge model that helped initiate these studies.
This work is supported by the Australian Research Council (DP120100064), and National Institutes of Health (NIH, USA), National Institute of Allergy and Infectious Diseases (NIAID), R01 AI043906 (JK and AMM), and Fogarty International Center (FIC) 1D43TW006576 (KC). MPD is an NHMRC Senior Research Fellow.
All authors declare that they have no competing interests.
KC, JV, JWK, and AMM designed and implemented the field study. MP and MPD developed the concepts, designed the approach, and carried out statistical analysis, mathematical modelling of the data, and writing the manuscript. All authors read and approved the final manuscript.
Centre for Vascular Research, University of New South Wales Australia, Kensington, Sydney, NSW 2052, Australia
Kenya Medical Research Institute, Centre for Global Health Research, P. O. Box 1571, Kisumu, 40100, Kenya
Case Western Reserve University, Biomedical Research Building Suite 431, 2109 Adelbert Road, Cleveland, OH 44106, USA
University of Massachusetts Medical School, 373 Plantation Street, Room 318, Worcester, MA 01605, USA
1. Murray C, Rosenfeld LC, Lim SS, Andrews KG, Foreman KJ, Haring D, et al. Global malaria mortality between 1980 and 2010: a systematic analysis. Lancet. 2012;379:413–31.
2. Dent AE, Bergmann-Leitner ES, Wilson DW, Tisch DJ, Kimmel R, Vulule J, et al. Antibody-mediated growth inhibition of Plasmodium falciparum: relationship to age and protection from parasitemia in Kenyan children and adults. PLoS One. 2008;3:e3557.
3. Reiling L, Richards JS, Fowkes FJI, Barry AE, Triglia T, Chokejindachai W, et al. Evidence that the erythrocyte invasion ligand PfRh2 is a target of protective immunity against Plasmodium falciparum malaria. J Immunol. 2010;185:6157–67.
4. Richards JS, Stanisic DI, Fowkes FJI, Tavul L, Dabod E, Thompson JK, et al. Association between naturally acquired antibodies to erythrocyte-binding antigens of Plasmodium falciparum and protection from malaria and high-density parasitemia. Clin Infect Dis. 2010;51:e50–60.
5. Pinkevych M, Petravic J, Chelimo K, Vulule J, Kazura JW, Moormann AM, et al. Decreased growth rate of P. falciparum blood stage parasitemia with age in a holoendemic population. J Infect Dis. 2014;209:1136–43.
6. Betuela I, Rosanas-Urgell A, Kiniboro B, Stanisic DI, Samol L, de Lazzari E, et al. Relapses contribute significantly to the risk of Plasmodium vivax infection and disease in Papua New Guinean children 1-5 years of age. J Infect Dis. 2012;206:1771–80.
7. Ofulla AV, Moormann AM, Embury PE, Kazura JW, Sumba PO, John CC. Age-related differences in the detection of Plasmodium falciparum infection by PCR and microscopy, in an area of Kenya with holo-endemic malaria. Ann Trop Med Parasitol. 2005;99:431–35.
8. Beier JC, Oster CN, Onyango FK, Bales JD, Sherwood JA, Perkins PV, et al. Plasmodium falciparum incidence relative to entomologic inoculation rates at a site proposed for testing malaria vaccines in western Kenya. Am J Trop Med Hyg. 1994;50:529–36.
9. Smith DL, Drakeley CJ, Chiyaka C, Hay SI. A quantitative analysis of transmission efficiency versus intensity for malaria. Nat Commun. 2010;1:108.
10. Pinkevych M, Petravic J, Chelimo K, Kazura JW, Moormann AM, Davenport MP. The dynamics of naturally acquired immunity to Plasmodium falciparum infection. PLoS Comput Biol. 2012;8:e1002729.
11. Moormann AM, Sumba PO, Chelimo K, Fang H, Tisch DJ, Dent AE, et al. Humoral and cellular immunity to Plasmodium falciparum merozoite surface protein 1 and protection from infection with blood-stage parasites. J Infect Dis. 2013;208:149–58.
12. Golenser J, Waknine JH, Krugliak M, Hunt NH, Grau GE. Current perspectives on the mechanism of action of artemisinins. Int J Parasitol. 2006;36:1427–41.
13. Bejon P, Andrews L, Andersen RF, Dunachie S, Webster D, Walther M, et al. Calculation of liver-to-blood inocula, parasite growth rates, and preerythrocytic vaccine efficacy, from serial quantitative polymerase chain reaction studies of volunteers challenged with malaria sporozoites. J Infect Dis. 2005;191:619–26.
14. Andrews L, Andersen RF, Webster D, Dunachie S, Walther RM, Bejon P, et al. Quantitative real-time polymerase chain reaction for malaria diagnosis and its use in malaria vaccine clinical trials. Am J Trop Med Hyg. 2005;73:191–8.
15. Seckei H. Blood volume and circulation time in children. Arch Dis Child. 1936;11:21–30.
16. Lefèvre G, Looareesuwan S, Treeprasertsuk S, Krudsood S, Silachamroon U, Gathmann I, et al. A clinical and pharmacokinetic trial of six doses of artemether-lumefantrine for multidrug-resistant Plasmodium falciparum malaria in Thailand. Am J Trop Med Hyg. 2001;64:247–56.
17. McGready R, Tan SO, Ashley EA, Pimanpanarak M, Viladpai-Nguen J, Phaiphun L, et al. A randomised controlled trial of artemether-lumefantrine versus artesunate for uncomplicated Plasmodium falciparum treatment in pregnancy. PLoS Med. 2008;5:e253.
18. White NJ. Assessment of the pharmacodynamic properties of antimalarial drugs in vivo. Antimicrob Agents Chemother. 1997;41:1413–22.
19. Ezzet F, van Vugt M, Nosten F, Looareesuwan S, White NJ. Pharmacokinetics and pharmacodynamics of lumefantrine (benflumetol) in acute falciparum malaria. Antimicrob Agents Chemother. 2000;44:697–704.
20. Mwesigwa J, Parikh S, McGee B, German P, Drysdale T, Kalyango JN, et al. Pharmacokinetics of artemether-lumefantrine and artesunate-amodiaquine in children in Kampala, Uganda. Antimicrob Agents Chemother. 2009;54:52–9.
21. Tarning J, McGready R, Lindegardh N, Ashley EA, Pimanpanarak M, Kamanikom B, et al. Population pharmacokinetics of lumefantrine in pregnant women treated with artemether-lumefantrine for uncomplicated Plasmodium falciparum malaria. Antimicrob Agents Chemother. 2009;53:3837–46.
22. Djimde A, Lefèvre G. Understanding the pharmacokinetics of Coartem®. Malar J. 2009;8:S4.
23. Tran TM, Li S, Doumbo S, Doumtabe D, Huang C-Y, Dia S, et al. An intensive longitudinal cohort study of Malian children and adults reveals no evidence of acquired immunity to Plasmodium falciparum infection. Clin Infect Dis. 2013;57:40–7.
24. Tran TM, Ongoiba A, Coursen J, Crosnier C, Diouf A, Huang C-Y, et al. Naturally acquired antibodies specific for Plasmodium falciparum reticulocyte-binding protein homologue 5 inhibit parasite growth and predict protection from malaria. J Infect Dis. 2014;209:789–98.
25. Sokhna CS, Rogier C, Dieye A, Trape JF. Host factors affecting the delay of reappearance of Plasmodium falciparum after radical treatment among a semi-immune population exposed to intense perennial transmission. Am J Trop Med Hyg. 2000;62:266–70.
26. White MT, Griffin JT, Ghani AC. The design and statistical power of treatment re-infection studies of the association between pre-erythrocytic immunity and infection with Plasmodium falciparum. Malar J. 2013;12:278.
27. Bejon P, Warimwe G, Mackintosh CL, Mackinnon MJ, Kinyanjui SM, Musyoki JN, et al. Analysis of immunity to febrile malaria in children that distinguishes immunity from lack of exposure. Infect Immun. 2009;77:1917–23.
28. Bejon P, Cook J, Bergmann-Leitner E, Olotu A, Lusingu J, Mwacharo J, et al. Effect of the pre-erythrocytic candidate malaria vaccine RTS,S/AS01E on blood stage immunity in young children. J Infect Dis. 2011;204:9–18.
29. Greenhouse B, Ho B, Hubbard A, Njama-Meya D, Narum DL, Lanar DE, et al. Antibodies to Plasmodium falciparum antigens predict a higher risk of malaria but protection from symptoms once parasitemic. J Infect Dis. 2011;204:19–26.
30. Bousema T, Kreuels B, Gosling R. Adjusting for heterogeneity of malaria transmission in longitudinal studies. J Infect Dis. 2011;204:1–3.
31. Cheng Q, Lawrence G, Reed C, Stowers A, Ranford-Cartwright L, Creasey A, et al. Measurement of Plasmodium falciparum growth rates in vivo: a test of malaria vaccines. Am J Trop Med Hyg. 1997;57:495–500.
32. Douglas AD, Edwards NJ, Duncan CJA, Thompson FM, Sheehy SH, O'Hara GA, et al. Comparison of modeling methods to determine liver-to-blood inocula and parasite multiplication rates during controlled human malaria infection. J Infect Dis. 2013;208:340–5.
33. Douglas AD, Andrews L, Draper SJ, Bojang K, Milligan P, Gilbert SC, et al. Substantially reduced pre-patent parasite multiplication rates are associated with naturally acquired immunity to Plasmodium falciparum. J Infect Dis. 2011;203:1337–40.
CPMCGLM: an R package for p-value adjustment when looking for an optimal transformation of a single explanatory variable in generalized linear models
Benoit Liquet1,2 na1 &
Jérémie Riou ORCID: orcid.org/0000-0002-7056-92573 na1
BMC Medical Research Methodology volume 19, Article number: 79 (2019) Cite this article
In medical research, explanatory continuous variables are frequently transformed or converted into categorical variables. If the coding is unknown, many tests can be used to identify the "optimal" transformation. This common process, involving the problems of multiple testing, requires a correction of the significance level.
Liquet and Commenges proposed an asymptotic correction of significance level in the context of generalized linear models (GLM) (Liquet and Commenges, Stat Probab Lett 71:33–38, 2005). This procedure has been developed for dichotomous and Box-Cox transformations. Furthermore, Liquet and Riou suggested the use of resampling methods to estimate the significance level for transformations into categorical variables with more than two levels (Liquet and Riou, BMC Med Res Methodol 13:75, 2013).
CPMCGLM provides users with both methods of p-value adjustment. Furthermore, they are available for a large set of transformations.
This paper aims to give the user an overview of the methodological context and to explain in detail the use of the CPMCGLM R package through its application to a real epidemiological dataset.
We present here the CPMCGLM R package, which provides efficient methods for the correction of the type-I error rate in the context of generalized linear models. This is the first and only available package in R providing such methods applied to this context.
This package is designed to help researchers working principally in the fields of biostatistics and epidemiology to analyze their data in the context of optimal cutoff point determination.
In applied statistics, statistical models are widely used to assess the relationship between an explanatory and a dependent variable. For instance, in epidemiology, it is common for a study to focus on one particular risk factor. Scientists may wish to determine whether the potential risk factor actually affects the risk of a disease, a biological trait, or another outcome. In this context, statisticians use regression models with an outcome Y, a risk factor X (continuous variable of interest) and q−1 adjustment variables. In clinical and psychological research, the usual approach involves dichotomizing the continuous variable, whereas, in epidemiological studies, it is more usual to create several categories or to perform continuous transformations [1]. It is important to note that the categorization of a continuous predictor can only be justified when threshold effects are suspected. Furthermore, when the assumption of linearity is found to be untenable, a fractional polynomial (FP) transformation should always be favoured.
For instance, let us consider a categorical transformation of X. When the optimal set of cutoff points is unknown, the subjectivity of the choice of this set may lead to the testing of more than one set of values, to find the "optimal" set. For each coding, the nullity of the coefficient associated with the new coded variable is tested. The coding finally selected is that associated with the smallest p-value. This practice implies multiple testing, and an adjustment of the p-value is therefore required. The CPMCGLM package [2] can be used to adjust the p-value in the context of generalized linear models (GLM).
We present here the statistical context, and the various codings available in this R package. We then briefly present the available methods for type-I error correction, before presenting an example based on the PAQUID cohort dataset.
Statistical setting
Generalized linear model
Let us consider a generalized linear model with q explanatory variables [3], in which Y=(Y1,…,Yn) is observed and the Yi's are all identically and independently distributed with a probability density function in the exponential family, defined as follows:
$$f_{Y_{i}}(Y_{i},\theta_{i},\phi)= exp \left \{ \frac{Y_{i}\theta_{i}-b(\theta_{i})}{a(\phi)} + c(Y_{i},\phi) \right \}; $$
with \(\mathbb {E}[Y_{i}]=\mu _{i}=b'(\theta _{i}),\mathbb {V}ar[Y_{i}]=b''(\theta _{i})a(\phi)\) and where a(·),b(·), and c(·) are known and differentiable functions. b(·) is three times differentiable, and its first derivative b′(·) can be inverted. Parameters (θi,ϕ) belong to \(\Omega \subset \mathbb {R}^{2}\), where θi is the canonical parameter and ϕ is the dispersion parameter. The CPMCGLM package allows the use of linear, Poisson, logit and probit models. The specifications of the model are defined with the formula, family and link arguments, as in the glm() function.
In this context, the main goal is evaluating the association between the outcome Yi and an explanatory variable of interest Xi, adjusted on a vector of explanatory variables Zi. The form of the effect of Xi is unknown, so we may consider K transformations of this variable Xi(k)=gk(Xi) with k=1,…,K.
For instance, if we transform a continuous variable into a categorical variable with mk classes, then mk−1 dummy variables are defined from the function gk(·): \(\mathbf {X_{i}(k)}=g_{k}(X_{i})=\left (X_{i}^{1}(k),\hdots,X_{i}^{m_{k}-1}(k)\right)\). mk different levels of the categorical transformation are possible.
The model for one transformation k can be obtained by modeling the canonical parameter θi as:
$$\theta_{i}(X,Z,k)=\boldsymbol{\gamma} \mathbf{Z_{i}}+ \boldsymbol{\beta_{k}} \mathbf{X_{i}(k)},\ 1 \le i \le n;$$
where \(\mathbf {Z_{i}}=\left (1,Z_{i}^{1},\hdots,Z_{i}^{q-1}\right), \boldsymbol {\gamma }=(\gamma _{0},\hdots,\gamma _{q-1})^{T}\) is a vector of q regression coefficients, and βk is the vector of coefficients associated with the transformation k of the variable Xi.
Multiple testing problem
We consider the problem of testing
$$\mathscr{H}_{0,k}: \boldsymbol{\beta_{k}} = 0 \:\: \text{ against} \:\: \mathscr{H}_{1,k}: \boldsymbol{\beta_{k}} \neq 0, $$
simultaneously for all k∈{1,…,K}. For each transformation k, one score test statistic Tk(Y) is obtained for the nullity of the vector βk [4]. We ultimately obtain a vector of statistics T=(T1(Y),…,TK(Y)). We introduce the associated p-value as
$$p_{k}(y) = \mathbb P_{\boldsymbol{\beta_{k}} = 0}(|T_{k}(Y)|\ge|T_{k}(y)|), \:\: 1\le k \le K, $$
where y is the realization of Y.
Significance level correction
To cope with the multiplicity problem, we aim at testing [5]:
$$\mathscr{H}_{0} \: : \: \bigcap_{k=1}^{K} \mathscr{H}_{0,k} \:\: \ \text{against} \:\: \mathscr{H}_{1} \: : \: \bigcup_{k=1}^{K} \mathscr{H}_{1,k}, $$
by which we mean that X has an effect on Y if and only if at least one transformation of X has an effect on Y. A natural approach is then to consider the maximum of the individual test statistics Tk(Y), or, equivalently, the minimum of the individual p-values pk(Y), leading to the following p-values:
$$p^{maxT}(y) = \mathbb P_{Y\sim P_{0}} \left(T^{maxT}(Y) \ge T^{maxT}(y) \right), $$
where P0 denote the distribution of Y under the null and TmaxT(·)=max1≤k≤K{|Tk(·)|}, or
$$p^{minP}(y) = \mathbb P_{Y\sim P_{0}} \left(p^{minP}(Y) \le p^{minP}(y) \right), $$
where pminP(·)=min1≤k≤K{pk(·)}.
Moreover, if X has an effect on Y (e.g. \(\mathscr {H}_{0}\) is rejected), the best coding corresponds to the transformation k which obtains the highest individual test statistic realization Tk(y), or, equivalently, the smallest individual p-value realization pk(y).
Bonferroni method
The first method available in this package is the Bonferroni method. This is the most widely used correction method in applied statistics. It has been described by several authors in various applications [6–10]. The Bonferroni method rejects \(\mathscr {H}_{0}\) at level α∈[0,1] if
$$ p^{minP}(y) \le \frac{\alpha}{K}, $$
where K is related to the total number of tests performed by the user. However, this method is conservative, particularly when the correlation between test results is high and the number of transformations is high.
Exact method
The second method proposed in this package is the asymptotic exact correction developed by Liquet and Commenges for generalized linear models [11, 12]. This method is valid only for binary transformations, fractional polynomial transformations with one degree (i.e. FP1) and Box-Cox transformations. It is based on the joint asymptotic distribution of the test statistics under the null. Indeed, the p-value pmaxT can be calculated as follows:
$$\begin{array}{@{}rcl@{}} p^{maxT}(y) &=& 1 - \mathbb{P}_{Y\sim P_{0}}\left(T^{maxT}(Y) < T^{maxT}(y) \right) \\ &=& 1 - \mathbb{P}_{Y\sim P_{0}} (T_{1}(Y)<T^{maxT}(y); \hdots ;\\ && T_{K}(Y)< T^{maxT}(y)). \end{array} $$
We then calculated the probability \(\mathbb {P}_{Y\sim P_{0}} \big (T_{1}(Y)<T^{maxT}(y); \hdots ; T_{K}(Y)< T^{maxT}(y)\big)\) by numerical integration of the multivariate Gaussian density (e.g., the asymptotic joint distribution of (Tk)1≤k≤K). Several programs have been written to solve this multiple integral. In this package, we used the method developed by Genz and Bretz in 2009 [13], available in the mvtnorm R package [14].
Minimum p-value procedure
The approach based on pminP, called the minimum p-value procedure, makes it possible to combine statistical tests with different distributions. It is therefore possible to combine dichotomous, Box-Cox, and fractional polynomial transformations, as well as transformations into categorical variables with more than two levels. However, the distribution of pminP is unknown, so we use resampling-based methods. These procedures take into account the dependence structure of the tests when evaluating the significance level of the minimum p-value procedure. They can therefore be used for all kinds of coding.
Permutation test procedure
The first resampling-based method is a permutation test procedure. This procedure is used to build the reference distribution of statistical tests based on permutations. From a theoretical point of view, the statistical test procedures are developed by considering the null hypothesis to be true, i.e. in our context, under the null hypothesis, Xi has no impact on Y. Under the null hypothesis, if the exchangeability assumption is satisfied [15–20], then resampling can be performed based on the permutation of Xi the variable of interest in our dataset. The procedure proposed by Liquet and Riou could be summarized by the following algorithm [6]:
Apply the minimum p-value procedure to the original data for the K transformations considered. We note pmin the realization of the minimum of the p-value;
Under \(\mathscr {H}_{0,k}\), Xi has no effect on the response variable Y, and a new dataset is generated by permuting the Xi variable in the initial dataset. This procedure is illustrated in the following Fig. 1;
Permutation Principle under the null hypothesis \(\left (\mathscr {H}_{0,k}\right)\)
Generate B new datasets \(s^{*}_{b}\), b={1,...,B} by repeating step 2 B times;
For each new dataset, apply the minimum p-value procedure for the transformation considered. We note \(p^{*b}_{\text {min}}\) the smallest p-value for each new dataset.
The p-value is then approximated by:
$$\widehat{p^{minP}}=\frac{1}{B}\sum_{b=1}^{B}I_{\left\{p_{\text{min}}^{*b} < p_{\text{min}}\right\}},$$
where I{·} is an indicator function.
This procedure can be used to control for the type-I error.
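A minimal sketch of this procedure in R is given below; min_pvalue() is a hypothetical helper that refits the K candidate codings on a dataset and returns the smallest individual p-value, and xname names the variable of interest:

```r
# Permutation-based adjustment of the minimum p-value (steps 1-5 above).
perm_minP <- function(data, xname, min_pvalue, B = 1000) {
  p.min <- min_pvalue(data)                  # step 1: observed minimum p-value
  p.min.star <- numeric(B)
  for (b in seq_len(B)) {                    # steps 2-3: B permuted datasets
    perm <- data
    perm[[xname]] <- sample(perm[[xname]])   # permute the variable of interest under H0
    p.min.star[b] <- min_pvalue(perm)        # step 4: minimum p-value on the permuted data
  }
  mean(p.min.star < p.min)                   # adjusted p-value
}
```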
Parametric bootstrap procedure
The second resampling-based method is the parametric bootstrap procedure, which yields an asymptotic reference distribution. This procedure makes it possible to control for type-I error with fewer assumptions [21]. This procedure is summarized in the following algorithm [6]:
Fit the model under the null hypothesis, using the observed data, and obtain \(\boldsymbol {\hat {\gamma }}\), the maximum likelihood estimate (MLE) of γ;
Generate a new outcome \(Y_{i}^{*}\) for each subject from the probability measure defined under \(\mathscr {H}_{0,k}\).
Repeat this for all the subjects to obtain a sample denoted \(s^{*}=\{Y^{*}_{i},\mathbf {Z_{i}},X_{i}\}\)
Generate B new datasets \(s_{b}^{*}, b=1,\hdots,B\) by repeating step 3 B times ;
For each new dataset, apply the minimum p-value procedure for the transformations considered, and note \(p^{*b}_{\text{min}}\) the smallest p-value. The p-value is then approximated by:
$$\widehat{p^{minP}}=\frac{1}{B}\sum_{b=1}^{B}I_{\left\{p_{\text{min}}^{*b} < p_{\text{min}}\right\}}.$$
Codings
We now provide some examples of available transformations in the CPMCGLM package.
Dichotomous coding
Dichotomous coding is often used in clinical and psychological research, either to facilitate interpretation, or because a threshold effect is suspected. In regression models with multiple explanatory variables, it may be seen as easier to interpret the regression coefficient for a binary variable than to understand a one-unit change in the continuous variable. In this context, dichotomous transformations of the variable of interest X are defined as:
$$X(k)= \left\{ \begin{array}{lll} 1& \text{if} & X\geq c_{k} ;\\ 0& \text{if} & X< c_{k},\\ \end{array} \right. $$
where ck denotes the cutoff value for the transformation k (1≤k≤K).
In this R package, the dicho argument of the CPMCGLM() function allows the definition of desired cutoff points based on quantiles in a vector. An example of the dicho argument is provided below:
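For instance, the three codings described next could be requested with the following vector of quantile levels, passed as the dicho argument of CPMCGLM():

```r
dicho <- c(0.2, 0.5, 0.7)   # cutoffs at the 2nd decile, the median, and the 7th decile
```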
In this example, the user wants to try three dichotomous transformations of the variable of interest. For the first transformation, the cutoff point is the second decile; for the second, it is the median, and for the third, the seventh decile. The user can also opt to use our quantile-based method. The choice of this method leads to use of the nb.dicho argument. This argument makes it possible to use a quantile-based method, by entering the desired number of transformations. If the user asks for three transformations, the program uses the quartiles as cutoff points. If two transformations are requested, the program uses the terciles, and so on. This argument is also defined as follows.
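For example, three quantile-based dichotomous codings (which use the quartiles as cutoff points) would be requested with:

```r
nb.dicho <- 3   # three dichotomous codings, cutoffs at the quartiles
```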
It is important to note that only one of these arguments (dicho and nb.dicho) can be used in a given CPMCGLM() function.
Coding with more than two classes
In epidemiology, it is usual to create several categories, often four or five. These transformations into categorical variables are defined as follows:
$$X(k)= \left\{ \begin{array}{lll} m-1& \text{if} & X\geq c_{k^{m-2}} ;\\ \vdots & & \vdots \\ j& \text{if} & c_{k^{j}} > X \geq c_{k^{j-1}} ;\\ \vdots & & \vdots \\ 0& \text{if} & X< c_{k^{0}},\\ \end{array} \right. $$
where \(c_{k^{j}}\phantom {\dot {i}\!}\) denotes the jth cutoff point (0≤j≤m−2), for the transformation k (1≤k≤K).
The categ argument of the CPMCGLM() function allows the user to define the desired sets of cutoff points using quantiles. This argument must take the form of a matrix, with a number of columns matching the maximum number of cutoff points used over all the transformations, and a number of rows corresponding to the number of transformations tried. An example of this argument definition is presented below:
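A sketch of such a matrix is given below. The first row corresponds to the example discussed next; the other rows are illustrative values, and the use of NA to mark unused columns is an assumption about how incomplete rows are flagged:

```r
categ <- matrix(c(0.30, 0.70,   NA,     # 3 classes: cutoffs at the 3rd and 7th deciles
                  0.20, 0.60,   NA,     # 3 classes (illustrative values)
                  0.25, 0.50, 0.75,     # 4 classes: quartile cutoffs (illustrative)
                  0.20, 0.50, 0.80),    # 4 classes (illustrative values)
                nrow = 4, byrow = TRUE)
```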
In this example, the user performs four transformations. Two involve transformation into three classes, and two into four classes. It is important to note that binary transformations cannot be defined with this argument. The maximum number of cutoff points used over all the transformations is three. The matrix therefore has the following dimensions: (4×3). For the first transformation, we define a transformation into a three-class categorical variable with the third and seventh deciles as cut-points, and so on for the other transformations.
The user could also use a quantile-based method to define the transformations. In this case, the user would need to define the number of categorical transformations in the nb.categ argument. If two transformations are requested, then this method will create a two-class categorical variable using the terciles as cutoff points, and a three-class categorical variable using the quartiles as cutoff points. If the user asks for three transformations, the first and second transformations remain the same, and the program creates another categorical variable with four classes based on the quintiles, and so on. For four transformations, the argument is defined in R as follows:
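The corresponding specification is simply the number of requested codings:

```r
nb.categ <- 4   # four quantile-based categorical codings, as described above
```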
However, users may also wish to define their own set of thresholds. For this reason, the function also includes the argument cutpoint, which can be defined on the basis of true values for the transformations desired. This argument is a matrix, defined as the argument categ. The difference between this argument and that described above is that it is possible to define dichotomous transformations for this argument and quantiles are not used.
Box-Cox transformation
Other transformations are also used, including Box-Cox transformations in particular, defined as follows [22]:
$$X(k)= \left\{ \begin{array}{lll} \lambda_{k}^{-1}(X^{\lambda_{k}}-1) & \text{if} &\lambda_{k} \neq 0 \\ \log{X}& \text{if} & \lambda_{k} =0.\\ \end{array} \right. $$
This family of transformations incorporates many traditional transformations:
λk = 1.00: no transformation needed; produces results identical to original data
λk = 0.50: square root transformation
λk = 0.33: cube root transformation
λk = 0.25: fourth root transformation
λk = 0.00: natural log transformation
λk = -0.50: reciprocal square root transformation
λk = -1.00: reciprocal (inverse) transformation
The boxcox argument is used to define Box-Cox transformations. This argument is a vector, and the values of its elements denote the desired λk. An example of the boxcox argument for a reciprocal transformation, a natural log transformation, and a square root transformation is provided below:
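With the λ values taken from the list above, this reads:

```r
boxcox <- c(-1, 0, 0.5)   # reciprocal, natural log, and square root transformations
```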
Fractional polynomial transformation
Royston et al. showed that traditional methods for analyzing continuous or ordinal risk factors based on categorization or linear models could be improved [23, 24]. They proposed an approach based on fractional polynomial transformation. Let us consider generalized linear models with canonical parameters defined as follows:
$$\theta_{i}(X,Z)=\boldsymbol{\gamma} \mathbf{Z_{i}}+ \beta \mathbf{X_{i}}, \ \ 1 \le i \le n;$$
where \(\mathbf {Z_{i}}=\left (1,Z_{i}^{1},\hdots,Z_{i}^{q-1}\right), \boldsymbol {\gamma }=(\gamma _{0},\hdots,\gamma _{q-1})^{T}\) is a vector of q regression coefficients, and β is the coefficient associated with the Xi variable.
Consider the arbitrary powers a1≤…≤aj≤…≤ am, with 1≤j≤m, and a0=0.
If the random variable X is positive, i.e. ∀i∈{1,…,n},Xi>0, then the fractional polynomial transformation is defined as:
$$\theta_{i}^{m}(X,Z,\xi,a)=\boldsymbol{\gamma} \mathbf{Z_{i}}+\sum_{j=0}^{m}\xi_{j}H_{j}(X_{i}),$$
where for 0≤j≤m ξj is the coefficient associated with the fractional polynomial transformation:
$$H_{j}(X_{i})= \left\{ \begin{array}{ll} X_{i}^{(a_{j})} & \text{ if \(a_{j} \neq a_{j-1}\)} \\ H_{j-1}(X_{i})ln(X_{i}) & \text{ if \(a_{j} = a_{j-1}\) } \end{array} \right. $$
where H0(Xi)=1.
However, if non-positive values of X can occur, a preliminary transformation of X to ensure positivity is required. The solution proposed by Royston and Altman is to choose a non-zero origin ζ<Xi and to rewrite the canonical parameter of the model for fractional polynomial transformation as follows:
$$\theta_{i}^{m}(X,Z,\xi,a)=\boldsymbol{\gamma} \mathbf{Z_{i}}+\sum_{j=0}^{m}\xi_{j}H_{j}(X_{i}-\zeta),$$
ζ is set to the lower limit of the rounding interval of samples values for the variable of interest.
Royston and Altman suggested using m powers from a predefined set \(\mathscr {P}\) [25]:
$$\begin{array}{*{20}l} \mathscr{P}= \{ -\text{max}(3,m);\hdots ;-2;-1;-0.5;0;0.5;1;2; \hdots;\text{max}(3,m) \}. \end{array} $$
The FP argument is used to define these transformations. This argument is a matrix. The number of rows correspond to the number of transformations tested, and the number of columns is the maximum number of degrees tested for a single transformation. An example of the FP argument:
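A sketch of this matrix for the three transformations described next (NA again marks unused columns, which is an assumption about the expected input format):

```r
FP <- matrix(c(-2.0,   NA,   NA,  NA,    # one degree, power -2
                0.5,  1.0, -0.5, 2.0,    # four degrees, powers 0.5, 1, -0.5, 2
               -0.5,  1.0,   NA,  NA),   # two degrees, powers -0.5 and 1
             nrow = 3, byrow = TRUE)
```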
In this example, the user performs three transformations of the variable of interest. The first is a fractional polynomial transformation with one degree and a power of − 2. The second transformation is a fractional polynomial transformation with four degrees and powers of 0.5,1,−0.5, and 2. The third transformation is a fractional polynomial transformation with two degrees and powers of − 0.5, and 1.
We revisited the example presented in the article of Liquet and Commenges in 2001 based on the PAQUID database [11], to illustrate the use of the CPMCGLM package, in the context of logistic regression.
PAQUID database
PAQUID is a longitudinal, prospective study of individuals aged at least 65 years on December 31, 1987 living in the community in France. These residents live in two administrative areas in southwestern France. This elderly population-based cohort of 3111 community residents aimed to identify the risk factors for cognitive decline, dementia, and Alzheimer's disease. The data were obtained in a nested case-control study of 311 subjects from this cohort (33 subject with dementia and 278 controls).
Scientific aims
The analysis focused on the influence of HDL(high-density lipoprotein)-cholesterol on the risk of dementia. We considered the variables age, sex, education level, and wine consumption as adjustment variables. Bonarek et al initially considered HDL-cholesterol as a continuous variable [26]. Subsequently, to facilitate clinical interpretation, they decided to transform this variable into a categorical variable with different thresholds, and different numbers of classes. This strategy implied the use of multiple models, and multiple testing. A correction of type-I error taking into account the various transformations performed was therefore required to identify the best association between dementia and HDL-cholesterol.
We applied the various types of correction method described in this article to correct the type-I error rate in the model defined above. These corrections are easy to apply with the CPMCGLM package. The following syntax provided the desired results for one categorical coding, three binary codings, one Box-Cox transformation with λ=0, and one fractional polynomial transformation with two degrees and powers of -0.5, and 1:
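A sketch of such a call is shown below. The PAQUID variable names (dementia, hdl, age, sex, educ, wine) and the data, varcod, and N arguments are assumptions about the interface, as are the exact forms of the family and link values; the coding arguments follow the descriptions given in the previous section:

```r
library(CPMCGLM)

fit <- CPMCGLM(formula = dementia ~ age + sex + educ + wine,  # adjustment variables
               family  = "binomial", link = "logit",          # logistic regression
               data    = paquid,                              # assumed dataset name
               varcod  = "hdl",                               # assumed: variable of interest to be coded
               nb.dicho = 3,                        # three binary codings (quartile cutoffs)
               nb.categ = 1,                        # one categorical coding
               boxcox   = 0,                        # Box-Cox transformation with lambda = 0
               FP = matrix(c(-0.5, 1), nrow = 1),   # FP with powers -0.5 and 1
               N = 1000)                            # assumed: number of resamplings
```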
By using the "dicho", and "categ" arguments, the function could also be used as follows, for exactly the same analysis:
In R software, the results obtained with the CPMCGLM package described above are summarized as follows:
We can also use the summary function for the main results, which are described as follows for this specific result:
As we can see, for this example, the best coding was obtained for the logistic regression with dichotomous coding of the HDL-cholesterol variable. The cutoff point retained for this variable was the third quartile. Exact correction was not available for this application, due to the use of transformation into categorical variables with more than two classes. Resampling methods gave similar results, and both the resampling methods tested were more powerful than Bonferroni correction. In conclusion, the correction of type-I error is required. Naive correction is not satisfactory, and resampling methods seem to give the best results for p-value correction in this example.
We present here CPMCGLM, an R package providing efficient methods for the correction of type-I error rate in the context of generalized linear models. This is the only available package in R providing such methods applied to this context. We are currently working on the generalization of these methods to proportional hazard models, which we will make available as soon as possible in the CPMCGLM package.
In practice, it is important to correct for the multiplicity over all the codings that have been tested. Indeed, if this is not done, the type-I error rate is not controlled, and false positive results may be obtained.
To conclude, this package is designed to help researchers who work principally in epidemiology to analyze their data rigorously in the context of optimal cutoff point determination.
Project name: CPMCGLM
Project home page: https://cran.r-project.org/web/packages/CPMCGLM/index.html
Operating system(s): Platform independent
Programming language: R
Other requirements: R 2.10.0 or above
Any restrictions to use by non-academics: none
FP:
Fractional polynomial
GLM:
Generalized linear model
HDL:
High-density lipoprotein
MLE:
Maximum likelihood estimate
PAQUID:
Personnes agées QUID
Royston P, Altman DG, Sauerbrei W. Dichotomizing continuous predictors in multiple regression: a bad idea. Stat Med. 2006; 25(1):127–41.
Riou J, Diakite A, Liquet B. CPMCGLM: Correction of the Pvalue After Multiple Coding. 2017. R package. http://CRAN.R-project.org/package=CPMCGLM.
McCullagh P, Nelder JA. Generalized Linear Models, Second Edition. Chapman & Hall/CRC Monographs on Statistics & Applied Probability. London: Taylor & Francis; 1989.
Rao CR. Large sample tests of statistical hypotheses concerning several parameters with applications to problems of estimation. In: Mathematical Proceedings of the Cambridge Philosophical Society, vol. 44. Cambridge University Press: 1948. p. 50–57.
Berger RL. Multiparameter hypothesis testing and acceptance sampling. Technometrics. 1982; 24(4):295–300.
Liquet B, Riou J. Correction of the significance level when attempting multiple transformations of an explanatory variable in generalized linear models. BMC Med Res Methodol. 2013; 13(1):75.
Delorme P, Micheaux PL, Liquet B, Riou J. Type-ii generalized family-wise error rate formulas with application to sample size determination. Stat Med. 2016; 35(16):2687–714.
Simes R. An improved Bonferroni procedure for multiple tests of significance. Biometrika. 1986; 73(3):751–4.
Worsley KJ. An improved bonferroni inequality and applications. Biometrika. 1982; 69:297–302.
Hochberg Y. A sharper bonferroni procedure for multiple test procedure. Biometrika. 1988; 75:800–2.
Liquet B, Commenges D. Correction of the p-value after multiple coding of an explanatory variable in logistic regression. Stat Med. 2001; 20:2815–26.
Liquet B, Commenges D. Computation of the p-value of the minimum of score tests in the generalized linear model, application to multiple coding. Stat Probab Lett. 2005; 71:33–38.
Genz A, Bretz F. Computation of Multivariate Normal and T Probabilities. Lecture Notes in Statistics. Heidelberg: Springer; 2009.
Genz A, Bretz F, Miwa T, Mi X, Leisch F, Scheipl F, Hothorn T. mvtnorm: Multivariate Normal and T Distributions. 2016. R package version 1.0-5. http://CRAN.R-project.org/package=mvtnorm.
Romano JP. On the behavior of randomization tests without a group invariance assumption. J Am Stat Assoc. 1990; 85:686.
Xu H, Hsu JC. Applying the generalized partitioning principle to control the generalized familywise error rate. Biom J. 2007; 49(1):52–67.
Kaizar EE, Li Y, Hsu JC. Permutation multiple tests of binary features do not uniformly control error rates. J Am Stat Assoc. 2011; 106(495):1067–74.
Commenges D, Liquet B. Asymptotic distribution of score statistics for spatial cluster detection with censored data. Biometrics. 2008; 64(4):1287–9.
Commenges D. Transformations which preserve exchangeability and application to permutation tests. J Nonparametric Stat. 2003; 15(2):171–85.
Westfall PH, Troendle JF. Multiple testing with minimal assumptions. Biom J. 2008; 50(5):745–55.
Good PI. Permutation Tests. New York: Springer; 2000.
Box GE, Cox DR. An analysis of transformations. J R Stat Soc Ser B Methodol. 1964:211–52.
Royston P, Altman DG. Regression using fractional polynomials of continuous covariates: parsimonious parametric modelling. Appl Stat. 1994:429–67.
Royston P, Ambler G, Sauerbrei W. The use of fractional polynomials to model continuous risk variables in epidemiology. Int J Epidemiol. 1999; 28(5):964–74.
Royston P, Altman DG. Approximating statistical functions by using fractional polynomial regression. J R Stat Soc Ser D (The Statistician). 1997; 46(3):411–22.
Bonarek M, Barberger-Gateau P, Letenneur L, Deschamps V, Iron A, Dubroca B, Dartigues J. Relationships between cholesterol, apolipoprotein e polymorphism and dementia: a cross-sectional analysis from the paquid study. Neuroepidemiology. 2000; 19:141–48.
We thank Luc Letenneur for his help on the PAQUID dataset, and Marine Roux for her help during the review process.
No funding was obtained for this study.
The data that are used to illustrate this package are available from Centre de recherche INSERM U1219, Université de Bordeaux, ISPED but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. Data are however available from the authors upon reasonable request and with permission of Centre de recherche INSERM U1219 Université de Bordeaux, ISPED.
Benoit Liquet and Jérémie Riou contributed equally to this work.
Université de Pau et Pays de l'Adour, UFR Sciences et Techniques de la Cote Basque-Anglet UMR CNRS 5142, Allée du Parc Montaury, Anglet, 64600, France
Benoit Liquet
ARC Centre of Excellence for Mathematical and Statistical Frontiers and School of Mathematical Sciences at Queensland University of Technology, Brisbane, Australia
MINT UMR INSERM 1066, CNRS 6021, Université d'Angers, UFR Santé, 16 Boulevard Davier, Angers Cedex, 49085, France
Jérémie Riou
BL and JR developed the methodology and the R code, performed the analysis on the dataset, and wrote the manuscript. Both authors read and approved the final manuscript.
Correspondence to Jérémie Riou.
The PAQUID study was approved by the ethics committee of the University of Bordeaux Segalen (France) in 1988, and each participant provided written informed consent.
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Liquet, B., Riou, J. CPMCGLM: an R package for p-value adjustment when looking for an optimal transformation of a single explanatory variable in generalized linear models. BMC Med Res Methodol 19, 79 (2019). https://doi.org/10.1186/s12874-019-0711-2
p-value adjustment
Multiple testing
Union intersection test
Optimal cutoff point determination
Data analysis, statistics and modelling
Proving that the eigenvectors of this class of matrices are the binomial coefficients
So I'm trying to figure out the behavior of this system: you have $N$ coins, and every step, you choose one of the coins randomly and flip it.
Now we imagine a bazillion of these systems. We call $\rho_n$ the percentage of the systems that have $n$ coins flipped up (heads) -- state $S_n$.
It's easy to see that the number of systems that end up in $S_1$, for example, is all of the systems that were in $S_0$ (zero heads up, the only way to go is more heads), and $2/N$ of the systems in state $S_2$. With a little more of a stretch, it can be seen that the number of systems that end up in $S_2$ is $(N-1)/N$ of the systems that were in state $S_1$ and $3/N$ of the systems that were in state $S_3$. And so on, and so on.
We can then see that $\vec{\rho}'$ (the distribution of states after an iteration) is a simple matrix multiplication/linear transformation of $\vec{\rho}$, with the coefficients of the matrix being the ones listed above.
For example, for the case of $N = 3$, we have:
$$ \vec{\rho}' = \left[ \begin{array}{cccc} 0 & \frac{1}{3} & 0 & 0 \\ 1 & 0 & \frac{2}{3} & 0 \\ 0 & \frac{2}{3} & 0 & 1 \\ 0 & 0 & \frac{1}{3} & 0 \end{array} \right] \vec{\rho} $$
Which means that $\rho'_0$ (the new percentage of systems in state $S_0$) $= \frac{1}{3} \rho_1$ (1/3rds the percentage of systems that were in state $S_1$), that $\rho'_1 = \rho_0 + \frac{2}{3} \rho_2$, that $\rho'_2 = \frac{2}{3} \rho_1 + \rho_3$, etc.
More generally, for arbitrary $N$, the matrix is
$$ \left[ \begin{array}{cccccc} 0 & \frac{1}{N} & 0 & 0 & \cdots & 0 & 0 \\ 1 & 0 & \frac{2}{N} & 0 & \cdots & 0 & 0 \\ 0 & \frac{N-1}{N} & 0 & \frac{3}{N} & \cdots & 0 & 0 \\ 0 & 0 & \frac{N-2}{N} & 0 & \ddots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \frac{N-1}{N} & 0 \\ 0 & 0 & 0 & 0 & \frac{2}{N} & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & \frac{1}{N} & 0 \\ \end{array} \right] $$
I tried to find any steady states -- that is, distributions of $S_n$ that would remain unchanged under one transition. For something like $N=3$, you would expect something that "bulges" in the middle, kinda.
It turns out that the eigenvector corresponding to a steady state for $N=3$ is
$$ \vec{\rho} = \left[ \begin{array}{c} 1 \\ 3 \\ 3 \\ 1 \\ \end{array} \right] $$
Which are third row of the binomial coefficients (the third row of Pascal's Triangle).
This kind of makes a lot of intuitive sense --- you want something that peaks in the middle, and tapers off, kind of like a normal distribution. One could think of the rows of Pascal's Triangle as a sort of discrete normal distribution, so this is sort of understandable.
After testing out $N=2$, which is $[ \begin{array}{ccc} 1 & 2 & 1 \end{array} ]^T$, it appears that the eigenvectors corresponding to steady states of this transition are successive rows of Pascal's Triangle, or the binomial coefficients.
Now I understand how the binomial coefficients can show up in something like an unbiased random walk. But this isn't an unbiased random walk --- the transition probabilities depend on the current state.
How do they show up here?
matrices binomial-coefficients random-walk binomial-theorem
Justin L.
It's easy to verify that the vector $b$ of binomial coefficients is an eigenvector to the eigenvalue $1$: apart from the components $0$ and $N$ (which are easily verified separately), we have the equation
$$\begin{align} (A\cdot b)_k &= \frac{N+1-k}{N}\cdot b_{k-1} + \frac{k+1}{N}\cdot b_{k+1}\\ &= \frac{N+1-k}{N}\binom{N}{k-1} + \frac{k+1}{N}\binom{N}{k+1}\\ &= \binom{N-1}{k-1} + \binom{N-1}{k}\\ &= \binom{N}{k}. \end{align}$$
As to why $b$ is an eigenvector to the eigenvalue $1$, consider each coin separately. It is heads-up with probability $\frac12$, and tails-up with probability $\frac12$ (yes, I'm waving hands here). There are $\binom{N}{k}$ configurations with $k$ coins flipped heads-up.
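A quick numerical check in R, for the $N=3$ matrix written out in the question:

```r
N <- 3
A <- matrix(0, N + 1, N + 1)          # column j+1 corresponds to j heads
for (j in 0:N) {
  if (j < N) A[j + 2, j + 1] <- (N - j) / N   # flip a tail up: j -> j + 1 heads
  if (j > 0) A[j,     j + 1] <- j / N         # flip a head down: j -> j - 1 heads
}
b <- choose(N, 0:N)                   # binomial coefficients (1, 3, 3, 1)
A %*% b                               # returns (1, 3, 3, 1): b is fixed by A
```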
Daniel Fischer
$\begingroup$ Ah, it did not occur to me to simply verify this without going through all of the eigenvector math for each N. As for your hand wavy answer, I didn't realize that the binomial coefficients represented the number of configurations that resulted in $S_n$, but I did not know of why the vector of the number of possible configurations (or really, the most likely distribution of $S_n$'s) would be the eigenvector. $\endgroup$ – Justin L. Oct 23 '13 at 20:21
$\begingroup$ Although now, on hindsight, it seems a bit obvious that the steady state distribution would be the most probable distribution. But now I do wonder -- this distribution is independent of the transition rules. Will $b$ be the eigenvector for all matrices representing all 1-norm-preserving transition rules? $\endgroup$ – Justin L. Oct 23 '13 at 20:23
$\begingroup$ No, you need that all configurations are equally probable, so that for each coin, the probability of head/tails must be $\frac12$. If all coins have the same probability $p$ for heads, I think you get the histogram of a Binomial distribution with probability $p$ as eigenvector. $\endgroup$ – Daniel Fischer Oct 23 '13 at 20:29
$\begingroup$ (*did realize, in my first comment) I was referring to (symmetric) arbitrary rules for transitioning from state $S_n$ to $S_m$... maybe flipping some coins more often than others, or something like that. As long as the rules are symmetric, one can assume that the most probable configuration = the steady state solution? Maybe my question is not well enough defined. $\endgroup$ – Justin L. Oct 23 '13 at 20:36
$\begingroup$ Well I guess you sort of answered my question, I didn't realize it. If all states are equally possible, then the steady state is the density of these states... if they are not, I guess we can say that the most probable configuration is the steady state configuration? $\endgroup$ – Justin L. Oct 23 '13 at 20:38
Talks are at noon on Monday in room C756 of University Hall
Sept 12 everyone Open problem session
Sept 19 Hadi Kharaghani The Strongly Regular Graph SRG$\boldsymbol{(765,192,48,48)}$
Andries Brouwer is 65 and a special issue of Designs, Codes and Cryptography is issued to celebrate the occasion.
Professor Brouwer maintains an elegant public database of existence results for all possible strongly regular graphs on $n\le 1300$ vertices. In a very nice paper, Cohen and Pasechnik implemented most of the graphs listed there in the open source software Sagemath and obtained a graph for each set of parameters mentioned in the database. In their initial version of the paper, they mentioned 11 cases as missing values. A number of the cases were related to my work with professors Janko, Tonchev, and Ionin. I tried to help out with these cases and four cases were resolved quickly, after I sent detailed instructions. However, there was a problem with the case of SRG$(765,192,48,48)$. This talk relates to this special case and a nice application of generalized Hadamard matrices.
To make the talk accessible to general audiences, I will provide many examples illustrating the concepts involved.
Sept 26 Farzad Aryan On the zero free region of the Riemann zeta function
(Université de Montréal)
We discuss the possibility that the Riemann zeta function has a zero $\sigma +iT$ to the left of the classical zero free region. We will show how the existence of this zero forces the function to have many more zeros in the vicinity of $\sigma+iT$ and/or $\sigma +2iT$.
Oct 3 Dave Morris Hamiltonian paths in projective checkerboards
Place a checker in some square of an $m \times n$ rectangular checkerboard, and glue opposite edges of the checkerboard to make a projective plane. We determine whether the checker can visit all the squares of the checkerboard (without repeating any squares), by moving only north and east. This is joint work with Dallan McCarthy, and no advanced mathematical training will be needed to understand most of the talk.
Oct 17 Mikhail Muzychuk Non-commutative association schemes of rank $\boldsymbol6$
(Netanya Academic College, Israel)
An association scheme is a coloring of a complete graph satisfying certain regularity conditions. It is a generalization of groups and has many applications in algebraic combinatorics. Every association scheme yields a special matrix algebra called the Bose-Mesner algebra of a scheme. A scheme is called commutative if its Bose-Mesner algebra is commutative. Commutative schemes were the main topic of the research in this area for decades. Only recently non-commutative association schemes attracted the attention of researchers. In my talk I'll present the results about non-commutative association schemes of the smallest possible rank, rank $6$. This is a joint work with A. Herman and B. Xu.
Oct 24 Nathan Ng The sixth moment of the Riemann zeta function and ternary additive divisor sums
Hardy and Littlewood initiated the study of the $2k$-th moments of the Riemann zeta function on the critical line. In 1918 Hardy and Littlewood established an asymptotic formula for the second moment and in 1926 Ingham established an asymptotic formula for the fourth moment. In this talk we consider the sixth moment of the zeta function on the critical line. We show that a conjectural formula for a certain family of ternary additive divisor sums implies an asymptotic formula for the sixth moment. This builds on earlier work of Ivic and of Conrey-Gonek.
Oct 31 Amir Akbary Value-distribution of quadratic $\boldsymbol{L}$-functions
We describe a theorem of M. Mourtada and V. Kumar Murty on the distribution of values of the logarithmic derivative of the $L$-functions attached to quadratic characters. Under the assumption of the generalized Riemann Hypothesis they prove the existence of a density function that gives the distribution of values of the logarithmic derivative of such $L$-functions at a fixed real point greater than 1/2. Following classical results of Wintner, we also describe how this distribution can be described as an infinite convolution of local distributions.
Nov 14 Alia Hamieh Value-Distribution of Cubic $\boldsymbol{L}$-functions
In this talk, we describe a method for studying the value-distribution of $L$-functions based on the Jessen-Wintner theory. This method has been explored recently by Ihara and Matsumoto for the case of logarithms and logarithmic derivatives of Dirichlet $L$-functions of prime conductor and by Mourtada and V. K. Murty for the case of logarithmic derivatives of Dirichlet $L$-functions associated with quadratic characters. We show how to extend such results to the case of cubic characters. In fact, we describe a distribution theorem for the values of the logarithms and logarithmic derivatives of a certain family of Artin $L$-functions associated with cubic Hecke characters. This is a joint work with Amir Akbary.
Nov 21 Luke Morgan Permutation groups and graphs
(University of Western Australia)
The use of graphs to study permutation groups goes back to Higman who first introduced the orbital graphs, and used them to characterise the primitive groups. Since then, graph theory and permutation group theory have become intertwined, with many beautiful results. In this talk, I will discuss some problems which lie across the boundary of permutation group theory and graph theory (or at least algebraic graph theory), such as how to characterise a new class of permutation groups that includes the primitive ones - the so called semiprimitive groups.
Nov 28 Gabriel Verret Vertex-primitive digraphs having vertices with almost equal neighbourhoods
(University of Auckland, New Zealand)
A permutation group $G$ on $\Omega$ is transitive if for every $x, y\in\Omega$ there exists $g\in G$ mapping $x$ to $y$. The group $G$ is called primitive if, in addition, it preserves no nontrivial partition of $\Omega$. Let $\Gamma$ be a vertex-primitive digraph, that is, its automorphism group acts primitively on its vertex-set. It is not hard to see that, in this case, $\Gamma$ cannot have two distinct vertices with equal neighbourhoods, unless $\Gamma$ is in some sense trivial. I will discuss some recent results about the case when $\Gamma$ has two vertices with "almost" equal neighbourhoods, and how these results were used to answer a question of Araújo and Cameron about synchronising groups. (This is joint work with Pablo Spiga.)
Can the hydroxyl group of 4-hydroxybenzoic acid react with phosphorus pentachloride?
I learned that, a phenol will not react (or react very slowly) with $\ce{PCl5}$ due to its stabilized structure.
How about the $\ce{-OH}$ group on the benzene ring of 4-hydroxybenzoic acid? Will that $\ce{-OH}$ group react with $\ce{PCl5}$? If the answer is yes, is it anything to do with the $\ce{-COOH}$ group?
organic-chemistry carbonyl-compounds halides phenols
orthocresol♦
asked Aug 22 '15 at 9:13
txn
$\begingroup$ Welcome to chemistry.SE! If you had any questions about the policies of our community, please visit the help center. $\endgroup$ – M.A.R. Aug 22 '15 at 9:19
$\begingroup$ Carboxylic group would react. $\endgroup$ – Mithoron Aug 22 '15 at 14:32
a phenol will not react (or react very slowly) with $\ce{PCl5}$
That's not true. Phenol and $\ce{PCl5}$ do react, producing a mixture of compounds of formula $\ce{PCl_n(C6H5O)_{5-n}}$ (with $n > 1$). It is even possible to produce $\ce{(C6H5O)5P}$![1]
However, unlike in alcohols, in phenols:
the carbon–oxygen bond is much stronger, having a bond order of more than one and being formed with $\mathrm{sp^2}$ hybridized orbitals
the formation of the corresponding transition state for nucleophilic substitution would destroy the aromaticity of the benzene ring.
Thus, the formation of chlorobenzene does not proceed at a significant rate.
It is still possible to replace the phenol group by a chlorine atom in some cases. For example, it is possible to prepare picryl chloride (2,4,6-trinitrochlorobenzene) by reaction of pyridine picrate and $\ce{PCl5}$, since the three nitro groups greatly increase the reactivity of the phenyl fragment in nucleophilic aromatic substitution reactions.[2] However, as a general rule, benzene derivatives are not active in nucleophilic substitution except if there are a lot of strong π-electron withdrawing groups in the ring.
As for 4-hydroxybenzoic acid, my guess is that it should react as both phenol (as above) and benzoic acid (which forms benzoyl chloride).
While I was unable to find precise information on para-hydroxybenzoic acid, there is some info on ortho-hydroxybenzoic acid (salicylic acid). $\ce{PCl5}$ replaces the $\ce{OH}$ fragment of the carboxylic acid group with chlorine. The phosphorus oxychloride then reacts with the phenolic group, producing an $\ce{O-POCl2}$ fragment. The information, however, does not look absolutely trustworthy. Still, assuming it is correct and extrapolating it onto para-hydroxybenzoic acid, I'd say that the product should be $\ce{Cl-C(O)-C6H4-O-P(O)Cl2}$.
[1]: Ramirez, F.; Bigler, A. J.; Smith, C. P. Pentaphenoxyphosphorane. J. Am. Chem. Soc. 1968, 90 (13), 3507–3511. DOI: 10.1021/ja01015a038.
[2]: Boyer, R.; Spencer, E. Y.; Wright, G. F. Can. J. Chem. 1946, 24b (5), 200–203. DOI: 10.1139/cjr46b-025.
permeakra
Geographical model-derived grid-based directional routing for massively dense WSNs
Jing-Ya Li1 &
Ren-Song Ko1
This paper presents the grid-based directional routing algorithms for massively dense wireless sensor networks. These algorithms have their theoretical foundation in numerically solving the minimum routing cost problems, which are formulated as continuous geodesic problems via the geographical model. The numerical solutions provide the routing directions at equally spaced grid points in the region of interest, and then, the directions can be used as guidance to route information. In this paper, we investigate two types of routing costs, position-only-dependent costs (e.g., hops, throughput, or energy) and traffic-proportional costs (which correspond to energy-load-balancing). While position-only-dependent costs can be approached directly from geodesic problems, traffic-proportional costs are more easily tackled by transforming the geodesic problem into a set of equations with regard to the routing vector field. We also investigate two numerical approaches for finding the routing direction, the fast marching method for position-only-dependent costs and the finite element method (and its derived distributed algorithm, Gauss-Seidel iteration with finite element method (DGSI-FEM)) for traffic-proportional costs. Finally, we present the numerical results to demonstrate the quality of the derived routing directions.
With their embedded computation and communication capabilities, wireless sensor networks (WSNs) can extend the senses of human beings to normally inaccessible locations and operate unattended for a long period of time, thus opening up the potential of many new applications [1]. Such applications bring up many challenges in network maintenance since sensors may be unreliable in hazardous situations which prohibit any human intervention to repair or replace malfunctioning sensors. Thus, compared to the cost to access WSNs, advanced developments in manufacturing techniques will make it preferable to deploy a large number of sensors in the region of interest (ROI) in one time, in which sensors can self-organize to operate. However, such a deployment strategy may lead to a massively dense WSN which poses many challenges for efficient algorithm design due to the problem scale and hardware constraints.
For such large-scale networks, the complexity of topological algorithms that model the networks by graphs and then describe network operations by nodes and edges may inevitably increase with the number of nodes and edges, since optimizing, particularly globally, the network performance may require the consideration that all nodes or edges determine the best node or edge to perform a given operation. However, two characteristics of WSNs suggest an alternative approach:
WSN applications are usually spatial-oriented, and spatially close nodes tend to perform the same role in networks.
Extending the working duration of the whole WSN is more important than keeping each sensor node alive. In other words, it may be preferable to exhaust individual nodes in an attempt to achieve better overall performance.
Therefore, rather than optimizing the performance of individual nodes by micro-controlling node operations, the high role substitutability of WSNs allows networks to be managed via geographical parameters, i.e., use the geographical parameters to locate appropriate sensor nodes to perform assigned tasks. Thus, network operations are described by geographical parameters, not node identities, and the complexity, even when considering global optimization, depends on the ROI, not the number of nodes and edges.
Furthermore, one advantage of geographical approaches is that we may use "distributions" or "vector fields" defined in geographical space to describe network states or operations, and these distributions or vector fields have some nice mathematical properties under massively dense networks, such as differentiability or integrability, which allow many techniques developed in classical mathematical analysis to be applicable. For example, several studies [2–6] have used geographical approaches to analyze WSN routing problems from a macroscopic perspective. Without the complexity of detailed descriptions in micromanaging individual nodes, the geographical descriptions can still provide sufficient information to allow meaningful analysis and optimization at the macroscopic level and the derivation of useful insights.
In this paper, we adopt the geographical model to study the minimum routing cost problems for massively dense WSNs in which the problems are formulated as continuous geodesic problems. We use density distributions to describe how nodes are deployed and routing vector fields for how information are transmitted. The relationship between density distributions and various routing costs may be further analyzed, and the equivalence between geodesic problems and optimum routing vector field problems can be established. We investigate two types of routing costs, position-only-dependent costs which are presented in the preliminary work [7] and traffic-proportional costs. Position-only-dependent costs may be the number of hops, throughput, or transmission energy, and traffic-proportional costs correspond to energy-load-balancing. While the routing problems with position-only-dependent costs can be tackled directly from geodesic problems, routing vector field problems provide a better approach to solve the routing problems with traffic-proportional costs.
Numerically solving continuous geodesic problems or routing vector field problems requires discretizing continuous functions involved in problems in a systematic way and then producing solutions (paths or vectors) at finite locations in the ROI, e.g., equally spaced grid points in the ROI. These numerical solutions at grid points provide the directions to the next forwarding nodes, which can be used as guidance to route information. Thus, the resulting routing algorithms, which we call grid-based directional routing algorithms, are actually the natural outcomes of the numerical approaches of these problems and mainly consist of the following two stages:
The ROI is divided into equally spaced grids, and then, each grid point computes its routing direction by numerically solving the continuous geodesic problems or routing vector field problems.
A node may use the routing direction of its closest grid point as guidance to determine its next forwarding node.
In this paper, we mainly focus on two numerical approaches for finding the routing direction of each grid point (i.e., the first stage), namely the fast marching (FM) method [8] for position-only-dependent costs and the finite element method (FEM) [9], including its derived distributed algorithm (namely distributed Gauss-Seidel iteration with FEM, DGSI-FEM), for traffic-proportional costs. We then investigate the quality of the derived routing directions via numerical simulations. Note that though the second stage is needed to completely determine a routing path, the study of the second stage is beyond the scope of this paper and we simply use the mechanism adopted in [10] for the second stage to conduct numerical simulations.
The remainder of this paper is organized as follows. After introducing related work in Section 2, we briefly describe the minimum cost routing problem from a macroscopic perspective and the equivalence between geodesic problems and optimum routing vector field problems in Section 3. The minimum routing cost problems with position-only-dependent costs and traffic-proportional costs, including algorithms and numerical results, are then discussed in Sections 4 and 5, respectively. Finally, conclusions are drawn in Section 6. For the sake of convenience, relevant notations introduced in this paper are listed in Table 1.
Table 1 List of notations introduced in this paper
Mauve et al. [11] argued that, for ad hoc networks, geographical routing scales better than topological routing even given frequently changing network topology. Several approaches are known to be suitable for WSNs, including greedy forwarding (GF) [12], in which each node uses the line segment to the destination to select the optimum forwarding node, and its various remedies [13–15] for the hole problem, in which packets may be trapped in local optima due to the existence of holes. In addition, a global pre-defined trajectory, instead of the local line segment used in GF, may be used to determine the next forwarding node [16].
For massively dense WSNs, several studies have applied analysis techniques developed in the disciplines other than networking to geographical models to analyze the macroscopic behavior of WSNs. For instance, Jacquet [2] analyzed how information traffic may impact the curvature of routing paths from the perspective of geometrical optics. Similarly, Catanuto et al. [5] formulated routing paths as equations of the calculus of variations which state that light follows the path that can be traversed in the least time, i.e., Fermat's principle. Additionally, Kalantari and Shayman [4] formulated the routing problems of WSNs as equations analogous to Maxwell's equations in electrostatic theory.
Jung et al. [17] considered spreading network traffic uniformly throughout the ROI using a potential field-based routing scheme in which the potential field is governed by Poisson's equation via an analogy between physics and network routing problems. Chiasserini et al. [18] used a fluid model to analyze a massively dense WSN in which the media access control and the switch between different operating modes, active and sleep, are considered. Altman et al. [19] analyzed the global optimized routing paths of massively dense networks using the techniques developed in road traffic engineering. Various approaches that work around the scalability problem by creating analogies between various WSN problems and problems in branches of mathematics and physics may be found in [20, 21].
Note that for the approaches mentioned above to be applicable, the massive denseness assumption is required for the validity of some mathematical properties such as continuity or differentiability. In addition to [22] which investigated the relation between the feasibility of such an assumption and node density, Ko [23] provided an operational definition of massively dense networks and then used the definition to derive the upper bound of analysis errors obtained from applying macroscopically derived results to nonmassively dense networks.
Minimum cost routing paths
Typically, a routing algorithm is designed with various optimization goals such as minimum total energy consumption or load-balancing. By introducing the transmission cost function \(\mathcal {C} (x_{v},y_{v})\) (i.e., the cost paid by the node v at \((x_{v},y_{v})\) to transmit one unit amount of information), a routing problem may be formulated as a geodesic problem which minimizes the route cost \(\sum \limits _{v^{\prime } \in \llbracket {P}\rrbracket }\mathcal {C} (x_{v^{\prime }},y_{v^{\prime }})\). That is, a routing problem is to find a path \(P^{\ast}\) to a sink such that:
$$ \sum\limits_{v^{\prime} \in \llbracket{{P}^{\ast}}\rrbracket}\mathcal{C} (x_{v^{\prime}},y_{v^{\prime}}) \leq \sum\limits_{v^{\prime} \in \llbracket{P}\rrbracket}\mathcal{C} (x_{v^{\prime}},y_{v^{\prime}}) $$
((1))
in which P can be any possible path between the given source node v and any possible sink, and ⟦P⟧ denotes the set of nodes on P.
To capture the two operations, sensing and networking, we use ρ to represent the amount of information generated by a node located in the ROI (denoted as A) and define the routing vector field, \(\mathbf {D}: A \rightarrow \mathbb {R}^{2}\), in which the direction of D(x,y), called the routing direction and denoted as \(\mathbf{u}_{\text{f}}(x,y)\), points to the next forwarding node of the node at (x,y), and the length |D(x,y)| represents the amount of information transmitted by all nodes at (x,y).
Suppose that the information is conservative; that is, no generated information disappears without being transmitted out, and each node in the ROI relays all the information it receives. Thus, for every v in A, the net amount of information flowing out of v should be equal to \(\rho(x_{v},y_{v})\). Therefore, we have the following theorem, which states that the routing problem may be formulated either as the geodesic problem (1) or as an optimization problem for the routing vector field incurring the minimum total cost. For the proof, please refer to [23].
Theorem 1.
Suppose that the information is conservative for a considered WSN. Hence, ∀v in A, \(\sum \limits _{v^{\prime } \in \llbracket {{P}^{\ast }}\rrbracket }\mathcal {C} (x_{v^{\prime }},y_{v^{\prime }})\) is minimum over all possible paths from v to sinks if and only if \(\sum \limits _{v \,\,\text {in}\, A}\mathcal {C} (x_{v},y_{v})\left |\mathbf {D}(x_{v},y_{v})\right |\) is minimum over all possible vector fields for a given ρ.
In the limit of massively dense networks, routing paths can be considered as continuous lines rather than sequences of discrete nodes [2]. Thus, the geodesic problem (1) may be formulated as the one to find the path P ∗ from (x 0,y 0) to a sink such that:
$$ \int_{{P}^{\ast}}\mathcal{C} (s)\mathrm{d}s \leq \int_{P}\mathcal{C} (s)\mathrm{d}s $$
((2))
in which P can be any possible path from (x 0,y 0) to any possible sink and s is the curvilinear coordinate associated with the path P ∗ or P. Similar to Theorem 1, the continuous geodesic problem (2) may be expected to be equivalent to the optimum routing vector field problem for massively dense networks; that is,
Theorem 2.
Suppose that the information is conservative for a considered WSN. Hence, ∀\((x_{0},y_{0})\) in A, \(\int _{{P}^{\ast }}\mathcal {C} (x(s),y(s))\mathrm {d}s\) is minimum over all possible paths from \((x_{0},y_{0})\) to sinks if and only if \(\int _{A}\mathcal {C} (x,y)\left |\mathbf {D}(x,y)\right |\mathrm {d}x\mathrm {d}y\) is minimum over all possible vector fields for a given ρ.
Some routing problems can be tackled via geodesic problems; for example, the cost function \(\mathcal {C} (x,y)\) is isotropic (e.g., sensor nodes with omni-directional antennas) and only depends on position. However, Theorem 2 provides an alternative that allows routing problems to be approached via D. One example is that \(\mathcal {C}(x,y)\) is proportional to |D(x,y)|. We will discuss these two types of \(\mathcal {C} {(x,y)}\), respectively, in Sections 4 and 5.
Position-only-dependent routing cost
Cost function and node density
This section considers cost functions which are isotropic and depend only on position. Reference [24] discussed the relationship between the transmission energy as the cost and the node density ψ. Note that, referring to [25], the energy consumption per unit of information is proportional to \(r^{\alpha_{rf}}\), in which r is the distance between the sender and receiver and the RF attenuation exponent \(\alpha_{rf}\) is typically in the range of 2 to 5. Additionally, the average inter-distance between nodes is proportional to \(1/\sqrt {\psi }\), which leads to \(\mathcal {C}\propto 1/\psi ^{\alpha _{rf}/2}\).
As pointed out in [26], while considering the capacity of wireless communications, the throughput of each node at (x,y) cannot be fully utilized and is only proportional to \(1/\sqrt {\psi (x,y)}\) [3]. Therefore, the optimum total throughput at (x,y) can only be proportional to \(\sqrt {\psi (x,y)}\); that is, \(\mathcal {C} \propto 1/\sqrt {\psi }\) corresponds to a network in which the objective is to maximize the throughput.
Several other possible forms of \(\mathcal {C}\) are also listed in [5]. For example, if the objective is to minimize the number of hops, \(\mathcal {C}\) may be taken to be proportional to 1/r, in which communication is constrained between the nearest neighbors. Thus, \(\mathcal {C} \propto \sqrt {\psi }\). In addition, the case that \(\mathcal {C}\) is a constant corresponds to a setting where routing is equally costly at all parts of the network. Thus, the objective is to minimize the length of routes. The relationships between \(\mathcal {C}\) and ψ for the above objectives are summarized in Table 2.
Table 2 Relationship between \(\mathcal {C}\) and ψ
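The relationships of Table 2 translate directly into a density-to-cost mapping. The following Python sketch returns \(\mathcal{C}\) only up to an arbitrary constant factor; the function name cost_from_density and the objective labels are ours, not taken from the cited works:

```python
import numpy as np

def cost_from_density(psi, objective, alpha_rf=4):
    """Transmission cost C, up to a constant factor, for the objectives of Table 2."""
    psi = np.asarray(psi, dtype=float)
    if objective == "energy":         # C proportional to psi^(-alpha_rf/2)
        return psi ** (-alpha_rf / 2)
    if objective == "throughput":     # C proportional to 1/sqrt(psi)
        return psi ** -0.5
    if objective == "hops":           # C proportional to sqrt(psi)
        return psi ** 0.5
    if objective == "length":         # C constant: minimise route length
        return np.ones_like(psi)
    raise ValueError("unknown objective: " + objective)
```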
Grid approximation Dijkstra's method (GADM)
It is infeasible to directly find the minimum cost routing path under massively dense networks. One possible approach to reduce the problem scale is to divide the ROI into equally spaced grids which compose a grid point network, referring to Fig. 1. We then find the minimum cost path between each grid point and the sinks (e.g., using Dijkstra's method) under the grid point network. The routing direction of a grid point will be the direction pointing to the next grid point on its minimum cost path under the grid point network, as in the example of Fig. 1. Here, a grid point is identified by its column index i and row index j, and a node is said to belong to a grid point if that is its closest grid point in the ROI; for example, all nodes in the dark gray region of Fig. 1 belong to the same grid point. A node may therefore use the routing direction of the grid point it belongs to as guidance to determine the next forwarding node [10].
Grid point network of ROI. The ROI is divided into equally spaced grids which compose a grid point network (grid points are connected by dashed lines). is the grid point located at the ith column and the jth row. A node is defined as belonging to if its closest grid point in the ROI is ; for example, all nodes in the dark gray region belong to . The path indicated by the blue solid line is the minimum cost routing path from to the grid point which the sink belongs to (indicated by the red circle). The black region represents the hole (the region without enough working sensors)
Note that the routing direction of a grid point derived by the grid approximation Dijkstra's method (GADM) always points to one of its four adjacent grid points. Such a restriction is the main reason that GADM cannot approximate continuous paths (i.e., the minimum cost routing paths under massively dense networks) well and thus yields less optimal routing paths.
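A minimal sketch of the first stage of GADM is given below, assuming the transmission cost is already sampled at the grid points and that sinks and holes are supplied as (row, column) index tuples and a boolean mask, respectively; the helper name gadm_directions and the 4-connectivity follow the description above but are otherwise ours:

```python
import heapq
import numpy as np

def gadm_directions(cost, sinks, holes=None):
    """Dijkstra over a 4-connected grid point network; returns, per grid point,
    the unit direction toward the next grid point on its minimum cost path."""
    ny, nx = cost.shape
    holes = holes if holes is not None else np.zeros_like(cost, dtype=bool)
    dist = np.full(cost.shape, np.inf)
    nxt = {}                                  # grid point -> next grid point on its path
    pq = []
    for s in sinks:
        dist[s] = 0.0
        heapq.heappush(pq, (0.0, s))
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if d > dist[i, j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < ny and 0 <= b < nx and not holes[a, b]:
                nd = d + cost[a, b]           # cost paid by the transmitting grid point
                if nd < dist[a, b]:
                    dist[a, b] = nd
                    nxt[(a, b)] = (i, j)      # forward toward the point we came from
                    heapq.heappush(pq, (nd, (a, b)))
    u = np.zeros(cost.shape + (2,))           # routing direction of each grid point
    for (i, j), (a, b) in nxt.items():
        v = np.array([a - i, b - j], float)
        u[i, j] = v / np.linalg.norm(v)
    return u, dist
```

The returned array u stores, for every grid point, the unit vector toward its next grid point under the grid point network, which is exactly the quantity whose direction restriction is discussed above.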
Fast marching (FM) method
Cost map and eikonal equation
Define the cost map T(x,y) as the minimum total routing cost needed from a node at (x,y) to the sinks. Assume that T is differentiable. We then have the following theorem, for which the proof is given in Appendix 1.
Theorem 3.
$$ \left|\nabla T\right|=\mathcal{C} $$
((3))
In addition,
$$ \frac{\mathrm{d}{P}^{\ast}(s)}{\mathrm{d}s} \parallel -\nabla T $$
((4))
in which s is the curvilinear coordinate associated with the minimum cost path P ∗ and ∥ is the symbol for two parallel vectors.
Note that the a priori differentiability requirement on T may not be satisfiable, e.g., in the presence of multiple sinks, in which case a weak solution may be considered instead. Refer to [27] for details.
Equation (3) is known as the eikonal equation, illustrating how a high-frequency wave front advances; T(x,y) corresponds to the time which the front takes to arrive at (x,y), and \(1/\mathcal {C}(x,y)\) is the speed of the front at (x,y). Theorem 3 indicates that if T may be solved from (3), the minimum cost path may be derived by following the gradient of T.
Geodesic path via eikonal equation
To solve (3), we adopt the FM method proposed by Sethian [8]. We first divide the definition domain of T into equally spaced grids with grid spacing h and then approximate the differential terms by differences. Referring to Fig. 2, the definition domain of T should be large enough to cover the ROI. We distinguish the ROI and the definition domain of T to provide a consistent formula of difference approximation at the boundary of the ROI (via \(\delta_{i,j}\) introduced in (6)).
\(\tilde {f}_{{i},{j}}\) and . The definition domain of f (e.g., T or D) is divided into equally spaced grids with a grid size h. is the grid point located at the ith column and the jth row. \(\tilde {f}_{{i},{j}}\) is the value of f at . The set of grid points, marked by black circles, in A is denoted as . The grid points marked by white circles are not in
Various difference approximations to the length of gradient may be used. In this paper, the following less diffusive difference approximation to |∇T| [28] is chosen; that is, for in ROI, (3) is approximated as:
$$ \begin{aligned} \left|\nabla \widetilde{T}_{{i},{j}}\right| &= \widetilde{\mathcal{C}}_{{i},{j}}\\ &\approx \sqrt{\max\left(\Delta^{-x}_{i,j}T, -\Delta^{+x}_{i,j}T, 0\right)^{2} + \max\left(\Delta^{-y}_{i,j}T, -\Delta^{+y}_{i,j}T, 0\right)^{2}} \end{aligned} $$
((5))
in which:
$$\begin{aligned} \Delta^{-x}_{i,j}T=\delta_{i-1,j}\left(\frac{\widetilde{T}_{{i},{j}}-\widetilde{T}_{{i-1},{j}}}{h}\right), \\ \Delta^{+x}_{i,j}T=\delta_{i+1,j}\left(\frac{\widetilde{T}_{{i+1},{j}}-\widetilde{T}_{{i},{j}}}{h}\right), \\ \Delta^{-y}_{i,j}T=\delta_{i,j-1}\left(\frac{\widetilde{T}_{{i},{j}}-\widetilde{T}_{{i},{j-1}}}{h}\right), \\ \Delta^{+y}_{i,j}T=\delta_{i,j+1}\left(\frac{\widetilde{T}_{{i},{j+1}}-\widetilde{T}_{{i},{j}}}{h}\right). \end{aligned} $$
Here, \(\widetilde {T}_{{i},{j}}\) is the value of T at the grid point at column i and row j, and:
$$ \delta_{i,j}=\left\{ \begin{array}{ll} 1, & \text{if the grid point at column}\ i\ \text{and row}\ j\ \text{is in the ROI}\\ 0, & \text{otherwise.} \end{array} \right. $$
((6))
\(\delta_{i,j}\) is introduced to ensure a consistent difference formula at grid points adjacent to grid points that are not in the ROI. Note that T is undefined for grid points not in the ROI; thus, if the grid point at (i−1,j) is not in the ROI, \(\delta_{i-1,j}=0\) will force \(\Delta ^{-x}_{i,j}T=0\), which corresponds to no information flow from that grid point.
FM iteratively computes \(\widetilde {T}_{{i},{j}}\), starting from the sinks, via (5). Conceptually, the iteration of FM works as a wave front advancing in the ROI. As the front advances, the \(\widetilde{T}\)s and the states of the grid points are determined and updated iteratively, as illustrated in Figs. 3 and 4:
Upwind side: the zone which has already been visited by the wave front. The states of grid points in the upwind zone are marked as accepted, and the values of \(\widetilde {T}\)s at these grid points have been determined. Since \( \mathcal {C}(x,y) > 0\), the front moves outward; thus, the states of the accepted grid points will not change.
Narrow band: the zone where the wave front is currently located. The states of grid points in this zone are marked as trial, and FM is determining the values of \(\widetilde {T}\)s at these grid points. Once finished, the grid point with the smallest \(\widetilde {T}\) in this zone is included in the upwind side and the wave front expands further.
Downwind side: the zone which has not yet been visited by the wave front. The states of grid points in this zone are marked as far away, and the values of \(\widetilde {T}\)s at these grid points have not been determined.
States of grid points in the process of FM. FM determines the minimum cost routing paths of all grid points to the sink (indicated by a red circle) in the order of wave expansion
Evolution of upwind side, narrow band, and downwind side during the iteration of Algorithm 1. The grid points in the upwind side and narrow band are marked by black circles and cyan circles, respectively
The algorithm of FM is listed in Algorithm 1. Here, the three sets of grid points correspond, respectively, to the upwind side, the narrow band, and the downwind side, and the neighbor set of a grid point is the set of its adjacent grid points in A.
Initially, the entire ROI is the downwind side except the sinks which are marked as accepted with \(\widetilde {T}=0\) (Line 2); then, the wave front begins to expand (Line 3). FM uses (5) to compute the \(\widetilde {T}\)s of the grid points in the narrow band (Line 4). Once finished, the grid point with the smallest \(\widetilde {T}\) in the narrow band is marked as accepted (Lines 6–7). The wave front will then keep expanding (Lines 8–13) while updating the \(\widetilde {T}\)s of the grid points in the narrow band (Line 12) for the next iteration until the entire ROI is the upwind side (Lines 5–14). The state changes of grid points are illustrated in Fig. 4.
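A compact, sequential sketch of Algorithm 1 is given below; the narrow band is realized with a priority queue, and the local solver follows the upwind discretization (5). The signature is hypothetical, and the ROI is assumed to be given as a boolean hole mask:

```python
import heapq
import numpy as np

def fast_marching(C, sinks, h=1.0, holes=None):
    """Sketch of Algorithm 1: returns the cost map T on the grid.
    C[i, j] is the cost at grid point (i, j); sinks is a list of (i, j) tuples."""
    ny, nx = C.shape
    inroi = ~holes if holes is not None else np.ones_like(C, dtype=bool)
    T = np.full(C.shape, np.inf)
    accepted = np.zeros(C.shape, dtype=bool)
    band = []
    for s in sinks:
        T[s] = 0.0
        heapq.heappush(band, (0.0, s))

    def update(i, j):
        # local solution of the upwind discretisation (5) at grid point (i, j)
        a = min(T[i-1, j] if i > 0 and inroi[i-1, j] else np.inf,
                T[i+1, j] if i < ny-1 and inroi[i+1, j] else np.inf)
        b = min(T[i, j-1] if j > 0 and inroi[i, j-1] else np.inf,
                T[i, j+1] if j < nx-1 and inroi[i, j+1] else np.inf)
        a, b = sorted((a, b))
        if b == np.inf or b - a >= C[i, j] * h:   # only one upwind direction contributes
            return a + C[i, j] * h
        return 0.5 * (a + b + np.sqrt(2 * (C[i, j] * h) ** 2 - (a - b) ** 2))

    while band:
        t, (i, j) = heapq.heappop(band)
        if accepted[i, j] or t > T[i, j]:
            continue
        accepted[i, j] = True                     # move the grid point to the upwind side
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < ny and 0 <= b < nx and inroi[a, b] and not accepted[a, b]:
                t_new = update(a, b)
                if t_new < T[a, b]:
                    T[a, b] = t_new
                    heapq.heappush(band, (t_new, (a, b)))
    return T
```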
After determining the \(\widetilde {T}\)s at all grid points by Algorithm 1, we may use the \(\widetilde {T}\)s and (4) to derive the routing direction, \(\widetilde {\mathbf {u}_{\text {f}}}_{{i,j}}\), which is the unit tangent vector along the geodesic path from the grid point to the sink. By (4), the vector V=−∇T is tangent to the geodesic path. We may apply the finite difference method to approximate V: for each grid point in the ROI:
$$ \begin{aligned} \widetilde{\mathbf{V}}_{{i},{j_{x}}} & = -\frac{\delta_{i-1,j}\left(\widetilde {T}_{{i},{j}}-\widetilde {T}_{{i-1},{j}}\right)+\delta_{i+1,j}\left(\widetilde {T}_{{i+1},{j}}-\widetilde {T}_{{i},{j}}\right)}{\left(1+\delta_{i-1,j}\delta_{i+1,j}\right)h}\\ \widetilde {\mathbf{V}}_{{i},{j_{y}}} & = -\frac{\delta_{i,j-1}\left(\widetilde{T}_{{i},{j-1}}-\widetilde{T}_{{i},{j}}\right)+\delta_{i,j+1}\left(\widetilde {T}_{{i},{j}}-\widetilde {T}_{{i},{j+1}}\right)}{\left(1+\delta_{i,j-1}\delta_{i,j+1}\right)h} \end{aligned} $$
((7))
in which \(\widetilde {\mathbf {V}}_{{i},{j_{x}}}\) and \(\widetilde {\mathbf {V}}_{{i},{j_{y}}}\) are the x and y components of \(\widetilde {\mathbf {V}}_{{i},{j}}\), respectively. Note that it is easy to verify that the formula for \(\widetilde {\mathbf {V}}_{{i},{j}}\) in (7) is consistent with the finite difference approximation of ∇T at . In addition, \(\widetilde {\mathbf {V}}_{{i},{j_{x}}} = 0\) if both and are not in the ROI, which corresponds to zero traffic along the x-direction (the similar reasoning may apply to \(\widetilde {\mathbf {V}}_{{i},{j_{y}}}\)). Once \(\widetilde {\mathbf {V}}_{{i},{j}}\) is computed, \(\widetilde {\mathbf {u}_{\text {f}}}_{i,{j}}\) can be determined by \(\widetilde {\mathbf {u}_{\text {f}}}_{{i,j}} = \widetilde {\mathbf {V}}_{{i},{j}}/\left |\widetilde {\mathbf {V}}_{{i},{j}}\right |\).
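The corresponding post-processing step can be sketched as follows. It implements the intent of (4) and (7), i.e., V = −∇T with one-sided differences wherever a neighboring grid point lies outside the ROI; the sign of the y-component depends on the chosen row-index orientation, and the helper name is ours:

```python
import numpy as np

def routing_directions(T, inroi, h=1.0):
    """Unit routing directions u_f = V/|V| with V = -grad T, falling back to
    one-sided differences near holes and the ROI boundary (cf. (7))."""
    ny, nx = T.shape
    u = np.zeros((ny, nx, 2))
    for i in range(ny):
        for j in range(nx):
            if not inroi[i, j]:
                continue
            dxm = T[i, j] - T[i-1, j] if i > 0 and inroi[i-1, j] else 0.0
            dxp = T[i+1, j] - T[i, j] if i < ny - 1 and inroi[i+1, j] else 0.0
            n_x = (i > 0 and inroi[i-1, j]) + (i < ny - 1 and inroi[i+1, j])
            dym = T[i, j] - T[i, j-1] if j > 0 and inroi[i, j-1] else 0.0
            dyp = T[i, j+1] - T[i, j] if j < nx - 1 and inroi[i, j+1] else 0.0
            n_y = (j > 0 and inroi[i, j-1]) + (j < nx - 1 and inroi[i, j+1])
            vx = -(dxm + dxp) / (max(n_x, 1) * h)   # central difference when both sides exist
            vy = -(dym + dyp) / (max(n_y, 1) * h)
            norm = np.hypot(vx, vy)
            if norm > 0:
                u[i, j] = (vx / norm, vy / norm)
    return u
```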
Numerical results
We first present numerical results, illustrated in Figs. 5 and 6, to compare the effectiveness of GADM and FM. The settings of both scenarios, as summarized in Table 3, are similar except the number of sinks. Furthermore, the cost function \(\mathcal {C}\) considered is a constant; thus, the minimum cost path is the one with the shortest length.
Routing direction and routing paths. Here, the sink indicated by a circle is located at , and the black regions represent the holes. a Routing direction \(\widetilde {\mathbf {u_{f}}}\). b Routing paths: the source is close to the grid point . c Routing paths: the source is close to the grid point
Routing direction and routing paths. Here, the sinks indicated by circles are located at and , and the black regions represent the holes. a Routing direction \(\widetilde {\mathbf {u_{f}}}\). b Routing paths: the source is close to the grid point . c Routing paths: the source is close to the grid point
Table 3 Simulation settings for the scenarios illustrated in Figs. 5, 6, and 7
If the information is currently routed to a node, denoted as v, that belongs to a given grid point, we use the corresponding routing direction \(\widetilde {\mathbf {u_{f}}}_{{i},{j}}\) and the following mechanism adopted in [10] to determine the next forwarding node (i.e., the second stage of the grid-based directional routing algorithms); a sketch is given after the list below.
Choose the neighbor nodes within the communication range \(R_{c}\) of v which can make positive progress toward the sink. The progress of a neighbor node v ′ is defined as the inner product of \(\widetilde {\mathbf {u_{f}}}_{{i},{j}}\) and the vector from v to v ′. If there are multiple candidates, choose the one which makes the greatest progress.
If no node makes positive progress, increase \(R_{c}\) by \(\Delta R_{c}\).
Note that due to the characteristics of wireless communication [3], it is preferred to use multiple short-range transmissions for optimal power consumption and communication capacity. Therefore, we gradually increase the communication range \(R_{c}\) of v in searching for the next forwarding nodes to avoid long-distance transmissions. The values of \(R_{c}\) and \(\Delta R_{c}\) are also listed in Table 3.
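A sketch of this second-stage rule follows; node positions are assumed to be 2D coordinate arrays, and the optional cap Rc_max (not part of the mechanism in [10]) merely prevents an endless search when no neighbor can ever make positive progress:

```python
import numpy as np

def next_forwarding_node(pos_v, u_f, neighbors, Rc, dRc, Rc_max=np.inf):
    """Pick the neighbour within range Rc making the greatest positive progress
    along u_f; if none does, enlarge the range by dRc and retry."""
    pos_v = np.asarray(pos_v, dtype=float)
    u_f = np.asarray(u_f, dtype=float)
    neighbors = np.asarray(neighbors, dtype=float)
    while Rc <= Rc_max:
        d = np.linalg.norm(neighbors - pos_v, axis=1)
        in_range = neighbors[d <= Rc]
        if len(in_range):
            progress = (in_range - pos_v) @ u_f   # inner product with the routing direction
            if np.any(progress > 0):
                return in_range[np.argmax(progress)], Rc
        Rc += dRc                                 # no positive progress: enlarge the range
    return None, Rc
```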
Figure 5 a depicts the routing directions derived by FM. Figure 5 b, c illustrate the routing paths via the routing directions derived by GADM and FM. The route lengths listed in Table 4 show that FM may derive shorter routing paths than GADM. Note that GADM and FM may result in different routes to bypass the hole for the same source node, as illustrated in Fig. 5 c. Similar results may be found for the second scenario, referring to Fig. 6 and Table 4. In addition, GADM and FM may result in routing to different sinks for the same source node, as illustrated in Fig. 6 c.
Table 4 Length of routing path
The reason that FM outperforms GADM is that the minimum cost path derived under the grid point network may not approximate the actual minimum cost path well. In addition, the routing direction of a grid point always points to one of its neighbors (that is, one of its four adjacent grid points in our simulations). Though this problem may be alleviated by extending the neighbor set (for example, adding the diagonal grid points to the neighbor set), the direction restriction (the routing direction always points to one of the neighbors) cannot be removed. On the other hand, (5) used in Algorithm 1 approximates |∇T| well, and the routing direction (obtained via the \(\widetilde {T}\)s and (7)) has no such direction restriction.
Figure 7 illustrates how node density may affect routing paths for the optimization objectives listed in Table 2 with \(\alpha_{rf}=4\). Twenty thousand nodes are randomly deployed according to \(\psi(x,y)\propto(3.5\times 10^{-5}y^{2}+0.02)\). Note that routing directions are solved (i.e., the first stage of the grid-based directional routing algorithms) using only the macroscopic parameter, ψ, but not the detailed position of each node. Thus, FM derives the same routing directions under the same density distribution regardless of the node positions. The node positions are merely used to determine the next forwarding node (i.e., the second stage) from the routing directions using the approach described earlier in this section.
Minimum cost paths for the optimization objectives listed in Table 2 with α rf =4. The scenario settings are listed in Table 3
The results show that routing should utilize the nodes in the sparse area to minimize the number of hops and use the nodes in the dense area to increase the throughput and to avoid long distance transmissions for less energy consumption. Of course, routing should use the straight line to the sink for minimizing the route length.
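For reference, the nonuniform deployment used in Fig. 7 can be reproduced, for instance, by rejection sampling from \(\psi(x,y)\propto 3.5\times10^{-5}y^{2}+0.02\); the side length L of the (square) ROI is a placeholder here, since the exact scenario dimensions are those listed in Table 3:

```python
import numpy as np

def deploy_nodes(n, L, rng=None):
    """Rejection sampling of n node positions in [0, L]^2 with density
    psi(x, y) proportional to 3.5e-5 * y**2 + 0.02."""
    rng = np.random.default_rng(rng)
    psi_max = 3.5e-5 * L**2 + 0.02            # psi is maximal at y = L
    pts = []
    while len(pts) < n:
        x, y, u = rng.uniform(0, L), rng.uniform(0, L), rng.uniform(0, 1)
        if u * psi_max <= 3.5e-5 * y**2 + 0.02:
            pts.append((x, y))
    return np.array(pts)
```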
We also conducted simulations to compare the routing cost of \(\widetilde {\mathbf {u_{f}}}_{{i},{j}}\) obtained from FM and GADM with the optimum routing cost determined by applying a shortest path algorithm to the connectivity graph of the WSN shown in Fig. 7, which basically is a microscopic routing approach. The results illustrated in Fig. 8 reveal that \(\widetilde {\mathbf {u_{f}}}_{{i},{j}}\) obtained from FM may lead to a reasonable routing cost; the average cost is 5 % more than the average optimum cost. On the other hand, \(\widetilde {\mathbf {u_{f}}}_{i,j}\) obtained from GADM may have a routing cost up to 28 % more than the optimum cost. In Fig. 8, the mean of the routing cost is the average cost of all nodes to the sink. The relative mean of FM (or GADM) is defined as the mean of the routing cost of FM (or GADM) divided by the mean of the optimum routing cost.
The relative mean of routing cost of the scenario in Fig. 7: the mean of the routing cost of FM (or GADM) is the average cost of all nodes to the sink using the routing directions derived by FM (or GADM). The relative mean of FM (or GADM) is defined as the mean of the routing cost of FM (or GADM) divided by the mean of the optimum routing cost (OPT)
Traffic-proportional routing cost
Load-balancing routing
This section considers the case in which \(\mathcal {C}(x,y)= \lambda (x,y)^{2}\left |\mathbf {D}(x,y)\right |\); here, λ is the energy cost e, normalized to the initial energy E, for transmitting one unit of information, i.e., λ=e/E. As pointed out in [2], in the context of a massively dense network, routing paths can be considered as continuous lines, instead of sequences of discrete nodes, and D may be considered differentiable. Thus, the fact that information is conservative (i.e., at each location, the net amount of traffic is equal to the amount of information generated) can be formulated as [29]:
$$ \nabla \cdot \mathbf{D}(x,y) - \rho (x,y) = 0. $$
((8))
Thus, from Theorem 2, if (8) holds, the geodesic problem (2) with \(\mathcal {C}(x,y)=\lambda (x,y)^{2}\left |\mathbf {D}(x,y)\right |\) is equivalent to the optimization problem which finds the vector field D(x,y) to minimize:
$$ \int_{A}\lambda (x,y)^{2}\left|\mathbf{D}(x,y)\right|^{2}\mathrm{d}x\mathrm{d}y. $$
((9))
Note that the variance of λ|D|, \(\int _{A}\left (\lambda (x,y)\left |\mathbf {D}(x,y)\right |-\overline {\lambda \left |\mathbf {D}\right |}\right)^{2}\mathrm {d}x\mathrm {d}y\), is positive; here, \(\overline {\lambda \left |\mathbf {D}\right |}\) is the average of λ(x,y)|D(x,y)|. Since:
$$\begin{aligned} &\int_{A}\left(\lambda (x,y)\left|\mathbf{D}(x,y)\right|-\overline{\lambda \left|\mathbf{D}\right|}\right)^{2}\mathrm{d}x\mathrm{d}y \\ = &\int_{A}\lambda(x,y)^{2}\left|\mathbf{D}(x,y)\right|^{2}\mathrm{d}x\mathrm{d}y-\overline{\lambda\left|\mathbf{D}\right|}^{2}\cdot\text{area(}{A}\text{)} \end{aligned} $$
in which area(A) is the area of A; minimizing (9) not only minimizes the difference of each location's λ|D| but also inherently reduces \(\overline {\lambda \left |\mathbf {D}\right |}\).
Since λ is the normalized communication energy cost per unit of information, λ(x,y)|D(x,y)| is the normalized total communication energy consumption. Thus, it is not difficult to reason that keeping λ|D| the same everywhere in A is equivalent to exhausting the energy of each location in A simultaneously. In other words, the objective of the geodesic problem (2) with \( \mathcal {C}(x,y)=\lambda (x,y)^{2}\left |\mathbf {D}(x,y)\right |\) is to achieve global load-balancing (by minimizing the difference of each location's λ|D|, i.e., the variance) and reduce the total communication energy consumption (by reducing \(\overline {\lambda \left |\mathbf {D}\right |}\)).
As mentioned in [30], the necessary condition for deriving the minimum value of (9) is the existence of a scalar function Φ called potential that satisfies:
$$ \mathbf{D}=J\nabla \varPhi $$
((10))
in which J=1/λ 2. In addition, there is no information flow from the outside of A; that is, there is no traffic along the inward pointing normal direction at the boundary of A, denoted as ∂A, which leads to the following boundary condition:
$$ \mathbf{D}(x,y) \cdot \hat{\mathbf{n}}(x,y) = 0, \forall (x,y) \in \partial A $$
((11))
in which \(\hat {\mathbf {n}}\) is the unit inward pointing normal vector to ∂A.
Therefore, the minimum cost routing problem with the cost \( \mathcal {C} (x,y)=\lambda (x,y)^{2}\left |\mathbf {D}(x,y)\right |\) can be transformed into a set of partial differential equations that we call load-balancing routing equations, (8), (10), and (11). We may combine these equations into the following single equation called the weak formulation of the load-balancing routing equations:
$$ \underset{A}{\int }J\nabla \varPhi \cdotp \nabla \nu \mathrm{d}y\mathrm{d}x=-\underset{A}{\int}\rho \nu \mathrm{d}y\mathrm{d}x $$
((12))
in which ν is an arbitrary smooth scalar valued function.
Note that there is no differential term of D in (12), and the a priori differentiability requirement of D is weakened. Thus, the weak formulation allows us to consider irregular problems in which true solutions cannot be continuously differentiable [9], e.g., the problems in which ψ or ρ are jump functions in A. For the sake of brevity, the derivation of (12) is given in Appendix 2.
The relationship between J and the node density distribution ψ may be further established if the transmission energy consumption model is given. For example, we may adopt the energy consumption model in [25], in which the energy consumption per unit of information (denoted as e) is proportional to \(r^{\alpha_{rf}}\). Here, r is the sender-to-receiver distance and the RF attenuation exponent \(\alpha_{rf}\) is typically in the range of 2 to 5. Since the average inter-distance between nodes is proportional to \(1/\sqrt {\psi }\), \(r \propto 1/\sqrt {\psi }\) and hence \(e\propto \psi^{-\alpha_{rf}/2}\). In addition, suppose that the nodes have an equal amount of initial energy; thus, the initial energy E is proportional to ψ, which leads to:
$$ J=1/\lambda^{2}=E^{2}/e^{2}\propto \psi^{2+\alpha_{rf}}. $$
((13))
Finite element method (FEM) and DGSI-FEM algorithm
Equation (12) can be solved numerically by FEM in which (12) is locally approximated (posed over small partitions called elements of the entire ROI) and a global solution is built by combining the local solutions over these elements [9]. Similarly, referring to Fig. 2, we may divide the ROI into equally spaced grids and then use these grid points to form the elements (i.e., the gray hexagon on the x–y plane illustrated in Fig. 9).
A piecewise-linear finite element basis function. The linear basis function μ i,j is a pyramid with the peak at and is nonzero only within the element centered at (i.e., the gray hexagon). In addition, (x i ,y j ) is the position of
Consider the set of basis functions \(\mu_{i,j}\), one for each grid point in the ROI, defined on A such that \(\mu_{i,j}\) has the following property:
$$\mu_{i,j}(x_{i^{\prime}}, y_{j^{\prime}})=\left\{ \begin{array}{ll} 1, & \text{if}\,\, i^{\prime}=i\,\, \text{and}\,\, j^{\prime}=j\\ 0, & \text{otherwise} \end{array} \right. $$
((14))
Here, \((x_{i^{\prime}}, y_{j^{\prime}})\) is the position of the grid point at column i′ and row j′. We then approximate Φ, J, and ρ, respectively, by:
$$ \varPhi(x,y) \approx \sum_{i,j}\widetilde{\Phi}_{{i},{j}}\,\mu_{i,j}(x,y), $$
((15))
$$ J(x,y) \approx \sum_{i,j}\widetilde{J}_{{i},{j}}\,\mu_{i,j}(x,y), $$
((16))
$$ \rho(x,y) \approx \sum_{i,j}\widetilde{\rho}_{{i},{j}}\,\mu_{i,j}(x,y), $$
((17))
in which the sums run over the grid points in the ROI and \(\widetilde {\Phi }_{{i},{j}}=\Phi (x_{i}, y_{j})\), \(\widetilde{J}_{i,j}=J(x_{i},y_{j})\), and \(\widetilde {\rho }_{{i},{j}}=\rho (x_{i}, y_{j})\) (i.e., the values of Φ, J, and ρ at the grid point, respectively). By substituting (15), (16), and (17) into (12), we obtain the following set of linear equations: for each grid point in the ROI,
$$ \sum_{i^{\prime},j^{\prime}} K^{i^{\prime},j^{\prime}}_{i,j}\,\widetilde{\Phi}_{{i^{\prime}},{j^{\prime}}} = g_{i,j}. $$
((18))
One possible set of candidate functions satisfying (14) are pyramids with peaks at grid points as illustrated in Fig. 9. That is:
$$ {}\mu_{i,j}(x,y) =\left\{ \begin{array}{lll} -\frac{(x-x_{i})}{h}+1 & \text{if}\, (x,y)\, \text{is in}\, {_{i,j}}\triangle{^{i+1,j}_{i+1,j-1}}\\ \frac{(y-y_{j})}{h}+1 & \text{if}\, (x,y)\, \text{is in}\, {_{i,j}}\triangle{^{i+1,j-1}_{i,j-1}}\\ \frac{(x-x_{i})+(y-y_{j})}{h}+1 & \text{if}\, (x,y)\, \text{is in}\, {_{i,j}}\triangle{^{i,j-1}_{i-1,j}}\\ \frac{(x-x_{i})}{h}+1 & \text{if}\, (x,y)\, \text{is in}\, {_{i,j}}\triangle{^{i-1,j}_{i-1,j+1}}\\ -\frac{(y-y_{j})}{h}+1 & \text{if}\,(x,y)\, \text{is in}\, {_{i,j}}\triangle{^{i-1,j+1}_{i,j+1}} \\ -\frac{(x-x_{i})+(y-y_{j})}{h}+1 & \text{if}\, (x,y) \text{is in}\, {_{i,j}}\triangle{^{i,j+1}_{i+1,j}} \\ 0 & \text{otherwise.} \end{array} \right. $$
Here, \({_{i_{1},j_{1}}}\triangle {^{i_{2},j_{2}}_{i_{3},j_{3}}}\) is the triangle formed by the grid points at \((i_{1},j_{1})\), \((i_{2},j_{2})\), and \((i_{3},j_{3})\) if all three of them are in the ROI. If any of these three grid points is not in the ROI, \({_{i_{1},j_{1}}}\triangle {^{i_{2},j_{2}}_{i_{3},j_{3}}}\) is an empty region. That is:
$$ {_{i_{1},j_{1}}}\triangle {^{i_{2},j_{2}}_{i_{3},j_{3}}} = \emptyset \quad \text{if}\;\; \delta_{i_{1},j_{1}}\,\delta_{i_{2},j_{2}}\,\delta_{i_{3},j_{3}} = 0. $$
((20))
With the linear basis functions (19), the coefficients, \(K^{i^{\prime },j^{\prime }}_{i,j}\) and g i,j , in (18) can be derived:
$$\begin{aligned} & g_{i,j} = -h^{2}/24\left(\mathcal{B}^{1}[\!{\rho}]_{{i,j}\triangle^{i+1,j}_{i+1,j-1}} +\, \mathcal{B}^{1}[\!{\rho}]_{{i,j}\triangle^{i+1,j-1}_{i,j-1}}\right.\\ & + \mathcal{B}^{1}[\!{\rho}]_{{i,j}\triangle^{i,j-1}_{i-1,j}} +\, \mathcal{B}^{1}[\!{\rho}]_{{i,j}\triangle^{i-1,j}_{i-1,j+1}} \\ & + \left.\mathcal{B}^{1}[\!{\rho}]_{{i,j}\triangle^{i-1,j+1}_{i,j+1}} +\, \mathcal{B}^{1}[\!{\rho}]_{{i,j}\triangle^{i,j+1}_{i+1,j}}\right), \end{aligned} $$
in which δ i,j is defined in (6) and:
$$\begin{aligned} &\mathcal{B}^{0}\left[{f}\right]_{{i,j}\triangle^{i_{1},j_{1}}_{i_{2},j_{2}}} = \delta_{i_{1},j_{1}}\delta_{i_{2},j_{2}}\left(\,\,\widetilde{f}_{{i},{j}} + \widetilde{f}_{{i_{1}},{j_{1}}} + \widetilde{f}_{{{i_{2}},{j_{2}}}}\right), \\ &\mathcal{B}^{1}\left[{f}\right]_{{i,j}\triangle^{i_{1},j_{1}}_{i_{2},j_{2}}} = \delta_{i_{1},j_{1}}\delta_{i_{2},j_{2}}\left(2\widetilde{f}_{{i},{j}} + \widetilde{f}_{{i_{1}},{j_{1}}} + \widetilde{f}_{{i_{2}},{j_{2}}}\right). \end{aligned} $$
Note that it is not difficult to verify that \(K^{i^{\prime },j^{\prime }}_{i,j}=K^{i,j}_{i^{\prime },j^{\prime }}\). For the sake of brevity, the detailed computation of \(K^{i^{\prime },j^{\prime }}_{i,j}\) and g i,j is given in Appendix 3.
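Putting the closed forms for \(g_{i,j}\) and the \(K^{i^{\prime},j^{\prime}}_{i,j}\)s (see Appendix 3) together, the coefficients of (18) can be assembled from the nodal values \(\widetilde{J}\) and \(\widetilde{\rho}\) as in the following sketch; the array-based layout and the helper names are ours:

```python
import numpy as np

# Index offsets of the two non-centre vertices of the six triangles of the
# element centred at (i, j), in the order used for g_{i,j} and K_{i,j}^{i,j}.
TRIS = [((1, 0), (1, -1)), ((1, -1), (0, -1)), ((0, -1), (-1, 0)),
        ((-1, 0), (-1, 1)), ((-1, 1), (0, 1)), ((0, 1), (1, 0))]

def assemble(J, rho, inroi, h):
    """Coefficients of the linear system (18) from nodal values of J and rho."""
    ny, nx = J.shape

    def delta(i, j):                      # delta_{i,j} of (6)
        return 1.0 if 0 <= i < ny and 0 <= j < nx and inroi[i, j] else 0.0

    def val(f, i, j):                     # safe nodal lookup (0 outside the array)
        return f[i, j] if 0 <= i < ny and 0 <= j < nx else 0.0

    def B(f, i, j, t, w):                 # B^0 (w = 1) and B^1 (w = 2) over triangle t
        (a1, b1), (a2, b2) = t
        d = delta(i + a1, j + b1) * delta(i + a2, j + b2)
        return d * (w * f[i, j] + val(f, i + a1, j + b1) + val(f, i + a2, j + b2))

    g = np.zeros(J.shape)
    K_self = np.zeros(J.shape)
    K_nb = {off: np.zeros(J.shape) for off in [(1, 0), (-1, 0), (0, 1), (0, -1)]}
    w_self = [1, 1, 2, 1, 1, 2]           # the two triangles shared with both axis neighbours count twice
    nb_tris = {(1, 0): (5, 0), (0, -1): (1, 2), (-1, 0): (2, 3), (0, 1): (4, 5)}
    for i in range(ny):
        for j in range(nx):
            if not inroi[i, j]:
                continue
            g[i, j] = -h**2 / 24 * sum(B(rho, i, j, t, 2) for t in TRIS)
            K_self[i, j] = sum(w * B(J, i, j, t, 1) for w, t in zip(w_self, TRIS)) / 6
            for off, (t1, t2) in nb_tris.items():
                K_nb[off][i, j] = -(B(J, i, j, TRIS[t1], 1) + B(J, i, j, TRIS[t2], 1)) / 6
    return K_self, K_nb, g
```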
The Gauss-Seidel iteration (GSI) may solve (18) for the \(\widetilde {\Phi }_{{i},{j}}\)s via iteratively updating each \(\widetilde {\Phi }_{{i},{j}}\) in lexicographical order from the most updated \(\widetilde {\Phi }\) values at other grid points until the update change \(\left |\widetilde {\Phi }_{{i},{j}}^{(k)}-\widetilde {\Phi }_{{i},{j}}^{(k-1)}\right | \leq \varepsilon \) for all grid points in the ROI. That is, the \(\widetilde {\Phi }_{{i},{j}}^{(k)}\)s are computed sequentially by:
$$ \begin{aligned} &\widetilde {\Phi}_{{i},{j}}^{(k)} \leftarrow\\ &\frac{1}{K_{i,j}^{i,j}}\left(g_{i,j} - \sum\limits_{\substack{ \mathcal{O}_{L}(i^{\prime}, j^{\prime}) \\ < \mathcal{O}_{L}(i,j)}}K_{i,j}^{i^{\prime}, j^{\prime}}\widetilde {\Phi}_{{i^{\prime}},{j^{\prime}}}^{(k)}- \sum\limits_{\substack{\mathcal{O}_{L}(i^{\prime}, j^{\prime}) \\ > \mathcal{O}_{L}(i,j)}}K_{i,j}^{i^{\prime}, j^{\prime}}\widetilde{\Phi}_{{i^{\prime}},{j^{\prime}}}^{(k-1)}\right), \end{aligned} $$
in which \( \mathcal {O}_{L}(i,j)\) defines the lexicographical order; that is:
$$\mathcal{O}_{L}(i_{1},j_{1}) < \mathcal{O}_{L}(i_{2},j_{2})\,\text{if} \left\{ \begin{array}{ll} i_{1} < i_{2}, \text{or} & \\[2ex] i_{1} = i_{2}\, \text{and}\, j_{1} < j_{2}. \end{array} \right. $$
In GSI, only one \(\widetilde {\Phi }\) is updated in one iteration (21). We say GSI has gone through one sweep when each \(\widetilde {\Phi }\) has been updated once. \(\widetilde {\Phi }_{{i},{j}}^{(k)}\) is the value of \(\widetilde {\Phi }_{{i},{j}}\) after the kth sweep.
Note that \(K^{i^{\prime},j^{\prime}}_{i,j}=0\) if the grid point at \((i^{\prime},j^{\prime})\) is neither the grid point at (i,j) nor one of its four adjacent grid points. Thus, only \(\widetilde {\Phi }_{{i},{j-1}}^{(k)}\), \(\widetilde {\Phi }_{{i-1},{j}}^{(k)}\), \(\widetilde {\Phi }_{{i+1},{j}}^{(k-1)}\), and \(\widetilde {\Phi }_{{i},{j+1}}^{(k-1)}\) are needed to compute \(\widetilde {\Phi }_{{i},{j}}^{(k)}\) via (21). In other words, as long as \(\widetilde {\Phi }_{{i},{j-1}}^{(k)}\) and \(\widetilde {\Phi }_{{i-1},{j}}^{(k)}\) are computed, \(\widetilde {\Phi }_{{i},{j}}^{(k)}\) can be computed.
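Because only the four adjacent couplings are nonzero, a centralized reference implementation of GSI for (18) reduces to the following sweeps; it assumes the coefficient arrays produced by the assembly sketch above and a mask of grid points whose potential is held fixed (e.g., the sink grid points, which is our simplification for handling the sinks):

```python
import numpy as np

def gauss_seidel(K_self, K_nb, g, inroi, fixed=None, eps=1e-6, max_sweeps=10_000):
    """Centralised GSI sweeps for (18); K_nb[(di, dj)][i, j] holds K^{i+di, j+dj}_{i, j}.
    Grid points flagged in `fixed` keep their value; every free grid point needs K_self > 0."""
    ny, nx = g.shape
    phi = np.zeros_like(g, dtype=float)
    for _ in range(max_sweeps):
        change = 0.0
        for i in range(ny):                        # lexicographical order O_L
            for j in range(nx):
                if not inroi[i, j] or (fixed is not None and fixed[i, j]):
                    continue
                s = g[i, j]
                for (di, dj), K in K_nb.items():   # subtract the neighbour contributions
                    a, b = i + di, j + dj
                    if 0 <= a < ny and 0 <= b < nx and inroi[a, b]:
                        s -= K[i, j] * phi[a, b]
                new = s / K_self[i, j]
                change = max(change, abs(new - phi[i, j]))
                phi[i, j] = new
        if change <= eps:                          # all updates of this sweep small enough
            break
    return phi
```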
Accordingly, the distributed routing algorithm, DGSI-FEM, is proposed to coordinate sensors to solve the \(\widetilde {\Phi }\)s from (18) in parallel using (21). In DGSI-FEM, a nearby node is selected as the grid head for each grid point to compute the value of \(\widetilde {\Phi }\). The grid head of a grid point may update \(\widetilde {\Phi }_{{i},{j}}\) as long as the most updated \(\widetilde {\Phi }_{{i},{j-1}}\) and \(\widetilde {\Phi }_{{i-1},{j}}\) are known; it does not need to wait for the grid heads of all the grid points with lexicographical order less than \(\mathcal {O}_{L}(i,j)\). Note that only these grid heads are involved in the computation of the \(\widetilde {\Phi }\)s, resulting in low overhead for a massively dense network. For the sake of brevity, we simply describe the operations of grid points without explicitly mentioning that the operations are actually executed by grid heads.
Since the termination condition is that all the changes made by a sweep fall below a size threshold ε, each grid point needs to know all these changes. To achieve this, DGSI-FEM uses two state packets, PRECISE and DONE, for each grid point, which represent the convergence status and the termination decision, i.e., whether the update changes are small enough and whether the iteration should terminate, respectively. In addition, DGSI-FEM uses two phases (namely, a forward sweep followed by a backward sweep) to propagate the termination decision (via the state packet DONE) and collect the convergence status (via the state packet PRECISE) of all \(\widetilde {\Phi }\)s. The detailed DGSI-FEM is listed in Algorithm 2. Note that the \(K_{i,j}^{i^{\prime},j^{\prime}}\)s and \(\delta _{i^{\prime},j^{\prime}}\)s of the adjacent grid points are known in advance. This may be done by letting each grid point discover its adjacent grid points and, once found, exchange \(\widetilde{J}_{i,j}\) with them. Additionally, the algorithms for sending and waiting for messages are depicted in Algorithms 3 and 4, respectively. Both algorithms will check whether the communication counterpart is in the ROI, and wait will return 〈0,true〉 if not.
After initialization (Lines 1–2), the iteration for each grid point will proceed as follows, referring to Fig. 10 for the sequence diagram of the DGSI-FEM iteration:
Sequence diagram for a grid point in the iteration of DGSI-FEM. For example, in the forward sweep, the down and left adjacent grid points update and then send their \(\widetilde {\Phi }\)s and DONEs to the grid point at (i,j). After updating \(\widetilde {\Phi }_{{i},{j}}\) and DONE i,j , the grid point sends \(\widetilde {\Phi }_{{i},{j}}\) and DONE i,j to its right and top adjacent grid points. The dark rectangles drawn on top of the lifelines indicate that grid points are updating \(\widetilde {\Phi }\)s and DONEs (or PRECISEs in the backward sweep)
Forward sweep: the iteration direction begins from the bottom-left (the grid point with the smallest \(\mathcal {O}_{L}\)) and moves to the top-right (the grid point with the largest \(\mathcal {O}_{L}\)).
The grid point waits for the \(\widetilde {\Phi }\)s and DONEs from its down and left adjacent grid points, which have smaller \(\mathcal {O}_{L}\) values. (Lines 5–9)
The grid point updates \(\widetilde {\Phi }_{{i},{j}}\) by (21). (Line 10)
The grid point updates DONE i,j . (Lines 11–19)
The grid point sends \(\widetilde {\Phi }_{{i},{j}}\) and DONE i,j to its right and top adjacent grid points, which have larger \(\mathcal {O}_{L}\) values. (Line 20)
Backward sweep: the iteration direction moves from the top-right to the bottom-left.
The grid point waits for the \(\widetilde {\Phi }\)s and PRECISEs from its top and right adjacent grid points. (Lines 27–29)
The grid point updates PRECISE i,j . (Lines 31–33)
The grid point sends \(\widetilde {\Phi }_{{i},{j}}\) and PRECISE i,j to its down and left adjacent grid points. (Lines 34–36)
Note that a grid point will set PRECISE i,j to true if the update made by itself is small enough and the PRECISEs of its top and right grid points are true (Lines 31–33). The PRECISEs collected by the \(\mathcal {O}_{L}\)-initiator, which has no bottom and left adjacent grid points (e.g., the grid point with the smallest lexicographical order), indicate whether the update changes of all \(\widetilde {\Phi }\)s are small enough and are used by the \(\mathcal {O}_{L}\)-initiator to determine termination.
In addition, a grid point will set DONE i,j to true based on the following rules:
The grid point is an \(\mathcal {O}_{L}\)-initiator, and the update changes in the last backward sweep are small enough, i.e., PRECISE i,j is true (Lines 11–14), or
the grid point is not an \(\mathcal {O}_{L}\)-initiator, but DONE i,j−1 =true and DONE i−1,j =true (Lines 15–19).
Note that DONEs are propagated from bottom-left to top-right in the forward sweep, so the termination proceeds from bottom-left to top-right. Thus, the if statement that checks whether the down or left adjacent grid points have terminated is added (Line 6) to avoid infinite waiting when waiting for messages from the down or left adjacent grid points (Line 7).
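The data dependence exploited by DGSI-FEM can be made explicit in a small, centralized sketch that abstracts away the message exchange: within a forward sweep, a grid point may update as soon as its down and left neighbors have, so all grid points with the same index sum i + j could proceed concurrently (they are simply processed in order here). The function update stands for the single-point rule (21), and a rectangular ROI is assumed for brevity:

```python
def forward_sweep_by_wavefronts(phi, update, ny, nx):
    """One forward sweep organised by anti-diagonals: phi[i, j] only needs the
    already-updated phi[i, j-1] and phi[i-1, j], so every grid point on the
    anti-diagonal s = i + j is independent of the others on that diagonal."""
    for s in range(ny + nx - 1):          # wavefront index s = i + j
        for i in range(max(0, s - nx + 1), min(ny, s + 1)):
            j = s - i
            phi[i, j] = update(i, j, phi)
    return phi
```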
After \(\widetilde {\Phi }_{{i},{j}}\)s are solved from (18), \(\widetilde {\mathbf {D}}_{{i},{j}}\) may be approximated by the following formulae, which are derived by using (15) to approximate (10):
$$ \begin{aligned} \widetilde{\mathbf{D}}_{{i},{j_{x}}} &= \frac{\delta_{i+1,j}\delta_{i-1,j}\widetilde{J}_{i,j}}{2h}\left(\widetilde{\Phi}_{{i+1},{j}}-\widetilde{\Phi}_{{i-1},{j}}\right)\\ \widetilde{\mathbf{D}}_{{i},{j_{y}}} &= \frac{\delta_{i,j+1}\delta_{i,j-1}\widetilde{J}_{i,j}}{2h}\left(\widetilde{\Phi}_{{i},{j+1}}-\widetilde{\Phi}_{{i},{j-1}}\right) \end{aligned} $$
in which \(\widetilde {\mathbf {D}}_{{i},{j_{x}}}\) and \(\widetilde {\mathbf {D}}_{{i},{j_{y}}}\) are, respectively, the x and y components of \(\widetilde {\mathbf {D}}_{{i},{j}}\). For the sake of brevity, the derivation is given in Appendix 4. Once \(\widetilde {\mathbf {D}}_{{i},{j}}\) is computed, \(\widetilde {\mathbf {u_{f}}}_{{i},{j}}\) can be determined by \(\widetilde {\mathbf {u_{f}}}_{{i},{j}}=\widetilde {\mathbf {D}}_{{i},{j}}/\left |\widetilde {\mathbf {D}}_{{i},{j}}\right |\) and then used as guidance to find the next forwarding nodes for routing information.
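Finally, the routing vector field and the routing directions can be recovered from the solved potential with the centered differences above, for instance as follows (the helper name is ours):

```python
import numpy as np

def flow_field(phi, J, inroi, h=1.0):
    """Routing vector field D = J grad(Phi) via the centred differences above;
    u_f is D normalised wherever |D| > 0."""
    ny, nx = phi.shape
    D = np.zeros((ny, nx, 2))
    for i in range(ny):
        for j in range(nx):
            if not inroi[i, j]:
                continue
            if 0 < i < ny - 1 and inroi[i-1, j] and inroi[i+1, j]:
                D[i, j, 0] = J[i, j] / (2 * h) * (phi[i+1, j] - phi[i-1, j])
            if 0 < j < nx - 1 and inroi[i, j-1] and inroi[i, j+1]:
                D[i, j, 1] = J[i, j] / (2 * h) * (phi[i, j+1] - phi[i, j-1])
    norm = np.linalg.norm(D, axis=2, keepdims=True)
    u_f = np.divide(D, norm, out=np.zeros_like(D), where=norm > 0)
    return D, u_f
```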
We present several numerical results to demonstrate the effectiveness of DGSI-FEM for different scenarios, namely the ROI with holes, the ROI with a nonuniform information generation rate, and the ROI with a nonuniform density. The simulation settings for these scenarios are listed in Table 5. Twenty thousand sensors are randomly deployed based on the density distribution ψ and generate information based on ρ, except for the sink, which will consume all the information generated. Similarly, routing directions are solved using only the macroscopic parameters, ψ and ρ, but not the detailed position of each node. The node positions are merely used to determine the next forwarding node from the routing directions. In addition, the energy consumption per unit of information is proportional to \(r^{\alpha_{rf}}\) with \(\alpha_{rf}=2\) and thus \(J=\psi^{4}\), as indicated in (13).
Table 5 Simulation settings for the scenarios illustrated in Fig. 11
The routing directions obtained by DGSI-FEM are depicted as arrows in Fig. 11. The arrows provide the routing guidance for load-balancing. For example, Fig. 11 a reveals that information may be forwarded in a direction which deviates from a straight line to the sink to bypass the holes in advance. Thus, unlike many hole-bypassing algorithms [15, 31–33], using routing directions may alleviate the excess energy consumption of the boundary sensors.
Routing direction. All ROIs are square regions divided into 37×37 grids. Sinks which will consume all the information generated are marked as circles, and the arrows represent the routing directions, \(\widetilde {\mathbf {u_{f}}}\)s. a Uniform ψ and ρ: the black regions represent the holes. b Uniform ψ and nonuniform ρ: the sensors in the gray region generate ten times more information than other sensors. c Nonuniform ψ and uniform ρ: the gray region has 50 % higher sensor density than the white region. d Nonuniform density and uniform ρ: the gray region has 30 % lower sensor density than the white region
The routing directions shown in Fig. 11 b indicate that, to achieve load-balancing, information may be forwarded to the sensors outside the high- ρ region and then to the sink, instead of being delivered straight to the sink by the sensors in the high- ρ region. Particularly, some nodes around the bottom-right corner of the high- ρ region may forward packets in the opposite direction to the sink in order to avoid using nodes in the high- ρ region. Note that the sensors in the high- ρ region generate more events and potentially have more loading.
The routing directions shown in Fig. 11 c, d indicate that the information traffic tends to flow into the high-density regions and bypass the low-density regions to achieve load-balancing. Similar to Fig. 11 b, Fig. 11 d depicts that some nodes along the bottom-left boundary of the low-density region may forward packets in the opposite direction to the sink in order to avoid using nodes in the low-density region. In the last two scenarios, the ρs are uniform; thus, the sensors in the high-density (or low-density) region generate fewer (or more) events and potentially have less (or more) loading.
We then conducted simulations to compare the energy consumption of the routing directions obtained from DGSI-FEM with that of a microscopic routing algorithm, namely greedy perimeter stateless routing (GPSR) [31]. We used the approach described in Section 4.4 to route the information via the routing directions. Note that GPSR normally works as GF [12]; that is, the next forwarding node will be the one closest to the destination among the current sender's neighbors. However, if GF fails to find the node making any progress in delivering information, the node to the left in a planar subgraph of the connectivity graph of the WSN will be selected as the next forwarding node until GF recovers. The planarization used here is RNG [34].
We also conducted comparative simulations for another microscopic routing algorithm, namely geographical and energy aware routing (GEAR) [35], which attempts to achieve load-balancing by considering both the distance to the sink and the energy consumption. If a node has neighbors closer to the sink, GEAR will choose among these neighbors the one with the smallest weighted sum of the distance to the sink and the energy consumed as the next forwarding node; otherwise, the neighbor with the smallest weighted sum is the next forwarding node.
Figure 12 depicts the means and standard deviations of the energy consumption of DGSI-FEM and GEAR, normalized to the energy consumption of GPSR. Referring to Fig. 11 a, DGSI-FEM may forward information in a direction which deviates from a straight line to the sink to bypass the holes in advance, while our GPSR implementation uses the left-hand rule to forward information, thus resulting in excess energy consumption along the holes. Hence, DGSI-FEM may achieve better load-balancing with less energy consumption than GPSR for the ROI with holes.
The relative statistics of energy consumption of the scenarios in Fig. 11: the relative mean (or standard deviation) of DGSI-FEM (or GEAR) is defined as the mean (or standard deviation) of the routing energy consumption of DGSI-FEM (or GEAR) divided by the mean (or standard deviation) of the routing energy consumption of GPSR. a The relative mean of the energy consumption. b The relative standard deviation of the energy consumption
Furthermore, GPSR degenerates to GF for the ROIs without holes and thus exhibits a lower average energy consumption for the scenarios shown in Fig. 11 b–d. On the other hand, DGSI-FEM will avoid the nodes in the high-ρ and low-density regions and utilize the nodes in the high-density region for load-balancing. Thus, the routing paths will bypass the high-ρ and low-density regions or bend into the high-density region.
In addition, though GEAR strives to achieve load-balancing by considering the distance and energy factors, the best next forwarding node is still a local optimum; thus, GEAR provides less effective load-balancing than does DGSI-FEM. The standard deviation results in Fig. 12 b indicate that the routing directions solved by DGSI-FEM can effectively achieve load-balancing, particularly for the ROIs having holes.
Conclusions
This paper studies the minimum routing cost problems for massively dense WSNs via the geographical model, which leads to the grid-based directional routing algorithms. The minimum routing cost problems are formulated as continuous geodesic problems under massively dense WSNs, and the grid-based directional routing algorithms are the natural outcomes of numerically solving these problems; numerical solutions of the geodesic problems provide the directions to the next forwarding nodes at equally spaced grid points in the ROI, and these directions can be used as guidance to route information.
We first consider the position-only-dependent costs (e.g., hops, throughput, or energy) and investigate two numerical approaches, GADM and FM. GADM uses Dijkstra's method to determine the minimum cost path (under the grid point network). However, GADM may yield less optimal routing paths due to the direction restriction. On the other hand, by introducing the cost map T, the geodesic problem can be transformed into the eikonal equation and then solved by FM. Note that the geodesic problem considered here is to find a continuous curve which has the minimum cost from a given source to a sink. Our numerical results show that FM is more suitable than GADM for approximating the continuous curves and thus yields paths with lower routing cost. The routing cost comparison results show that FM has a routing cost 5 % more than the optimum cost, while GADM may have a routing cost up to 28 % more than the optimum cost.
We then consider the traffic-proportional costs which correspond to energy-load-balancing. By the equivalence between geodesic problems and optimum routing vector field problems, we transform the geodesic problem into a set of equations with regard to the routing vector field, which can be more easily tackled. We propose a distributed algorithm, DGSI-FEM, for solving the routing vector field via FEM and present numerical results to demonstrate the quality of the derived routing directions. The routing energy consumption results show that the routing directions may achieve load-balancing more effectively than the microscopic routing algorithms GPSR and GEAR, particularly for the ROIs with holes.
Many aspects of this paper, the problems studied and the approaches taken, are applications of existing work, e.g., minimum cost routing paths [23], cost function and node density [5, 24, 26], geodesic path via eikonal equation [27], fast marching method [8, 28], load-balancing routing equations [2, 24, 29, 30], and finite element method [9]. However, these works either do not specifically focus on the network routing problems or only theoretically analyze the routing problems without providing routing algorithms. The main contribution of this paper is to propose a systematic framework to develop low-overhead routing algorithms for massively dense WSNs, i.e., coordinate the sensor nodes themselves to solve the routing directions using these existing techniques and then route the information with the routing directions. In addition, numerous strategies exist for solving geodesic problems and PDEs. We believe this paper will open up a potential research direction toward the development of routing algorithms via investigation of the appropriateness of these strategies for implementation on WSNs.
Appendix 1: proof of theorem 3
For the sake of convenience, we use the position vector x to represent the position (x,y). Consider T(x) and, for any dx (i.e., a small change of x):
$$ T(\mathbf{x} + \mathrm{d}\mathbf{x}) = T(\mathbf{x}) + \nabla T \cdot \mathrm{d}\mathbf{x} + \mathrm{O}\left({\left|\mathrm{d}\mathbf{x}\right|^{2}}\right) $$
((22))
by the Taylor expansion [29]. Let the cost of the straight line from x to x+dx be ΔT ′; then:
$$\Delta T^{\prime} = \mathcal{C}\left|\mathrm{d}\mathbf{x}\right| + \mathrm{O}\left({\left|\mathrm{d}\mathbf{x}\right|^{2}}\right). $$
Since T is the minimum cost:
$$T(\mathbf{x} + \mathrm{d}\mathbf{x}) \leq T(\mathbf{x}) + \Delta T^{\prime}. $$
Thus, choosing dx = small multiple of ∇T:
$$\left|\nabla T\right| \leq \mathcal{C}. $$
On the other hand, consider x and x+dx along a minimum cost path. We have:
$$T(\mathbf{x} + \mathrm{d}\mathbf{x})-T(\mathbf{x})=\mathcal{C}\left|\mathrm{d}\mathbf{x}\right| + \mathrm{O}\left({\left|\mathrm{d}\mathbf{x}\right|^{2}}\right), $$
and then by (22):
$$ \mathcal{C}\left|\mathrm{d}\mathbf{x}\right| + \mathrm{O}\left({\left|\mathrm{d}\mathbf{x}\right|^{2}}\right)=\nabla T \cdot \mathrm{d}\mathbf{x} = \left|\nabla T\right|\left|\mathrm{d}\mathbf{x}\right|\cos{\theta} $$
((23))
in which θ is the angle between ∇T and dx. Therefore, \(\left |\nabla T\right | \geq \mathcal {C}\), and (3) is proved.
Furthermore, consider x and x+dx along a minimum cost path. It is also clear from (3) and (23) that θ=0. Since both x and x+dx are on a minimum cost path, dx and thus ∇T are tangent to the minimum cost path.
Appendix 2: weak formulation of the load-balancing routing equations, (8), (10), and (11)
Multiply (8) by an arbitrary smooth scalar valued function ν and integrate it over the ROI; then:
$$\int_{A}\left(\nabla \cdot \mathbf{D} - \rho\right)\nu\mathrm{d}y\mathrm{d}x = 0. $$
By the product rule of a scalar valued function and a vector field:
$$\nabla\cdot\nu\mathbf{D} = \nu\nabla\cdot\mathbf{D} + \mathbf{D}\cdot\nabla\nu, $$
$$\int_{A}\left(\nabla\cdot\nu\mathbf{D}-\mathbf{D}\cdot\nabla\nu\right)\mathrm{d}y\mathrm{d}x - \int_{A}\rho\nu\mathrm{d}y\mathrm{d}x = 0, $$
and hence:
$$\int_{A}\mathbf{D}\cdot\nabla\nu\mathrm{d}y\mathrm{d}x = -\int_{A}\rho\nu\mathrm{d}y\mathrm{d}x + \int_{A}\nabla\cdot\nu\mathbf{D}\mathrm{d}y\mathrm{d}x. $$
By the divergence theorem, we obtain:
$$\int_{A}\mathbf{D}\cdot\nabla\nu\mathrm{d}y\mathrm{d}x = -\int_{A}\rho\nu\mathrm{d}y\mathrm{d}x + \int_{\partial A}\nu\mathbf{D}\cdot \hat{\mathbf{n}}\mathrm{d}y\mathrm{d}x. $$
By substituting (10) and the boundary condition (11) into the above equation, we have the weak formulation (12).
Appendix 3: values of \(K^{i',j'}_{i,j}\) and g i,j
Referring to Fig. 9, for the element centered at the grid point at (i,j) (i.e., the gray hexagon), denote by \(H_{i,j}\) the set of the six vertices of the element, i.e., the grid points at (i+1,j), (i+1,j−1), (i,j−1), (i−1,j), (i−1,j+1), and (i,j+1), and denote by \(T_{i,j}\) the set of the triangles forming the element, that is:
$$ T_{i,j}=\left\{ {_{i,j}}\triangle{^{i+1,j}_{i+1,j-1}},\ {_{i,j}}\triangle{^{i+1,j-1}_{i,j-1}},\ {_{i,j}}\triangle{^{i,j-1}_{i-1,j}},\ {_{i,j}}\triangle{^{i-1,j}_{i-1,j+1}},\ {_{i,j}}\triangle{^{i-1,j+1}_{i,j+1}},\ {_{i,j}}\triangle{^{i,j+1}_{i+1,j}}\right\}. $$
Similar to (15), we approximate ν by:
$$ \nu(x,y) \approx \sum_{i,j}\widetilde{\nu}_{i,j}\,\mu_{i,j}(x,y). $$
By substituting (15) and the above equation into (12), (12) becomes:
$$ \int_{A}J\,\nabla\Big(\sum_{i^{\prime},j^{\prime}}\widetilde{\Phi}_{{i^{\prime}},{j^{\prime}}}\mu_{i^{\prime},j^{\prime}}\Big)\cdot\nabla\Big(\sum_{i,j}\widetilde{\nu}_{i,j}\mu_{i,j}\Big)\mathrm{d}y\mathrm{d}x=-\int_{A}\rho\sum_{i,j}\widetilde{\nu}_{i,j}\mu_{i,j}\,\mathrm{d}y\mathrm{d}x. $$
By reordering the summation and integral of the above equation, we then have:
$$ \sum_{i,j}\widetilde{\nu}_{i,j}\int_{A}J\,\nabla\Big(\sum_{i^{\prime},j^{\prime}}\widetilde{\Phi}_{{i^{\prime}},{j^{\prime}}}\mu_{i^{\prime},j^{\prime}}\Big)\cdot\nabla\mu_{i,j}\,\mathrm{d}y\mathrm{d}x = \sum_{i,j}\widetilde{\nu}_{i,j}\Big(-\int_{A}\rho\,\mu_{i,j}\,\mathrm{d}y\mathrm{d}x\Big). $$
Since ν is arbitrary, the \(\widetilde {\nu }_{{i,j}}\)s are arbitrary. Therefore, the above equation leads to:
$$ \int_{A}J\,\nabla\Big(\sum_{i^{\prime},j^{\prime}}\widetilde{\Phi}_{{i^{\prime}},{j^{\prime}}}\mu_{i^{\prime},j^{\prime}}\Big)\cdot\nabla\mu_{i,j}\,\mathrm{d}y\mathrm{d}x = -\int_{A}\rho\,\mu_{i,j}\,\mathrm{d}y\mathrm{d}x. $$
By reordering the summation and integral of the above equation, we obtain, for each grid point in the ROI:
$$ \sum_{i^{\prime},j^{\prime}}\widetilde{\Phi}_{{i^{\prime}},{j^{\prime}}}\int_{A}J\,\nabla\mu_{i^{\prime},j^{\prime}}\cdot\nabla\mu_{i,j}\,\mathrm{d}y\mathrm{d}x = -\int_{A}\rho\,\mu_{i,j}\,\mathrm{d}y\mathrm{d}x. $$
((24))
Define:
$$ \begin{array}{cc}{K}_{i,j}^{i^{\prime },{j}^{\prime }}=& \underset{A}{\int }J\nabla {\mu}_{i^{\prime },{j}^{\prime }}\cdotp \nabla {\mu}_{i,j}\mathrm{d}y\mathrm{d}x\\ {}=& \underset{A}{\int }J\left(\frac{\partial {\mu}_{i^{\prime },{j}^{\prime }}}{\partial x}\frac{\partial {\mu}_{i,j}}{\partial x}+\frac{\partial {\mu}_{i^{\prime },{j}^{\prime }}}{\partial y}\frac{\partial {\mu}_{i,j}}{\partial y}\right)\mathrm{d}y\mathrm{d}x\end{array} $$
$$g_{i,j} = -\int_{A}\rho \mu_{i,j}\mathrm{d}y\mathrm{d}x. $$
Then (24) can be written as (18).
From (19):
$$ \frac{\partial \mu_{i,j}(x,y)}{\partial x} =\left\{ \begin{array}{lll} -1/h & \text{if}\, \, (x,y)\, \, \text{is in}_{i,j}{\triangle}^{i+1,j}_{i+1,j-1}\\ 0 & \text{if}\, \,(x,y)\, \, \text{is in}_{i,j}{\triangle}^{i+1,j-1}_{i,j-1} \\ 1/h & \text{if}\, \, (x,y)\, \, \text{is in}_{i,j}{\triangle}^{i,j-1}_{i-1,j}\\ 1/h & \text{if}\, \, (x,y)\, \, \text{is in}_{i,j}{\triangle}^{i-1,j}_{i-1,j+1} \\ 0 & \text{if} \, \,(x,y)\,\, \text{is in}_{i,j}{\triangle}^{i-1,j+1}_{i,j+1} \\ -1/h & \text{if}\, \, (x,y) \text{is in}_{i,j}{\triangle}^{i,j+1}_{i+1,j} \\ 0 & \text{otherwise} \end{array} \right. $$
$$ \frac{\partial \mu_{i,j}(x,y)}{\partial y} =\left\{ \begin{array}{lll} 0 & \text{if}\, \,(x,y)\,\, \text{is in}_{i,j}{\triangle}^{i+1,j}_{i+1,j-1} \\ 1/h & \text{if}\, \, (x,y)\, \, \text{is in}_{i,j}{\triangle}^{i+1,j-1}_{i,j-1} \\ 1/h & \text{if}\, \, (x,y)\, \, \text{is in}_{i,j}{\triangle}^{i,j-1}_{i-1,j} \\ 0 & \text{if}\, \, (x,y)\, \, \text{is in}_{i,j}{\triangle}^{i-1,j}_{i-1,j+1}\\ -1/h & \text{if}\, \,(x,y)\, \, \text{is in}_{i,j}{\triangle}^{i-1,j+1}_{i,j+1} \\ -1/h & \text{if}\, \, (x,y)\, \, \text{is in}_{i,j}{\triangle}^{i,j+1}_{i+1,j} \\ 0 & \text{otherwise} \end{array} \right. $$
Note that if the grid point at \((i^{\prime},j^{\prime})\) is neither the grid point at (i,j) nor one of the vertices in \(H_{i,j}\), the element centered at \((i^{\prime},j^{\prime})\) and the element centered at (i,j) do not overlap; therefore, it is not difficult to verify that:
$$\frac{\partial \mu_{i^{\prime},j^{\prime}}}{\partial x}\frac{\partial \mu_{i,j}}{\partial x} + \frac{\partial \mu_{i^{\prime},j^{\prime}}}{\partial y}\frac{\partial \mu_{i,j}}{\partial y} = 0\; \text{and hence}\; K^{i^{\prime},j^{\prime}}_{i,j} = 0. $$
In addition, if (x,y) is in \({_{i_{1},j_{1}}}\triangle{^{i_{2},j_{2}}_{i_{3},j_{3}}}\in T_{i_{1},j_{1}}\), then (x,y) is located within the elements centered at the three vertices of the triangle, and \(J(x,y)=\sum_{k=1}^{3}\widetilde{J}_{i_{k},j_{k}}\,\mu_{i_{k},j_{k}}(x,y)\) by (16) and (19). For example, for (x,y) in \(_{i,j}\triangle ^{i+1,j}_{i+1,j-1}\):
$$ \begin{aligned} J(x,y) ={}& \widetilde{J}_{i,j}\,\mu_{i,j}(x,y)+\widetilde{J}_{i+1,j}\,\mu_{i+1,j}(x,y)\\ &+\widetilde{J}_{i+1,j-1}\,\mu_{i+1,j-1}(x,y). \end{aligned} $$
Thus:
$$ \begin{aligned} \int_{{}_{i,j}\triangle^{i+1,j}_{i+1,j-1}}J\,\mathrm{d}y\mathrm{d}x ={}& \widetilde{J}_{i,j}\int_{{}_{i,j}\triangle^{i+1,j}_{i+1,j-1}}\mu_{i,j}\,\mathrm{d}y\mathrm{d}x\\ &+\widetilde{J}_{i+1,j}\int_{{}_{i,j}\triangle^{i+1,j}_{i+1,j-1}}\mu_{i+1,j}\,\mathrm{d}y\mathrm{d}x\\ &+\widetilde{J}_{i+1,j-1}\int_{{}_{i,j}\triangle^{i+1,j}_{i+1,j-1}}\mu_{i+1,j-1}\,\mathrm{d}y\mathrm{d}x. \end{aligned} $$
Note that, referring to Fig. 9, \(\int_{{}_{i,j}\triangle ^{i+1,j}_{i+1,j-1}}\mu _{i,j}\,\mathrm {d}y\mathrm {d}x\) is the volume of the triangular pyramid formed by the vertex \((x_{i},y_{j},\mu_{i,j}(x_{i},y_{j}))\) and \(_{i,j}\triangle ^{i+1,j}_{i+1,j-1}\); here, the volume is \(h^{2}/6\), since \(\mu_{i,j}(x_{i},y_{j})=1\) and the area of \(_{i,j}\triangle ^{i+1,j}_{i+1,j-1}\) is \(h^{2}/2\). The same argument can be used to compute the remaining two integrals in the above equation. Thus, referring to (20) and (6):
$$ \begin{array}{c}\underset{{}_{i,j}{\triangle}_{i+1,j-1}^{i+1,j}}{\int }J\mathrm{d}y\mathrm{d}x=\\ {}{\delta}_{i+1,j}{\delta}_{i+1,j-1}{h}^2/6\left({\overset{\sim }{J}}_{i,j}+{\overset{\sim }{J}}_{i+1,j}+{\overset{\sim }{J}}_{i+1,j-1}\right).\end{array} $$
Here, δs are added to check whether the vertices of \(_{i,j}\triangle ^{i+1,j}_{i+1,j-1}\) are in the ROI.
Similarly, this integral computation applies to the other \(\vartriangle \)s in $T_{i,j}$, and we have:
$$ \underset{i,j{\triangle}_{i_2,{j}_2}^{i_1,{j}_1}}{\int }J\mathrm{d}y\mathrm{d}x={\delta}_{i_1,{j}_1}{\delta}_{i_2,{j}_2}{h}^2/6\left({\overset{\sim }{J}}_{i,j}+{\overset{\sim }{J}}_{i_1,{j}_1}+{\overset{\sim }{J}}_{i_2,{j}_2}\right), $$
for \(_{i,j}\triangle ^{i_1,j_1}_{i_2,j_2}\in T_{i,j}\). By denoting:
$$\mathcal{B}^{0}\left[{f}\right]_{{i,j}\triangle^{i_{1},j_{1}}_{i_{2},j_{2}}} = \delta_{i_{1},j_{1}}\delta_{i_{2},j_{2}}\left(\,\,\widetilde{f}_{{i},{j}} + \widetilde{f}_{{i_{1}},{j_{1}}} + \widetilde{f}_{{i_{2}},{j_{2}}}\right), $$
we then have:
$$ \underset{{}_{i,j}{\triangle}_{i_2,{j}_2}^{i_1,{j}_1}}{\int }J\mathrm{d}y\mathrm{d}x=\frac{h^2}{6}{\mathcal{B}}^0{\left[\kern0.3em J\right]}_{{}_{i,j}{\triangle}_{i_2,{j}_2}^{i_1,{j}_1}}. $$
Thus, by (25) and (26), for $(i^{\prime},j^{\prime})=(i,j)$:
$$ \begin{array}{cc}{K}_{i,j}^{i,j}=& 1/{h}^2\left(\underset{{}_{i,j}{\triangle}_{i+1,j-1}^{i+1,j}}{\int }J\mathrm{d}y\mathrm{d}x+\underset{{}_{i,j}{\triangle}_{i,j-1}^{i+1,j-1}}{\int }J\mathrm{d}y\mathrm{d}x\kern0.3em +\right.\\ {}2\underset{{}_{i,j}{\triangle}_{i-1,j}^{i,j-1}}{\int }J\mathrm{d}y\mathrm{d}x+\underset{{}_{i,j}{\triangle}_{i-1,j+1}^{i-1,j}}{\int }J\mathrm{d}y\mathrm{d}x\kern0.3em +\\ {}\left.\underset{{}_{i,j}{\triangle}_{i,j+1}^{i-1,j+1}}{\int }J\mathrm{d}y\mathrm{d}x+2\underset{{}_{i,j}{\triangle}_{i+1,j}^{i,j+1}}{\int }J\mathrm{d}y\mathrm{d}x\right)\\ {}=& 1/6\left({\mathcal{B}}^0{\left[\kern0.3em J\right]}_{i,j{\triangle}_{i+1,j-1}^{i+1,j}}+\kern0.3em {\mathcal{B}}^0{\left[\kern0.3em J\right]}_{i,j{\triangle}_{i,j-1}^{i+1,j-1}}+\kern0.3em 2{\mathcal{B}}^0{\left[\kern0.3em J\right]}_{i,j{\triangle}_{i-1,j}^{i,j-1}}\right.\\ {}+& \left.{\mathcal{B}}^0{\left[\kern0.3em J\right]}_{i,j{\triangle}_{i-1,j+1}^{i-1,j}}+\kern0.3em {\mathcal{B}}^0{\left[\kern0.3em J\right]}_{i,j{\triangle}_{i,j+1}^{i-1,j+1}}+\kern0.3em 2{\mathcal{B}}^0{\left[\kern0.3em J\right]}_{i,j{\triangle}_{i+1,j}^{i,j+1}}\right).\end{array} $$
Now we compute \(K^{i^{\prime },j^{\prime }}_{i,j}\)s for $(i^{\prime},j^{\prime})\in H_{i,j}$. We first consider $(i^{\prime},j^{\prime})=(i+1,j)$. Referring to (25), the only \(\vartriangle \)s on which both $\partial\mu_{i,j}/\partial x\neq 0$ and $\partial\mu_{i+1,j}/\partial x\neq 0$ are \(_{i,j}\triangle ^{i,j+1}_{i+1,j}\) and \(_{i,j}\triangle ^{i+1,j}_{i+1,j-1}\), and there is no \(\vartriangle \) on which both $\partial\mu_{i,j}/\partial y\neq 0$ and $\partial\mu_{i+1,j}/\partial y\neq 0$. Therefore:
$$ \begin{array}{cc}{K}_{i,j}^{i+1,j}=& -1/{h}^2\left(\underset{{}_{i,j}{\triangle}_{i+1,j}^{i,j+1}}{\int }J\mathrm{d}y\mathrm{d}x+\underset{{}_{i,j}{\triangle}_{i+1,j-1}^{i+1,j}}{\int }J\mathrm{d}y\mathrm{d}x\right)\\ {}=& -1/6\left({\mathcal{B}}^0{\left[\kern0.3em J\right]}_{i,j{\triangle}_{i+1,j}^{i,j+1}}+\kern0.3em {\mathcal{B}}^0{\left[\kern0.3em J\right]}_{i,j{\triangle}_{i+1,j-1}^{i+1,j}}\right).\end{array} $$
The same argument can be used to compute \(K^{i^{\prime },j^{\prime }}_{i,j}\)s for the rest of $(i^{\prime},j^{\prime})$ in $H_{i,j}$, and we have:
$$ \begin{array}{c}{K}_{i,j}^{i+1,j-1}=0,\\ {}{K}_{i,j}^{i,j-1}=-1/6\left({\mathcal{B}}^0{\left[\kern0.3em J\right]}_{i,j{\triangle}_{i,j-1}^{i+1,j-1}}+\kern0.3em {\mathcal{B}}^0{\left[\kern0.3em J\right]}_{i,j{\triangle}_{i-1,j}^{i,j-1}}\right),\\ {}{K}_{i,j}^{i-1,j}=-1/6\left({\mathcal{B}}^0{\left[\kern0.3em J\right]}_{i,j{\triangle}_{i-1,j}^{i,j-1}}+\kern0.3em {\mathcal{B}}^0{\left[\kern0.3em J\right]}_{i,j{\triangle}_{i-1,j+1}^{i-1,j}}\right),\\ {}{K}_{i,j}^{i-1,j+1}=0,\\ {}{K}_{i,j}^{i,j+1}=-1/6\left({\mathcal{B}}^0{\left[\kern0.3em J\right]}_{i,j{\triangle}_{i,j+1}^{i-1,j+1}}+\kern0.3em {\mathcal{B}}^0{\left[\kern0.3em J\right]}_{i,j{\triangle}_{i+1,j}^{i,j+1}}\right).\end{array} $$
Note that, as mentioned earlier, \(K^{i^{\prime },j^{\prime }}_{i,j} = 0\) if $(i^{\prime},j^{\prime})\notin H_{i,j}$ and $(i^{\prime},j^{\prime})\neq(i,j)$. Thus, together with \(K^{i+1,j-1}_{i,j}= 0\) and \(K^{i-1,j+1}_{i,j}= 0\), we have all the entries \(K^{i^{\prime },j^{\prime }}_{i,j}\) of the stiffness matrix.
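For readers who want to turn the expressions above into working code, the following is a minimal sketch of how the row of the stiffness matrix associated with node $(i,j)$ could be assembled from the $\mathcal{B}^0$ terms in (25)-(26) and the formulas above. It is an illustrative reimplementation, not the authors' code; the function and variable names are my own, it assumes that the nodal values $\widetilde{J}$ and the ROI indicators $\delta$ are stored in 2-D NumPy-style arrays indexed by $(i,j)$, and it ignores boundary handling (interior nodes only).

```python
def B0(Jt, delta, i, j, i1, j1, i2, j2):
    """B^0[J] over the triangle with vertices (i,j), (i1,j1), (i2,j2)."""
    return delta[i1, j1] * delta[i2, j2] * (Jt[i, j] + Jt[i1, j1] + Jt[i2, j2])

def stiffness_row(Jt, delta, i, j):
    """Nonzero entries K_{i,j}^{i',j'} of the row associated with node (i,j)."""
    # the six triangles of T_{i,j}, in the order used in the expressions above,
    # each given by the indices (i1, j1, i2, j2) of its two non-central vertices
    tris = [(i+1, j,   i+1, j-1),
            (i+1, j-1, i,   j-1),
            (i,   j-1, i-1, j),
            (i-1, j,   i-1, j+1),
            (i-1, j+1, i,   j+1),
            (i,   j+1, i+1, j)]
    b = [B0(Jt, delta, i, j, *t) for t in tris]
    row = {}
    row[(i, j)]   = (b[0] + b[1] + 2*b[2] + b[3] + b[4] + 2*b[5]) / 6.0
    row[(i+1, j)] = -(b[5] + b[0]) / 6.0   # K_{i,j}^{i+1,j}
    row[(i, j-1)] = -(b[1] + b[2]) / 6.0   # K_{i,j}^{i,j-1}
    row[(i-1, j)] = -(b[2] + b[3]) / 6.0   # K_{i,j}^{i-1,j}
    row[(i, j+1)] = -(b[4] + b[5]) / 6.0   # K_{i,j}^{i,j+1}
    # K_{i,j}^{i+1,j-1} and K_{i,j}^{i-1,j+1} are zero and therefore omitted
    return row
```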
To compute $g_{i,j}$, we use (17) to expand $g_{i,j}$ as follows:
If $(i^{\prime},j^{\prime})\notin H_{i,j}$ and $(i^{\prime},j^{\prime})\neq(i,j)$, the element centered at $(x_{i^{\prime}},y_{j^{\prime}})$ and the element centered at $(x_{i},y_{j})$ do not overlap; therefore, it is not difficult to verify from (19) that \(\mu _{i^{\prime },j^{\prime }} \mu _{i,j} = 0\). Hence:
In addition, it is obvious that if \( \vartriangle \notin {T}_{i,j} \), \(\int _{\vartriangle }\mu _{i^{\prime },j^{\prime }} \mu _{i,j}\mathrm {d}y\mathrm {d}x=0\). We only need to compute the integral over the region \( \vartriangle \in {T}_{i,j} \).
We first consider the integral over \(_{i,j}\triangle ^{i+1,j}_{i+1,j-1}\):
The computation of each integral of the above equation is carried out as follows:
$${\small{\begin{aligned} & \int_{_{i,j}\triangle^{i+1,j}_{i+1,j-1}}\mu_{i,j} \mu_{i,j}\mathrm{d}y\mathrm{d}x \\ & = \delta_{i+1,j}\delta_{i+1,j-1}/h^{2}\int_{x_{i}}^{x_{i}+h}\int_{x_{i}+y_{j}-x}^{y_{j}}\left(-(x-x_{i})+h\right)^{2}\mathrm{d}y\mathrm{d}x \\ & = \delta_{i+1,j}\delta_{i+1,j-1}/h^{2}\int_{x_{i}}^{x_{i}+h}\left(-(x-x_{i})+h\right)^{2}y|_{x_{i}+y_{j}-x}^{y_{j}}\mathrm{d}x \\ & = \delta_{i+1,j}\delta_{i+1,j-1}/h^{2}\int_{x_{i}}^{x_{i}+h}\left(-(x-x_{i})+h\right)^{2}(x-x_{i})\mathrm{d}x \\ & = \delta_{i+1,j}\delta_{i+1,j-1}/h^{2}\int_{x_{i}}^{x_{i}+h}h\left(-(x-x_{i})+h\right)^{2} \\ & \quad-\left(-(x-x_{i})+h\right)^{3}\mathrm{d}x\\ & = \delta_{i+1,j}\delta_{i+1,j-1}/h^{2}\left(-h/3\left(-(x-x_{i})+h\right)^{3}\right.\\ & \left.\quad+\,1/4\left(-(x-x_{i})+h\right)^{4}\right)|_{x_{i}}^{x_{i}+h}\\ & = \delta_{i+1,j}\delta_{i+1,j-1}/h^{2} \left(h^{4}/3-h^{4}/4\right) \\ & = \delta_{i+1,j}\delta_{i+1,j-1}h^{2}/12, \end{aligned}}} $$
$${\small{\begin{aligned} & \int_{_{i,j}\triangle^{i+1,j}_{i+1,j-1}}\mu_{i+1,j} \mu_{i,j}\mathrm{d}y\mathrm{d}x \\ & = \delta_{i+1,j}\delta_{i+1,j-1}/h^{2}\int_{x_{i}}^{x_{i}+h}\int_{x_{i}+y_{j}-x}^{y_{j}}\left((x-x_{i})+(y-y_{j})\right)\\ &\qquad\qquad\qquad\qquad\qquad\left(-(x-x_{i})+h\right)\mathrm{d}y\mathrm{d}x\\ & = \delta_{i+1,j}\delta_{i+1,j-1}/2h^{2}\int_{x_{i}}^{x_{i}+h}\left(-(x-x_{i})+h\right)\\ &\qquad\qquad\qquad\qquad\qquad\left((x-x_{i})+(y-y_{j})\right)^{2}|_{x_{i}+y_{j}-x}^{y_{j}}\mathrm{d}x\\ & = \delta_{i+1,j}\delta_{i+1,j-1}/2h^{2}\int_{x_{i}}^{x_{i}+h}\left(-(x-x_{i})+h\right)(x-x_{i})^{2}\mathrm{d}x \\ & = \delta_{i+1,j}\delta_{i+1,j-1}/2h^{2}\int_{x_{i}}^{x_{i}+h}h(x-x_{i})^{2}-(x-x_{i})^{3}\mathrm{d}x \\ & = \delta_{i+1,j}\delta_{i+1,j-1}/2h^{2}\left(h(x-x_{i})^{3}/3-(x-x_{i})^{4}/4\right)|_{x_{i}}^{x_{i}+h} \\ & = \delta_{i+1,j}\delta_{i+1,j-1}/2h^{2}\left(h^{4}/3-h^{4}/4\right) \\ & = \delta_{i+1,j}\delta_{i+1,j-1}h^{2}/24, \end{aligned}}} $$
$${\small{\begin{aligned} & \int_{_{i,j}\triangle^{i+1,j}_{i+1,j-1}}\mu_{i+1,j-1} \mu_{i,j}\mathrm{d}y\mathrm{d}x \\ & = \delta_{i+1,j}\delta_{i+1,j-1}/h^{2}\int_{x_{i}}^{x_{i}+h}\int_{x_{i}+y_{j}-x}^{y_{j}}\left(-(y-y_{j})\right)\\ &\qquad\qquad\qquad\qquad\qquad\left(-(x-x_{i})+h\right)\mathrm{d}y\mathrm{d}x\\ & = \delta_{i+1,j}\delta_{i+1,j-1}/2h^{2}\int_{x_{i}}^{x_{i}+h}-(y-y_{j})^{2}\\ &\qquad\qquad\qquad\qquad\qquad\left(-(x-x_{i})+h\right)|_{x_{i}+y_{j}-x}^{y_{j}}\mathrm{d}x\\ & = \delta_{i+1,j}\delta_{i+1,j-1}/2h^{2}\int_{x_{i}}^{x_{i}+h}(x-x_{i})^{2}\left(-(x-x_{i})+h\right)\mathrm{d}x \\ & = \delta_{i+1,j}\delta_{i+1,j-1}/2h^{2}\left(h(x-x_{i})^{3}/3-(x-x_{i})^{4}/4\right)|_{x_{i}}^{x_{i}+h} \\ & = \delta_{i+1,j}\delta_{i+1,j-1}/2h^{2}\left(h^{4}/3-h^{4}/4\right) \\ & = \delta_{i+1,j}\delta_{i+1,j-1}h^{2}/24. \end{aligned}}} $$
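These closed-form values can be checked independently with a small symbolic computation. The sketch below re-derives the same integrals on a copy of \(_{i,j}\triangle ^{i+1,j}_{i+1,j-1}\) shifted so that $(x_i, y_j)$ lies at the origin, using the restrictions of the hat functions that appear in the integrands above; the $\delta$ factors are taken as 1. It is only a verification aid, not part of the original derivation.

```python
import sympy as sp

x, y, h = sp.symbols('x y h', positive=True)

# Triangle _{i,j}tri^{i+1,j}_{i+1,j-1} shifted so that (x_i, y_j) -> (0, 0):
# vertices (0,0), (h,0), (h,-h); region x in [0,h], y in [-x, 0].
mu_ij     = 1 - x/h       # hat function centred at (i, j) on this triangle
mu_ip1j   = (x + y)/h     # hat function centred at (i+1, j)
mu_ip1jm1 = -y/h          # hat function centred at (i+1, j-1)

def tri_int(f):
    """Integral of f over the shifted triangle."""
    return sp.simplify(sp.integrate(sp.integrate(f, (y, -x, 0)), (x, 0, h)))

print(tri_int(mu_ij))              # h**2/6  (the pyramid volume quoted earlier)
print(tri_int(mu_ij * mu_ij))      # h**2/12
print(tri_int(mu_ip1j * mu_ij))    # h**2/24
print(tri_int(mu_ip1jm1 * mu_ij))  # h**2/24
```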
By denoting:
$$\mathcal{B}^{1}\left[{f}\right]_{{i,j}\triangle^{i_{1},j_{1}}_{i_{2},j_{2}}} = \delta_{i_{1},j_{1}}\delta_{i_{2},j_{2}}\left(2\widetilde{f}_{{i},{j}} + \widetilde{f}_{{i_{1}},{j_{1}}} + \widetilde{f}_{{i_{2}},{j_{2}}}\right), $$
The same computation can be carried out for the rest of the △s in $T_{i,j}$, and we have:
$$\begin{aligned} g_{i,j} &= -h^{2}/24\left(\mathcal{B}^{1}[\!{\rho}]_{{i,j}\triangle^{i+1,j}_{i+1,j-1}} + \mathcal{B}^{1}[\!{\rho}]_{{i,j}\triangle^{i+1,j-1}_{i,j-1}}\right. \\ & \quad+ \mathcal{B}^{1}[\!{\rho}]_{{i,j}\triangle^{i,j-1}_{i-1,j}} + \mathcal{B}^{1}[\!{\rho}]_{{i,j}\triangle^{i-1,j}_{i-1,j+1}} \\ & \quad+ \left.\mathcal{B}^{1}[\!{\rho}]_{{i,j}\triangle^{i-1,j+1}_{i,j+1}} + \mathcal{B}^{1}[\!{\rho}]_{{i,j}\triangle^{i,j+1}_{i+1,j}}\right). \end{aligned} $$
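The load-vector entries follow the same assembly pattern as the stiffness entries, with $\mathcal{B}^1$ in place of $\mathcal{B}^0$. A minimal sketch, under the same assumptions as before (2-D arrays for $\widetilde{\rho}$ and $\delta$, interior nodes only, names of my own choosing):

```python
def B1(rho_t, delta, i, j, i1, j1, i2, j2):
    """B^1[rho] over the triangle with vertices (i,j), (i1,j1), (i2,j2)."""
    return delta[i1, j1] * delta[i2, j2] * (2*rho_t[i, j] + rho_t[i1, j1] + rho_t[i2, j2])

def load_entry(rho_t, delta, i, j, h):
    """g_{i,j} assembled from the six triangles of T_{i,j}."""
    tris = [(i+1, j,   i+1, j-1),
            (i+1, j-1, i,   j-1),
            (i,   j-1, i-1, j),
            (i-1, j,   i-1, j+1),
            (i-1, j+1, i,   j+1),
            (i,   j+1, i+1, j)]
    return -(h**2 / 24.0) * sum(B1(rho_t, delta, i, j, *t) for t in tris)
```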
Appendix 4: values of \(\widetilde {\mathbf {D}}_{{i},{j}}\)
By (10) and (15) together with the boundary condition (11):
$$ \begin{array}{cc}{\overset{\sim }{\mathbf{D}}}_{i,{j}_x}& ={\mathbf{D}}_x\left({x}_i,{y}_j\right)={\delta}_{i+1,j}{\delta}_{i-1,j}J\left({x}_i,{y}_j\right)\frac{\partial \varPhi }{\partial x}\left({x}_i,{y}_j\right)\\ {}={\delta}_{i+1,j}{\delta}_{i-1,j}{\overset{\sim }{J}}_{i,j}{ \lim}_{\varDelta x\to 0}\frac{\varPhi \left({x}_i+\varDelta x,{y}_j\right)-\varPhi \left({x}_i-\varDelta x,{y}_j\right)}{2\varDelta x}.\end{array} $$
Here, $\mathbf{D}_x$ is the $x$ component of $\mathbf{D}$, and $(x_i, y_j)$ is the position of node $(i,j)$.
Note that $(x_i+\Delta x, y_j)$ is located within the elements centered at $(x_i,y_j)$, $(x_{i+1},y_j)$, $(x_i,y_{j+1})$, and $(x_{i+1},y_{j-1})$. Thus:
$$\begin{aligned} \Phi(x_{i}+\Delta x, y_{j}) = & \widetilde{\Phi}_{{i},{j}}\mu_{i,j}(x_{i}+\Delta x, y_{j})\\ & + \widetilde{\Phi}_{{i+1},{j}}\mu_{i+1,j}(x_{i}+\Delta x, y_{j}) \\ & + \widetilde{\Phi}_{{i},{j+1}}\mu_{i,j+1}(x_{i}+\Delta x, y_{j}) \\ & + \widetilde{\Phi}_{{i+1},{j-1}}\mu_{i+1,j-1}(x_{i}+\Delta x, y_{j}). \end{aligned} $$
From (19), μ i,j+1(x i +Δx,y j )=0 and μ i+1,j−1(x i +Δx,y j )=0. Thus:
$${\small{\begin{aligned} {}\Phi(x_{i}+\Delta x, y_{j}) \\ = & \widetilde{\Phi}_{{i},{j}}\mu_{i,j}(x_{i}+\Delta x, y_{j}) + \widetilde{\Phi}_{{i+1},{j}}\mu_{i+1,j}(x_{i}+\Delta x, y_{j}). \end{aligned}}} $$
$$\mu_{i,j}(x_{i}+\Delta x, y_{j}) = -\Delta x/h + 1, $$
$$\mu_{i+1,j}(x_{i}+\Delta x, y_{j}) = \Delta x/h. $$
$$\Phi(x_{i}+\Delta x, y_{j}) = \left(-\Delta x/h + 1\right)\widetilde{\Phi}_{{i},{j}} + \left(\Delta x/h\right)\widetilde{\Phi}_{{i+1},{j}}. $$
Similarly:
$$\Phi(x_{i}-\Delta x, y_{j}) = \left(-\Delta x/h + 1\right)\widetilde{\Phi}_{{i},{j}} + \left(\Delta x/h\right)\widetilde{\Phi}_{{i-1},{j}}. $$
$${}{\lim}_{\Delta x \to 0}\frac{\Phi(x_{i}+\Delta x, y_{j})-\Phi(x_{i}-\Delta x, y_{j})}{2\Delta x} = \frac{\widetilde{\Phi}_{{i+1},{j}} - \widetilde{\Phi}_{{i-1},{j}}}{2h}. $$
Therefore:
$$ {\overset{\sim }{\mathbf{D}}}_{i,{j}_x}=\frac{\delta_{i+1,j}{\delta}_{i-1,j}{\overset{\sim }{J}}_{i,j}}{2h}\left({\overset{\sim }{\varPhi}}_{i+1,j}-{\overset{\sim }{\varPhi}}_{i-1,j}\right). $$
$$ {\overset{\sim }{\mathbf{D}}}_{i,{j}_y}=\frac{\delta_{i,j+1}{\delta}_{i,j-1}{\overset{\sim }{J}}_{i,j}}{2h}\left({\overset{\sim }{\varPhi}}_{i,j+1}-{\overset{\sim }{\varPhi}}_{i,j-1}\right). $$
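The two expressions above reduce the nodal flux components to central differences of the nodal potentials scaled by $\widetilde{J}_{i,j}$ and masked by the ROI indicators. As a small sketch, under the same array-storage assumption as in the earlier snippets:

```python
def flux_at_node(Phi_t, Jt, delta, i, j, h):
    """Nodal flux components (D_x, D_y) from the central-difference formulas above."""
    Dx = delta[i+1, j] * delta[i-1, j] * Jt[i, j] * (Phi_t[i+1, j] - Phi_t[i-1, j]) / (2*h)
    Dy = delta[i, j+1] * delta[i, j-1] * Jt[i, j] * (Phi_t[i, j+1] - Phi_t[i, j-1]) / (2*h)
    return Dx, Dy
```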
This research was supported by Ministry of Science and Technology of Taiwan, under Grant MOST 104-2221-E-194-008. The authors gratefully acknowledge this support.
Department of Computer Science and Information Engineering, National Chung Cheng University, No. 168, Sec. 1, University Road, Chia-Yi, 621, Taiwan
Jing-Ya Li
& Ren-Song Ko
Correspondence to Ren-Song Ko.
Li, J., Ko, R. Geographical model-derived grid-based directional routing for massively dense WSNs. J Wireless Com Network 2016, 17 (2016). https://doi.org/10.1186/s13638-015-0492-1
Geographical routing algorithm
Geodesic problem
Fast marching method | CommonCrawl |
Journal of Wood Science
Official Journal of the Japan Wood Research Society
Nonlinear finite-element analysis of embedment behavior of metal washer in bolted timber joints
Masaki Teranishi ORCID: orcid.org/0000-0003-3011-02701,
Doppo Matsubara2,
Yoshiaki Wakashima3,
Hidemaru Shimizu4 &
Akihisa Kitamori5
Journal of Wood Science volume 67, Article number: 41 (2021) Cite this article
The pretensioning force in bolted joints enhances the lateral strength of the connections, and causes the embedment of metal washers into wood. Despite the significance of embedment behavior in the design of bolted joints, its mechanism has yet to be fully understood. In this study, the mechanism of the embedment of a metal washer into wood along the radial direction was examined through three-dimensional nonlinear finite-element analysis (FEA). The FEA results were validated by comparing them with experimental results for nine metal washers with different geometries. Moreover, the sensitivity of embedment stiffness and yield load to wooden material constants was also investigated. The numerical results showed good qualitative and quantitative agreement with the experimental results. In addition, the embedment stiffness and yield load were sensitive to the yield stress and Young's modulus of wood in the radial and tangential directions. The determination of these mechanical properties of wood through material testing is important for reproducing the behavior of the embedment of a metal washer into wood and accurately estimating the yield load and initial stiffness using FEA. This will play a significant role in designing bolted joints.
Timber connections with dowel-type fasteners are frequently used as load-carrying parts between members in timber structures. The appropriate design of dowel-type fasteners plays a significant role in increasing the ductility of timber connections [1]. The load–slip characteristics of laterally loaded dowel-type joints have been experimentally and numerically investigated [2,3,4,5,6,7].
The axial force on laterally loaded dowel-type fasteners contributes to strengthening the lateral strength of connections; this is referred to as the rope effect. This effect becomes particularly significant in slender dowel-type fasteners. The rope effect improves the load-carrying capacity of bolted joints under monotonic loading [8] and cyclic loading [9, 10] and the load-carrying capacity of timber-to-timber connections [5].
The pretensioning force on bolted joints causes the embedment of steel plates into wood. Thus, the embedment mechanism plays a significant role in designing bolted joints. The theory of beams on elastic foundation was applied to the embedment of metal washers into wood; the proposed method could estimate embedment stiffness and yield resistance [11]. Three-dimensional finite-element analysis (FEA) of embedment has been performed, and good qualitative agreement was obtained between the embedment load–displacement curves obtained via FEA and an embedment test [12]. However, the aforementioned studies did not fully address the mechanism of the initiation and development of the plastic deformation of wood. The elucidation of this mechanism is necessary to evaluate the strength of pretensioned bolted joints accurately. FEA is a useful tool for the investigation of this mechanism; its results depend greatly on the elastic modulus and yield stress of wood. The mechanical properties of wood vary largely among specimens. However, the sensitivity of the embedment mechanism to the mechanical properties of wood, which must be understood to verify the reliability of FEA for designing pretensioned bolted joints, has rarely been studied.
In this study, the mechanism of the embedment of metal washers into wood was investigated through three-dimensional nonlinear FEA, where a bolt, metal washer, and wood were finely discretized. The FEA results were validated by comparing them with experimental results. Moreover, FEA was carried out with intentionally changed material constants to investigate the sensitivity of embedment stiffness and the yield load to wooden material constants.
The embedment behavior of the metal washers into wood was investigated by Matsubara et al. [11] through embedment tests. This section provides an overview of the experiment, which is described in detail in their paper. The schematic of the embedment test setup is shown in Fig. 1. A square metal washer and nut were set on the wood. The nut was pressed vertically; then, the metal washer was embedded into the wood. A bolt hole with a diameter of 13.0 mm was created at the center of the wood. The lengths of the wood in the longitudinal, radial, and tangential directions were 130, 29, and 99 mm, respectively. Square washers with three different side lengths (40, 60, and 80 mm) and three different thicknesses (2.3, 4.5, and 6.0 mm) were employed. The embedment load was measured using a load cell, and the embedment displacement was measured by the vertical displacement of the crosshead. The radial direction of wood was parallel to the loading direction.
Schematic of apparatus for embedment test (L: longitudinal, T: tangential, R: radial)
Numerical conditions
A finite-element (FE) model was created using Abaqus/CAE. Figure 2 shows the three-dimensional FE model of the embedment test system with the square metal washer of thickness 2.3 mm and a side length of 40 mm. A quarter FE model was used by considering mechanical symmetry, where symmetric planes were created on the bolt, metal washer, and wood. The bolt, metal washer, and wood were discretized using the 20-node brick element (C3D20). The number of nodes and elements in all FE models is listed in Table 1. It should be noted that the influence of element size was investigated by comparing numerical results with various element sizes, and a sufficiently fine mesh was employed in the FE model. The FE models were identified by the thickness and side length of the metal washer; e.g., the model of the washer with a side length of 40 mm and a thickness of 2.3 mm was named S40T2.3.
Finite-element model of metal washer with a thickness of 2.3 mm and a side length of 40 mm (L: longitudinal, T: tangential, R: radial)
Table 1 Number of nodes and elements in each finite-element model
The bottom of the wood was fixed in all directions. The symmetric plane was fixed in the direction perpendicular to itself. The surface-to-surface contact condition was imposed on the bolt-to-washer and washer-to-wood interfaces, where the horizontal friction coefficients were 0.4 and 0.3, respectively [8, 13]. The augmented Lagrangian method was employed on the contact surface. In this method, penalty stiffness is used during the augmentation iteration to improve the accuracy of approximation. The penalty stiffness was set as 205000 MPa.
The metal washer and bolt were assumed as isotropic materials with isotropic linear elasticity, the von Mises yield criterion, and perfect elastoplasticity [14]. The Young's modulus, Poisson's ratio, and yield stress for the metal washer and bolt were denoted as E, v, and σy, respectively. Unlike the metal washer and bolt, the wood was assumed as an orthotropic material with orthogonal linear elasticity, Hill's anisotropic yield criterion, and perfect elastoplasticity [15, 16]. The Hill's anisotropic yield criterion has been frequently used in wooden material [6, 12, 17]. Subscripts L, R, and T were used to denote the properties of the wood in the longitudinal, radial, and tangential directions, respectively. For example, the Young's moduli of the wood in the longitudinal, radial, and tangential directions were denoted as EL, ER, and ET, respectively. Hill's anisotropic yield criterion used for the wood was as follows [16]:
$$f = \sqrt {F\left( {\sigma_{{{\text{RR}}}} - \sigma_{{{\text{TT}}}} } \right)^{2} + G\left( {\sigma_{{{\text{TT}}}} - \sigma_{{{\text{LL}}}} } \right)^{2} + H\left( {\sigma_{{{\text{LL}}}} - \sigma_{{{\text{RR}}}} } \right)^{2} + 2L\sigma_{{{\text{RT}}}}^{2} + 2M\sigma_{{{\text{TL}}}}^{2} + 2N\sigma_{{{\text{LR}}}}^{2} } - 1 \le 0$$
$$\begin{gathered} F = \frac{1}{2}\left( {\frac{1}{{\left( {\sigma_{{{\text{RR}}}}^{y} } \right)^{2} }} + \frac{1}{{\left( {\sigma_{{{\text{TT}}}}^{y} } \right)^{2} }} - \frac{1}{{\left( {\sigma_{{{\text{LL}}}}^{y} } \right)^{2} }}} \right),\;G = \frac{1}{2}\left( {\frac{1}{{\left( {\sigma_{{{\text{TT}}}}^{y} } \right)^{2} }} + \frac{1}{{\left( {\sigma_{{{\text{LL}}}}^{y} } \right)^{2} }} - \frac{1}{{\left( {\sigma_{{{\text{RR}}}}^{y} } \right)^{2} }}} \right) \hfill \\ H = \frac{1}{2}\left( {\frac{1}{{\left( {\sigma_{{{\text{LL}}}}^{y} } \right)^{2} }} + \frac{1}{{\left( {\sigma_{{{\text{RR}}}}^{y} } \right)^{2} }} - \frac{1}{{\left( {\sigma_{{{\text{TT}}}}^{y} } \right)^{2} }}} \right),\;L = \frac{3}{{2\left( {\sigma_{{{\text{RT}}}}^{y} } \right)^{2} }},\;M = \frac{3}{{2\left( {\sigma_{{{\text{TL}}}}^{y} } \right)^{2} }},\;N = \frac{3}{{2\left( {\sigma_{{{\text{LR}}}}^{y} } \right)^{2} }} \hfill \\ \end{gathered}$$
where \(\sigma_{ij}\) is the stress tensor and \(\sigma_{ij}^{y}\) is the yield stress.
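The coefficients in Eq. (2) are simple functions of the six yield stresses, so they are easy to evaluate and to check (for example, to see when G or H becomes negative, as discussed later for the sensitivity analysis). The sketch below is an illustrative helper only, not part of the authors' Abaqus workflow; any numerical values passed to it would have to come from Table 3, which is not reproduced in this text.

```python
import math

def hill_coefficients(sy_L, sy_R, sy_T, sy_RT, sy_TL, sy_LR):
    """Coefficients F, G, H, L, M, N of Hill's criterion, Eq. (2)."""
    F = 0.5 * (1/sy_R**2 + 1/sy_T**2 - 1/sy_L**2)
    G = 0.5 * (1/sy_T**2 + 1/sy_L**2 - 1/sy_R**2)
    H = 0.5 * (1/sy_L**2 + 1/sy_R**2 - 1/sy_T**2)
    L = 1.5 / sy_RT**2
    M = 1.5 / sy_TL**2
    N = 1.5 / sy_LR**2
    return F, G, H, L, M, N

def hill_value(sig, coeffs):
    """Square-root term of Eq. (1); the material yields when this exceeds 1."""
    F, G, H, L, M, N = coeffs
    return math.sqrt(F*(sig['RR'] - sig['TT'])**2
                     + G*(sig['TT'] - sig['LL'])**2
                     + H*(sig['LL'] - sig['RR'])**2
                     + 2*L*sig['RT']**2 + 2*M*sig['TL']**2 + 2*N*sig['LR']**2)
```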
The material constants of the metal washer and bolt were assumed as the mechanical properties of SS400, which is a steel grade defined in the Japanese Industrial Standard. The Young's modulus, Poisson's ratio, and yield stress of the metal washer and bolt were taken from the literature [18]. The wood had three Young's moduli, EL, ER, and ET, three shear moduli, GLT, GLR, and GRT, six Poisson's ratios, vLT, vTL, vLR, vRL, vRT, and vTR, and six yield stresses, σyL, σyR, σyT, σyLT, σyLR, and σyRT. The Young's modulus (ER) and yield stress (σyR) of the wood in the radial direction were obtained from the compressive test performed by Matsubara et al. [11]. The Young's moduli in the other two directions and the shear moduli were calculated from ER using the following relationships for coniferous forests [19]:
$$E_\text{L} : E_\text{R} : E_\text{T} = 22 : 2 : 1$$
$$G_\text{LR} :G_\text{LT} :G_\text{RT} \, = \,20: \, 17: \, 1$$
$$E_\text{L} : G_\text{LR} = 16.7 : 1$$
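To make the use of these ratios explicit, the sketch below derives the remaining elastic constants from a measured radial modulus E_R, reading the last relation as E_L : G_LR = 16.7 : 1. It is a minimal illustration of the relationships above, not the authors' script, and the example input value is purely hypothetical (not taken from the paper).

```python
def derive_elastic_constants(E_R):
    """Derive the other moduli (same units as E_R) from the ratios above."""
    E_L  = 11.0 * E_R           # E_L : E_R = 22 : 2
    E_T  = 0.5 * E_R            # E_R : E_T = 2 : 1
    G_LR = E_L / 16.7           # E_L : G_LR = 16.7 : 1
    G_LT = G_LR * 17.0 / 20.0   # G_LR : G_LT = 20 : 17
    G_RT = G_LR / 20.0          # G_LR : G_RT = 20 : 1
    return {"E_L": E_L, "E_T": E_T, "G_LR": G_LR, "G_LT": G_LT, "G_RT": G_RT}

# Hypothetical example value (not from the paper): E_R = 800 MPa
print(derive_elastic_constants(800.0))
```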
The yield stresses in the other two directions were determined by considering the tensile and shear test results of Japanese cedar [20], and it was assumed that σyT is equal to σyR and that σyLT, σyLR, and σyRT are equal to one another, as in previous studies [6, 12, 21]. The Poisson's ratio in each direction was obtained from the experimental results for Japanese cedar in the literature [22]. The material constants of the bolt, metal washer, and wood are summarized in Tables 2 and 3, respectively. It should be noted that these assumptions on the elastic moduli and yield stresses were adopted for convenience; they may be inappropriate for analyses that are sensitive to the material constants in each direction. Thus, the sensitivity of the embedment behavior in this study to the material constants was investigated, as described later.
Table 2 Material constants of bolt and metal washer
Table 3 Material constants of wood
The three-dimensional FEA was carried out considering geometrical and material nonlinearity using Abaqus/Standard. The analysis was carried out sequentially on a single workstation with a dual 4-core CPU with a clock speed of 3.8 GHz and 96 GB RAM. The calculation time was approximately 24 h.
As reported by Matsubara et al. [11], two types of deformation modes were observed in the experiment: in the first, the metal washer around the bolt hole partially bends and is embedded into the wood; in the second, the entire metal washer is embedded into the wood. The former was observed for washers with small thickness and long side length, i.e., low bending stiffness. The latter was observed for washers with large thickness and short side length, i.e., high bending stiffness.
Comparison between experimental and numerical results
This section compares the embedment test results and FEA results. It should be noted that a few cases of FEA diverged during the equilibrium iteration. However, the numerical results could be compared with the experimental results without loss of generality, because FEA was performed beyond the yield load in all cases.
Figure 3 shows the embedment load–displacement curves obtained via the experiment and FEA for each side length of the metal washer. Table 4 lists the initial stiffness K and yield load Py. The initial stiffness was obtained from the first straight line using the least-squares method, and the yield load was defined as the intersection of the first and second straight lines (a sketch of this fitting procedure is given below). In the experiment, an initial slip displacement occurred owing to the clearance between the upper surface of the nut and the steel plate jig. This clearance cannot be dealt with by FEA; because of this, the experimental embedment load–displacement curve was translated along the x-axis to remove the clearance. The trends of the experimental load–displacement curves were broadly reproduced by the FEA. However, on average, the initial stiffness and yield load in the numerical results were 18% lower and 14% higher than those in the experimental results, respectively.
Embedment load–displacement curves obtained via experiments and FEA. Side lengths of metal washers are (a) 40, (b) 60, and (c) 80 mm
Table 4 Initial stiffness and yield load obtained via experiment and FEA
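The bilinear evaluation of K and Py described above can be reproduced with a simple two-line fit to a load–displacement record. The following is one possible sketch, not the authors' evaluation script; it assumes the index ranges of the first (elastic) and second (post-yield) linear portions have already been identified by the user.

```python
import numpy as np

def bilinear_fit(disp, load, rng1, rng2):
    """Return initial stiffness K and yield load P_y from a load-displacement curve.

    rng1, rng2: index slices marking the first and second straight portions.
    K is the slope of the first line; P_y is the load at the intersection
    of the two fitted lines.
    """
    k1, b1 = np.polyfit(disp[rng1], load[rng1], 1)   # first straight line
    k2, b2 = np.polyfit(disp[rng2], load[rng2], 1)   # second straight line
    d_y = (b2 - b1) / (k1 - k2)                      # displacement at intersection
    P_y = k1 * d_y + b1                              # yield load
    return k1, P_y
```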
Figure 4 shows the residual displacement of wood after the embedment test is finished and the distribution of the equivalent plastic strain of wood at the final step of the FEA. In the case of the metal washer with a large thickness and a small side length (S40T6), the equivalent plastic strain of wood increased along the edge of the metal washer, because the metal washer had high bending stiffness and small bending deformation. By contrast, in the case of the metal washer with a small thickness and a large side length (S80T2.3), the plastic deformation of wood occurred in a wide area owing to the bending deformation of the metal washer. The trend of the plastic deformation of wood was also observed from the residual deformation in the experiment, where the area with high plastic strain approximately corresponded to the area with large residual deformation.
Residual displacement of wood after embedment test is finished, and distribution of equivalent plastic strain of wood in the LT (L: longitudinal, T: tangential) plane at the final step of FEA. Side lengths of metal washers are (a) 40, (b) 60, and (c) 80 mm
Figure 5 shows the development of the equivalent plastic strain of wood in cases S40T6 and S80T2.3 at two points, i.e., when the load reached the yield point and when plastic deformation increased after the yield point. In particular, the development of plastic deformation in the longitudinal and tangential directions is examined. In S40T6, the stress of wood was concentrated beneath the edge of the metal washer. At the yield point, plastic deformation was distributed in the diagonal direction from the edge of the metal washer on the RT (R: radial, T: tangential) symmetric plane. After the yield point, plastic deformation increased in the diagonal direction and along the edge of the metal washer. The side length and stiffness of wood in the longitudinal direction were higher than those in the tangential direction. Thus, the wood was deflected toward the tangential direction, as shown in Fig. 6. In addition, the plastic deformation of wood in the tangential direction preceded that in the longitudinal direction. During the loading process, the shape of the metal washer remained intact owing to its high bending stiffness. By contrast, in S80T2.3, the plastic deformation of wood was distributed around the bolt hole at the yield point, unlike S40T6, because the metal washer with low bending stiffness was deformed by the vertical force from the bolt and its center part compressed the wood. After the yield point, plastic deformation uniformly extended in the longitudinal and tangential directions; in particular, it increased in the tangential direction.
Development of equivalent plastic strain in S40T6 and S80T2.3
Deformation of wood at final step in (a) S40T6 and (b) S80T2.3, whose scale is magnified 10 and 5 times, respectively (L: longitudinal, T: tangential)
Sensitivity of embedment behavior to material constants
As discussed in the previous section, the embedment behavior of the metal washer into wood might be influenced by the mechanical properties in not only the radial direction but also the longitudinal and tangential directions. This section describes the numerical analysis of the sensitivity of embedment behavior to the material constants in each direction. In the analysis, the elastic modulus and yield stress of wood in cases S40T6 and S80T2.3 were intentionally doubled as compared with the original models. However, when σyT or σyR were doubled individually, the values of G and H in Eq. (2) became negative, which caused instability in the equilibrium iteration of the FEA. Thus, σyT and σyR were doubled simultaneously. A previous study reported that embedment behavior was insensitive to Poisson's ratio; hence, Poisson's ratio was not considered in the sensitivity analysis in this study [23]. The ratios of initial stiffness K and yield load Py with and without doubled material constants are defined as follows:
$$r^{K} = \frac{{K{\text{ in FEA with intentionally changed material constants}}}}{{K{\text{ in}}{\text{ FEA without intentionally changed material constants}}}}$$
$$r^{P} = \frac{{P^{y} {\text{ in FEA with intentionally changed material constants}}}}{{P^{y} {\text{ in FEA without intentionally changed material constants}}}}.$$
Figures 7 and 8 show the embedment load–displacement curves obtained via the FEA with the intentionally changed elastic modulus and yield stress in each direction. rK and rP are listed in Table 5. K increased when ER was doubled, K and Py increased when ET was doubled, and Py increased when σyR and σyT were doubled. K and Py were only slightly affected when the other elastic moduli and yield stresses were doubled. It was evident that ER and σyR affected K and Py because the radial direction was parallel to the direction of vertical force. In addition, ET and σyT influenced K and Py owing to the large deflection of wood toward the tangential direction, as noted in the previous section. In S40T6, the maximum increase in K (approximately 30%) and Py (approximately 90%) was observed when ER was doubled and when σyR and σyT were simultaneously doubled, respectively. The influence of the intentionally changed elastic modulus and yield stress was stronger in S40T6 compared to S80T2.3 because the bending deformation of the metal washer affected K and Py in S80T2.3. The results show that the measurement of ET, ER, σyT, and σyR via material testing is important for reproducing the behavior of the embedment of the metal washer into wood and accurately estimating the yield load and initial stiffness using FEA.
Embedment load–displacement curves obtained via FEA with intentionally changed elastic moduli in (a) S40T6 and (b) S80T2.3
Embedment load–displacement curve obtained via FEA with intentionally changed material constants in (a) S40T6 and (b) S80T2.3
Table 5 Ratio of initial stiffness and yield load before and after changing material constants
The mechanism of the embedment behavior of a bolted joint was examined using FEA. A metal washer was embedded into wood in the radial direction. The numerical results were validated by comparing them with experimental results. In addition, the sensitivity of embedment behavior to the elastic modulus and yield stress of wood in each direction was investigated through numerical analysis. The major findings are as follows:
(1) The trends of the embedment load–displacement curves observed in the embedment tests were approximately reproduced by the FEA results. On average, the initial stiffness and yield load obtained via numerical analysis were 18% lower and 14% higher than those obtained via experiments, respectively. Moreover, the residual displacement of wood in the experiment approximately corresponded to the distribution of the equivalent plastic strain of wood in the FEA.
(2) In the numerical case of the metal washer with high bending stiffness, the plastic deformation of wood initiated beneath the edge of the metal washer and extended in the tangential direction mainly owing to the difference between the stiffness in the tangential and longitudinal directions. On the contrary, in the case of the metal washer with low stiffness, the plastic deformation of wood initiated in the vicinity of the bolt hole and then developed particularly in the tangential direction.
(3) The initial stiffness and yield load for the embedment were sensitive to the Young's modulus and yield stress of wood in the tangential and radial directions. The influence of the change in these material constants was stronger in the case of the metal washer with high bending stiffness compared to low bending stiffness. Thus, the measurement of these constants through material testing is important for reproducing the behavior of the embedment of a metal washer into wood and accurately estimating the yield load and initial stiffness using FEA.
In this study, the geometrical properties of the metal washer were varied for the experiment and FEA, whereas the geometrical properties of wood (Japanese cedar) were fixed; this might affect the embedment behavior. In future, the variation in the geometrical properties of wood should be considered to obtain general results.
FEA:
Finite-element analysis
Lathuillière D, Bléron L, Descamps T, Bocquet J-F (2015) Reinforcement of dowel type connections. Constr Build Mater 97:48–54
Nishiyama N, Ando N (2003) Analysis of load-slip characteristics of nailed wood joints: application of a two-dimensional geometric nonlinear analysis. J Wood Sci 49(6):505–512
Sawata K, Yasumura M (2003) Estimation of yield and ultimate strengths of bolted timber joints by nonlinear analysis and yield theory. J Wood Sci 49(5):383–391
Sawata K (2015) Strength of bolted timber joints subjected to lateral force. J Wood Sci 61(3):221–229
Gečys T, Bader TK, Olsson A, Kajėnas S (2019) Influence of the rope effect on the slip curve of laterally loaded, nailed and screwed timber-to-timber connections. Constr Build Mater 228:116702
A Awaludin, T Toshiro Hayashikawa, T Hirai, Y Sasaki. 2010 Loading resistance of bolted timber joints beyond their Yield-loads. In: Proceedings of the 2nd ASEAN Civil Engineering Conference, Vientiane
Wusqo U, Awaludin A, Irawati IS, Setiawan AF (2019) Study of laminated veneer lumber (LVL) sengon to concrete joint using two-dimensional numerical simulation. J Civ Eng Forum 5(3):275–288
Awaludin A, Hirai T, Hayashikawa T, Sasaki Y (2008) Load-carrying capacity of steel-to-timber joints with a pretensioned bolt. J Wood Sci 54(5):362–368
Awaludin A, Hirai T, Hayashikawa T, Sasaki Y, Oikawa A (2008) Effects of pretension in bolts on hysteretic responses of moment-carrying timber joints. J Wood Sci 54(2):114–120
Awaludin A, Hirai T, Sasaki Y, Hayashikawa T, Oikawa A (2011) Beam to column timber joints with pretensioned bolts. Civ Eng Dimens 13(2):59–64
Matsubara D, Shimada M, Hirai T, Funada R, Hattori N (2016) Embedment of metal washers into timber members of bolted timber joints I. Application of the theory of a beam on an elastic foundation. Mokuzai Gakkaishi 62(4):119–132
Awaludin A, Hirai T, Hayashikawa T, Leijten A (2012) A finite element analysis of bearing resistance of timber loaded through a steel plate. Civ Eng Dimens 14(1):1–6
Kuwamura H (2011) Coefficient of friction between wood and steel under heavy contact. J Struct Constr Eng 76(666):1469–1478
de Souza Neto EA, Peric D, Owen DR (2011) Computational methods for plasticity: theory and applications. Wiley, New York
Ambartsumian SA (1964) Theory of anisotropic shells. NASA IT F-118, Washington D.C.
Hill R (1948) A theory of the yielding and plastic flow of anisotropic metals. Proc R Soc A Math Phys Eng Sci 193(1033):281–297
Awaludin A, Irawati IS, Shulhan MA (2019) Two-dimensional finite element analysis of the flexural resistance of LVL Sengon non-prismatic beams. Case Stud Constr Mater 10:e00225
Architectural Institute of Japan (2017) AIJ design standard for steel structures: Based on allowable stress concept (2005 edition). Architectural Institute of Japan, Tokyo
Takahashi T, Nakayama Y (2015) Wood science series III. Physics. Kaisei-sha, Shiga
Kuwamura H (2013) Failure strength of wood in single-bolted joints loaded parallel-to-grain: Study on steel-framed timber structures, Part 16. J Struct Constr Eng 78(691):1575–1584
Sirumbal-Zapata LF, Málaga-Chuquitaype C, Elghazouli AY (2018) A three-dimensional plasticity-damage constitutive model for timber under cyclic loads. Comput Struct 195:47–63
Architectural Institute of Japan (2014) Fundamental theory of timber engineering. Architectural Institute of Japan, Tokyo
Mitsui S, Minami Y, Kawachi T, Kondo K (2010) Finite element analysis of wooden behavior of compressive strain inclined to the grain: (Part-1) Outline of the present approach and some numerical analyses of uniform partial compression test. J Struct Eng 56:359–369
This work was supported in part by grants from the Japan Society for the Promotion of Science (KAKENHI, JP19K06180).
Niigata University, 8050, Ikarashi 2-no-cho, Nishi-ku, Niigata, 950-2181, Japan
Masaki Teranishi
Kindai University, 11-6 Kayanomori, Iizuka, Fukuoka, 820-8555, Japan
Doppo Matsubara
Toyama Prefectural Agricultural, Forestry and Fisheries Research Center, Imizu, Toyama, 4940939-0311, Japan
Yoshiaki Wakashima
Sugiyama Jogakuen University, 17-3, Chikusa-ku, Nagoya, Aichi, 464-8662, Japan
Hidemaru Shimizu
Osaka Sangyo Univerisity, 3 Chome-1-1 Nakagaito, Daito, Osaka, 574-8530, Japan
Akihisa Kitamori
TM carried out the finite-element analysis and investigated its results. DM designed and performed the experiments. YW, HS, and AK assisted in the preparation of the experiments. TM wrote the manuscript in consultation with DM, YW, HS, and AK. All authors read and approved the final manuscript.
Correspondence to Masaki Teranishi.
We agree to allow the publication of our manuscript.
Teranishi, M., Matsubara, D., Wakashima, Y. et al. Nonlinear finite-element analysis of embedment behavior of metal washer in bolted timber joints. J Wood Sci 67, 41 (2021). https://doi.org/10.1186/s10086-021-01973-9
Contact problem
Pretensioned bolted joint
Embedment behavior
Metal washer | CommonCrawl |
Learn to Calculate Yield to Maturity in MS Excel
By Sean Ross
Understanding a bond's yield to maturity (YTM) is an essential task for fixed income investors. But to fully grasp YTM, we must first discuss how to price bonds in general. The price of a traditional bond is determined by combining the present value of all future interest payments (cash flows), with the repayment of principal (the face value or par value) of the bond at maturity.
The rate used to discount these cash flows and principal is called the "required rate of return", which is the rate of return required by investors who are weighing the risks associated with the investment.
To calculate a bond's yield to maturity (YTM), it's vital to understand how bonds are priced by combining the present value of all future interest payments (cash flows) with the repayment of principal (the face value or par value) of the bond at maturity.
The pricing of a bond largely depends on the difference between the coupon rate, a known figure, and the required rate, an inferred figure.
Coupon rates and required returns frequently do not match in the subsequent months and years following an issuance, as market events impact the interest rate environment.
How to Price a Bond
The formula to price a traditional bond is:
$$\begin{aligned} &\text{PV} = \frac{P}{(1+r)^1} + \frac{P}{(1+r)^2} + \cdots + \frac{P + \text{Principal}}{(1+r)^n} \\ &\textbf{where:} \\ &\text{PV} = \text{present value of the bond} \\ &P = \text{payment, or coupon rate} \times \text{par value} \div \text{number of payments per year} \\ &r = \text{required rate of return} \div \text{number of payments per year} \\ &\text{Principal} = \text{par (face) value of the bond} \\ &n = \text{number of years until maturity} \end{aligned}$$
The pricing of a bond is therefore critically dependent on the difference between the coupon rate, which is a known figure, and the required rate, which is inferred.
Suppose the coupon rate on a $100 bond is 5%, meaning the bond pays $5 per year, and the required rate – given the risk of the bond – is 5%. Because these two figures are identical, the bond will be priced at par, or $100.
This is laid out in the year-by-year pricing table in the original article (table not reproduced here).
Pricing a Bond after It's Issued
Bonds trade at par when they are first issued. Frequently, the coupon rate and required return don't match in the subsequent months and years, as events impact the interest rate environment. Failure of these two rates to match causes the price of the bond to appreciate above par (trade at a premium to its face value) or decline below par (trade at a discount to its face value), in order to compensate for the rate difference.
Take the same bond as above (5% coupon, pays out $5 a year on $100 principal) with five years left until maturity. If the current Federal Reserve rate is 1%, and other similar-risk bonds are at 2.5% (they pay out $2.50 a year on $100 principal), this bond looks very attractive: offering 5% in interest—double that of comparable debt instruments.
Given this scenario, the market will adjust the price of the bond proportionally, in order to reflect this difference in rates. In this case, the bond would trade at a premium amount of $111.61. The current price of $111.61 is higher than the $100 you will receive at maturity, and that $11.61 represents the difference in present value of the extra cash flow you receive over the life of the bond (the 5% vs. the required return of 2.5%).
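To see where the $111.61 figure comes from, it suffices to discount the five $5 coupons and the $100 principal at the 2.5% required rate. The snippet below is a minimal sketch of that arithmetic (annual compounding, as in the example above), not a general bond-pricing library.

```python
def bond_price(face, coupon_rate, required_rate, years, freq=1):
    """Present value of a plain-vanilla bond."""
    payment = face * coupon_rate / freq
    r = required_rate / freq
    n = years * freq
    pv_coupons = sum(payment / (1 + r) ** t for t in range(1, n + 1))
    pv_principal = face / (1 + r) ** n
    return pv_coupons + pv_principal

# 5% coupon, $100 face, 5 years to maturity, 2.5% required return, annual payments
print(round(bond_price(100, 0.05, 0.025, 5), 2))  # -> 111.61
```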
In other words, in order to get that 5% interest when all other rates are much lower, you must buy something today for $111.61 that you know in the future will only be worth $100. The rate that normalizes this difference is the yield to maturity.
Calculating the Yield to Maturity in Excel
The above examples break out each cash flow stream by year. This is a sound method for most financial modeling because best practices dictate that the sources and assumptions of all calculations should be easily auditable. However, when it comes to pricing a bond, we can make an exception to this rule because of the following truths:
Some bonds have many years (decades) to maturity, and a yearly analysis, like that shown above, may not be practical.
Most of the information is known and fixed: we know the par value, we know the coupon, and we know the years to maturity.
For these reasons, we'll set up the calculator as follows:
In the above example, the scenario is made slightly more realistic by using two coupon payments per year, which is why the YTM is 2.51%, slightly above the 2.5% required rate of return in the first examples.
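For completeness, the YTM can also be found numerically: it is the rate that equates the discounted cash flows to the observed price. The sketch below uses simple bisection and the semiannual convention mentioned above, recovering roughly 2.51% from a price of $111.61. It is an illustrative stand-in for the Excel setup, not a reproduction of it.

```python
def bond_price(face, coupon_rate, ytm, years, freq=2):
    """Price of a bond given an annual yield to maturity (ytm)."""
    payment = face * coupon_rate / freq
    r = ytm / freq
    n = years * freq
    return sum(payment / (1 + r) ** t for t in range(1, n + 1)) + face / (1 + r) ** n

def yield_to_maturity(price, face, coupon_rate, years, freq=2, tol=1e-8):
    """Solve for the annual YTM by bisection (price decreases as yield rises)."""
    lo, hi = 0.0, 1.0            # search between 0% and 100%
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if bond_price(face, coupon_rate, mid, years, freq) > price:
            lo = mid             # computed price too high -> yield must be higher
        else:
            hi = mid
    return (lo + hi) / 2

# $111.61 price, 5% coupon paid semiannually, 5 years to maturity
print(round(yield_to_maturity(111.61, 100, 0.05, 5) * 100, 2))  # -> about 2.51 (%)
```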
For YTMs to be accurate, it's a given that bondholders must commit to holding the bond until maturity!
Bond Floor Definition
Bond floor refers to the minimum value a specific bond should trade for and is derived from the discounted value of its coupons plus redemption value.
Yield to Maturity (YTM)
Yield to maturity (YTM) is the total return expected on a bond if the bond is held until maturity.
Bond Valuation: What's the Fair Value of a Bond?
Bond valuation is a technique for determining the theoretical fair value of a particular bond.
Duration Definition
Duration indicates the years it takes to receive a bond's true cost, weighing in the present value of all future coupon and principal payments.
A bond is a fixed income investment in which an investor loans money to an entity (corporate or governmental) that borrows the funds for a defined period of time at a fixed interest rate.
Modified duration is a formula that expresses the measurable change in the value of a security in response to a change in interest rates. | CommonCrawl |
Field emission theory of dislocation-sensitized photo-stimulated exo-electron emission from coloured alkali halide crystals
B P Chandra R S Chandok P K Khare
Volume 48 Issue 6 June 1997 pp 1135-1143
https://www.ias.ac.in/article/fulltext/pram/048/06/1135-1143
Photo-stimulated exo-electron emission; alkali halide crystals; plastic deformation; colour centres
A new field emission theory of dislocation-sensitized photo-stimulated exo-electron emission (DSPEE) is proposed, which shows that the increase in the intensity of photoemission from F-centres during plastic deformation is caused by the appearance of an electric field which draws excited electrons out of the deeper layer and, therefore, increases the number of electrons which reach the surface. The theory of DSPEE shows that the variation of DSPEE flux intensity should obey the following relation: $$\frac{{\Delta J_e \left( \varepsilon \right)}}{{J_e \left( o \right)}} = \left[ {\frac{{Y_s }}{{d_F }}\exp \left( {\frac{\chi }{{kT}}} \right) - 1} \right]$$. The theory of DSPEE is able to explain several experimental observations, such as the linear increase of DSPEE intensity $J_e$ with the strain at low deformation, the occurrence of saturation in $J_e$ at higher deformation, the temperature dependence of $J_e$, the linear dependence of $J_e$ on the electric field strength, the order of the critical strain at which saturation occurs in $J_e$, and the ratio of the PEE intensity of deformed and undeformed crystals. At lower values of the strain, some of the excited electrons are captured by surface traps, where the deformation-generated electric field is not able to cause exo-emission. At larger deformation (between 2% and 3%) of the crystal, the deformation-generated electric field becomes sufficient to cause an additional exo-electron emission of the electrons trapped in surface traps, and therefore there appears a hump in the $J_e$ versus $\varepsilon$ curves of the crystals.
B P Chandra1 R S Chandok1 P K Khare1
Department of Postgraduate Studies and Research in Physics, Rani Durgavati Vishwavidyalaya, Jabalpur - 482 001, India
August 2019, 13(3): 435-455. doi: 10.3934/amc.2019028
A unified polynomial selection method for the (tower) number field sieve algorithm
Palash Sarkar 1, and Shashank Singh 2,,
Indian Statistical Institute, Kolkata 700108, West Bengal, India
Indian Institute of Science Education and Research Bhopal, Bhopal 462066, Madhya Pradesh, India
* Corresponding author: Shashank Singh
Received July 2018 Revised January 2019 Published April 2019
At Eurocrypt 2015, Barbulescu et al. introduced two new methods of polynomial selection, namely the Conjugation and the Generalised Joux-Lercier methods, for the number field sieve (NFS) algorithm as applied to the discrete logarithm problem over finite fields. A sequence of subsequent works have developed and applied these methods to the multiple and the (extended) tower number field sieve algorithms. This line of work has led to new asymptotic complexities for various cases of the discrete logarithm problem over finite fields. The current work presents a unified polynomial selection method which we call Algorithm $ \mathcal{D} $. Starting from the Barbulescu et al. paper, all the subsequent polynomial selection methods can be seen as special cases of Algorithm $ \mathcal{D} $. Moreover, for the extended tower number field sieve (exTNFS) and the multiple extended TNFS (MexTNFS), there are finite fields for which using the polynomials selected by Algorithm $ \mathcal{D} $ provides the best asymptotic complexity. Suppose $ Q = p^n $ for a prime $ p $ and further suppose that $ n = \eta\kappa $ such that there is a $ c_{\theta}>0 $ for which $ p^{\eta} = L_Q(2/3, c_{\theta}) $. For $ c_{\theta}>3.39 $, the complexity of exTNFS-$ \mathcal{D} $ is lower than the complexities of all previous algorithms; for $ c_{\theta}\notin (0, 1.12)\cup[1.45, 3.15] $, the complexity of MexTNFS-$ \mathcal{D} $ is lower than that of all previous methods.
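As context for the complexity statements above, recall the standard sub-exponential L-notation $ L_Q(a, c) = \exp\bigl((c + o(1))(\ln Q)^{a}(\ln\ln Q)^{1-a}\bigr) $. The snippet below is a small sketch (dropping the $ o(1) $ term) that can be used to compare such expressions numerically for a given field size; it is not part of the paper, and the constants in the example are the well-known general and special NFS exponents rather than the constants derived in this work.

```python
import math

def L(Q_bits, a, c):
    """Sub-exponential L-notation L_Q(a, c), with the o(1) term dropped.

    Q_bits: size of the field cardinality Q in bits.
    """
    lnQ = Q_bits * math.log(2)
    return math.exp(c * lnQ ** a * math.log(lnQ) ** (1 - a))

# Rough comparison at a hypothetical 3000-bit field size:
print(L(3000, 1/3, (64/9) ** (1/3)))  # classical NFS constant, about 1.923
print(L(3000, 1/3, (32/9) ** (1/3)))  # special-NFS constant, about 1.526
```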
Keywords: Finite fields, discrete logarithm, number field sieve, tower number field sieve, multiple tower number field sieve.
Mathematics Subject Classification: Primary: 11Y16; Secondary: 94A60.
Citation: Palash Sarkar, Shashank Singh. A unified polynomial selection method for the (tower) number field sieve algorithm. Advances in Mathematics of Communications, 2019, 13 (3) : 435-455. doi: 10.3934/amc.2019028
L. M. Adleman, The function field sieve, In Leonard M. Adleman and Ming-Deh A. Huang, editors, ANTS, volume 877 of Lecture Notes in Computer Science, pages 108–121. Springer, 1994. doi: 10.1007/3-540-58691-1_48.
L. M. Adleman and M.-D. A. Huang, Function field sieve method for discrete logarithms over finite fields, Inf. Comput., 151 (1999), 5–16. doi: 10.1006/inco.1998.2761.
R. Barbulescu and S. Duquesne, Updating key size estimations for pairings, Journal of Cryptology, 2018, 1–39. doi: 10.1007/s00145-018-9280-5.
R. Barbulescu, P. Gaudry, A. Guillevic and F. Morain, Improving NFS for the discrete logarithm problem in non-prime finite fields, In Elisabeth Oswald and Marc Fischlin, editors, Advances in Cryptology – EUROCRYPT 2015, volume 9056 of Lecture Notes in Computer Science, pages 129–155. Springer, 2015. doi: 10.1007/978-3-662-46800-5_6.
R. Barbulescu, P. Gaudry, A. Joux and E. Thomé, A heuristic quasi-polynomial algorithm for discrete logarithm in finite fields of small characteristic, In Phong Q. Nguyen and Elisabeth Oswald, editors, Advances in Cryptology – EUROCRYPT 2014, volume 8441 of Lecture Notes in Computer Science, pages 1–16. Springer, 2014. doi: 10.1007/978-3-642-55220-5_1.
R. Barbulescu, P. Gaudry and T. Kleinjung, The tower number field sieve, In Tetsu Iwata and Jung Hee Cheon, editors, Advances in Cryptology – ASIACRYPT 2015, Part II, volume 9453 of Lecture Notes in Computer Science, pages 31–55. Springer, 2015. doi: 10.1007/978-3-662-48800-3_2.
R. Barbulescu and C. Pierrot, The multiple number field sieve for medium and high characteristic finite fields, LMS Journal of Computation and Mathematics, 17 (2014), 230–246. doi: 10.1112/S1461157014000369.
Y. Bistritz and A. Lifshitz, Bounds for resultants of univariate and bivariate polynomials, Linear Algebra and its Applications, 432 (2010), 1995–2005. doi: 10.1016/j.laa.2009.08.012.
N. Gama and P. Q. Nguyen, Predicting lattice reduction, In Nigel Smart, editor, Advances in Cryptology – EUROCRYPT 2008, volume 4965 of Lecture Notes in Computer Science, pages 31–51. Springer, 2008. doi: 10.1007/978-3-540-78967-3_3.
P. Gaudry, L. Grémy and M. Videau, Collecting relations for the number field sieve in GF(p^6), LMS Journal of Computation and Mathematics, 19 (2016), 332–350. doi: 10.1112/S1461157016000164.
D. M. Gordon, Discrete logarithms in GF(p) using the number field sieve, SIAM J. Discrete Math., 6 (1993), 124–138. doi: 10.1137/0406010.
R. Granger, T. Kleinjung and J. Zumbrägel, Discrete logarithms in GF(2^9234), NMBRTHRY list, January 2014.
A. Guillevic, Computing individual discrete logarithms faster in GF(p^n) with the NFS-DL algorithm, In Advances in Cryptology – ASIACRYPT 2015, Part I, volume 9452 of Lecture Notes in Computer Science, pages 149–173. Springer, 2015. doi: 10.1007/978-3-662-48797-6_7.
A. Guillevic, F. Morain and E. Thomé, Solving discrete logarithms on a 170-bit MNT curve by pairing reduction, In Selected Areas in Cryptography – SAC 2016, 2017, pages 559–578. doi: 10.1007/978-3-319-69453-5_30.
K. Hayasaka, K. Aoki, T. Kobayashi and T. Takagi, A construction of 3-dimensional lattice sieve for number field sieve over $ \mathbb{F}_{p^n} $, JSIAM Lett., 6 (2014), 53–56. doi: 10.14495/jsiaml.6.53.
A. Joux, Faster index calculus for the medium prime case: Application to 1175-bit and 1425-bit finite fields, In Thomas Johansson and Phong Q. Nguyen, editors, EUROCRYPT 2013, volume 7881 of Lecture Notes in Computer Science, pages 177–193. Springer, 2013. doi: 10.1007/978-3-642-38348-9_11.
A. Joux, A new index calculus algorithm with complexity $ L(1/4+o(1)) $ in small characteristic, In Tanja Lange, Kristin E. Lauter and Petr Lisonek, editors, Selected Areas in Cryptography – SAC 2013, volume 8282 of Lecture Notes in Computer Science, pages 355–379. Springer, 2014. doi: 10.1007/978-3-662-43414-7_18.
A. Joux and R. Lercier, The function field sieve is quite special, In Claus Fieker and David R. Kohel, editors, ANTS, volume 2369 of Lecture Notes in Computer Science, pages 431–445. Springer, 2002. doi: 10.1007/3-540-45455-1_34.
A. Joux and R. Lercier, Improvements to the general number field sieve for discrete logarithms in prime fields. A comparison with the Gaussian integer method, Math. Comput., 72 (2003), 953–967. doi: 10.1090/S0025-5718-02-01482-5.
A. Joux and R. Lercier, The function field sieve in the medium prime case, In Serge Vaudenay, editor, EUROCRYPT 2006, volume 4004 of Lecture Notes in Computer Science, pages 254–270. Springer, 2006. doi: 10.1007/11761679_16.
A. Joux, R. Lercier, N. P. Smart and F. Vercauteren, The number field sieve in the medium prime case, In Cynthia Dwork, editor, Advances in Cryptology – CRYPTO 2006, volume 4117 of Lecture Notes in Computer Science, pages 326–344. Springer, 2006. doi: 10.1007/11818175_19.
A. Joux and C. Pierrot, The special number field sieve in $ \mathbb{F}_{p^n} $ – Application to pairing-friendly constructions, In Zhenfu Cao and Fangguo Zhang, editors, Pairing-Based Cryptography – Pairing 2013, volume 8365 of Lecture Notes in Computer Science, pages 45–61. Springer, 2013. doi: 10.1007/978-3-319-04873-4_3.
A. Joux and C. Pierrot, Improving the polynomial time precomputation of Frobenius representation discrete logarithm algorithms – simplified setting for small characteristic finite fields, In Palash Sarkar and Tetsu Iwata, editors, Advances in Cryptology – ASIACRYPT 2014, Part I, volume 8873 of Lecture Notes in Computer Science, pages 378–397. Springer, 2014. doi: 10.1007/978-3-662-45611-8_20.
T. Kim and R. Barbulescu, Extended tower number field sieve: A new complexity for the medium prime case, In Matthew Robshaw and Jonathan Katz, editors, Advances in Cryptology – CRYPTO 2016, Part I, volume 9814 of Lecture Notes in Computer Science, pages 543–571. Springer, 2016. doi: 10.1007/978-3-662-53018-4_20.
T. Kim and J. Jeong, Extended tower number field sieve with application to finite fields of arbitrary composite extension degree, In Serge Fehr, editor, Public-Key Cryptography – PKC 2017, Part I, volume 10174 of Lecture Notes in Computer Science, pages 388–408. Springer, 2017.
A. K. Lenstra, H. W. Lenstra and L. Lovász, Factoring polynomials with rational coefficients, Mathematische Annalen, 261 (1982), 515–534. doi: 10.1007/BF01457454.
A. Menezes, P. Sarkar and S. Singh, Challenges with assessing the impact of NFS advances on the security of pairing-based cryptography, In Mycrypt 2016, volume 10311 of Lecture Notes in Computer Science, pages 83–108. Springer, 2016. doi: 10.1007/978-3-319-61273-7_5.
C. Pierrot, The multiple number field sieve with conjugation and generalized Joux-Lercier methods, In Advances in Cryptology – EUROCRYPT 2015, Part I, volume 9056 of Lecture Notes in Computer Science, pages 156–170. Springer, 2015. doi: 10.1007/978-3-662-46800-5_7.
P. Sarkar and S. Singh, Fine tuning the function field sieve algorithm for the medium prime case, IEEE Transactions on Information Theory, 62 (2016), 2233–2253. doi: 10.1109/TIT.2016.2528996.
P. Sarkar and S. Singh, A general polynomial selection method and new asymptotic complexities for the tower number field sieve algorithm, In Jung Hee Cheon and Tsuyoshi Takagi, editors, Advances in Cryptology – ASIACRYPT 2016, Part I, volume 10031 of Lecture Notes in Computer Science, pages 37–62. Springer, 2016. doi: 10.1007/978-3-662-53887-6_2.
P. Sarkar and S. Singh, New complexity trade-offs for the (multiple) number field sieve algorithm in non-prime fields, In Marc Fischlin and Jean-Sébastien Coron, editors, Advances in Cryptology – EUROCRYPT 2016, Part I, volume 9665 of Lecture Notes in Computer Science, pages 429–458. Springer, 2016. doi: 10.1007/978-3-662-49890-3_17.
O. Schirokauer, Discrete logarithms and local units, Philosophical Transactions: Physical Sciences and Engineering, 345 (1993), 409–423. doi: 10.1098/rsta.1993.0139.
O. Schirokauer, Using number fields to compute logarithms in finite fields, Math. Comp., 69 (2000), 1267–1283. doi: 10.1090/S0025-5718-99-01137-0.
O. Schirokauer, Virtual logarithms, J. Algorithms, 57 (2005), 140–147. doi: 10.1016/j.jalgor.2004.11.004.
D. H. Wiedemann, Solving sparse linear equations over finite fields, IEEE Trans. Information Theory, 32 (1986), 54–62. doi: 10.1109/TIT.1986.1057137.
P. Zajac, On the use of the lattice sieve in the 3d NFS, Tatra Mountains Mathematical Publications, 45 (2010), 161–172. doi: 10.2478/v10127-010-0012-y.
Figure 1. Tower of Number Fields
Figure 2. Commutative diagram for TNFS
Figure 3. Product of norms for various polynomial selection methods
Figure 4. Complexity plot for medium characteristic finite fields
Table 1. Parameterised efficiency estimates for NFS obtained from the different polynomial selection methods
Method | Product of norms | Conditions
NFS-JLSV1 [21] | $E^{\frac{4n}{t}}Q^{\frac{t-1}{n}}$ | —
NFS-GJL [4] | $E^{\frac{2(2r+1)}{t}}Q^{\frac{t-1}{r+1}}$ | $r\geq n$
NFS-Conj [4] | $E^{\frac{6n}{t}}Q^{\frac{t-1}{2n}}$ | —
NFS-$\mathcal{A}$ [31] | $E^{\frac{2d(2r+1)}{t}}Q^{\frac{t-1}{d(r+1)}}$ | $d|n$, $r\geq n/d$
exTNFS-JLSV1 [24] | $E^{\frac{4\kappa}{t}}Q^{\frac{t-1}{\kappa}}$ | $n=\eta\kappa$, $\gcd(\eta, \kappa)=1$
exTNFS-GJL [24] | $E^{\frac{2(2r+1)}{t}}Q^{\frac{t-1}{r+1}}$ | $n=\eta\kappa$, $\gcd(\eta, \kappa)=1$, $r\geq\kappa$
exTNFS-$\mathcal{C}$ [30] | $E^{\frac{2d(2r+1)}{t}}Q^{\frac{(t-1)(r(\lambda-1)+k)}{\kappa(r\lambda+1)}}$ | $n=\eta\kappa$, $k=\kappa/d$, $r\geq k$, $\lambda\in\{1, \eta\}$
exTNFS-gConj [25] | $E^{\frac{6\kappa}{t}}Q^{\frac{t-1}{2\kappa}}$ | $n=\eta\kappa$
exTNFS-$\mathcal{D}$ | $E^{\frac{2d(2r+1)}{t}}Q^{\frac{(t-1)}{d(r+1)}}$ | $n=\eta\kappa$, $d|\kappa$, $\gcd(\eta, \kappa/d)=1$, $r\geq \kappa/d$
Parameter choices under which exTNFS-$\mathcal{D}$ specialises to the earlier methods (a small numerical sketch comparing the resulting bounds follows below):
NFS-GJL: $\eta=d=1$
NFS-Conj: $\eta=1$, $d=\kappa=n$, $r=1$
NFS-$\mathcal{A}$: $\eta=1$, $\kappa=n$, $d|n$, $r\geq n/d$
exTNFS-GJL: $d=1$
exTNFS-gConj: $d=\kappa$, $r=1$
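To make the comparison in Table 1 concrete, the following minimal Python sketch (not part of the paper; the function names and example parameters are illustrative assumptions) evaluates the exponents $a$ and $b$ in a norms product of the form $E^{a}Q^{b}$ for exTNFS-$\mathcal{D}$, checking the divisibility and coprimality conditions from the last row of the table; taking $d=1$ recovers exTNFS-GJL.

from fractions import Fraction
from math import gcd

def extnfs_d_exponents(n, eta, kappa, d, r, t):
    """Exponents (a, b) such that the norms product is E^a * Q^b for exTNFS-D (Table 1)."""
    assert n == eta * kappa, "need n = eta * kappa"
    assert kappa % d == 0 and gcd(eta, kappa // d) == 1, "need d | kappa and gcd(eta, kappa/d) = 1"
    assert r >= kappa // d, "need r >= kappa/d"
    a = Fraction(2 * d * (2 * r + 1), t)
    b = Fraction(t - 1, d * (r + 1))
    return a, b

def extnfs_gjl_exponents(n, eta, kappa, r, t):
    """exTNFS-GJL is recovered from exTNFS-D by taking d = 1."""
    return extnfs_d_exponents(n, eta, kappa, 1, r, t)

# Example: n = 12 with eta = 3, kappa = 4 and sieving parameter t = 2.
for d in (1, 2, 4):
    r = 4 // d  # smallest admissible r for this choice of d
    a, b = extnfs_d_exponents(12, 3, 4, d, r, 2)
    print(f"d = {d}, r = {r}: norms product ~ E^({a}) * Q^({b})")

Trying different admissible $(d, r)$ pairs in this way only illustrates the trade-off between the two exponents; the asymptotic complexity statements in the paper come from optimising these bounds jointly with the sieving bound $E$, not from any single numerical choice.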
146511466114671146811469114701147111472114731147411475114761147711478114791148011481114821148311484114851148611487114881148911490114911149211493114941149511496114971149811499115001150111502115031150411505115061150711508115091151011511115121151311514115151151611517115181151911520115211152211523115241152511526115271152811529115301153111532115331153411535115361153711538115391154011541115421154311544115451154611547115481154911550115511155211553115541155511556115571155811559115601156111562115631156411565115661156711568115691157011571115721157311574115751157611577115781157911580115811158211583115841158511586115871158811589115901159111592115931159411595115961159711598115991160011601116021160311604116051160611607116081160911610116111161211613116141161511616116171161811619116201162111622116231162411625116261162711628116291163011631116321163311634116351163611637116381163911640116411164211643116441164511646116471164811649116501165111652116531165411655116561165711658116591166011661116621166311664116651166611667116681166911670116711167211673116741167511676116771167811679116801168111682116831168411685116861168711688116891169011691116921169311694116951169611697116981169911700117011170211703117041170511706117071170811709117101171111712117131171411715117161171711718117191172011721117221172311724117251172611727117281172911730117311173211733117341173511736117371173811739117401174111742117431174411745117461174711748117491175011751117521175311754117551175611757117581175911760117611176211763117641176511766117671176811769117701177111772117731177411775117761177711778117791178011781117821178311784117851178611787117881178911790117911179211793117941179511796117971179811799118001180111802118031180411805118061180711808118091181011811118121181311814118151181611817118181181911820118211182211823118241182511826118271182811829118301183111832118331183411835118361183711838118391184011841118421184311844118451184611847118481184911850118511185211853118541185511856118571185811859118601186111862118631186411865118661186711868118691187011871118721187311874118751187611877118781187911880118811188211883118841188511886118871188811889118901189111892118931189411895118961189711898118991190011901119021190311904119051190611907119081190911910119111191211913119141191511916119171191811919119201192111922119231192411925119261192711928119291193011931119321193311934119351193611937119381193911940119411194211943119441194511946119471194811949119501195111952119531195411955119561195711958119591196011961119621196311964119651196611967119681196911970119711197211973119741197511976119771197811979119801198111982119831198411985119861198711988119891199011991119921199311994119951199611997119981199912000120011200212003120041200512006120071200812009120101201112012120131201412015120161201712018120191202012021120221202312024120251202612027120281202912030120311203212033120341203512036120371203812039120401204112042120431204412045120461204712048120491205012051120521205312054120551205612057120581205912060120611206212063120641206512066120671206812069120701207112072120731207412075120761207712078120791208012081120821208312084120851208612087120881208912090120911209212093120941209512096120971209812099121001210112102121031210412105121061210712108121091211012111121121211312114121151211612117121181211912120121211212212123121241212512126121271212812129121301213112132121331213412135121361213712138121391214012141121421214312144121451214612147121481214912150121511215212153121541215512156121571215812159121601216112162121631216412165121661216712168121691217012171121721217312174121751
217612177121781217912180121811218212183121841218512186121871218812189121901219112192121931219412195121961219712198121991220012201122021220312204122051220612207122081220912210122111221212213122141221512216122171221812219122201222112222122231222412225122261222712228122291223012231122321223312234122351223612237122381223912240122411224212243122441224512246122471224812249122501225112252122531225412255122561225712258122591226012261122621226312264122651226612267122681226912270122711227212273122741227512276122771227812279122801228112282122831228412285122861228712288122891229012291122921229312294122951229612297122981229912300123011230212303123041230512306123071230812309123101231112312123131231412315123161231712318123191232012321123221232312324123251232612327123281232912330123311233212333123341233512336123371233812339123401234112342123431234412345123461234712348123491235012351123521235312354123551235612357123581235912360123611236212363123641236512366123671236812369123701237112372123731237412375123761237712378123791238012381123821238312384123851238612387123881238912390123911239212393123941239512396123971239812399124001240112402124031240412405124061240712408124091241012411124121241312414124151241612417124181241912420124211242212423124241242512426124271242812429124301243112432124331243412435124361243712438124391244012441124421244312444124451244612447124481244912450124511245212453124541245512456124571245812459124601246112462124631246412465124661246712468124691247012471124721247312474124751247612477124781247912480124811248212483124841248512486124871248812489124901249112492124931249412495124961249712498124991250012501125021250312504125051250612507125081250912510125111251212513125141251512516125171251812519125201252112522125231252412525125261252712528125291253012531125321253312534125351253612537125381253912540125411254212543125441254512546125471254812549125501255112552125531255412555125561255712558125591256012561125621256312564125651256612567125681256912570125711257212573125741257512576125771257812579125801258112582125831258412585125861258712588125891259012591125921259312594125951259612597125981259912600126011260212603126041260512606126071260812609126101261112612126131261412615126161261712618126191262012621126221262312624126251262612627126281262912630126311263212633126341263512636126371263812639126401264112642126431264412645126461264712648126491265012651126521265312654126551265612657126581265912660126611266212663126641266512666126671266812669126701267112672126731267412675126761267712678126791268012681126821268312684126851268612687126881268912690126911269212693126941269512696126971269812699127001270112702127031270412705127061270712708127091271012711127121271312714127151271612717127181271912720127211272212723127241272512726127271272812729127301273112732127331273412735127361273712738127391274012741127421274312744127451274612747127481274912750127511275212753127541275512756127571275812759127601276112762127631276412765127661276712768127691277012771127721277312774127751277612777127781277912780127811278212783127841278512786127871278812789127901279112792127931279412795127961279712798127991280012801128021280312804128051280612807128081280912810128111281212813128141281512816128171281812819128201282112822128231282412825128261282712828128291283012831128321283312834128351283612837128381283912840128411284212843128441284512846128471284812849128501285112852128531285412855128561285712858128591286012861128621286312864128651286612867128681286912870128711287212873128741287512876128771287812879128801288112882128831288412885128861
288712888128891289012891128921289312894128951289612897128981289912900129011290212903129041290512906129071290812909129101291112912129131291412915129161291712918129191292012921129221292312924129251292612927129281292912930129311293212933129341293512936129371293812939129401294112942129431294412945129461294712948129491295012951129521295312954129551295612957129581295912960129611296212963129641296512966129671296812969129701297112972129731297412975129761297712978129791298012981129821298312984129851298612987129881298912990129911299212993129941299512996129971299812999130001300113002130031300413005130061300713008130091301013011130121301313014130151301613017130181301913020130211302213023130241302513026130271302813029130301303113032130331303413035130361303713038130391304013041130421304313044130451304613047130481304913050130511305213053130541305513056130571305813059130601306113062130631306413065130661306713068130691307013071130721307313074130751307613077130781307913080130811308213083130841308513086130871308813089130901309113092130931309413095130961309713098130991310013101131021310313104131051310613107131081310913110131111311213113131141311513116131171311813119131201312113122131231312413125131261312713128131291313013131131321313313134131351313613137131381313913140131411314213143131441314513146131471314813149131501315113152131531315413155131561315713158131591316013161131621316313164131651316613167131681316913170131711317213173131741317513176131771317813179131801318113182131831318413185131861318713188131891319013191131921319313194131951319613197131981319913200132011320213203132041320513206132071320813209132101321113212132131321413215132161321713218132191322013221132221322313224132251322613227132281322913230132311323213233132341323513236132371323813239132401324113242132431324413245132461324713248132491325013251132521325313254132551325613257132581325913260132611326213263132641326513266132671326813269132701327113272132731327413275132761327713278132791328013281132821328313284132851328613287132881328913290132911329213293132941329513296132971329813299133001330113302133031330413305133061330713308133091331013311133121331313314133151331613317133181331913320133211332213323133241332513326133271332813329133301333113332133331333413335133361333713338133391334013341133421334313344133451334613347133481334913350133511335213353133541335513356133571335813359133601336113362133631336413365133661336713368133691337013371133721337313374133751337613377133781337913380133811338213383133841338513386133871338813389133901339113392133931339413395133961339713398133991340013401134021340313404134051340613407134081340913410134111341213413134141341513416134171341813419134201342113422134231342413425134261342713428134291343013431134321343313434134351343613437134381343913440134411344213443134441344513446134471344813449134501345113452134531345413455134561345713458134591346013461134621346313464134651346613467134681346913470134711347213473134741347513476134771347813479134801348113482134831348413485134861348713488134891349013491134921349313494134951349613497134981349913500135011350213503135041350513506135071350813509135101351113512135131351413515135161351713518135191352013521135221352313524135251352613527135281352913530135311353213533135341353513536135371353813539135401354113542135431354413545135461354713548135491355013551135521355313554135551355613557135581355913560135611356213563135641356513566135671356813569135701357113572135731357413575135761357713578135791358013581135821358313584135851358613587135881358913590135911359213593135941359513596135971
359813599136001360113602136031360413605136061360713608136091361013611136121361313614136151361613617136181361913620136211362213623136241362513626136271362813629136301363113632136331363413635136361363713638136391364013641136421364313644136451364613647136481364913650136511365213653136541365513656136571365813659136601366113662136631366413665136661366713668136691367013671136721367313674136751367613677136781367913680136811368213683136841368513686136871368813689136901369113692136931369413695136961369713698136991370013701137021370313704137051370613707137081370913710137111371213713137141371513716137171371813719137201372113722137231372413725137261372713728137291373013731137321373313734137351373613737137381373913740137411374213743137441374513746137471374813749137501375113752137531375413755137561375713758137591376013761137621376313764137651376613767137681376913770137711377213773137741377513776137771377813779137801378113782137831378413785137861378713788137891379013791137921379313794137951379613797137981379913800138011380213803138041380513806138071380813809138101381113812138131381413815138161381713818138191382013821138221382313824138251382613827138281382913830138311383213833138341383513836138371383813839138401384113842138431384413845138461384713848138491385013851138521385313854138551385613857138581385913860138611386213863138641386513866138671386813869138701387113872138731387413875138761387713878138791388013881138821388313884138851388613887138881388913890138911389213893138941389513896138971389813899139001390113902139031390413905139061390713908139091391013911139121391313914139151391613917139181391913920139211392213923139241392513926139271392813929139301393113932139331393413935139361393713938139391394013941139421394313944139451394613947139481394913950139511395213953139541395513956139571395813959139601396113962139631396413965139661396713968139691397013971139721397313974139751397613977139781397913980139811398213983139841398513986139871398813989139901399113992139931399413995139961399713998139991400014001140021400314004140051400614007140081400914010140111401214013140141401514016140171401814019140201402114022140231402414025140261402714028140291403014031140321403314034140351403614037140381403914040140411404214043140441404514046140471404814049140501405114052140531405414055140561405714058140591406014061140621406314064140651406614067140681406914070140711407214073140741407514076140771407814079140801408114082140831408414085140861408714088140891409014091140921409314094140951409614097140981409914100141011410214103141041410514106141071410814109141101411114112141131411414115141161411714118141191412014121141221412314124141251412614127141281412914130141311413214133141341413514136141371413814139141401414114142141431414414145141461414714148141491415014151141521415314154141551415614157141581415914160141611416214163141641416514166141671416814169141701417114172141731417414175141761417714178141791418014181141821418314184141851418614187141881418914190141911419214193141941419514196141971419814199142001420114202142031420414205142061420714208142091421014211142121421314214142151421614217142181421914220142211422214223142241422514226142271422814229142301423114232142331423414235142361423714238142391424014241142421424314244142451424614247142481424914250142511425214253142541425514256142571425814259142601426114262142631426414265142661426714268142691427014271142721427314274142751427614277142781427914280142811428214283142841428514286142871428814289142901429114292142931429414295142961429714298142991430014301143021430314304143051430614307143081
430914310143111431214313143141431514316143171431814319143201432114322143231432414325143261432714328143291433014331143321433314334143351433614337143381433914340143411434214343143441434514346143471434814349143501435114352143531435414355143561435714358143591436014361143621436314364143651436614367143681436914370143711437214373143741437514376143771437814379143801438114382143831438414385143861438714388143891439014391143921439314394143951439614397143981439914400144011440214403144041440514406144071440814409144101441114412144131441414415144161441714418144191442014421144221442314424144251442614427144281442914430144311443214433144341443514436144371443814439144401444114442144431444414445144461444714448144491445014451144521445314454144551445614457144581445914460144611446214463144641446514466144671446814469144701447114472144731447414475144761447714478144791448014481144821448314484144851448614487144881448914490144911449214493144941449514496144971449814499145001450114502145031450414505145061450714508145091451014511145121451314514145151451614517145181451914520145211452214523145241452514526145271452814529145301453114532145331453414535145361453714538145391454014541145421454314544145451454614547145481454914550145511455214553145541455514556145571455814559145601456114562145631456414565145661456714568145691457014571145721457314574145751457614577145781457914580145811458214583145841458514586145871458814589145901459114592145931459414595145961459714598145991460014601146021460314604146051460614607146081460914610146111461214613146141461514616146171461814619146201462114622146231462414625146261462714628146291463014631146321463314634146351463614637146381463914640146411464214643146441464514646146471464814649146501465114652146531465414655146561465714658146591466014661146621466314664146651466614667146681466914670146711467214673146741467514676146771467814679146801468114682146831468414685146861468714688146891469014691146921469314694146951469614697146981469914700147011470214703147041470514706147071470814709147101471114712147131471414715147161471714718147191472014721147221472314724147251472614727147281472914730147311473214733147341473514736147371473814739147401474114742147431474414745147461474714748147491475014751147521475314754147551475614757147581475914760147611476214763147641476514766147671476814769147701477114772147731477414775147761477714778147791478014781147821478314784147851478614787147881478914790147911479214793147941479514796147971479814799148001480114802148031480414805148061480714808148091481014811148121481314814148151481614817148181481914820148211482214823148241482514826148271482814829148301483114832148331483414835148361483714838148391484014841148421484314844148451484614847148481484914850148511485214853148541485514856148571485814859148601486114862148631486414865148661486714868148691487014871148721487314874148751487614877148781487914880148811488214883148841488514886148871488814889148901489114892148931489414895148961489714898148991490014901149021490314904149051490614907149081490914910149111491214913149141491514916149171491814919149201492114922149231492414925149261492714928149291493014931149321493314934149351493614937149381493914940149411494214943149441494514946149471494814949149501495114952149531495414955149561495714958149591496014961149621496314964149651496614967149681496914970149711497214973149741497514976149771497814979149801498114982149831498414985149861498714988149891499014991149921499314994149951499614997149981499915000150011500215003150041500515006150071500815009150101501115012150131501415015150161501715018150191
502015021150221502315024150251502615027150281502915030150311503215033150341503515036150371503815039150401504115042150431504415045150461504715048150491505015051150521505315054150551505615057150581505915060150611506215063150641506515066150671506815069150701507115072150731507415075150761507715078150791508015081150821508315084150851508615087150881508915090150911509215093150941509515096150971509815099151001510115102151031510415105151061510715108151091511015111151121511315114151151511615117151181511915120151211512215123151241512515126151271512815129151301513115132151331513415135151361513715138151391514015141151421514315144151451514615147151481514915150151511515215153151541515515156151571515815159151601516115162151631516415165151661516715168151691517015171151721517315174151751517615177151781517915180151811518215183151841518515186151871518815189151901519115192151931519415195151961519715198151991520015201152021520315204152051520615207152081520915210152111521215213152141521515216152171521815219152201522115222152231522415225152261522715228152291523015231152321523315234152351523615237152381523915240152411524215243152441524515246152471524815249152501525115252152531525415255152561525715258152591526015261152621526315264152651526615267152681526915270152711527215273152741527515276152771527815279152801528115282152831528415285152861528715288152891529015291152921529315294152951529615297152981529915300153011530215303153041530515306153071530815309153101531115312153131531415315153161531715318153191532015321153221532315324153251532615327153281532915330153311533215333153341533515336153371533815339153401534115342153431534415345153461534715348153491535015351153521535315354153551535615357153581535915360153611536215363153641536515366153671536815369153701537115372153731537415375153761537715378153791538015381153821538315384153851538615387153881538915390153911539215393153941539515396153971539815399154001540115402154031540415405154061540715408154091541015411154121541315414154151541615417154181541915420154211542215423154241542515426154271542815429154301543115432154331543415435154361543715438154391544015441154421544315444154451544615447154481544915450154511545215453154541545515456154571545815459154601546115462154631546415465154661546715468154691547015471154721547315474154751547615477154781547915480154811548215483154841548515486154871548815489154901549115492154931549415495154961549715498154991550015501155021550315504155051550615507155081550915510155111551215513155141551515516155171551815519155201552115522155231552415525155261552715528155291553015531155321553315534155351553615537155381553915540155411554215543155441554515546155471554815549155501555115552155531555415555155561555715558155591556015561155621556315564155651556615567155681556915570155711557215573155741557515576155771557815579155801558115582155831558415585155861558715588155891559015591155921559315594155951559615597155981559915600156011560215603156041560515606156071560815609156101561115612156131561415615156161561715618156191562015621156221562315624156251562615627156281562915630156311563215633156341563515636156371563815639156401564115642156431564415645156461564715648156491565015651156521565315654156551565615657156581565915660156611566215663156641566515666156671566815669156701567115672156731567415675156761567715678156791568015681156821568315684156851568615687156881568915690156911569215693156941569515696156971569815699157001570115702157031570415705157061570715708157091571015711157121571315714157151571615717157181571915720157211572215723157241572515726157271572815729157301
573115732157331573415735157361573715738157391574015741157421574315744157451574615747157481574915750157511575215753157541575515756157571575815759157601576115762157631576415765157661576715768157691577015771157721577315774157751577615777157781577915780157811578215783157841578515786157871578815789157901579115792157931579415795157961579715798157991580015801158021580315804158051580615807158081580915810158111581215813158141581515816158171581815819158201582115822158231582415825158261582715828158291583015831158321583315834158351583615837158381583915840158411584215843158441584515846158471584815849158501585115852158531585415855158561585715858158591586015861158621586315864158651586615867158681586915870158711587215873158741587515876158771587815879158801588115882158831588415885158861588715888158891589015891158921589315894158951589615897158981589915900159011590215903159041590515906159071590815909159101591115912159131591415915159161591715918159191592015921159221592315924159251592615927159281592915930159311593215933159341593515936159371593815939159401594115942159431594415945159461594715948159491595015951159521595315954159551595615957159581595915960159611596215963159641596515966159671596815969159701597115972159731597415975159761597715978159791598015981159821598315984159851598615987159881598915990159911599215993159941599515996159971599815999160001600116002160031600416005160061600716008160091601016011160121601316014160151601616017160181601916020160211602216023160241602516026160271602816029160301603116032160331603416035160361603716038160391604016041160421604316044160451604616047160481604916050160511605216053160541605516056160571605816059160601606116062160631606416065160661606716068160691607016071160721607316074160751607616077160781607916080160811608216083160841608516086160871608816089160901609116092160931609416095160961609716098160991610016101161021610316104161051610616107161081610916110161111611216113161141611516116161171611816119161201612116122161231612416125161261612716128161291613016131161321613316134161351613616137
Tu, Do Hoang
Why another book on Operating Systems?
What you will learn in this book
What this book is not about
The organization of the book
Part I Preliminary
Domain documents
Problem domains
Documents for implementing a problem domain
Software Requirement Document
Software Specification
Documents for writing an x86 Operating System
The physical implementation of a bit
MOSFET transistors
Beyond transistors: digital logic gates
The theory behind logic gates
Logic Gate implementation: CMOS circuit
Beyond Logic Gates: Machine Language
Machine language
Why abstraction works
Why abstraction reduces complexity
What is a computer?
Field Programmable Gate Array
Application-Specific Integrated Circuit
x86 architecture
Intel Q35 Chipset
x86 Execution Environment
x86 Assembly and C
objdump
Reading the output
Intel manuals
Experiment with assembly code
Anatomy of an Assembly Instruction
Understand an instruction in detail
Example: jmp instruction
Examine compiled data
Pointer Data Types
Bit Field Data Type
String Data Types
Examine compiled code
Automatic variables
Function Call and Return
The Anatomy of a Program
Reference documents:
ELF header
Section header table
Understand Section in-depth
Program header table
Segments vs sections
Runtime inspection and debug
A sample program
Static inspection of a program
Command: info target/info file/info files
Command: maint info sections
Command: info functions
Command: info variables
Command: disassemble/disas
Command: x
Command: print/p
Runtime inspection of a program
Command: run
Command: break/b
Command: next/n
Command: step/s
Command: ni
Command: si
Command: until
Command: finish
Command: bt
Command: up
Command: down
Command: info registers
How debuggers work: A brief introduction
How breakpoints work
Single stepping
How a debugger understands high level source code
Part II Groundwork
x86 Boot Process
Using BIOS services
Example Bootloader
Compile and load
Loading a program from bootloader
Floppy Disk Anatomy
Read and load sectors from a floppy disk
Improve productivity with scripts
Automate build with GNU Make
GNU Make Syntax summary
Automate debugging steps with GDB script
Linking and loading on bare metal
Understand relocations with readelf
Sym.Value
Sym. Name
Crafting ELF binary with linker scripts
Example linker script
Understand the custom ELF structure
Manipulate the program segments
C Runtime: Hosted vs Freestanding
Debuggable bootloader on bare metal
Debuggable program on bare metal
Loading an ELF binary from a bootloader
Debugging the memory layout
Testing the new binary
Part III Kernel Programming
x86 Descriptors
Basic operating system concepts
Hardware Abstraction Layer
System programming interface
The need for an Operating System
Userspace and kernel space
Memory Segment
Segment Descriptor
Types of Segment Descriptors
Code and Data descriptors
Task Descriptor
Interrupt Descriptor
Descriptor Scope
Global Descriptor
Local Descriptor
Segment Selector
Enhancement: Bootloader with descriptors
Context switch
Preemptive vs Non-preemptive
Process states
procfs
Task: x86 concept of a process
Task Data Structure
Task State Segment
Process Implementation
Major Plan
Stage 1: Switch to a task from bootloader
Stage 2: Switch to a task with one function
Stage 3: Switch to a task with many functions
Milestone: Code Refactor
Example: Ex2 filesystem
You've probably asked yourself at least once how an operating
system is written from the ground up. You might even have years
of programming experience under your belt, yet your understanding
of operating systems may still be a collection of abstract
concepts not grounded in actual implementation. To those who've
never built one, an operating system may seem like magic: a
mysterious thing that can control hardware while handling a
programmer's requests via the API of their favorite programming
language. Learning how to build an operating system seems
intimidating and difficult; no matter how much you learn, it
never feels like you know enough. You're probably reading this
book right now to gain a better understanding of operating
systems to be a better software engineer.
If that is the case, this book is for you. By going through this
book, you will be able to find the missing pieces that are
essential and enable you to implement your own operating system
from scratch! Yes, from scratch without going through any
existing operating system layer to prove to yourself that you are
an operating system developer. You may ask, "Isn't it more
practical to learn the internals of Linux?". Yes and no.
Learning Linux can help your workflow at your day job. However,
if you follow that route, you still won't achieve the ultimate
goal of writing an actual operating system. By writing your own
operating system, you will gain knowledge that you will not be
able to glean just from learning Linux.
Here's a list of some benefits of writing your own OS:
• You will learn how a computer works at the hardware level, and
you will learn to write software to manage that hardware
directly.
• You will learn the fundamentals of operating systems, allowing
you to adapt to any operating system, not just Linux.
• To hack on Linux internals suitably, you'll need to write at
least one operating system on your own. This is just like
applications programming: to write a large application, you'll
need to start with simple ones.
• You will open pathways to various low-level programming domains
such as reverse engineering, exploits, building virtual
machines, game console emulation and more. Assembly language
will become one of your most indispensable tools for low-level
analysis. (But that does not mean you have to write your
operating system in Assembly!)
• Writing an operating system is fun!
There are many books and courses on this topic made by famous
professors and experts out there already. Who am I to write a
book on such an advanced topic? While it's true that many quality
resources exist, I find them lacking. Do any of them show you how
to compile your C code and the C runtime library independent of
an existing operating system? Most books on operating system
design and implementation only discuss the software side; how the
operating system communicates with the hardware is skipped.
Important hardware details are skipped, and it's difficult for a
self-learner to find relevant resources on the Internet. The aim
of this book is to bridge that gap: not only will you learn how
to program hardware directly, but also how to read official
documents from hardware vendors to program it. You no longer have
to seek out resources to help yourself interpret hardware manuals
and documentation: you can do it yourself. Lastly, I wrote this
book from an autodidact's perspective. I made this book as
self-contained as possible so you can spend more time learning
and less time guessing or seeking out information on the Internet.
One of the core focuses of this book is to guide you through the
process of reading official documentation from vendors to
implement your software. Official documents from hardware vendors
like Intel are critical for implementing an operating system or
any other software that directly controls the hardware. At a
minimum, an operating system developer needs to be able to
comprehend these documents and implement software based on a set
of hardware requirements. Thus, the first chapter is dedicated to
discussing relevant documents and their importance.
Another distinct feature of this book is that it is "Hello World"
centric. Most examples revolve around variants of a "Hello World"
program, which will acquaint you with core concepts. These
concepts must be learned before attempting to write an operating
system. Anything beyond a simple "Hello World" example gets in
the way of teaching the concepts, thus lengthening the time spent
on getting started writing an operating system.
Let's dive in. With this book, I hope to provide enough
foundational knowledge that will open doors for you to make sense
of other resources. This book will be especially beneficial to
students who've just finished their first C/C++ course. Imagine
how cool it would be to show prospective employers that you've
already built an operating system.
• Basic knowledge of circuits
– Basic Concepts of Electricity: atoms, electrons, protons,
neutrons, current flow.
– Ohm's law
If you are unfamiliar with these concepts, you can quickly
learn them here: http://www.allaboutcircuits.com/textbook/, by
reading chapter 1 and chapter 2.
• C programming. In particular:
– Variable and function declarations/definitions
– While and for loops
– Pointers and function pointers
– Fundamental algorithms and data structures in C
• Linux basics:
– Know how to navigate directories with the command line
– Know how to invoke a command with options
– Know how to pipe output to another program
• Touch typing. Since we are going to use Linux, touch typing
helps. I know typing speed does not relate to problem-solving,
but at least your typing speed should be fast enough not to let
it get in the way and degrade the learning experience.
In general, I assume that the reader has basic C programming
knowledge, and can use an IDE to build and run a program.
• How to write an operating system from scratch by reading
hardware datasheets. In the real world, you will not be able to
consult Google for a quick answer.
• Write code independently. It's pointless to copy and paste
code. Real learning happens when you solve problems on your
own. Some examples are provided to help kick start your work,
but most problems are yours to conquer. However, the solutions
are available online for you to consult after you have given them
a good try.
• A big picture of how each layer of a computer relates to the
others, from hardware to software.
• How to use Linux as a development environment and common tools
for low-level programming.
• How a program is structured so that an operating system can run it.
• How to debug a program running directly on hardware with gdb
and QEMU.
• Linking and loading on bare metal x86_64, with pure C. No
standard library. No runtime overhead.
• Electrical Engineering: The book discusses some concepts from
electronics and electrical engineering only to the extent of
how software operates on bare metal.
• How to use Linux or any OS types of books: Though Linux is used
as a development environment and as a medium to demonstrate
high-level operating system concepts, it is not the focus of
this book.
• Linux Kernel development: There are already many high-quality
books out there on this subject.
• Operating system books focused on algorithms: This book focuses
more on an actual hardware platform - Intel x86_64 - and how to
write an OS that utilizes the OS support from the hardware.
Part 1 provides a foundation for learning operating system development.
• Chapter 1 briefly explains the importance of domain
documents. Documents are crucial for the learning experience,
so they deserve a chapter.
• Chapter 2 explains the layers of abstractions from hardware
to software. The idea is to provide insight into how code
runs physically.
• Chapter 3 provides the general architecture of a computer,
then introduces a sample computer model that you will use to
write an operating system.
• Chapter 4 introduces the x86 assembly language through the
use of the Intel manuals, along with commonly used
instructions. This chapter gives detailed examples of how
high-level syntax corresponds to low-level assembly, enabling
you to read generated assembly code comfortably. It is
necessary to read assembly code when debugging an operating system.
• Chapter 5 dissects ELF in detail. Only by understanding the
structure of a program at the binary level can you build one
that runs on bare metal.
• Chapter 6 introduces the gdb debugger with extensive examples
of commonly used commands. After acquainting the reader with
gdb, it provides insight into how a debugger works. This
knowledge is essential for building a debuggable program on
bare metal.
Part 2 presents how to write a bootloader to bootstrap a
kernel. Hence the name "Groundwork". After mastering this part,
the reader can continue with the next part, which is a guide
for writing an operating system. However, if the reader does not
like the presentation, he or she can look elsewhere, such as
the OSDev Wiki: http://wiki.osdev.org/.
• Chapter 7 introduces what the bootloader is, how to write one
in assembly, and how to load it on QEMU, a hardware emulator.
This process involves typing repetitive and long commands, so
GNU Make is applied to improve productivity by automating the
repetitive parts and simplifying the interaction with the
project. This chapter also demonstrates the use of GNU Make
in context.
• Chapter 8 introduces linking by explaining the relocation
process when combining object files. In addition to a
bootloader and an operating system written in C, this is the
last piece of the puzzle required for building debuggable
programs on bare metal, including the bootloader written in
Assembly and an operating system written in C.
Part 3 provides guidance on how to write an operating system,
as you should implement an operating system on your own and be
proud of your creation. The guidance consists of simple and
coherent explanations of the necessary concepts, from hardware to
software, to implement the features of an operating system.
Without such guidance, you will waste time gathering
information spread through various documents and the Internet.
It then provides a plan on how to map the concepts to code.
Thank you, my beloved family. Thank you, the contributors.
In the real world, software engineering is not only focused on
software, but also the problem domain it is trying to solve.
A problem domain is the part of the world where the computer is to
produce effects, together with the means available to produce
them, directly or indirectly. (Kovitz, 1999)
A problem domain is anything outside of programming
that a software engineer needs to understand to produce correct
code that can achieve the desired effects. "Directly" means
anything that the software can control to produce the
desired effects, e.g. keyboards, printers, monitors, other
software... "Indirectly" means anything not part of the software
but relevant to the problem domain e.g. appropriate people to be
informed by the software when some event happens, students that
move to correct classrooms according to the schedule generated by
the software. To write a finance application, a software engineer
needs to learn sufficient finance concepts to understand the
requirements of a customer and implement such requirements
correctly.
Requirements are the effects that the machine is to exert in the
problem domain by virtue of its programming.
Programming alone is not too complicated; programming to solve a
problem domain is. (We refer to "programming" here as being able
to write code in a language, without necessarily having any or
all software engineering knowledge.) Not only does a software
engineer need to understand how to implement the software, but
also the problem domain that it tries to solve, which might
require in-depth expert knowledge. The
software engineer must also select the right programming
techniques that apply to the problem domain he is trying to
solve because many techniques that are effective in one domain
might not be in another. For example, many types of applications
do not require highly performant code, but a short time to
market; in this case, interpreted languages are widely popular
because they satisfy that need. However, for writing huge 3D
games or an operating system, compiled languages are dominant
because they can generate the most efficient code required for
such domains.
Often, it is too much for a software engineer to learn
non-trivial domains (that might require a bachelor's degree or
above to understand). Also, it is easier for a domain expert to
learn enough programming to break down the problem domain into
parts small enough for the software engineers to implement.
Sometimes, domain experts implement the software themselves.
[Figure 0.1: Problem domains: Software and Non-software.]
One example of such a scenario is the domain that is presented in
this book: operating systems. A certain amount of electrical
engineering (EE) knowledge is required to implement an operating
system. If a computer science (CS) curriculum does not include
minimum EE courses, students in that curriculum have
little chance to implement a working operating system. Even if
they can implement one, either they need to invest a significant
amount of time to study on their own, or they fill code in a
predefined framework just to understand high-level algorithms.
For that reason, EE students have an easier time implementing an
OS, as they only need to study a few core CS courses. In fact,
only "C programming" and "Algorithms and Data Structures" classes
are usually enough to get them started writing code for device
drivers, and later generalizing it into an operating system.
[Figure: Operating System domain.]
One thing to note is that software is its own problem domain. A
problem domain does not necessarily divide between software and
itself. Compilers, 3D graphics, games, cryptography, artificial
intelligence, etc., are parts of software engineering domains
(actually it is more of a computer science domain than a software
engineering domain). In general, a software-exclusive domain
creates software to be used by other software. Operating System
is also such a domain, but it overlaps with other domains such as
electrical engineering. To implement an operating system
effectively, it is necessary to learn enough of the external domain.
How much learning is enough for a software engineer? At the
minimum, a software engineer should be knowledgeable enough to
understand the documents prepared by hardware engineers for using
(i.e. programming) their devices.
Learning a programming language, even C or Assembly, does not
mean a software engineer can automatically be good at hardware
programming or any related low-level programming domains. One can
spend 10 years, 20 years or his entire life writing C/C++ code,
and he still cannot write an operating system, simply because of
his ignorance of the relevant domain knowledge. Just like learning
English does not mean a person automatically becomes good at
reading Math books written in English. Much more than that is
needed. Knowing one or two programming languages is not enough.
If a programmer writes software for a living, he had better be
specialized in one or two problem domains outside of software if
he does not want his job taken by domain experts who learn
programming in their spare time.
Documents for implementing a problem domain
Documents are essential for learning a problem domain (and
actually, anything) since information can be passed down in a
reliable way. It is evident that written text has been used
for thousands of years to pass knowledge from generation to
generation. Documents are integral parts of non-trivial
projects. Without the documents:
• New people will find it much harder to join a project.
• It is harder to maintain a project because people may forget
important unresolved bugs or quirks in their system.
• It is challenging for customers to understand the product they
are going to use. However, documents do not need to be written
in book format. They can be anything from HTML format to a
database displayed by a graphical user interface. Important
information must be stored somewhere safe and readily accessible.
There are many types of documents. However, to facilitate the
understanding of a problem domain, these two documents need to be
written: the software requirement document and the software
specification.
A software requirement document includes both a list of
requirements and a description of the problem domain (Kovitz, 1999).
Software solves a business problem. Which problems to solve is
requested by a customer. These requests make up a list of
requirements that our software needs to fulfill. However,
an enumerated list of features is seldom useful in delivering
software. As stated in the previous section, the tricky part is
not programming alone but programming according to a problem
domain. The bulk of software design and implementation depends
upon the knowledge of the problem domain. The better the domain
is understood, the higher the quality of the software can be. For
example, building houses has been practiced for thousands of
years and is well understood, so it is easy to build a
high-quality house; software is no different. Code that is
difficult to understand
is usually due to the author's ignorance of a problem domain. In
the context of this book, we seek to understand the low-level
working of various hardware devices.
Because software quality depends upon an understanding of the
problem domain, a software requirement document should always
include a description of the problem domain.
Be aware that software requirements are not:
What vs How
"what" and "how" are vague terms. What is the "what"? Is it
nouns only? If so, what if a customer requires his software to
perform specific steps of operations, such as the purchasing
procedure on a website? Does it include "verbs" now? However,
isn't the "how" supposed to be the step-by-step operations?
Anything can be the "what" and anything can be the "how".
Software requirement document is all about the problem domain.
It should not be a high-level description of an implementation.
Some problems might seem straightforward to map directly from
their domain description to the structure of an implementation:
• Users are given a list of books in a drop-down menu to
choose from.
• Books are stored in a linked list.
In the future, instead of a drop-down menu, all books might be
listed directly on a page as thumbnails. Books might be
reimplemented as a graph, where each node is a book, to find
related books once a recommender is added in the next version.
The requirement document then needs updating again to remove all
the outdated implementation details. This requires additional
effort to maintain the requirement document, and when the effort
of syncing with the implementation becomes too much, the
developers give up on documentation, and everyone starts ranting
about how useless documentation is.
More often than not there is no straightforward one-to-one
mapping. For example, a regular computer user expects OS to be
something that runs some program with GUI, or their favorite
computer games. But for such requirements, an operating system
is implemented as multiple layers, each hides the details from
the upper layers. To implement an operating system, a large
body of knowledge from multiple fields is required, especially
if the operating system runs on non-PC devices.
It's better to put anything related to the problem domain in
the requirement document. A good way to test the quality of a
requirement document is to hand it to the domain expert for
proofreading, to see whether he can understand the material
thoroughly. A requirement document is also useful as a help
document later, or at least makes writing one much easier.
A software specification document states rules relating desired
behavior of the output devices to all possible behavior of the
input devices, as well as any rules that other parts of the
problem domain must obey (Kovitz, 1999).
Simply put, a software specification is interface design, with
constraints for the problem domain to follow, e.g. the software
can accept only certain types of input, such as being designed
to accept English but no other language. For a hardware device,
a specification is always needed, as software depends on its
hardwired behaviors. In fact, hardware specifications are usually
well-defined, down to the tiniest details. It needs to be that
way because once hardware is physically manufactured, there is no
going back, and if defects exist, the damage to the company is
devastating, both financially and in reputation.
Note that, similar to a requirement document, a specification
only concerns interface design. If implementation details leak
in, it becomes a burden to keep the specification in sync with
the actual implementation, and it is soon abandoned.
Another important remark is that, though a specification document
is important, it does not have to be produced before the
implementation. It can be prepared in any order: before or after
a complete implementation; or at the same time with the
implementation, when some part is done, and the interface is
ready to be recorded in the specification. Regardless of the
method, what matters is a complete specification at the end.
Documents for writing an x86 Operating System
When the problem domain is different from the software domain,
the requirement document and the specification are usually
separate. However, if the problem domain is inside software, the
specification most often includes both, and their contents can be
mixed with each other. As the previous sections demonstrated the
importance of documents, to implement an OS we will need to
collect relevant documents to gain sufficient domain knowledge.
These documents are as follows:
• Intel® 64 and IA-32 Architectures Software Developer's Manual
(Volume 1, 2, 3)
• Intel® 3 Series Express Chipset Family Datasheet
• System V Application Binary Interface
Aside from Intel's official website, the website of this book
also hosts the documents for convenience. (Intel may change the
links to the documents as they update their website, so this book
doesn't contain any links to the documents, to avoid confusion
for readers.)
Intel documents divide the requirement and specification sections
clearly, but call the sections by different names. The section
corresponding to the requirement document is called "Functional
Description", which consists mostly of domain description; for
the specification, the "Register Description" section describes
all programming interfaces. Both documents carry no unnecessary
implementation details (as it should be, since those details are
trade secrets). Intel documents are also great examples of how to
write requirements and specifications well, as explained in this
chapter.
Other than the Intel documents, other documents will be
introduced in the relevant chapters.
This chapter gives an intuition on how hardware and software are
connected, and how software is represented physically.
All electronic devices, from simple to complex, manipulate the
flow of electrical current to achieve desired effects in the real
world. Computers are no exception. When we write software, we
indirectly manipulate electrical current at the physical level,
in such a way that the underlying machine produces the desired
effects. To understand the process, we consider a simple light
bulb. A light bulb can change between two states, on and off,
with a switch, periodically: an off means the number 0, and an on
means 1.
[MarginFigure 1: A lightbulb]
However, one problem is that such a switch requires manual
intervention from a human. What is needed is an automatic switch
based on the voltage level, as described above. To enable
automatic switching of electrical signals, a device called the
transistor was invented by William Shockley, John Bardeen and
Walter Brattain. This invention started the whole computer
industry.
At the core, a transistor is just a resistor whose value can
vary based on an input voltage. [MarginFigure: Modern transistor]
With this property, a transistor can be used as a current
amplifier (more voltage, less resistance) or switch electrical
signals off and on (block and unblock an electron flow) based on
a voltage level. At 0 V, no current can pass through a
transistor, so it acts like a circuit with an open switch
(light bulb off) because the resistance is enough to block
the electrical flow. Similarly, at +3.5 V, current can flow
through a transistor because the resistance is lessened,
effectively enabling electron flow; thus it acts like a circuit
with a closed switch. (If you want a deeper explanation of
transistors, e.g. how electrons move, you should look at the
video "How semiconductors work" on Youtube, by Ben Eater.)
A bit has two states: 0 and 1, and it is the building block of
all digital systems and software. Similar to a light bulb that
can be turned on and off, bits are made out of this electrical
stream from the power source: bit 0 is represented with 0 V (no
electron flow), and bit 1 with +3.5 V to +5 V (electron flow). A
transistor implements a bit correctly, as it can regulate the
electron flow based on the voltage level.
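As a loose software analogy (a sketch of ours, not a circuit from
this book), one can think of a digital input stage as a threshold
function that maps a voltage to a logic level; the 3.5 V cutoff
below simply reuses the illustrative voltage mentioned above:

#include <stdio.h>

/* Illustrative threshold only: voltages at or above it read as logic 1. */
#define LOGIC_HIGH_THRESHOLD_V 3.5

/* Map an idealized input voltage to a bit, the way a digital input
   stage treats anything above its threshold as 1 and the rest as 0. */
static int bit_from_voltage(double volts)
{
    return volts >= LOGIC_HIGH_THRESHOLD_V ? 1 : 0;
}

int main(void)
{
    double samples[] = { 0.0, 1.2, 3.5, 5.0 };
    for (int i = 0; i < 4; i++)
        printf("%.1f V -> bit %d\n", samples[i], bit_from_voltage(samples[i]));
    return 0;
}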
The classic transistor opened a whole new world of micro digital
devices. Prior to the invention, vacuum tubes - which are just
fancier light bulbs - were used to represent 0 and 1, and
required a human to turn them on and off. MOSFET, or
Metal–Oxide–Semiconductor Field-Effect Transistor, invented in
1959 by Dawon Kahng and Martin M. (John) Atalla at Bell Labs, is
an improved version of the classic transistor that is more
suitable for digital devices, as it requires a shorter switching
time between the two states 0 and 1, is more stable, consumes
less power and is easier to produce.
There are also two types of MOSFETs analogous to two types of
transistors: n-MOSFET and p-MOSFET. n-MOSFET and p-MOSFET are
also called NMOS and PMOS transistors for short.
All digital devices are designed with logic gates. A logic gate
is a device that implements a boolean function. Each logic gate
includes a number of inputs and an output. All computer
operations are built from the combinations of logic gates, which
are just combinations of boolean functions. [MarginFigure:
Example: NAND gate]
Logic gates accept only binary inputs[footnote:
Input that is either a 0 or 1.
] and produce binary outputs. In other words, logic gates are
functions that transform binary values. Fortunately, a branch of
math that deals exclusively with binary values already existed,
called Boolean Algebra, developed in the 19[superscript:th] century
by George Boole. With a sound mathematical theory as a
foundation, logic gates were created. As logic gates implement
Boolean functions, a set of Boolean functions is functionally complete
[margin:
functionally complete
]functionally complete if all other Boolean functions can be
constructed from this set. Later, Charles Sanders
Peirce (during 1880 -- 1881) proved that either NOR or NAND alone is
enough to create all other Boolean logic functions. Thus NOR and
NAND gates are functionally complete Peirce (1933)
. Gates are simply the implementations of Boolean logic
functions, therefore a NAND or NOR gate is enough to implement all
other logic gates. The simplest gates a CMOS circuit can implement
are inverters (NOT gates), and from the inverters come NAND
gates. With NAND gates, we can implement everything else. This is
why the invention of transistors, and then of CMOS circuits,
revolutionized the computer industry.[margin:
If you want to understand why and how we can create all Boolean
functions and a computer from NAND gates, I suggest the course
Build a Modern Computer from First Principles: From Nand to
Tetris available on Coursera: https://www.coursera.org/learn/build-a-computer
. Go even further, after the course, you should take the series
Computational Structures on Edx.
We should realize and appreciate how powerful the Boolean functions
available in all programming languages really are.
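To make the functional completeness of NAND concrete, here is a
minimal C sketch (an illustration only, not a model of any real
circuit) that builds NOT, AND, OR and XOR out of a single nand
function operating on the bit values 0 and 1:
#include <stdio.h>

/* A NAND "gate" on single bits: returns 0 only when both inputs are 1. */
static int nand(int a, int b)     { return !(a && b); }

/* Every other gate is just a combination of NAND gates. */
static int not_gate(int a)        { return nand(a, a); }
static int and_gate(int a, int b) { return nand(nand(a, b), nand(a, b)); }
static int or_gate(int a, int b)  { return nand(nand(a, a), nand(b, b)); }
static int xor_gate(int a, int b)
{
    return or_gate(and_gate(a, not_gate(b)), and_gate(not_gate(a), b));
}

int main(void)
{
    /* Print the full truth table of the derived gates. */
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("a=%d b=%d  NOT a=%d  AND=%d  OR=%d  XOR=%d\n",
                   a, b, not_gate(a), and_gate(a, b),
                   or_gate(a, b), xor_gate(a, b));
    return 0;
}
Running the program prints the familiar truth tables, confirming
that every gate above reduces to combinations of NAND alone.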
Underlying every logic gate is a circuit called [margin:
]CMOSCMOS - Complementary MOSFET. CMOS consists of two
complementary transistors, NMOS and PMOS. The simplest CMOS
circuit is an inverter or a NOT gate:
From the NOT gate, a NAND gate can be created:
From the NAND gate, we have all other gates. As demonstrated, such
simple circuitry performs the logical operators of day-to-day
programming languages, e.g. the NOT operator ~ is executed directly
by an inverter circuit, the operator & is executed by an AND circuit,
and so on. Code does not run on magic or a black box. In contrast,
code execution is precise and transparent, often as simple as
running some hardwired circuit. When we write software, we simply
manipulate electrical current at the physical level to run
appropriate circuits to produce desired outcomes. However, this
whole process somehow does not relate to any thought involving
electrical current. That is the real magic and will be explained later.
One interesting property of CMOS is that a k-input gate uses k
PMOS and k NMOS transistors (Wakerly, 1999). All logic gates are
built by pairs of NMOS and PMOS transistors, and gates are the
building blocks of all digital devices from simple to complex,
including any computer. Thanks to this pattern, it is possible to
separate the actual physical circuit implementation from the
logical implementation. Digital designs are done by designing
with logic gates, which are later "compiled" into physical circuits.
In fact, later we will see that logic gates become a language
that describes how circuits operate. Understanding how CMOS works
is important to understand how a computer is designed, and as a
consequence, how a computer works[footnote:
Again, if you want to understand how logic gates make a computer,
consider the suggested courses on Coursera and Edx earlier.
Finally, an implemented circuit with its wires and transistors is
stored physically in a package called a chip. A chipchip is a
substrate that an integrated circuit is etched onto. However, a
chip also refers to a completely packaged integrated circuit in
the consumer market. Depending on the context, it is understood
differently.[float MarginFigure:
74HC00 chip physical view
<Graphics file: C:/Users/Tu Do/os01/book_src/images/02/74hc00_nxp_physical.jpg>
74HC00 is a chip with four 2-input NAND gates. The chip comes
with 8 input pins and 4 output pins, 1 pin for connecting to a
voltage source and 1 pin for connecting to the ground. This
device is the physical implementation of NAND gates that we can
physically touch and use. But instead of just a single gate, the
chip comes with 4 gates that can be combined. Each combination
enables a different logic function, effectively creating other
logic gates. This feature is what makes the chip popular.
74HC00 logic diagrams (Source: 74HC00 datasheet, http://www.nxp.com/documents/data_sheet/74HC_HCT00.pdf
[Sub-Figure a:
Logic diagram of 74HC00
<Graphics file: C:/Users/Tu Do/os01/book_src/images/02/7400_block_diagram.png>
] [float Figure:
[Sub-Figure b:
Logic diagram of one NAND gate
<Graphics file: C:/Users/Tu Do/os01/book_src/images/02/7400_logic_diagram.png>
Each of the gates above is just a simple NAND circuit with its
electron flows, as demonstrated earlier. Yet, many of these
NAND-gate chips combined can build a simple computer.
Software, at the physical level, is just electron flows.
How can the above gates be created with a 74HC00? It is
simple: as every gate has 2 input pins and 1 output pin, we can
feed the output of one NAND gate to an input of another NAND
gate, thus chaining NAND gates together to produce the diagrams above.
Being built upon gates, which only accept a series of 0s and 1s, a
hardware device only understands 0 and 1. However, a device
only accepts 0 and 1 in a systematic way. [margin:
]Machine languageMachine language is a collection of unique bit
patterns that a device can identify, each causing the device to
perform a corresponding action. A machine instruction is one such
unique bit pattern. In a computer system, the device with such a
language that controls all activities going on inside a computer is
called a CPU - Central Processing Unit. For example, in the x86
architecture, certain bit patterns tell a CPU to add two numbers,
and the pattern 11110100 (the hlt instruction) tells it to halt the
computer. In the early days of computers, people had to write
programs entirely in binary.
Why does such a bit pattern cause a device to do something? The
reason is that underlying each instruction is a small circuit
that implements the instruction. Similar to how a
function/subroutine in a computer program is called by its name,
a bit pattern is the name of a little function inside a CPU that
gets executed when the CPU encounters it.
Note that the CPU is not the only device with its own language. CPU is
just a name for the hardware device that controls a
computer system. A hardware device may not be a CPU but can still have
its own language. A device with its own machine language is a
programmable device, since a user can use the language to command
the device to perform different actions. For example, a printer
has its own set of commands for instructing it how to print a page.
<exa:74HC00-chip-can>A user can use the 74HC00 chip without knowing
its internals, only the interface for using the device. First,
we need to know its layout:
74HC00 Pin Layout (Source: 74HC00 datasheet, http://www.nxp.com/documents/data_sheet/74HC_HCT00.pdf
<Graphics file: C:/Users/Tu Do/os01/book_src/images/02/7400_pin_configuration.pdf>
Then, the functionality of each pin:
[float Table:
[Table 1:
Pin Description (Source: 74HC00 datasheet, http://www.nxp.com/documents/data_sheet/74HC_HCT00.pdf
+-----------------------------+---------------+-----------------+
| Symbol | Pin | Description |
+------------------------------+---------------+----------------+
| 1A to 4A | 1, 4, 9, 12 | data input |
| 1B to 4B | 2, 5, 10, 13 | data input |
| 1Y to 4Y | 3, 6, 8, 11 | data output |
| GND | 7 | ground (0 V) |
| V[subscript:cc][subscript:] | 14 | supply voltage |
Finally, how to use the pins:
Functional Description
+------------+--------+
| Input | Output |
+-----+------+--------+
| nA | nB | nY |
| L | X | H |
| X | L | H |
| H | H | L |
• n is a number, either 1, 2, 3, or 4
• H = HIGH voltage level; L = LOW voltage level; X = don't care.
]The functional description provides a truth table with all
possible pin inputs and outputs, which also describes the usage
of all pins in the device. A user need not know the
implementation; such a table suffices to use the device. We can
say that the truth table above is the machine language of the
device. Since the device is digital, its language is a
collection of binary strings:
• The device has 8 input pins, and this means it accepts binary
strings of 8 bits.
• The device has 4 output pins, and this means it produces
binary strings of 4 bits from the 8-bit inputs.
The set of input strings is what the device understands, and
the set of output strings is what the device can speak.
Together, they make up the language of the device. Even though
this device is simple, its language contains quite a few
binary strings: 2^{8} + 2^{4} = 272. However, this
number is a tiny fraction of that of a complex device like a CPU,
with hundreds of pins.
When left as is, the 74HC00 is simply a NAND device with two
4-bit inputs[footnote:
Or simply a 4-bit NAND gate, as it can only accept 4 bits of input
at the maximum.
+--------+-----------------------------------------------+----------------------+
| | Input | Output |
+--------+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+----+
| Pin | 1A | 1B | 2A | 2B | 3A | 3B | 4A | 4B | 1Y | 2Y | 3Y | 4Y |
| Value | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 1 |
The inputs and outputs as visually presented:
Pins when receiving digital signals that correspond to a binary
string. Green signals are inputs; blue signals are outputs.
] <Graphics file: C:/Users/Tu Do/os01/book_src/images/02/7400_bin_string1.pdf>
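As a thought experiment (not a driver for the real chip), the truth
table above can be modeled in C: a function that takes the eight
input bits 1A...4B and produces the four output bits 1Y...4Y,
exactly as the device would for the input pattern shown above.
#include <stdio.h>

/* Model of one 74HC00 package: four independent 2-input NAND gates.
   in[0..7]  correspond to pins 1A, 1B, 2A, 2B, 3A, 3B, 4A, 4B
   out[0..3] correspond to pins 1Y, 2Y, 3Y, 4Y                      */
static void hc00(const int in[8], int out[4])
{
    for (int n = 0; n < 4; n++)
        out[n] = !(in[2 * n] && in[2 * n + 1]);   /* nY = NAND(nA, nB) */
}

int main(void)
{
    /* Same input pattern as in the table: 1A=1, 1B=1, 2A=0, 2B=0, ... */
    int in[8] = {1, 1, 0, 0, 1, 1, 0, 0};
    int out[4];

    hc00(in, out);
    printf("1Y=%d 2Y=%d 3Y=%d 4Y=%d\n", out[0], out[1], out[2], out[3]);
    /* prints 1Y=0 2Y=1 3Y=0 4Y=1, matching the table above */
    return 0;
}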
On the other hand, if an OR gate is implemented, we can only build
a 2-input OR gate from a 74HC00, as it requires 3 NAND gates: 2
input NAND gates and 1 output NAND gate. Each input NAND gate
represents only a 1-bit input of the OR gate. In the following
figure, the pins of each input NAND gate are always set to the
same value (either both inputs are A or both inputs are B) to
represent a single-bit input for the final OR gate:
Truth table of the OR logic diagram (C = NAND(A, A), D = NAND(B, B), Y = NAND(C, D)).
+----+----+----+----+---+
| A | B | C | D | Y |
| 0 | 0 | 1 | 1 | 0 |
| 0 | 1 | 1 | 0 | 1 |
| 1 | 0 | 0 | 1 | 1 |
| 1 | 1 | 0 | 0 | 1 |
To implement a 4-bit OR gate, we need a total of four 74HC00
chips configured as OR gates, packaged as a single chip as in
figure [or-chip-74hc00].
4-bit OR chip made from four 74HC00 devices
]<or-chip-74hc00>
<Graphics file: C:/Users/Tu Do/os01/book_src/images/02/4-bit-or-gate-layout.pdf>
Assembly language is the symbolic representation of binary
machine code, giving bit patterns mnemonic names. It was a
vast improvement over the days when programmers had to write 0s and
1s. For example, instead of writing 11110100, a programmer simply
writes hlt to stop a computer. Such an abstraction makes instructions
executed by a CPU easier to remember: more instructions
could be memorized, less time was spent looking up the CPU manual to
find instructions in bit form, and as a result, code was written faster.
Understanding assembly language is crucial for low-level programming
domains, even to this day. The more instructions a programmer
wants to understand, the deeper the understanding of the machine
architecture that is required.
We can build a device with 2 assembly instructions:
or <op1>, <op2>
nand <op1>, <op2>
• or accepts two 4-bit operands. This corresponds to the 4-bit
OR gate device built from four 74HC00 chips.
• nand accepts two 4-bit operands. This corresponds to a single
74HC00 chip, left as is.
Essentially, the gates in the example [exa:74HC00-chip-can]
implement the instructions. Up to this point, we only specify
input and output and manually feed them to a device. That is, to
perform an operation:
• Pick a device by hand.
• Manually put electrical signals into the pins.
First, we want to automate the process of device selection.
That is, we want to simply write an assembly instruction and have
the device that implements the instruction selected automatically.
Solving this problem is easy:
• Give each instruction an index in binary code, called
operation code or opcode for short, and embed it as part of
input. The value for each instruction is specified as in
table [ex-ins-ops].[float MarginTable:
[MarginTable 1:
Instruction-Opcode mapping.
]<ex-ins-ops>
+--------------+-------------+
| Instruction | Binary Code |
| nand | 00 |
| or | 01 |
Each input now contains additional data at the beginning: an
opcode. For example, the instruction:
nand 1100, 1100
corresponds to the binary string 0011001100. The first two
bits, 00, encode a nand instruction, as listed in table [ex-ins-ops].
• Add another device to select a device, based on a binary code
peculiar to an instruction.
Such a device is called a decoder, an important component in a
CPU that decides which circuit to use. In the above example,
when feeding 0011001100 to the decoder, because the opcode is
00, the data are sent to the NAND device for computing.
Finally, writing assembly code is just an easier way to write
binary strings that a device can understand. When we write
assembly code and save it in a text file, a program called an [margin:
]assemblerassembler translates the text file into binary strings
that a device can understand. So, how can an assembler exist in
the first place? The first assembler in the world had to be written
directly in binary code. For the next version, life was easier: the
programmers wrote the assembler in assembly code, then used the
first version to translate it.
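To make the idea less abstract, here is a hedged sketch of what a
very primitive assembler for the hypothetical two-instruction
device of this chapter could look like in C. The mnemonics, the
opcodes 00 and 01, and the 10-bit text format are the ones assumed
earlier in this chapter, not any real instruction set.
#include <stdio.h>
#include <string.h>

/* Hypothetical assembler: "nand a, b" -> opcode 00, "or a, b" -> opcode 01,
   where a and b are written as 4-bit binary strings.                       */
static int assemble(const char *line, char out[11])
{
    char mnemonic[8], op1[5], op2[5];

    /* Parse "<mnemonic> <4 bits>, <4 bits>". */
    if (sscanf(line, "%7s %4[01], %4[01]", mnemonic, op1, op2) != 3)
        return -1;

    const char *opcode;
    if      (strcmp(mnemonic, "nand") == 0) opcode = "00";
    else if (strcmp(mnemonic, "or")   == 0) opcode = "01";
    else return -1;

    snprintf(out, 11, "%s%s%s", opcode, op1, op2);  /* e.g. "0011001100" */
    return 0;
}

int main(void)
{
    char bits[11];
    if (assemble("nand 1100, 1100", bits) == 0)
        printf("%s\n", bits);    /* prints 0011001100 */
    return 0;
}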
These binary strings are then stored in another device, from which
they can later be retrieved and sent to a decoder. A storage device[margin:
]storage device is the device that stores machine instructions,
which is an array of circuits for saving 0 and 1 states.
A decoder is built out of logic gates similar to other digital
devices. However, a storage device can be anything that can
store 0 and 1 and is retrievable. A storage device can be a
magnetic device that uses magnetism to store information, or
it can be made out of electrical circuits. Regardless of
the technology used, as long as the device can store data and
the data can be retrieved, it suffices. Indeed, modern
devices are so complex that it is impossible and unnecessary to
understand every implementation detail. Instead, we only need
to learn the interfaces, e.g. the pins, that the devices
expose.
A computer essentially implements this process:
• Fetch an instruction from a storage device.
• Decode the instruction.
• Execute the instruction.
Or in short, a fetch -- decode -- executefetch -- decode --
execute cycle. The above device is extremely rudimentary, but
it already represents a computer with a fetch -- decode --
execute cycle. More instructions can be implemented by adding
more devices and allocating more opcodes for the instructions,
then update the decoder accordingly. The Apollo Guidance
Computer, a digital computer produced for the Apollo space
program from 1961 -- 1972, was built entirely with NOR gates -
the other choice to NAND gate for creating other logic gates.
Similarly, if we keep improving our hypothetical device, it
eventually becomes a full-fledge computer.
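The cycle can be sketched in C for our hypothetical device. The
opcodes (00 for nand, 01 for or) and the 10-bit instruction format
are the ones invented earlier in this chapter; a real CPU works on
the same principle, just with far more instructions and devices.
#include <stdio.h>

#define OP_NAND 0x0   /* opcode 00 */
#define OP_OR   0x1   /* opcode 01 */

int main(void)
{
    /* "Storage device": each instruction is 10 bits wide, a 2-bit
       opcode followed by two 4-bit operands.                       */
    unsigned program[] = {
        (OP_NAND << 8) | (0xC << 4) | 0xC,   /* nand 1100, 1100 */
        (OP_OR   << 8) | (0xC << 4) | 0x3,   /* or   1100, 0011 */
    };

    for (size_t pc = 0; pc < sizeof program / sizeof program[0]; pc++) {
        unsigned ins    = program[pc];           /* fetch  */
        unsigned opcode = (ins >> 8) & 0x3;      /* decode */
        unsigned a = (ins >> 4) & 0xF, b = ins & 0xF;
        unsigned y;

        switch (opcode) {                        /* execute: the decoder
                                                    routes data to a device */
        case OP_NAND: y = ~(a & b) & 0xF; break;
        case OP_OR:   y =  (a | b) & 0xF; break;
        default:      y = 0;              break;
        }
        printf("pc=%zu result=%x\n", pc, y);
    }
    return 0;
}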
Assembly language is a step up from writing 0s and 1s. As time
went by, people realized that many pieces of assembly code had
repeating patterns of usage. It would be nice if, instead of
writing the repeating blocks of code all over again in all
places, we could simply refer to such blocks of code with
easier-to-use text forms. For example, a block of assembly code
checks whether one variable is greater than another and, if so,
executes a block of code, else executes another block of code; in
C, such a block of assembly code is represented by an if
statement that is close to human language.
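As a hedged illustration of such a pattern, the C function below
encodes a comparison plus two alternative blocks; the assembly in
the trailing comment is only a hand-written sketch of the skeleton
a compiler expands it into, assuming the common x86-64 calling
convention (a in edi, b in esi), not the exact output of any
particular compiler.
/* A C "if" statement close to human language...                 */
int max(int a, int b)
{
    if (a > b)
        return a;
    else
        return b;
}
/* ...is expanded by a compiler into the same compare-and-branch
   skeleton every time (hand-written sketch):
       cmp  edi, esi    ; compare the two variables
       jle  .else       ; if the test fails, jump to the else-block
       mov  eax, edi    ; then-block: return a
       ret
   .else:
       mov  eax, esi    ; else-block: return b
       ret                                                        */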
Repeated assembly patterns are generalized into a new language.
<Graphics file: C:/Users/Tu Do/os01/book_src/images/02/asm_to_proglang.pdf>
People created text forms to represent common blocks of assembly
code, such as the if syntax above, then wrote a program to
translate the text forms into assembly code. The program that
translates such text forms to machine code is called a [margin:
]compilercompiler:
Any software logic a programming language can implement, hardware
can also implement. The reverse is also true: any hardware logic
that is implemented in a circuit can be reimplemented in a
programming language. The simple reason is that programming
languages, or assembly languages, or machine languages, or logic
gates are just languages to express computations. It is
impossible for software to implement something hardware is
incapable of, because a programming language is just a simpler way
to use the underlying hardware. At the end of the day,
programming languages are translated to machine instructions that
are valid to a CPU. Otherwise, the code is not runnable and thus
useless software. In reverse, software can do everything the hardware
(that runs the software) can, as programming languages are just an
easier way to use the hardware.
In reality, even though all languages are equivalent in
computational power, not all of them are equally suited to
expressing the same programs.
Programming languages vary between two ends of a spectrum: high
level and low level.
The higher level a programming language is, the more distant it
becomes from the hardware. In some high-level programming languages,
such as Python, a programmer cannot manipulate underlying
hardware, despite being able to deliver the same computations as
low-level programming languages. The reason is that high-level
languages want to hide hardware details to free programmers from
dealing with irrelevant details not related to current problem
domains. Such convenience, however, is not free: it requires
software to carry extra code for managing hardware details
(e.g. memory), thus making the code run slower, and it makes
hardware programming difficult or impossible. The more
abstractions a programming language imposes, the more difficult
it is to write low-level software, such as hardware drivers or
an operating system. This is the reason why C is usually a
language of choice for writing an operating system, since C is
just a thin wrapper of the underlying hardware, making it easy to
understand how exactly a hardware device runs when executing a
certain piece of C code.
Each programming language represents a way of thinking about
programs. Higher-level programming languages help to focus on
problem domains that are not related to hardware at all, and
where programmer performance is more important than computer
performance. Lower-level programming languages help to focus on
the inner workings of a machine, and thus are best suited for problem
domains that involve controlling hardware. That is why so many
languages exist. Use the right tools for the right job to achieve
the best results.
AbstractionAbstraction is a technique for hiding complexity that
is irrelevant to the problem in context. Imagine, for example,
writing programs with no layer other than the lowest one: raw
circuits. Not only would a person need an in-depth understanding of
how circuits work, it would also be much more obscure to design a
circuit, because the designer must look at the raw circuits while
thinking at a higher level, such as logic gates. It is a distracting
process, as the designer must constantly translate the ideas into
circuits. It is better if a designer simply thinks his
high-level ideas through, and later translates the ideas into
circuits. Not only is this more efficient, it is also more
accurate, as the designer can focus all his efforts on verifying
the design with high-level thinking. When a new designer arrives,
he can easily understand the high-level designs, and thus can
continue to develop or maintain existing systems.
In all the layers, abstraction manifests itself:
• Logic gates abstract away the details of CMOS.
• Machine language abstracts away the details of logic gates.
• Assembly language abstracts away the details of machine
language.
• Programming languages abstract away the details of assembly
language.
We see repeating patterns of how lower layers build upper layers:
• A lower layer has a recurring pattern. Then, this recurring
pattern is extracted, and a language is built on top of it.
• A higher layer strips away layer-specific (non-recurring)
details to focus on the recurring details.
• The recurring details are given a new and simpler language than
the languages of the lower layers.
What we should realize is that every layer is just a more convenient
language to describe the layer below. Only after a description is
fully created in the language of the higher layer is it then
implemented in the language of the lower layer.
• The CMOS layer has a recurring pattern that makes sure logic gates
are reliably translated to CMOS circuits: a k-input gate uses k
PMOS and k NMOS transistors (Wakerly, 1999). Since digital
devices use CMOS exclusively, a language arose to describe
higher-level ideas while hiding the CMOS circuits: logic gates.
• Logic gates hide the language of circuits and focus on how
to implement primitive Boolean functions and combine them to
create new functions. All logic gates receive input and
generate output as binary numbers. Thanks to this recurring
pattern, logic gates are hidden away behind a new language:
Assembly, which is a set of predefined binary patterns that
cause the underlying gates to perform an action.
• Soon, people realized that many recurring patterns arose from
within Assembly language. Repeated blocks of Assembly code
appear in Assembly source files to express the same or
similar ideas. Many such ideas could be reliably
translated into Assembly code. Thus, the ideas were extracted
and built into the high-level programming languages that
every programmer learns today.
Recurring patterns are the key to abstraction. Recurring patterns
are why abstraction works. Without them, no language can be
built, and thus no abstraction. Fortunately, humans have already
developed a systematic discipline for studying patterns:
Mathematics. As quoted from the British mathematician G. H. Hardy
(2005):
A mathematician, like a painter or a poet, is a maker of
patterns. If his patterns are more permanent than theirs, it is
because they are made with ideas.
Isn't a mathematical formula a representation of a pattern?
Doesn't a variable represent values with the same properties, given
some constraints? Mathematics provides a formal system to identify and
describe existing patterns in nature. For that reason, this
system can certainly be applied in the digital world, which is
just a subset of the real world. Mathematics can be used as a
common language to make translation between layers easier, and to
help with the understanding of the layers.
Abstraction by building languages certainly boosts productivity
by stripping away details irrelevant to a problem. Imagine writing
programs with no layer other than the lowest one: with
circuits. This is how complexity emerges: when high-level ideas
are expressed in a lower-level language, as the example above
demonstrated. Unfortunately, this is the case with software, as
programming languages at the moment put more emphasis on the
software itself rather than on the problem domains. That is, without
prior knowledge, code written in a language is unable to express by
itself the knowledge of its target domain. In other words, a language
is expressive if its syntax is designed to express the problem
domain it is trying to solve: the what it does rather than the how
it does it. Consider this example:
Graphviz (http://www.graphviz.org/) is visualization software
that provides a language, called dot, for describing graphs:
As can be seen, the code expresses perfectly how the
graph is connected. Even a non-programmer can understand and
use such a language easily. If it were implemented in C, it
would be more troublesome, and this is assuming that the
functions for drawing graphs are already available. To draw a
line, in C we might write something like:
draw_line(a, b);
However, it is still verbose compared with:
a -> b;
Also, a and b must be defined in C, compared to the implicit
nodes in the dot language. However, if we do not factor in the
verbosity, then C still has a limitation: it cannot change its
syntax to suit the problem domain. A domain-specific language
might even be more verbose, but it makes a domain more
understandable. If a problem domain must be expressed in C,
then it is constrained by the syntax of C. Since C is not a
language specialized for a particular problem domain, but a
general-purpose programming language, the domain knowledge is
buried within the implementation details. As a result, a C
programmer is needed to decipher and extract the domain
knowledge. If the domain knowledge cannot be extracted,
then the software cannot be further developed.
Linux is full of applications controlled by domain-specific
languages, whose configurations are placed in the /etc directory,
such as a web server. Instead of reprogramming the software, a
domain-specific language is made for configuring it.
In general, code that can express a problem domain must be
understandable by a domain expert. Even within the software
domain, building a language out of repeated programming patterns
is useful. It helps people become aware of the existence of such
patterns in code, thus making software easier to maintain, as the
software structure is visible as a language. Only a programming
language that is capable of morphing itself to suit a problem domain
can achieve that goal. Such a language is called a programmable
programming language. Unfortunately, this approach of making
software structure visible is not favored among programmers, as a
new language must be made out of it, along with a new toolchain to
support it. Thus, software structure and domain knowledge are
buried within code written in the syntax of a general-purpose
language, and if a programmer is not familiar or even aware of
the existence of a code pattern, then it is hopeless to
understand the code. A prime example is reading C code that
controls hardware, e.g. an operating system: if a programmer
knows absolutely nothing about hardware, then it is impossible to
read and write operating system code in C, even if he has
20 years of experience writing application code in C.
With abstraction, a software engineer can also understand the
inner workings of a device without specialized knowledge of
physical circuit design, which enables the software engineer to write
code that controls the device. The separation between logical and
physical implementation also entails that gate designs can be
reused even when the underlying technology changes. For
example, in some distant future biological computers could be a
reality, and gates might be implemented not as CMOS but as some kind
of biological cell, e.g. a living cell; with either technology,
electrical or biological, as long as logic gates are physically
realized, the same computer design could be implemented.
To write lower-level code, a programmer must understand the
architecture of a computer. It is similar to writing programs with a
software framework: one must know what kinds of problems the
framework solves, and how to use the framework through the software
interfaces it provides. But before getting to the definition of
computer architecture, we must understand what exactly a computer
is, as many people still think that a computer is the regular
computer we put on a desk, or at best, a server. Computers come in
various shapes and sizes, including devices that people never
imagine are computers and on which code can run.
A [margin:
]computercomputer is a hardware device that consists of at least
a processor (CPU), a memory device and input/output interfaces.
All computers can be grouped into two types:
A single-purpose computer is a computer built at the hardware
level for specific tasks, for example dedicated application
encoders/decoders, timers, or image/video/sound processors.
A general-purpose computer is a computer that can be programmed
(without modifying its hardware) to emulate various features of
single-purpose computers.
A server[margin:
]server is a general-purpose high-performance computer with huge
resources to provide large-scale services for a broad audience.
The audience are people with their personal computers connected to
the server.
Blade servers. Each blade server is a computer with a modular
design optimize for the use of physical space and energy. The
enclosure of blade servers is called a chassis.(Source: [https://commons.wikimedia.org/wiki/File:Wikimedia_Foundation_Servers-8055_35.jpg||Wikimedia]
, author: Victorgrigas)
<Graphics file: C:/Users/Tu Do/os01/book_src/images/03/Wikimedia_Foundation_Servers-8055_35.jpg>
]desktop computerdesktop computer is a general-purpose computer
with an input and output system designed for a human user, with
moderate resources, enough for regular use. The input system
usually includes a mouse and a keyboard, while the output system
usually consists of a monitor that can display a large amount of
pixels. The computer is enclosed in a chassis large enough for
putting various computer components such as a processor, a
motherboard, a power supply, a hard drive, etc.
A typical desktop computer.
<Graphics file: C:/Users/Tu Do/os01/book_src/images/03/computer-158675.svg>
A mobile computer[margin:
]mobile computer is similar to a desktop computer with fewer
resources but can be carried around.
Game consoles are similar to desktop computers but are optimized
for gaming. Instead of a keyboard and a mouse, the input system
of a game console consists of game controllers, devices with a
few buttons for controlling on-screen objects; the output system
is a television. The chassis is similar to a desktop computer's but
is smaller. Game consoles use custom processors and graphics
processors, but these are similar to the ones in desktop computers. For
example, the first Xbox uses a custom Intel Pentium III
processor.
Handheld game consoles are similar to game consoles, but
incorporate both the input and output systems along with the
computer in a single package.
An [margin:
]embedded computerembedded computer is a single-board or
single-chip computer with limited resources designed for
integrating into larger hardware devices. [float MarginFigure:
An Intel 82815 Graphics and Memory Controller Hub embedded on a
PC motherboard. (Source: [https://commons.wikimedia.org/wiki/File:Intel_82815_GMCH.jpg||Wikimedia]
, author: Qurren)
<Graphics file: C:/Users/Tu Do/os01/book_src/images/03/Intel_82815_GMCH.jpg>
][float MarginFigure:
A PIC microcontroller. (Soure: [http://www.microchip.com/wwwproducts/en/PIC18F4620||Microchip]
<Graphics file: C:/Users/Tu Do/os01/book_src/images/03/medium-PIC18F4620-PDIP-40.png>
]microcontrollerMicrocontroller is an embedded computer designed
for controlling other hardware devices. A microcontroller is
mounted on a chip. Microcontrollers are general-purpose
computers, but with limited resources, so that they are only able to
perform one or a few specialized tasks. These computers are used
for a single purpose, but they are still general-purpose, since it
is possible to program them to perform different tasks, depending
on the requirements, without changing the underlying hardware.
Another type of embedded computer is system-on-chip. A
system-on-chipsystem-on-chip is a full computer on a single chip.
Though a microcontroller is housed on a chip, its purpose is
different: to control some hardware. A microcontroller is usually
simpler and more limited in hardware resources as it specializes
only in one purpose when running, whereas a system-on-chip is a
general-purpose computer that can serve multiple purposes. A
system-on-chip can run like a regular desktop computer that is
capable of loading an operating system and run various
applications. A system-on-chip is typically present in a
smartphone, such as the Apple A5 SoC used in the iPad 2 and iPhone 4S,
or the Qualcomm Snapdragon used in many Android phones.[float MarginFigure:
Apple A5 SoC
<Graphics file: C:/Users/Tu Do/os01/book_src/images/03/128px-Apple_A5_Chip.jpg>
Be it a microcontroller or a system-on-chip, there must be an
environment where these devices can connect to other devices.
This environment is a circuit board called a PCBPCB -- Printed Circuit Board
Printed Circuit Board. A printed circuit boardPrinted Circuit Board
is a physical board that contains lines and pads to enable
electron flows between electrical and electronic components.
Without a PCB, devices cannot be combined to create a larger
device. As long as these devices are hidden inside a larger
device and contribute to that larger device, which operates at a
higher-level layer for a higher-level purpose, they are embedded
devices. Writing a program for an embedded device is therefore
called embedded programmingembedded programming. Embedded
computers are used in automatically controlled devices including
power tools, toys, implantable medical devices, office machines,
engine control systems, appliances, remote controls and other
types of embedded systems.
The line between a microcontroller and a system-on-chip is
blurry. As hardware keeps getting more powerful, a
microcontroller can get enough resources to run a minimal
operating system on it for multiple specialized purposes. In
contrast, a system-on-chip is powerful enough to handle the job
of a microcontroller. However, using a system-on-chip as a
microcontroller would not be a wise choice, as the price rises
significantly, and we also waste hardware resources, since the
software written for a microcontroller requires little computing
power.
Field Programmable Gate Array
]Field Programmable Gate ArrayField Programmable Gate Array (FPGA
FPGA) is a hardware device with an array of reconfigurable gates that
makes the circuit structure programmable after it is shipped away from
the factory[footnote:
This is why it is called a Field Programmable Gate Array: it is
changeable "in the field" where it is applied.
]. Recall that in the previous chapter, each 74HC00 chip can be
configured as a gate, and a more sophisticated device can be
built by combining multiple 74HC00 chips. In a similar manner,
each FPGA device contains thousands of chips called logic blocks;
a logic block is a more complicated chip than a 74HC00 that can be
configured to implement a Boolean logic function. These logic
blocks can be chained together to create a high-level hardware
feature. This high-level feature is usually a dedicated algorithm
that needs high-speed processing.
[Figure 0.10:
FPGA Architecture (Source: [http://www.ni.com/tutorial/6097/en/||National Instruments]
<Graphics file: C:/Users/Tu Do/os01/book_src/images/03/fpga_400x212.jpg>
Digital devices can be designed by combining logic gates, without
regard to the actual circuit components, since the physical circuits
are just multiples of CMOS circuits. Digital hardware, including
various components in a computer, is designed by writing code,
like a regular programmer, by using a language to describe how
gates are wired together. This language is called a Hardware
Description LanguageHardware Description Language. Later the
hardware description is compiled to a description of connected
electronic components called a netlistnetlist, which is a more
detailed description of how gates are connected.
The difference between FPGA and other embedded computers is that
programs in FPGA are implemented at the digital logic level,
while programs in embedded computers like microcontrollers or
system-on-chip devices are implemented at assembly code level. An
algorithm written for an FPGA device is a description of the
algorithm in logic gates; the FPGA device then follows this
description to configure itself to run the algorithm. An
algorithm written for a microcontroller is in assembly
instructions that a processor can understand and act accordingly.
FPGA is applied in cases where specialized operations are
unsuitable and costly to run on a regular computer, such as
real-time medical image processing, cruise control systems,
circuit prototyping, video encoding/decoding, etc. These
applications require high-speed processing that is not achievable
with a regular processor, because a processor wastes a significant
amount of time executing many non-specialized instructions -
which might add up to thousands of instructions or more - to
implement a specialized operation, thus requiring more circuits at the
physical level to carry out the same operation. An FPGA device carries
no such overhead; instead, it runs a single specialized operation
implemented directly in hardware.
An Application-Specific Integrated CircuitApplication-Specific
Integrated Circuit (or ASICASIC) is a chip designed for a
particular purpose rather than for general-purpose use. ASIC does
not contain a generic array of logic blocks that can be
reconfigured to adapt to any operation like an FPGA; instead,
every logic block in an ASIC is made and optimized for the
circuit itself. FPGA can be considered as the prototyping stage
of an ASIC, and ASIC as the final stage of circuit production.
ASIC is even more specialized than FPGA, so it can achieve even
higher performance. However, ASICs are very costly to manufacture
and once the circuits are made, if design errors happen,
everything is thrown away, unlike the FPGA devices which can
simply be reprogrammed because of the generic gate array.
The previous section examined various classes of computers.
Regardless of shape and size, every computer is designed according to
an architecture, from high level to low level:
Computer Architecture = Instruction Set Architecture + Computer Organization + Hardware
At the highest level is the Instruction Set Architecture.
At the middle level is the Computer Organization.
At the lowest level is the Hardware.
An instruction setinstruction set is the basic set of commands
and instructions that a microprocessor understands and can carry out.
An Instruction Set ArchitectureInstruction Set Architecture, or ISA
ISA, is the design of an environment that implements an
instruction set. Essentially, it is a runtime environment similar to
the interpreters of high-level languages. The design includes
all the instructions, registers, interrupts, memory models (how
memory is arranged to be used by programs), addressing modes,
I/O... of a CPU. The more features (e.g. more instructions) a CPU
has, the more circuits are required to implement it.
]Computer organizationComputer organization is the functional
view of the design of a computer. In this view, the hardware
components of a computer are presented as boxes with inputs and
outputs that connect to each other and form the design of a
computer. Two computers may have the same ISA, but different
organizations. For example, both AMD and Intel processors
implement the x86 ISA, but the hardware components of each processor
that make up the environment for the ISA are not the same.
Computer organizations may vary depending on a manufacturer's
design, but they all originate from the Von Neumann
architecture[footnote:
John von Neumann was a mathematician and physicist who invented a
computer architecture.
Von-Neumann Architecture
<Graphics file: C:/Users/Tu Do/os01/book_src/images/03/von_neumann_architecture.pdf>
CPUCPU fetches instructions continuously from main memory and
executes them.
MemoryMemory stores program code and data.
BusBus are electrical wires for sending raw bits between the
above components.
I/O DevicesI/O Devices are devices that give input to a
computer, e.g. keyboard, mouse, sensors... and take the output
from a computer, e.g. a monitor takes information sent from the CPU
to display it, an LED turns on/off according to a pattern computed by
the CPU...
The Von Neumann computer operates by storing its instructions in
main memory, and the CPU repeatedly fetches those instructions into
its internal storage for execution, one after another. Data are
transferred between the CPU, memory and I/O devices through a data
bus, and the location to read from or write to in the devices is
transferred through the address bus by the CPU. This architecture
completely implements the fetch -- decode -- executefetch -- decode --
execute cycle.
The earliest computers were just exact implementations of the
Von Neumann architecture, with the CPU, memory and I/O devices
communicating through the same bus. Today, a computer has more
buses, each specialized in a type of traffic. However, at the
core, they still follow the Von Neumann architecture. To write an OS
for a Von Neumann computer, a programmer needs to be able to
understand and write code that controls the core components:
CPU, memory, I/O devices, and bus.
CPUCPU, or Central Processing UnitCentral Processing Unit, is the
heart and brain of any computer system. Understanding a CPU is
essential to writing an OS from scratch:
• To use other devices, a programmer needs to control the CPU to
use the programming interfaces of those devices. The CPU is the
only way, as the CPU is the only device a programmer can use directly
and the only device that understands code written by a
programmer.
• In a CPU, many OS concepts are already implemented directly in
hardware, e.g. task switching, paging. A kernel programmer
needs to know how to use the hardware features, to avoid
duplicating such concepts in software, thus wasting computer
resources.
• CPU built-in OS features boost both OS performance and
developer productivity because those features are actual
hardware, the lowest possible level, and developers are freed from
implementing such features themselves.
• To effectively use the CPU, a programmer needs to understand
the documentation provided by the CPU manufacturer. For example, [[http://www.intel.com/content/www/us/en/processors/architectures-software-developer-manuals.html||Intel® 64 and IA-32 Architectures Software Developer Manuals]
• After understanding one CPU architecture well, it is easier to
learn other CPU architectures.
A CPU is an implementation of an ISA, effectively the
implementation of an assembly language (and depending on the CPU
architecture, the language may vary). Assembly language is one of
the interfaces that are provided for software engineers to
control a CPU, and thus control a computer. But how can every
computer device be controlled with access only to the CPU?
The simple answer is that a CPU can communicate with other
devices through the two interfaces below, thus commanding them what
to do:
Registers Registers[margin:
]are a hardware component for high-speed data access and
communication with other hardware devices. Registers allow
software to control hardware directly by writing to the registers
of a device, or to receive information from a hardware device by
reading from its registers.
Not all registers are used for communication with other
devices. In a CPU, most registers are used as high-speed
storage for temporary data. Other devices that a CPU can
communicate with always have a set of registers for interfacing with
the CPU.
Port Port[margin:
]is a specialized register in a hardware device used for
communication with other devices. When data are written to a
port, it causes a hardware device to perform some operation
according to the values written to the port. The difference between
a port and a register is that a port does not store data, but
delegates data to some other circuit.
These two interfaces are extremely important, as they are the
only interfaces for controlling hardware with software. Writing
device drivers is essentially learning the functionality of each
register and how to use them properly to control the device.
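As a concrete (though simplified) illustration, low-level code
typically talks to a device's ports with a pair of one-line helpers.
The sketch below uses gcc-style inline assembly for x86 port-mapped
I/O; port 0x3F8 is used merely as an example (it is conventionally
the first serial port), and code like this can only actually touch
hardware when it runs in a privileged environment such as a kernel.
#include <stdint.h>

/* Write one byte to an I/O port: the CPU puts the value on the bus and
   the addressed device reacts according to its own register layout.    */
static inline void outb(uint16_t port, uint8_t value)
{
    __asm__ volatile ("outb %0, %1" : : "a"(value), "Nd"(port));
}

/* Read one byte back from an I/O port. */
static inline uint8_t inb(uint16_t port)
{
    uint8_t value;
    __asm__ volatile ("inb %1, %0" : "=a"(value) : "Nd"(port));
    return value;
}

/* Hypothetical usage: send a character to the first serial port. */
void serial_putc(char c)
{
    outb(0x3F8, (uint8_t)c);
}
Writing a device driver is, in essence, learning which ports and
registers to read and write, and in what order, from the device's
documentation, and then wrapping those accesses in helpers like these.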
]MemoryMemory is a storage device that stores information. Memory
consists of many cells. Each cell is a byte with an address
number, so a CPU can use the address number to access an exact
location in memory. Memory is where software instructions (in the
form of machine language) are stored and retrieved to be executed
by the CPU; memory also stores the data needed by software. Memory
in a Von Neumann machine does not distinguish between which bytes
are data and which bytes are software instructions. It is up to
the software to decide, and if somehow data bytes are fetched and
executed as instructions, the CPU still executes them if such bytes
represent valid instructions, but this will produce undesirable
results. To a CPU, there is no code and data; both are merely
different types of data for it to act on: one tells it how to do
something in a specific manner, and the other is the necessary
material for carrying out such an action.
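A small experiment illustrates the point that instructions are just
bytes: the program below treats its own main function as plain data
and dumps the first machine-code bytes it finds there. This is only
an illustrative sketch; strictly speaking, converting a function
pointer to a data pointer is not guaranteed by the C standard, and
the exact bytes printed differ per compiler and machine.
#include <stdio.h>

int main(void)
{
    /* Point a byte pointer at main itself: to the CPU, and to us,
       the function body is nothing but a sequence of bytes.        */
    const unsigned char *code = (const unsigned char *)main;

    for (int i = 0; i < 16; i++)
        printf("%02x ", code[i]);
    printf("\n");
    return 0;
}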
The RAM is controlled by a device called a memory controllermemory controller
. Currently, most processors have this device embedded, so the
CPU has a dedicated memory bus connecting the processor to the
RAM. On older CPUs[footnote:
Prior to the CPUs produced in 2009.
], however, this device was located in a chip also known as the MCH
or Memory Controller HubMemory Controller Hub. In this case, the
CPU does not communicate directly with the RAM, but with the MCH
chip, and this chip then accesses the memory to read or write
data. The first option provides better performance, since there is
no middleman in the communication between the CPU and the RAM.
At the physical level, RAM is implemented as a grid of cells that
each contain a transistor and an electrical device called a [margin:
]capacitorcapacitor, which stores charge for short periods of
time. The transistor controls access to the capacitor; when
switched on, it allows a small charge to be read from or written
to the capacitor. The charge on the capacitor slowly dissipates,
requiring the inclusion of a refresh circuit to periodically read
values from the cells and write them back after amplification
from an external power source.
Bus[margin:
]Bus is a subsystem that transfers data between computer
components or between computers. Physically, buses are just
electrical wires that connect all components together, and each
wire transfers a single bit of data. The total number of wires is
called the bus width[margin:
]bus width, and is dependent on how many wires a CPU can support.
If a CPU can only accept 16 bits at a time, then the bus has 16
wires connecting a component to the CPU, which means the CPU
can only retrieve 16 bits of data at a time.
Hardware is a specific implementation of a computer. A line of
processors implement the same instruction set architecture and
use nearly identical organizations but differ in hardware
implementation. For example, the Core i7 family provides a model
for desktop computers that is more powerful but consumes more
energy, while another model for laptops is less performant but
more energy efficient. To write software for a hardware device,
we seldom need to understand the hardware implementation if
documentation is available. Computer organization and especially the
instruction set architecture are more relevant to an operating
system programmer. For that reason, the next chapter is devoted
to studying the x86 instruction set architecture in depth.
A chipsetchipset is a chip with multiple functions. Historically,
a chipset was actually a set of individual chips, each
responsible for one function, e.g. memory controller, graphics
controller, network controller, power controller, etc. As
hardware progressed, the set of chips was incorporated into a
single chip, thus becoming more space, energy, and cost efficient. In a
desktop computer, various hardware devices are connected to each
other through a PCB called a motherboardmotherboard. Each CPU
needs a compatible motherboard that can host it. Each motherboard
is defined by its chipset model, which determines the environment
that a CPU can control. This environment typically consists of:
• a slot or more for CPU
• a chipset of two chips which are the Northbridge and
Southbridge chips
– Northbridge chip is responsible for the high-performance
communication between CPU, main memory and the graphic card.
– Southbridge chip is responsible for the communication with
I/O devices and other devices that are not performance
sensitive.
• slots for memory sticks
• a slot or more for graphic cards.
• generic slots for other devices, e.g. network card, sound card.
• ports for I/O devices, e.g. keyboard, mouse, USB.
Motherboard organization.
]<mobo-organization>
<Graphics file: C:/Users/Tu Do/os01/book_src/images/03/Motherboard_diagram.svg>
To write a complete operating system, a programmer needs to
understand how to program these devices. After all, an operating
system manages hardware automatically to free application
programs from doing so. However, of all the components, learning to
program the CPU is the most important, as it is the component
present in any computer, regardless of what type of computer it is.
For this reason, the primary focus of this book will be on how to
program an x86 CPU. Even focusing solely on this device, a
reasonably good minimal operating system can be written. The
reason is that not all computers include all the devices as in a
normal desktop computer. For example, an embedded computer might
only have a CPU and limited internal memory, with pins for
getting input and producing an output; yet, operating systems
were written for such devices.
However, learning how to program an x86 CPU is a daunting task,
with 3 primary manuals written for it: almost 500 pages for
volume 1, over 2000 pages for volume 2 and over 1000 pages for
volume 3. It is an impressive feat for a programmer to master
every aspect of x86 CPU programming.
Q35 is an Intel chipset released in September 2007. Q35 is used as
an example of a high-level computer organization because later we
will use QEMU to emulate a Q35 system, which is the latest Intel
system that QEMU can emulate. Though released in 2007, Q35 is
relatively close to current hardware, and the knowledge can
still be reused for current chipset models. With a Q35 chipset,
the emulated CPU is also relatively up to date, with features
present in current-day CPUs, so we can use the latest software
manuals from Intel.
Figure [mobo-organization] shows a typical current-day motherboard
organization, with which Q35 shares a similar organization.
An execution environmentexecution environment is an environment
that provides the facility to make code executable. The execution
environment needs to address the following questions:
• Supported operations? data transfer, arithmetic, control,
floating-point...
• Where are operands stored? registers, memory, stack...
• How many explicit operands are there for each instruction? 0,
1, 2, or 3
• How is the operand location specified? register, immediate,
indirect, . . .
• What type and size of operands are supported? byte, int, float,
double, string, vector...
For the remainder of this chapter, please carry on reading
chapter 3, "Basic Execution Environment", in Intel Manual Volume 1.
In this chapter, we will explore assembly language, and how it
connects to C. But why should we do so? Isn't it better to trust
the compiler? Besides, no one writes assembly anymore.
Not quite. Surely, the compiler in its current state of the art
is trustworthy, and we do not need to write code in assembly
most of the time. A compiler can generate code, but as mentioned
previously, a high-level language is a collection of patterns of
a lower-level language. It does not cover everything that a
hardware platform provides. As a consequence, not every assembly
instruction can be generated by a compiler, so we still need to
write assembly code for these circumstances to access
hardware-specific features. Since hardware-specific features
require writing assembly code, debugging requires reading it. We
might spend even more time reading than writing. Working with
low-level code that interacts directly with hardware, assembly
code is unavoidable. Also, understanding how a compiler generates
assembly code can improve a programmer's productivity. For
example, if a job or school assignment requires us to write
assembly code, we can simply write it in C, then let gcc do the
hard work of writing the assembly code for us. We merely
collect the generated assembly code, modify it as needed and are done
with the assignment.
We will learn objdump extensively, along with how to use Intel
documents to aid in understanding x86 assembly code.
objdumpobjdump is a program that displays information about
object files. It will be handy later to debug incorrect layout
from manual linking. Now, we use objdump to examine how high
level source code maps to assembly code. For now, we ignore the
output and learn how to use the command first. It is simple to
use objdump :
$ objdump -d hello
-d option only displays assembled contents of executable
sections. A sectionsection is a block of memory that contains
either program code or data. A code section is executable by the
CPU, while a data section is not executable. Non-executable
sections, such as .data and .bss (for storing program data),
debug sections... are not displayed. We will learn more about
section when studying ELF binary file format in chapter [chap:The-Anatomy-of-a-program]
. On the other hand:
$ objdump -D hello
where the -D option displays the assembly contents of all sections.
If -D is used, -d is implicitly assumed. objdump is mostly used for
inspecting assembly code, so -d is the most useful option and the one
we will use most of the time.
The output overruns the terminal screen. To make it easy for
reading, send all the output to less:
$ objdump -d hello | less
To intermix source code and assembly, the binary must be compiled
with the -g option to include debugging information, then add the -S option:
$ objdump -S hello | less
The default syntax used by objdump is AT&T syntax. To change it
to the familiar Intel syntax:
$ objdump -M intel -D hello | less
When using -M option, option -D or -d must be explicitly
supplied. Next, we will use objdump to examine how compiled C
data and code are represented in machine code.
Finally, we will write a 32-bit kernel, therefore we will need to
compile a 32-bit binary and examine it in 32-bit mode:
$ objdump -M i386,intel -D hello | less
-M i386 tells objdump to display assembly content using 32-bit
layout. Knowing the difference between 32-bit and 64-bit is
crucial for writing kernel code. We will examine this matter
later on when writing our kernel.
The start of the output displays the file format of the object file:
hello: file format elf64-x86-64
After this line comes a series of disassembled sections:
Disassembly of section .interp:
Disassembly of section .note.ABI-tag:
Disassembly of section .note.gnu.build-id:
Finally, each disassembled section displays its actual content -
which is a sequence of assembly instructions - in the following format:
4004d6: 55 push rbp
• The first column is the address of an assembly instruction. In
the above example, the address is 0x4004d6.
• The second column is the assembly instruction in raw hex values. In
the above example, the value is 0x55.
• The third column is the assembly instruction. Depending on the
section, the assembly instruction might be meaningful or
meaningless. For example, if the assembly instructions are in a
.text section, then the assembly instructions are actual
program code. On the other hand, if the assembly instructions
are displayed in a .data section, then we can safely ignore the
displayed instructions. The reason is that objdump doesn't know
which hex values are code and which are data, so it blindly
translates every hex value into an assembly instruction. In the
above example, the assembly instruction is push rbp.
• The optional fourth column is a comment - which appears when there
is a reference to an address - to show where the address
originates. For example, the comment in blue:
lea r12,[rip+0x2008ee] # 600e10
<__frame_dummy_init_array_entry>
is to inform that the referenced address from [rip+0x2008ee] is
0x600e10, where the variable __frame_dummy_init_array_entry
resides.
A disassembled section may also contain labels. A label is
a name given to an assembly instruction. The label denotes the
purpose of an assembly block to a human reader, to make it easier
to understand. For example, the .text section carries many such
labels to denote where code in a program starts; the .text section
below carries two functions: _start and deregister_tm_clones. The
_start function starts at address 4003e0, which is annotated to the
left of the function name. Right below the _start label is the
instruction at address 4003e0. This means that a
label is simply a name for a memory address. The function
deregister_tm_clones shares the same format as every
function in the section.
00000000004003e0 <_start>:
4003e0: 31 ed xor ebp,ebp
4003e2: 49 89 d1 mov r9,rdx
4003e5: 5e pop rsi
...more assembly code....
0000000000400410 <deregister_tm_clones>:
400410: b8 3f 10 60 00 mov eax,0x60103f
400415: 55 push rbp
400416: 48 2d 38 10 60 00 sub rax,0x601038
The best way to understand and use assembly language properly is
to understand precisely the underlying computer architecture and
what each machine instruction does. To do so, the most reliable
source is to refer to the documents provided by the vendors. After all,
hardware vendors are the ones who make the machines. To
understand Intel's instruction set, we need the document "Intel
64 and IA-32 architectures software developer's manual combined
volumes 2A, 2B, 2C, and 2D: Instruction set reference, A-Z". The
document can be retrieved here: https://software.intel.com/en-us/articles/intel-sdm
• Chapter 1 provides brief information about the manual, and the
comment notations used in the book.
• Chapter 2 provides an in-depth explanation of the anatomy of an
assembly instruction, which we will investigate in the next
section.
• Chapters 3 to 5 provide the details of every instruction of the
x86_64 architecture.
• Chapter 6 provides information about safer mode extensions. We
won't need to use this chapter.
The first volume "Intel® 64 and IA-32 Architectures Software
Developer's Manual Volume 1: Basic Architecture" describes the
basic architecture and programming environment of Intel
processors. In the book, Chapter 5 gives the summary of all Intel
instructions, by listing instructions into different categories.
We only need to learn the general-purpose instructions listed in section
5.1 for our OS. Chapter 7 describes the purpose of each category.
Gradually, we will learn all of these instructions.
Read section 1.3 in volume 2, excluding sections 1.3.5 and 1.3.7.
The subsequent sections examine the anatomy of an assembly
instruction. To fully understand it, it is necessary to write code
and see the code in its actual form displayed as hex numbers. For
this purpose, we use the nasm assembler to write a few lines of
assembly code and see the generated machine code.
Suppose we want to see the machine code generated for this instruction:
jmp eax
We use an editor, e.g. Emacs, to create a new file, write the
code and save it as, e.g., test.asm. Then, in the terminal, run
the command:
$ nasm -f bin test.asm -o test
-f option specifies the file format, e.g. ELF, of the final
output file. But in this case, the format is bin, which means
this file is just a flat binary output without any extra
information. That is, the written assembly code is translated
to machine code as is, without the overhead of the metadata
from file format like ELF. Indeed, after compiling, we can
examine the output using this command:
$ hd test
hd (short for hexdump) is a program that displays the content
of a file in hex format[margin:
Though its name is short for hexdump, hd can also display in a
different base, e.g. binary, rather than hex.
]. We get the following output:
00000000 66 ff e0 |f..|
The file only consists of 3 bytes: 66 ff e0, which is
equivalent to the instruction jmp eax.
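To get a feel for what hd does, here is a minimal hexdump sketch in C (our own illustration, not the actual hd source; the default file name test is an assumption):
#include <stdio.h>

/* Minimal hexdump: prints each byte of a file in hex, 16 bytes per line. */
int main(int argc, char *argv[])
{
        FILE *f = fopen(argc > 1 ? argv[1] : "test", "rb");
        if (f == NULL)
                return 1;

        int c;
        unsigned long offset = 0;
        while ((c = fgetc(f)) != EOF) {
                if (offset % 16 == 0)
                        printf("%08lx ", offset);   /* offset column */
                printf("%02x ", c);                 /* one byte in hex */
                if (++offset % 16 == 0)
                        printf("\n");
        }
        printf("\n");
        fclose(f);
        return 0;
}
Running it on the 3-byte flat binary above prints 66 ff e0, the same bytes that hd shows.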
If we were to use elf as file format:
$ nasm -f elf test.asm -o test
It would be more challenging to learn and understand assembly
instructions with all the added noise[footnote:
The output from hd.
00000000 7f 45 4c 46 01 01 01 00 00 00 00 00 00 00 00 00
|.ELF............|
00000010 01 00 03 00 01 00 00 00 00 00 00 00 00 00 00 00
|................|
|@.......4.....(.|
000000a0 20 01 00 00 21 00 00 00 00 00 00 00 00 00 00 00 |
...!...........|
000000b0 01 00 00 00 00 00 00 00 11 00 00 00 02 00 00 00
000000c0 00 00 00 00 00 00 00 00 50 01 00 00 30 00 00 00
|........P...0...|
000000d0 04 00 00 00 03 00 00 00 04 00 00 00 10 00 00 00
000000e0 19 00 00 00 03 00 00 00 00 00 00 00 00 00 00 00
000000f0 80 01 00 00 0d 00 00 00 00 00 00 00 00 00 00 00
00000110 ff e0 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00000120 00 2e 74 65 78 74 00 2e 73 68 73 74 72 74 61 62
|..text..shstrtab|
00000130 00 2e 73 79 6d 74 61 62 00 2e 73 74 72 74 61 62
|..symtab..strtab|
00000160 01 00 00 00 00 00 00 00 00 00 00 00 04 00 f1 ff
00000180 00 74 65 73 74 2e 61 73 6d 00 00 00 00 00 00 00
|.test.asm.......|
Thus, it is better just to use flat binary format in this case,
to experiment instruction by instruction.
With such a simple workflow, we are ready to investigate the
structure of every assembly instruction.
Note: Using the bin format puts nasm by default into 16-bit mode.
To enable 32-bit code to be generated, we must add this line at
the beginning of an nasm source file:
bits 32
Chapter 2 of the instruction reference manual provides an
in-depth view of the instruction format. However, the amount of
information can easily overwhelm beginners, so this section provides
a gentler introduction before reading the actual chapter in the manual.
<Graphics file: C:/Users/Tu Do/os01/book_src/images/04/x86_instruction_format.pdf>
Recall that an assembly instruction is simply a series of bits.
The length of an instruction varies and depends on how
complicated the instruction is. What every instruction shares is the
common format described in the figure above, which divides the bits
of an instruction into smaller parts that encode different types
of information. These parts are:
Instruction Prefixes appear at the beginning of an
instruction. Prefixes are optional. A programmer can choose to
use a prefix or not because, in practice, a so-called prefix is
just another assembly instruction inserted before the
assembly instruction that the prefix applies to.
Instructions with 2-byte or 3-byte opcodes include the prefixes by default.
Opcode is a unique number that identifies an instruction. Each
opcode is given a mnemonic name that is human readable, e.g.
one of the opcodes for the instruction add is 04. When a CPU sees
the number 04 in its instruction cache, it sees the instruction add
and executes accordingly. An opcode can be 1, 2 or 3 bytes long and
can include an additional 3-bit field in the ModR/M byte when needed.
This instruction:
jmp [0x1234]
generates the machine code:
ff 26 34 12
The very first byte, 0xff is the opcode, which is unique to jmp
instruction.
ModR/M specifies the operands of an instruction. An operand can
be a register, a memory location or an immediate value. This
component of an instruction consists of 3 smaller parts:
• mod field, or modifier field, is combined with r/m field for
a total of 5 bits of information to encode 32 possible
values: 8 registers and 24 addressing modes.
• reg/opcode field encodes either a register operand, or
extends the Opcode field with 3 more bits.
• r/m field encodes either a register operand or can be
combined with mod field to encode an addressing mode.
The tables [mod-rm-16] and [mod-rm-32] list all possible 256
values of ModR/M byte and how each value maps to an addressing
mode and a register, in 16-bit and 32-bit modes.
+---------------------------------------------+-------+-------+-------+-------+-------+-------+-------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| r8(/r) | AL | CL | DL | BL | AH | CH | DH | BH |
| r16(/r) | AX | CX | DX | BX | SP | BP¹ | SI | DI |
| r32(/r) | EAX | ECX | EDX | EBX | ESP | EBP | ESI | EDI |
| mm(/r) | MM0 | MM1 | MM2 | MM3 | MM4 | MM5 | MM6 | MM7 |
| xmm(/r) | XMM0 | XMM1 | XMM2 | XMM3 | XMM4 | XMM5 | XMM6 | XMM7 |
| (In decimal) /digit (Opcode) | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| (In binary) REG = | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111 |
+---------------------------+--------+--------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Effective Address | Mod | R/M | Values of ModR/M Byte (In Hexadecimal) |
+---------------------------+--------+--------+-------+-------+-------+-------+-------+-------+-------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| [BX + SI] | 00 | 000 | 00 | 08 | 10 | 18 | 20 | 28 | 30 | 38 |
| [BX + DI] | | 001 | 01 | 09 | 11 | 19 | 21 | 29 | 31 | 39 |
| [BP + SI] | | 010 | 02 | 0A | 12 | 1A | 22 | 2A | 32 | 3A |
| [BP + DI] | | 011 | 03 | 0B | 13 | 1B | 23 | 2B | 33 | 3B |
| [SI] | | 100 | 04 | 0C | 14 | 1C | 24 | 2C | 34 | 3C |
| [DI] | | 101 | 05 | 0D | 15 | 1D | 25 | 2D | 35 | 3D |
| disp16² | | 110 | 06 | 0E | 16 | 1E | 26 | 2E | 36 | 3E |
| [BX] | | 111 | 07 | 0F | 17 | 1F | 27 | 2F | 37 | 3F |
| [BX + SI] + disp8³ | 01 | 000 | 40 | 48 | 50 | 58 | 60 | 68 | 70 | 78 |
| [BX + DI] + disp8 | | 001 | 41 | 49 | 51 | 59 | 61 | 69 | 71 | 79 |
| [BP + SI] + disp8 | | 010 | 42 | 4A | 52 | 5A | 62 | 6A | 72 | 7A |
| [BP + DI] + disp8 | | 011 | 43 | 4B | 53 | 5B | 63 | 6B | 73 | 7B |
| [SI] + disp8 | | 100 | 44 | 4C | 54 | 5C | 64 | 6C | 74 | 7C |
| [DI] + disp8 | | 101 | 45 | 4D | 55 | 5D | 65 | 6D | 75 | 7D |
| [BP] + disp8 | | 110 | 46 | 4E | 56 | 5E | 66 | 6E | 76 | 7E |
| [BX] + disp8 | | 111 | 47 | 4F | 57 | 5F | 67 | 6F | 77 | 7F |
| [BX + SI] + disp16 | 10 | 000 | 80 | 88 | 90 | 98 | A0 | A8 | B0 | B8 |
| [BX + DI] + disp16 | | 001 | 81 | 89 | 91 | 99 | A1 | A9 | B1 | B9 |
| [BP + SI] + disp16 | | 010 | 82 | 8A | 92 | 9A | A2 | AA | B2 | BA |
| [BP + DI] + disp16 | | 011 | 83 | 8B | 93 | 9B | A3 | AB | B3 | BB |
| [SI] + disp16 | | 100 | 84 | 8C | 94 | 9C | A4 | AC | B4 | BC |
| [DI] + disp16 | | 101 | 85 | 8D | 95 | 9D | A5 | AD | B5 | BD |
| [BP] + disp16 | | 110 | 86 | 8E | 96 | 9E | A6 | AE | B6 | BE |
| [BX] + disp16 | | 111 | 87 | 8F | 97 | 9F | A7 | AF | B7 | BF |
| EAX/AX/AL/MM0/XMM0 | 11 | 000 | C0 | C8 | D0 | D8 | E0 | E8 | F0 | F8 |
| ECX/CX/CL/MM1/XMM1 | | 001 | C1 | C9 | D1 | D9 | E1 | E9 | F1 | F9 |
| EDX/DX/DL/MM2/XMM2 | | 010 | C2 | CA | D2 | DA | E2 | EA | F2 | FA |
| EBX/BX/BL/MM3/XMM3 | | 011 | C3 | CB | D3 | DB | E3 | EB | F3 | FB |
| ESP/SP/AH/MM4/XMM4 | | 100 | C4 | CC | D4 | DC | E4 | EC | F4 | FC |
| EBP/BP/CH/MM5/XMM5 | | 101 | C5 | CD | D5 | DD | E5 | ED | F5 | FD |
| ESI/SI/DH/MM6/XMM6 | | 110 | C6 | CE | D6 | DE | E6 | EE | F6 | FE |
| EDI/DI/BH/MM7/XMM7 | | 111 | C7 | CF | D7 | DF | E7 | EF | F7 | FF |
1. The default segment register is SS for the effective addresses
containing a BP index, DS for other effective addresses.
2. The disp16 nomenclature denotes a 16-bit displacement that
follows the ModR/M byte and that is added to the index.
3. The disp8 nomenclature denotes an 8-bit displacement that
follows the ModR/M byte and that is sign-extended and added to
the index.
<mod-rm-16>
+---------------------------------------------+-------+-------+-------+-------+-------+-------+-------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| r8(/r) | AL | CL | DL | BL | AH | CH | DH | BH |
| r16(/r) | AX | CX | DX | BX | SP | BP | SI | DI |
| r32(/r) | EAX | ECX | EDX | EBX | ESP | EBP | ESI | EDI |
| mm(/r) | MM0 | MM1 | MM2 | MM3 | MM4 | MM5 | MM6 | MM7 |
| xmm(/r) | XMM0 | XMM1 | XMM2 | XMM3 | XMM4 | XMM5 | XMM6 | XMM7 |
| (In decimal) /digit (Opcode) | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| (In binary) REG = | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111 |
+---------------------------+--------+--------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Effective Address | Mod | R/M | Values of ModR/M Byte (In Hexadecimal) |
+---------------------------+--------+--------+-------+-------+-------+-------+-------+-------+-------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| [EAX] | 00 | 000 | 00 | 08 | 10 | 18 | 20 | 28 | 30 | 38 |
| [ECX] | | 001 | 01 | 09 | 11 | 19 | 21 | 29 | 31 | 39 |
| [EDX] | | 010 | 02 | 0A | 12 | 1A | 22 | 2A | 32 | 3A |
| [EBX] | | 011 | 03 | 0B | 13 | 1B | 23 | 2B | 33 | 3B |
| [-][-]¹ | | 100 | 04 | 0C | 14 | 1C | 24 | 2C | 34 | 3C |
| disp32² | | 101 | 05 | 0D | 15 | 1D | 25 | 2D | 35 | 3D |
| [ESI] | | 110 | 06 | 0E | 16 | 1E | 26 | 2E | 36 | 3E |
| [EDI] | | 111 | 07 | 0F | 17 | 1F | 27 | 2F | 37 | 3F |
| [EAX] + disp8³ | 01 | 000 | 40 | 48 | 50 | 58 | 60 | 68 | 70 | 78 |
| [ECX] + disp8 | | 001 | 41 | 49 | 51 | 59 | 61 | 69 | 71 | 79 |
| [EDX] + disp8 | | 010 | 42 | 4A | 52 | 5A | 62 | 6A | 72 | 7A |
| [EBX] + disp8 | | 011 | 43 | 4B | 53 | 5B | 63 | 6B | 73 | 7B |
| [-][-] + disp8 | | 100 | 44 | 4C | 54 | 5C | 64 | 6C | 74 | 7C |
| [EBP] + disp8 | | 101 | 45 | 4D | 55 | 5D | 65 | 6D | 75 | 7D |
| [ESI] + disp8 | | 110 | 46 | 4E | 56 | 5E | 66 | 6E | 76 | 7E |
| [EDI] + disp8 | | 111 | 47 | 4F | 57 | 5F | 67 | 6F | 77 | 7F |
| [EAX] + disp32 | 10 | 000 | 80 | 88 | 90 | 98 | A0 | A8 | B0 | B8 |
| [ECX] + disp32 | | 001 | 81 | 89 | 91 | 99 | A1 | A9 | B1 | B9 |
| [EDX] + disp32 | | 010 | 82 | 8A | 92 | 9A | A2 | AA | B2 | BA |
| [EBX] + disp32 | | 011 | 83 | 8B | 93 | 9B | A3 | AB | B3 | BB |
| [-][-] + disp32 | | 100 | 84 | 8C | 94 | 9C | A4 | AC | B4 | BC |
| [EBP] + disp32 | | 101 | 85 | 8D | 95 | 9D | A5 | AD | B5 | BD |
| [ESI] + disp32 | | 110 | 86 | 8E | 96 | 9E | A6 | AE | B6 | BE |
| [EDI] + disp32 | | 111 | 87 | 8F | 97 | 9F | A7 | AF | B7 | BF |
| EAX/AX/AL/MM0/XMM0 | 11 | 000 | C0 | C8 | D0 | D8 | E0 | E8 | F0 | F8 |
| ECX/CX/CL/MM1/XMM1 | | 001 | C1 | C9 | D1 | D9 | E1 | E9 | F1 | F9 |
| EDX/DX/DL/MM2/XMM2 | | 010 | C2 | CA | D2 | DA | E2 | EA | F2 | FA |
| EBX/BX/BL/MM3/XMM3 | | 011 | C3 | CB | D3 | DB | E3 | EB | F3 | FB |
| ESP/SP/AH/MM4/XMM4 | | 100 | C4 | CC | D4 | DC | E4 | EC | F4 | FC |
| EBP/BP/CH/MM5/XMM5 | | 101 | C5 | CD | D5 | DD | E5 | ED | F5 | FD |
| ESI/SI/DH/MM6/XMM6 | | 110 | C6 | CE | D6 | DE | E6 | EE | F6 | FE |
| EDI/DI/BH/MM7/XMM7 | | 111 | C7 | CF | D7 | DF | E7 | EF | F7 | FF |
1. The [-][-] nomenclature means a SIB byte follows the ModR/M byte.
2. The disp32 nomenclature denotes a 32-bit displacement that
follows the ModR/M byte (or the SIB byte if one is present) and
that is added to the index.
3. The disp8 nomenclature denotes an 8-bit displacement that
follows the ModR/M byte (or the SIB byte if one is present) and
that is sign-extended and added to the index.
How to read the table:
In an instruction, next to the opcode is a ModR/M byte. Then,
look up the byte value in this table to get the corresponding
operands in the row and column.
For example, an instruction uses this addressing mode:
jmp [0x1234]
Then, the machine code is:
ff 26 34 12
0xff is the opcode. Next to it, 0x26 is the ModR/M byte. Looking
up the value 26 in the 16-bit table [margin:
Remember, using bin format generates 16-bit code by default
], the row tells that the first operand is a disp16, which
means a 16-bit offset. Since the instruction does not have a
second operand, the column can be ignored.
Another example with two operands:
add eax, ecx
Then the machine code is:
01 c8
0x01 is the opcode. Next to it, c8 is the ModR/M byte. Looking up
the value c8 in the 16-bit table, the row tells that the first
operand is ax, and the column tells that the second operand is cx;
the column can't be ignored, as the second operand is part of the
instruction.
Why is the first operand in the row and the second in a column?
Let's break down the ModR/M byte, with an example value c8,
into bits:
+----------+---------------------+-------------+
| mod | reg/opcode | r/m |
+----+-----+----+----+-----------+----+----+---+
| 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 |
The mod field divides addressing modes into 4 different
categories. Combined further with the r/m field, exactly one
addressing mode can be selected from the rows of the table. If an
instruction only requires one operand, then the column can be
ignored. The reg/opcode field then provides the second operand, if
an instruction requires one.
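To make the field layout concrete, here is a small C sketch (our own illustration, not from the Intel manual) that extracts the three fields from a ModR/M byte such as c8:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint8_t modrm = 0xc8;                /* the example ModR/M byte */
        uint8_t mod = (modrm >> 6) & 0x3;    /* bits 7-6: addressing category */
        uint8_t reg = (modrm >> 3) & 0x7;    /* bits 5-3: register or opcode extension */
        uint8_t rm  = modrm & 0x7;           /* bits 2-0: register or memory operand */

        printf("mod=%d reg=%d r/m=%d\n", mod, reg, rm);   /* prints mod=3 reg=1 r/m=0 */
        return 0;
}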
SIB is the Scale-Index-Base byte. This byte encodes a way to
calculate the memory position of an element inside an array. The
name SIB comes from this formula for calculating an effective
address:
Effective address = scale * index + base
• Index is an offset into an array.
• Scale is a factor of Index. Scale is one of the values 1, 2,
4 or 8; any other value is invalid. To scale with element sizes
other than 1, 2, 4 or 8, the scale factor must be set to 1, and
the offset must be calculated manually (see the C sketch after
this list). For example, suppose we want to get the address of
the nth element in an array and each element is 12 bytes long.
Because each element is 12 bytes long instead of 1, 2, 4 or 8,
Scale is set to 1 and a compiler needs to calculate the offset:
Effective address = 1 * (12 * n) + base
Why do we bother with SIB when we can manually calculate the
offset? The answer is that in the above scenario, an
additional mul instruction must be executed to get the
offset, and the mul instruction consumes more than 1 byte,
while the SIB only consumes 1 byte. More importantly, if the
element is repeatedly accessed many times in a loop, e.g.
millions of times, then the extra mul instructions can
hurt performance, as the CPU must spend time
executing millions of these additional mul instructions.
The values 2, 4 and 8 are not randomly chosen. They map to
16-bit (or 2 bytes), 32-bit (or 4 bytes) and 64-bit (or 8
bytes) numbers that are often used for intensive numeric
calculations.
• Base is the starting address.
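Here is the C sketch referred to above. It only illustrates the address arithmetic behind the SIB byte, not how a CPU or compiler actually performs it; the function names are ours:
#include <stdint.h>

/* Hardware-scaled access: element size is 1, 2, 4 or 8 bytes. */
uint32_t effective_address(uint32_t base, uint32_t index, uint32_t scale)
{
        return scale * index + base;     /* what the SIB byte encodes */
}

/* Element size 12: scale stays 1 and the offset is computed manually,
 * costing an extra multiplication per access. */
uint32_t effective_address_12(uint32_t base, uint32_t n)
{
        return 1 * (12 * n) + base;
}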
Below is the table listing all 256 values of SIB byte, with the
lookup rule similar to ModR/M tables:
+--------------------------------------------+------+------+-------+-------+------+------+------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| r32(/r) | EAX | ECX | EDX | EBX | ESP | EBP | ESI | EDI |
| (In decimal) /digit (Opcode) | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| (In binary) REG = | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111 |
+---------------------------+-------+--------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Effective Address | SS | R/M | Values of SIB Byte (In Hexadecimal) |
+---------------------------+-------+--------+------+------+-------+-------+------+------+------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| [EAX] | 00 | 000 | 00 | 01 | 02 | 03 | 04 | 05 | 06 | 07 |
| [ECX] | | 001 | 08 | 09 | 0A | 0B | 0C | 0D | 0E | 0F |
| [EDX] | | 010 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 |
| [EBX] | | 011 | 18 | 19 | 1A | 1B | 1C | 1D | 1E | 1F |
| none | | 100 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 |
| [EBP] | | 101 | 28 | 29 | 2A | 2B | 2C | 2D | 2E | 2F |
| [ESI] | | 110 | 30 | 31 | 32 | 33 | 34 | 35 | 36 | 37 |
| [EDI] | | 111 | 38 | 39 | 3A | 3B | 3C | 3D | 3E | 3F |
| [EAX*2] | 01 | 000 | 40 | 41 | 42 | 43 | 44 | 45 | 46 | 47 |
| [ECX*2] | | 001 | 48 | 49 | 4A | 4B | 4C | 4D | 4E | 4F |
| [EDX*2] | | 010 | 50 | 51 | 52 | 53 | 54 | 55 | 56 | 57 |
| [EBX*2] | | 011 | 58 | 59 | 5A | 5B | 5C | 5D | 5E | 5F |
| none | | 100 | 60 | 61 | 62 | 63 | 64 | 65 | 66 | 67 |
| [EBP*2] | | 101 | 68 | 69 | 6A | 6B | 6C | 6D | 6E | 6F |
| [ESI*2] | | 110 | 70 | 71 | 72 | 73 | 74 | 75 | 76 | 77 |
| [EDI*2] | | 111 | 78 | 79 | 7A | 7B | 7C | 7D | 7E | 7F |
| [EAX*4] | 10 | 000 | 80 | 81 | 82 | 83 | 84 | 85 | 86 | 87 |
| [ECX*4] | | 001 | 88 | 89 | 8A | 8B | 8C | 8D | 8E | 8F |
| [EDX*4] | | 010 | 90 | 91 | 92 | 93 | 94 | 95 | 96 | 97 |
| [EBX*4] | | 011 | 98 | 99 | 9A | 9B | 9C | 9D | 9E | 9F |
| none | | 100 | A0 | A1 | A2 | A3 | A4 | A5 | A6 | A7 |
| [EBP*4] | | 101 | A8 | A9 | AA | AB | AC | AD | AE | AF |
| [ESI*4] | | 110 | B0 | B1 | B2 | B3 | B4 | B5 | B6 | B7 |
| [EDI*4] | | 111 | B8 | B9 | BA | BB | BC | BD | BE | BF |
| [EAX*8] | 11 | 000 | C0 | C1 | C2 | C3 | C4 | C5 | C6 | C7 |
| [ECX*8] | | 001 | C8 | C9 | CA | CB | CC | CD | CE | CF |
| [EDX*8] | | 010 | D0 | D1 | D2 | D3 | D4 | D5 | D6 | D7 |
| [EBX*8] | | 011 | D8 | D9 | DA | DB | DC | DD | DE | DF |
| none | | 100 | E0 | E1 | E2 | E3 | E4 | E5 | E6 | E7 |
| [EBP*8] | | 101 | E8 | E9 | EA | EB | EC | ED | EE | EF |
| [ESI*8] | | 110 | F0 | F1 | F2 | F3 | F4 | F5 | F6 | F7 |
| [EDI*8] | | 111 | F8 | F9 | FA | FB | FC | FD | FE | FF |
1. The [*] nomenclature means a disp32 with no base if the MOD is
00B. Otherwise, [*] means disp8 or disp32 + [EBP]. This
provides the following address modes:
+-----------+---------------------------------+
| MOD bits | Effective Address |
| 00 | [scaled index] + disp32 |
| 01 | [scaled index] + disp8 + [EBP] |
| 10 | [scaled index] + disp32 + [EBP] |
<sib>
For example, the instruction:
jmp [eax*2 + ebx]
generates the following code:
00000000 67 ff 24 43
First of all, the first byte, 0x67, is not an opcode but a
prefix: it is the predefined value of the address-size
override prefix. After the prefix come the opcode 0xff and
the ModR/M byte 0x24. The ModR/M value indicates that a SIB
byte follows. The SIB byte is 0x43.
Looking it up in the SIB table, the row tells that eax is scaled by
2, and the column tells that the base to be added is in ebx.
Displacement is the offset from the start of the base index.
For example, the instruction:
jmp [0x1234]
generates the machine code:
ff 26 34 12
0x1234, which is generated as 34 12 in raw machine code, is
the displacement and stands right next to 0x26, which is the
ModR/M byte.
Another example, the instruction:
jmp [eax * 4 + 0x1234]
generates the machine code:
67 ff 24 8d 34 12 00 00
• 0x67 is an address-size override prefix. It means that if an
instruction runs with a default address size, e.g. 16-bit,
the prefix enables the instruction to use a non-default
address size, e.g. 32-bit or 64-bit. Since the binary is
supposed to be 16-bit, 0x67 switches the instruction to
32-bit mode.
• 0xff is the opcode.
• 0x24 is the ModR/M byte. The value suggests that a SIB byte
follows, according to table [mod-rm-32].
• 34 12 00 00 is the displacement. As can be seen, the
displacement is 4 bytes in size, which is equivalent to
32-bit, due to address-size override prefix.
Immediate When an instruction accepts a fixed value, e.g.
0x1234, as an operand, this optional field holds the value.
Note that this field is different from displacement: the value
is not necessarily used as an offset, but as an arbitrary value
of any meaning.
For example, the instruction:
mov eax, 0x1234
generates the code:
66 b8 34 12 00 00
• 0x66 is the operand-size override prefix. Similar to the
address-size override prefix, this prefix enables the
operand size to be non-default.
• 0xb8 is one of the opcodes for the mov instruction.
• 0x1234 is the value to be stored in the register eax. It is
just a value stored directly into a register, and
nothing more. A displacement value, on the other hand, is an
offset used for some address calculation.
Read section 2.1 in Volume 2 for even more details.
Skim through section 5.1 in volume 1. Read chapter 7 in volume
1. If there are terms that you don't understand, e.g.
segmentation, don't worry, as they will either be explained in
later chapters or can be ignored.
In the instruction reference manual (Volume 2), from chapter 3
onward, every x86 instruction is documented in detail. Whenever
the precise behavior of an instruction is needed, we always
consult this document first. However, before using the document,
we must know the writing conventions first. Every instruction has
the following common structure for organizing information:
Opcode table lists all possible opcodes of an assembly
instruction. Each table contains the following fields, and can
have one or more rows:
+---------------------------------------------------------------------------------------+
| Opcode Instruction Op/En 64/32-bit Mode CPUID
Feature flag Description |
Opcode shows a unique hexadecimal number assigned to an
instruction. There can be more than one opcode for an
instruction, each encodes a variant of the instruction. For
example, one variant requires one operand, but another
requires two. In this column, there can be other notations
aside from hexadecimal numbers. For example, /r indicates
that the ModR/M byte of the instruction contains a reg
operand and an r/m operand. The detailed listing is in sections
3.1.1.1 and 3.1.1.2 of the Intel manual, volume 2.
Instruction gives the syntax of the assembly instruction that a
programmer can use for writing code. Aside from the mnemonic
representation of the opcode, e.g. jmp, other symbols
represent operands with specific properties in the
instruction. For example, rel8 represents a relative address
from 128 bytes before the end of the instruction to 127 bytes
after the end of the instruction; similarly rel16/rel32 also
represents relative addresses, but with the operand size of
16/32-bit instead of 8-bit like rel8. For a detailed listing,
please refer to section 3.1.1.3 of volume 2.
Op/En is short for Operand/Encoding. An operand encoding
specifies how a ModR/M byte encodes the operands that an
instruction requires. If a variant of an instruction requires
operands, then an additional table named "Instruction Operand
Encoding" is added for explaining the operand encoding, with
the following structure:
+--------+------------+------------+------------+-----------+
| Op/En | Operand 1 | Operand 2 | Operand 3 | Operand 4 |
Most instructions require one to two operands. We make use of
these instructions for our OS and skip the instructions that
require three or four operands. The operands can be readable
or writable or both. The symbol (r) denotes a readable
operand, and (w) denotes a writable operand. For example,
when Operand 1 field contains ModRM:r/m (r), it means the
first operand is encoded in r/m field of ModR/M byte, and is
only readable.
64/32-bit mode indicates whether the opcode sequence is
supported in a 64-bit mode and possibly 32-bit mode.
CPUID Feature Flag indicates that a particular CPU feature
must be available to enable the instruction. An instruction
is invalid if a CPU does not support the required feature.[margin:
In Linux, the command:
cat /proc/cpuinfo
lists the information of the available CPUs and their features in the flags field.
]
Compat/Leg Mode Many instructions do not have the 64/32-bit mode
field; instead, it is replaced with Compat/Leg Mode, which stands
for Compatibility or Legacy Mode. This mode enables 64-bit
variants of instructions to run normally in 16-bit or 32-bit
mode. [float MarginTable:
Notations in Compat/Leg Mode
+-----------+----------------------------------------------------------------------------------+
| Notation | Description |
| Valid | Supported |
| I | Not supported |
| N.E. | The 64-bit opcode cannot be encoded as it overlaps with existing
32-bit opcode. |
Description (the column) briefly explains the variant of an instruction in
the current row.
Description (the section) specifies the purpose of the instruction and
explains in detail how the instruction works.
Operation is pseudo-code that implements an instruction. If a
description is vague, this section is the next best source to
understand an assembly instruction. The syntax is described in
section 3.1.1.9 in volume 2.
Flags affected lists the possible changes to system flags in
EFLAGS register.
Exceptions list the possible errors that can occur when an
instruction cannot run correctly. This section is valuable for
OS debugging. Exceptions fall into one of the following categories:
• Protected Mode Exceptions
• Real-Address Mode Exception
• Virtual-8086 Mode Exception
• Floating-Point Exception
• SIMD Floating-Point Exception
• Compatibility Mode Exception
• 64-bit Mode Exception
For our OS, we only use Protected Mode Exceptions and
Real-Address Mode Exceptions. The details are in section 3.1.1.13
and 3.1.1.14, volume 2.
Let's look at our good old jmp instruction. First, the opcode table:
+-----------------+---------------+----------+--------------+------------------+------------------------------------------------------------------------------------------------+
| Opcode | Instruction | Op/
En | 64-bit Mode | Compat/Leg Mode | Description |
| EB cb | JMP rel8 | D | Valid | Valid | Jump short, RIP = RIP + 8-bit displacement sign extended to
64-bits |
| E9 cw | JMP rel16 | D | N.S. | Valid | Jump near, relative, displacement relative to next instruction.
Not supported in 64-bit mode. |
| E9 cd | JMP rel32 | D | Valid | Valid | Jump near, relative, RIP = RIP + 32-bit displacement sign
extended to 64-bits |
| FF /4 | JMP r/m16 | M | N.S. | Valid | Jump near, absolute indirect, address = zero- extended r/m16. Not
supported in 64-bit mode |
| FF /4 | JMP r/m32 | M | N.S. | Valid | Jump near, absolute indirect, address given in r/m32. Not
supported in 64-bit mode |
| FF /4 | JMP r/m64 | M | Valid | N.E | Jump near, absolute indirect, RIP = 64-Bit offset from register
or memory |
| EA cd | JMP ptr16:16 | D | Inv. | Valid | Jump far, absolute, address given in operand |
| EA cp | JMP ptr16:32 | D | Inv. | Valid | Jump far, absolute, address given in operand |
| FF /5 | JMP m16:16 | D | Valid | Valid | Jump far, absolute indirect, address given in m16:16 |
| REX.W + FF /5 | JMP m16:64 | D | Valid | N.E. | Jump far, absolute indirect, address given in m16:64 |
<jmp-instruction>
Each row lists a variant of the jmp instruction. The first column has
the opcode EB cb, with an equivalent symbolic form jmp rel8.
Here, rel8 means an 8-bit offset (from -128 to 127 bytes), counting
from the end of the instruction. The end of an instruction is the
next byte after the last byte of the instruction. To make it more
concrete, consider
this assembly code:
main:
    jmp main
    jmp main2
    jmp main
main2:
    jmp 0x1234
Memory address of each opcode
            main                          main2
+---------+----+----+----+----+----+----+----+----+----+----+
| Address | 00 | 01 | 02 | 03 | 04 | 05 | 06 | 07 | 08 | 09 |
+---------+----+----+----+----+----+----+----+----+----+----+
| Opcode  | eb | fe | eb | 02 | eb | fa | e9 | 2b | 12 | 00 |
+---------+----+----+----+----+----+----+----+----+----+----+
The first jmp main instruction is generated into eb fe and
occupies the addresses 00 and 01; the end of the first jmp main
is at address 02, past the last byte of the first jmp main which
is located at the address 01. The value fe is equivalent to -2,
since eb opcode uses only a byte (8 bits) for relative
addressing. The offset is -2, and the end address of the first
jmp main is 02, adding them together we get 00 which is the
destination address for jumping to.
Similarly, the jmp main2 instruction is generated into eb 02,
which means the offset is +2; the end address of jmp main2 is at
04, and adding together with the offset we get the destination
address is 06, which is the start instruction marked by the label
main2.
The same rule can be applied to rel16 and rel32 encoding. In the
example code, jmp 0x1234 uses rel16 (which means 2-byte offset)
and is generated into e9 2b 12. As the table [jmp-instruction]
shows, e9 opcode takes a cw operand, which is a 2-byte offset
(section 3.1.1.1, volume 2). Notice one strange issue here: the
offset value is 2b 12, while it is supposed to be 34 12. There is
nothing wrong. Remember, rel8/rel16/rel32 is an offset, not an
address. An offset is a distance from a point. Since no label is
given, only a number, the offset is calculated from the start of
the program. In this case, the start of the program is the address
00, and the end of jmp 0x1234 is the address 09[footnote:
which means 9 bytes were consumed, starting from address 0.
], so the offset is calculated as 0x1234 - 0x9 = 0x122b. That
solves the mystery!
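The offset arithmetic can be double-checked with a small C sketch (the helper names are ours; the addresses and offsets are the ones from the example above):
#include <stdio.h>
#include <stdint.h>

/* Destination of a relative jump: end address of the instruction plus
 * the sign-extended offset stored inside the instruction. */
static uint16_t rel8_target(uint16_t end_of_insn, uint8_t offset)
{
        return end_of_insn + (int8_t)offset;
}

static uint16_t rel16_target(uint16_t end_of_insn, uint16_t offset)
{
        return end_of_insn + (int16_t)offset;
}

int main(void)
{
        printf("%04x\n", rel8_target(0x02, 0xfe));      /* eb fe at 00 -> 0000 */
        printf("%04x\n", rel8_target(0x04, 0x02));      /* eb 02 at 02 -> 0006 */
        printf("%04x\n", rel16_target(0x09, 0x122b));   /* e9 2b 12    -> 1234 */
        return 0;
}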
The jmp instructions with opcode FF /4 enable jumping to a near,
absolute address stored in a general-purpose register or a memory
location; or in short, as written in the description, absolute
indirect. The symbol /4 refers to the column with digit 4 in table [mod-rm-16]
[footnote:
The column with the following fields: AH, SP, ESP, MM4, XMM4.
]. For example, the instruction:
jmp [0x1234]
is generated into:
ff 26 34 12
Since this is 16-bit code, we use table [mod-rm-16]. Looking up
the table, ModR/M value 26 means disp16, which means a 16-bit
offset from the start of current index[footnote:
Look at the note under the table.
], which is the base address stored in DS register. In this case,
jmp [0x1234] is implicitly understood as jmp [ds:0x1234], which
means the destination address is 0x1234 bytes away from the start
of a data segment.
The jmp instruction with opcode FF /5 enables jumping to a far,
absolute address stored in a memory location (as opposed to /4,
which means stored in a register); in short, a far pointer. To
generate such an instruction, the keyword far is needed to tell nasm
we are using a far pointer:
jmp far [eax]
This generates the machine code:
67 ff 28
Since 28 is the value in the column with digit 5 of the table [mod-rm-32][footnote:
Remember, the prefix 67 indicates the instruction is used as
32-bit. The prefix is only added if the default environment is
assumed to be 16-bit when the assembler generates the code.
] that refers to [eax], we successfully generate an instruction
for a far jump. After the CPU runs the instruction, the program
counter eip and the code segment register cs are set to the memory
address stored in the memory location that eax points to, and the
CPU starts fetching code from the new address in cs and eip. To
make it more concrete, here is an example:
<Graphics file: C:/Users/Tu Do/os01/book_src/images/04/far_jmp_ex.pdf>
The far address consumes a total of 6 bytes: a 16-bit
segment and a 32-bit address, which is encoded as m16:32 in the
table [jmp-instruction]. As can be seen from the figure above,
the segment part is loaded into the cs register with the value
0x5678; the offset part is the memory address within that segment,
loaded into the eip register with the value 0x1234, and execution
starts from there.
Finally, the jmp instructions with EA opcode jump to a direct
absolute address. For example, the instruction:
jmp 0x5678:0x1234
generates the machine code:
ea 34 12 78 56
The address 0x5678:0x1234 is right next to the opcode, unlike FF
/5 instruction that needs an indirect address in eax register.
We skip the jmp instruction with the REX prefix, as it is a 64-bit
instruction.
In this section, we will examine how data definition in C maps to
its assembly form. The generated code is extracted from the .data
and .bss sections. That means the assembly code displayed has no
meaning[footnote:
Actually, code is just a type of data, and is often used for
hijacking into a running program to execute such code. However,
we have no use for it in this book.
], aside from showing that such a value has an equivalent
assembly opcode that represents an instruction.
The code-assembly listing is not random, but is based on Chapter
4 of Volume 1, "Data Type". The chapter lists fundamental data
types that x86 hardware operates on, and through learning the
generated assembly code, it can be understood how close C maps
its syntax to hardware, and then a programmer can see why C is
appropriate for OS programming. The specific objdump command used
in this section will be:
$ objdump -z -M intel -S -D <object file> | less
Note: by default, objdump hides runs of zero bytes with three dots (...).
To show all the zero bytes, we add the -z option.
The most basic types that the x86 architecture works with are based
on sizes, each twice as large as the previous one: 1 byte (8
bits), 2 bytes (16 bits), 4 bytes (32 bits), 8 bytes (64 bits)
and 16 bytes (128 bits).
<Graphics file: C:/Users/Tu Do/os01/book_src/images/04/fundamental_data_types.pdf>
These types are the simplest: they are just chunks of memory of
different sizes that enable the CPU to access memory efficiently.
From the manual, section 4.1.1, volume 1:
Words, doublewords, and quadwords do not need to be aligned in
memory on natural boundaries. The natural boundaries for words,
double words, and quadwords are even-numbered addresses,
addresses evenly divisible by four, and addresses evenly
divisible by eight, respectively. However, to improve the
performance of programs, data structures (especially stacks)
should be aligned on natural boundaries whenever possible. The
reason for this is that the processor requires two memory
accesses to make an unaligned memory access; aligned accesses
require only one memory access. A word or doubleword operand that
crosses a 4-byte boundary or a quadword operand that crosses an
8-byte boundary is considered unaligned and requires two separate
memory bus cycles for access.
Some instructions that operate on double quadwords require memory
operands to be aligned on a natural boundary. These instructions
generate a general-protection exception (#GP) if an unaligned
operand is specified. A natural boundary for a double quadword is
any address evenly divisible by 16. Other instructions that
operate on double quadwords permit unaligned access (without
generating a general-protection exception). However, additional
memory bus cycles are required to access unaligned data from memory.
In C, the following primitive types (stdint.h must be included) map
to the fundamental types:
uint8_t byte = 0x12;
uint16_t word = 0x1234;
uint32_t dword = 0x12345678;
uint64_t qword = 0x123456789abcdef;
unsigned __int128 dqword1 = (__int128) 0x123456789abcdef;
unsigned __int128 dqword2 = (__int128) 0x123456789abcdef << 64;

int main(int argc, char *argv[]) {
        return 0;
}
0804a018 <byte>:
804a018: 12 00 adc al,BYTE PTR
[eax]
0804a01a <word>:
804a01a: 34 12 xor al,0x12
0804a01c <dword>:
804a01c: 78 56 js 804a074
<_end+0x48>
804a01e: 34 12 xor al,0x12
0804a020 <qword>:
804a020: ef out dx,eax
804a021: cd ab int 0xab
804a023: 89 67 45 mov DWORD PTR
[edi+0x45],esp
804a026: 23 01 and eax,DWORD PTR
[ecx]
0000000000601040 <dqword1>:
601040: ef out dx,eax
601041: cd ab int 0xab
601043: 89 67 45 mov DWORD PTR
[rdi+0x45],esp
601046: 23 01 and eax,DWORD PTR
[rcx]
601048: 00 00 add BYTE PTR
[rax],al
60104a: 00 00 add BYTE PTR
60104c: 00 00 add BYTE PTR
60104e: 00 00 add BYTE PTR
60105b: 89 67 45 mov DWORD PTR
60105e: 23 01 and eax,DWORD PTR
gcc generates the variables byte, word, dword, qword, dqword1 and
dqword2 written earlier, each with its respective value. Since this
is a data section, the assembly listing carries no meaning. When
byte is declared with uint8_t, gcc guarantees that the size of byte
is always 1 byte. But an alert reader might notice the 00 value next
to the 12 value of the byte variable. This is normal, as gcc avoids
memory misalignment by adding extra padding bytes. To make
it easier to see, we look at the readelf output of the .data section:
$ readelf -x .data hello
the output is (each group of bytes belongs to one of the variables,
in the order they were declared):
Hex dump of section '.data':
0x00601020 00000000 00000000 00000000 00000000 ................
0x00601030 12003412 78563412 efcdab89 67452301 ..4.xV4.....gE#.
0x00601040 efcdab89 67452301 00000000 00000000 ....gE#.........
0x00601050 00000000 00000000 efcdab89 67452301 ............gE#.
As can be seen in the readelf output, variables are allocated
storage space according to their types and in the order declared
by the programmer. Intel is a little-endian machine, which means
that the less significant bytes of a multi-byte value are stored at
smaller addresses and the more significant bytes at larger
addresses. For example, 0x1234 is displayed as 34 12; that
is, 34 appears first at address 0x601032, then 12 at 0x601033.
The hex digits within a byte keep their order, so we see 34 12
instead of 43 21. This is quite confusing at first, but you will
get used to it soon.
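A quick way to convince yourself of the byte order is to inspect a value through a byte pointer. A minimal sketch (ours, separate from the hello example):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint32_t value = 0x12345678;
        uint8_t *bytes = (uint8_t *)&value;   /* view the same 4 bytes individually */

        /* On a little-endian machine this prints: 78 56 34 12 */
        for (int i = 0; i < 4; i++)
                printf("%02x ", bytes[i]);
        printf("\n");
        return 0;
}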
Also, isn't it redundant when the char type is already 1 byte;
why do we bother adding int8_t? The truth is, a char is not
guaranteed to be 8 bits, but only at least 8 bits. In C, a byte
is defined to be the size of a char, and a char is defined to be
the smallest addressable unit of the underlying hardware platform.
There are hardware devices whose smallest addressable unit is 16
bits or even bigger, which means that a char, and therefore a C
"byte", on such platforms is actually 2 units of 8-bit bytes.
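The actual width of a C byte can be checked with CHAR_BIT from limits.h; a small sketch, assuming a hosted C environment:
#include <stdio.h>
#include <limits.h>

int main(void)
{
        /* sizeof(char) is 1 by definition; CHAR_BIT tells how many bits
         * that one "byte" contains (8 on x86, possibly more elsewhere). */
        printf("CHAR_BIT = %d, sizeof(char) = %zu\n", CHAR_BIT, sizeof(char));
        return 0;
}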
Not all architectures support the double quadword type. Still,
gcc provides support for 128-bit numbers and generates code
for them when the CPU supports it (that is, the CPU must be 64-bit).
By declaring a variable of type __int128 or unsigned __int128, we
get a 128-bit variable. If a CPU does not support 64-bit mode,
gcc throws an error.
The data types in C that represent the fundamental data types
are also called unsigned numbers. Aside from numerical
calculations, unsigned numbers are used as a tool for structuring
data in memory; we will see this application later in the book,
when various data structures are organized into bit groups.
In all the examples above, when the value of a variable with a
smaller size is assigned to a variable with a larger size, the
value easily fits in the larger variable. On the contrary, when the
value of a variable with a larger size is assigned to a variable
with a smaller size, two scenarios can occur:
• The value is greater than the maximum value that the smaller
variable can hold, so it is truncated to the size of the
variable, resulting in an incorrect value.
• The value is within the range of the smaller variable, so it
fits the variable.
However, the value might be unknown until runtime, so either
scenario can happen. It is best not to leave such implicit
conversions to the compiler, but to have them explicitly controlled
by the programmer, as the sketch below illustrates. Otherwise, they
will cause subtle bugs that are hard to catch, as the erroneous
values might only rarely appear to reproduce the bugs.
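A minimal sketch of the two scenarios (the variable names are ours):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint32_t big   = 0x12345678;
        uint32_t small = 0x00000042;

        uint8_t t1 = (uint8_t)big;    /* does not fit: truncated to 0x78 */
        uint8_t t2 = (uint8_t)small;  /* fits: stays 0x42 */

        /* The explicit casts document that the programmer accepts the
         * possible loss of information instead of relying on an
         * implicit conversion. */
        printf("t1 = 0x%02x, t2 = 0x%02x\n", t1, t2);
        return 0;
}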
Pointers are variables that hold memory addresses. x86 works with
2 types of pointers:
Near pointer is a 16-bit/32-bit offset within a segment, also
called effective address.
Far pointer is also an offset like a near pointer, but with an
explicit segment selector.
<Graphics file: C:/Users/Tu Do/os01/book_src/images/04/pointer_data_type.pdf>
C only provides near pointers, since far pointers are platform
dependent, e.g. specific to x86. In application code, you can assume
that the address of the current segment starts at 0, so the offset
is actually any memory address from 0 to the maximum address.
int8_t i = 0;
int8_t *p1 = (int8_t *) 0x1234;
int8_t *p2 = &i;
0000000000601030 <p1>:
601030: 34 12 xor al,0x12
601038: 41 10 60 00 adc BYTE PTR
[r8+0x0],spl
Disassembly of section .bss:
0000000000601040 <__bss_start>:
0000000000601041 <i>:
601047: 00 .byte 0x0
The pointer p1 holds a direct address with the value 0x1234. The
pointer p2 holds the address of the variable i. Note that both
pointers are 8 bytes in size (or 4 bytes, if compiled as 32-bit).
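A small sketch (ours) showing that a near pointer is just a number holding an address, whose size depends on whether the code is compiled as 32-bit or 64-bit:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
        int8_t i = 0;
        int8_t *p2 = &i;

        /* On a 32-bit build sizeof(p2) is 4; on a 64-bit build it is 8. */
        printf("sizeof(p2) = %zu, p2 = %p\n", sizeof(p2), (void *)p2);
        return 0;
}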
A bit field is a contiguous sequence of bits. Bit fields
allow data structuring at the bit level. For example, a 32-bit value
can hold multiple bit fields that represent multiple different
pieces of information: for instance, bits 0-4 may specify the size
of a data structure, bits 5-6 may specify permissions, and so on.
Data structures at the bit level are common in low-level programming.
<Graphics file: C:/Users/Tu Do/os01/book_src/images/04/bit_field_data_type.pdf>
struct bit_field {
    int data1:8;
    int data2:8;
    int data3:8;
    int data4:8;
};
struct bit_field2 {
    int data1:8;
    int data2:8;
    int data3:8;
    int data4:8;
    char data5:4;
};
struct normal_struct {
    int data1;
    int data2;
    int data3;
    int data4;
};
struct normal_struct ns = {
    .data1 = 0x12345678,
    .data2 = 0x9abcdef0,
    .data3 = 0x12345678,
    .data4 = 0x9abcdef0
};
int i = 0x12345678;
struct bit_field bf = { .data1 = 0x12, .data2 = 0x34, .data3 = 0x56, .data4 = 0x78 };
struct bit_field2 bf2 = { .data1 = 0x12, .data2 = 0x34, .data3 = 0x56, .data4 = 0x78, .data5 = 0xf };
The assembly listing generated for these variables is shown below:
0804a018 <ns>:
804a018: 78 56 js 804a070 <_end+0x34>
804a01a: 34 12 xor al,0x12
804a01c: f0 de bc 9a 78 56 34 lock fidivr WORD PTR
[edx+ebx*4+0x12345678]
804a023: 12
804a024: f0 de bc 9a 78 56 34 lock fidivr WORD PTR
804a02b: 12
0804a028 <i>:
0804a02c <bf>:
804a02c: 12 34 56 adc dh,BYTE PTR
[esi+edx*2]
804a02f: 78 12 js 804a043 <_end+0x7>
0804a030 <bf2>:
804a030: 12 34 56 adc dh,BYTE PTR
804a033: 78 0f js 804a044 <_end+0x8>
804a035: 00 00 add BYTE PTR [eax],al
804a037: 00 .byte 0x0
The sample code creates 4 variables: ns, i, bf, bf2. The
definition of normal_struct and bit_field structs both specify 4
integers. bit_field specifies additional information next to its
member name, separated by a colon, e.g. .data1 : 8. This extra
information is the bit width of each bit group. It means that, even
though defined as an int, .data1 only consumes 8 bits of
information. If additional data members are specified after
.data1, two scenarios can happen:
• If the new data members fit within the remaining bits after
.data1, which are 24 bits[footnote:
Since .data1 is declared as an int, 32 bits are still allocated,
but .data1 can only access 8 bits of information.
], then the total size of bit_field struct is still 4 bytes, or
32 bits.
• If the new data members don't fit, then the remaining 24 bits
(3 bytes) are still allocated. However, the new data members
are allocated brand new storage, without using the previous 24
bits.
In the example, the 4 data members: .data1, .data2, .data3 and
.data4, each can access 8 bits of information, and together can
access all of 4 bytes of the integer first declared by .data1. As
can be seen in the generated assembly code, the values of bf
follow the natural order as written in the C code: 12 34 56 78,
since each value is a separate member. In contrast, the value of i
is a number as a whole, so it is subject to the rule of little
endianness and thus contains the value 78 56 34 12. Note that
804a02f is the address of the final byte of bf, but next to it
is the number 12, even though 78 is the last byte of bf. This extra
number 12 does not belong to the value of bf. objdump is just
confused: 78 is also an opcode, corresponding to the js
instruction, which requires an operand. For that reason, objdump
grabs whatever byte comes after 78 and puts it there. objdump
is a tool for displaying assembly code, after all. A better tool to
use is gdb, which we will learn in the next chapter. But for this
chapter, objdump suffices.
Unlike bf, each data member in ns is allocated fully as an
integer, 4 bytes each, 16 bytes in total. As we can see, bit
field and normal struct are different: bit field structure data
at the bit level, while normal struct works at byte level.
Finally, the struct of bf2[footnote:
bit_field2
] is the same as that of bf[footnote:
bit_field
], except it contains one more data member, .data5, declared with a
bit width of 4 in a char. For this reason, another 4 bytes are
allocated just for .data5 (1 byte of new storage plus 3 bytes of
padding to keep the struct aligned), even though it can only access
4 bits of information, and the final value of bf2 is: 12 34 56 78 0f
00 00 00. The remaining 3 bytes must be accessed by means of a
pointer, or by casting to another data type that can fully access
all 4 bytes.
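To see what the bit-field syntax saves us from writing, here is a hedged sketch that packs the same four 8-bit fields into one 32-bit value by hand with shifts and masks (our own illustration; the layout shown matches gcc on little-endian x86 but is not guaranteed by the C standard):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
        /* Pack .data1 = 0x12, .data2 = 0x34, .data3 = 0x56, .data4 = 0x78
         * into one 32-bit storage unit, lowest field in the lowest bits,
         * which matches the in-memory bytes 12 34 56 78 on little-endian x86. */
        uint32_t packed = 0x12u | (0x34u << 8) | (0x56u << 16) | (0x78u << 24);

        uint8_t data3 = (packed >> 16) & 0xff;   /* unpack one field */

        printf("packed = 0x%08x, data3 = 0x%02x\n", packed, data3);
        return 0;
}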
What happens when the definition of bit_field struct and bf
variable are changed to:
struct bit_field bf = {
    .data1 = 0x1234,
};
What will be the value of .data1?
What happens when the definition of bit_field2 struct is
changed to:
int data5:32;
What is the layout of a variable of type bit_field2?
Although they share the same name, a string as defined by x86 is
different from a string in C. x86 defines a string as a "continuous
sequence of bits, bytes, words, or doublewords". On the other
hand, C defines a string as an array of 1-byte characters with a
zero as the last element of the array, making a null-terminated
string. This implies that strings in x86 are simply arrays, not C
strings. A programmer can define an array of bytes, words or
doublewords with char or uint8_t, short or uint16_t and int or
uint32_t; only an array of bits is missing. However, such a feature
can be easily implemented, as an array of bits is essentially an
array of bytes, words or doublewords, operated on at the bit level.
The following code demonstrates how to define array (string) data types:
uint8_t a8[2] = {0x12, 0x34};
uint16_t a16[2] = {0x1234, 0x5678};
uint32_t a32[2] = {0x12345678, 0x9abcdef0};
uint64_t a64[2] = {0x123456789abcdef0, 0x123456789abcdef0};
0804a018 <a8>:
804a018: 12 34 00 adc dh,BYTE PTR
[eax+eax*1]
804a01b: 00 34 12 add BYTE PTR
[edx+edx*1],dh
0804a01c <a16>:
804a01c: 34 12 xor al,0x12
804a01e: 78 56 js 804a076 <_end+0x3a>
0804a020 <a32>:
804a020: 78 56 js 804a078 <_end+0x3c>
804a022: 34 12 xor al,0x12
804a024: f0 de bc 9a f0 de bc lock fidivr WORD PTR
[edx+ebx*4-0x65432110]
804a02b: 9a
804a02f: 12
Although a8 is an array with 2 elements, each 1 byte long, it is
still allocated 4 bytes. Again, to ensure natural
alignment for best performance, gcc pads extra zero bytes. As
shown in the assembly listing, the actual value of a8 is 12 34 00
00, with a8[0] equal to 12 and a8[1] equal to 34.
Next comes a16, with 2 elements, each 2 bytes long. Since the 2
elements are 4 bytes in total, which is a natural alignment,
gcc adds no padding bytes. The value of a16 is 34 12 78 56, with
a16[0] equal to 34 12 and a16[1] equal to 78 56.
Next is a32, with 2 elements, 4 bytes each. Similar to the above
arrays, the value of a32[0] is 78 56 34 12 and the value of a32[1]
is f0 de bc 9a, exactly what is assigned in the C code. Note that
objdump gets confused again in the listing of a32: de is the opcode
for the instruction fidivr (short for reverse divide), which
requires another operand, so objdump grabs whatever next bytes make
sense to it for creating "an operand". Only the values that belong
to a32 matter.
Finally comes a64, also with 2 elements, but 8 bytes each. The
total size of a64 is 16 bytes, which is a natural alignment,
therefore no padding bytes are added. The values of both a64[0] and
a64[1] are the same: f0 de bc 9a 78 56 34 12, which again gets
misinterpreted as a fidivr instruction.
a8, a16, a32 and a64 memory layouts
a8:
+---------+
| 12 | 34 |
+---------+
a16:
+---------------+
| 34 12 | 78 56 |
+---------------+
a32:
+---------------------------+
| 78 56 34 12 | f0 de bc 9a |
+---------------------------+
a64:
+---------------------------------------------------+
| f0 de bc 9a 78 56 34 12 | f0 de bc 9a 78 56 34 12 |
+---------------------------------------------------+
However, beyond one-dimensional arrays that map directly to
hardware string type, C provides its own syntax for
multi-dimensional arrays:
uint8_t a2[2][2] = {
    {0x12, 0x34},
    {0x56, 0x78}
};

uint8_t a3[2][2][2] = {
    {{0x12, 0x34},
     {0x56, 0x78}},
    {{0x9a, 0xbc},
     {0xde, 0xff}},
};
804a01b: 78 12 js 804a02f <_end+0x7>
0804a01c <a3>:
804a01f: 78 9a js 8049fbb
<_DYNAMIC+0xa7>
804a021: bc .byte 0xbc
804a022: de ff fdivrp st(7),st
Technically, multi-dimensional arrays are like normal arrays: in
the end, the total size is translated into flat allocated bytes.
A 2 x 2 array is allocated 4 bytes; a 2 x 2 x 2 array
is allocated 8 bytes, as can be seen in the assembly listing
of a2[footnote:
Again, objdump is confused and puts the number 12 next to 78 in a3.
] and a3. In low-level assembly code, the representation is the
same between a[4] and a[2][2]. However, in high-level C code, the
difference is tremendous. The syntax of multi-dimensional arrays
enables a programmer to think in higher-level concepts, instead
of manually translating from high-level concepts to low-level
code while keeping the high-level concepts in his head at the same
time.
The following two-dimensional array can hold a list of 2 names
with the length of 10:
char names[2][10] = {
"John Doe",
"Jane Doe"
To access a name, we simply adjust the column index[footnote:
The left index is called column index since it changes the index
based on a column.
] e.g. names[0], names[1]. To access individual character within
a name, we use the row index[footnote:
Same with column index, the right index is called row index since
it changes the index based on a row.
] e.g. names[0][0] gives the character "J", names[0][1] gives the
character "o" and so on.
Without such syntax, we would need to create a 20-byte array, e.g.
names[20], and whenever we wanted to access a character, e.g. to
check whether a name contains a digit, we would need to
calculate the index manually, as the sketch below shows. It would
be distracting, since we would constantly need to switch our
thinking between the actual problem and the index translation.
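For comparison, a sketch of the flat-array version that the two-dimensional syntax spares us from writing (NAME_LEN and the helper function are ours):
#include <stdio.h>

#define NAME_LEN 10

/* Same data as names[2][10], but flattened into one 20-byte array. */
char names_flat[2 * NAME_LEN] = "John Doe\0\0Jane Doe\0";

/* Manual index calculation: element [i][j] lives at i * NAME_LEN + j. */
char get_char(int i, int j)
{
        return names_flat[i * NAME_LEN + j];
}

int main(void)
{
        printf("%c %c\n", get_char(0, 0), get_char(1, 0));  /* prints: J J */
        return 0;
}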
Since this is a repeating pattern, C abstracts this
problem away with its syntax for defining and manipulating
multi-dimensional arrays. Through this example, we can clearly
see the power that abstraction through language gives us. It
would be ideal if a programmer were equipped with the power to
define whatever syntax suits the problem at hand. Not
many languages provide such a capability. Fortunately, through C
macros, we can partially achieve that goal.
In all cases, an array is guaranteed to generate contiguous bytes
of memory, regardless of the dimensions it has.
What is the difference between a multi-dimensional array and an
array of pointers, or even pointers of pointers?
This section will explore how a compiler transforms high-level code
into assembly code that the CPU can execute, and how common
assembly patterns help to create higher-level syntax. The -S option
is added to objdump to better demonstrate the connection between
high-level and low-level code.
In this section, the option --no-show-raw-insn is added to the
objdump command to omit the opcodes for clarity:
$ objdump --no-show-raw-insn -M intel -S -D <object file> | less
The previous section explored how various types of data are created
and how they are laid out in memory. Once memory storage is
allocated for variables, they must be readable and writable.
Data transfer instructions move data (bytes, words, doublewords
or quadwords) between memory and registers, and between
registers, effectively reading from one storage source and writing
to another.
int32_t i = 0x12345678;
int j = i;
int k = 0xabcdef;
080483db <main>:
80483db: push ebp
80483dc: mov ebp,esp
80483de: sub esp,0x10
80483e1: mov eax,ds:0x804a018
80483e6: mov DWORD PTR [ebp-0x8],eax
80483e9: mov DWORD PTR [ebp-0x4],0xabcdef
80483f0: mov eax,0x0
80483f5: leave
80483f6: ret
80483f7: xchg ax,ax
80483fb: xchg ax,ax
80483fd: xchg ax,ax
80483ff: nop
The general data movement is performed with the mov instruction.
Note that despite the instruction being called mov, it actually
copies data from one destination to another.
The instruction at 80483dc copies data from the register esp to the
register ebp. This mov instruction moves data between registers
and is assigned the opcode 89.
The instructions at 80483e1 and 80483e6 copy data from one memory
location (the variable i) to another (the variable j). There exists
no data movement from memory to memory; it requires two mov
instructions, one for copying the data from a memory location into
a register, and one for copying the data from the register into the
destination memory location.
The instruction at 80483e9 copies an immediate value into memory.
Finally, the instruction at 80483f0 copies an immediate value into
a register.
int expr(int i, int j)
int add = i + j;
int sub = i - j;
int mul = i * j;
int div = i / j;
int mod = i % j;
int neg = -i;
int and = i & j;
int or = i | j;
int xor = i ^ j;
int not = ~i;
int shl = i << 8;
int shr = i >> 8;
char equal1 = (i == j);
int equal2 = (i == j);
char greater = (i > j);
char less = (i < j);
char greater_equal = (i >= j);
char less_equal = (i <= j);
int logical_and = i && j;
int logical_or = i || j;
++i;
--i;
int i1 = i++;
int i2 = ++i;
int i3 = i--;
int i4 = --i;
    return 0;
}
The full assembly listing is rather long. For that reason, we
examine it expression by expression.
Expression: int add = i + j;
80483e1: mov edx,DWORD PTR [ebp+0x8]
80483e4: mov eax,DWORD PTR [ebp+0xc]
80483e7: add eax,edx
80483e9: mov DWORD PTR [ebp-0x34],eax
The assembly code is straightforward: the variables i and j are
loaded into edx and eax respectively, then added together with
the add instruction, with the result left in eax.
Then, the result is saved into the local variable add, which
is at the location [ebp-0x34].
Expression: int sub = i - j;
80483ec: mov eax,DWORD PTR [ebp+0x8]
80483ef: sub eax,DWORD PTR [ebp+0xc]
80483f2: mov DWORD PTR [ebp-0x30],eax
Similar to the add instruction, x86 provides a sub instruction
for subtraction. Hence, gcc translates a subtraction into a sub
instruction. eax is reloaded with i, since it still
carries the result of the previous expression. Then, j is
subtracted from it. After the subtraction, the value is saved
into the variable sub, at location [ebp-0x30].
Expression: int mul = i * j;
80483f5: mov eax,DWORD PTR [ebp+0x8]
80483f8: imul eax,DWORD PTR [ebp+0xc]
80483fc: mov DWORD PTR [ebp-0x34],eax
As with the sub instruction, only eax is reloaded, since it
carries the result of the previous calculation. imul performs
a signed multiply[footnote:
Unsigned multiply is performed by the mul instruction.
]. eax is first loaded with i, then multiplied by j with the
result stored back into eax, and finally stored into the
variable mul at location [ebp-0x34].
Expression: int div = i / j;
80483ff: mov eax,DWORD PTR [ebp+0x8]
8048402: cdq
8048403: idiv DWORD PTR [ebp+0xc]
8048406: mov DWORD PTR [ebp-0x30],eax
Similar to imul, idiv performs a signed divide. But, unlike
imul above, idiv only takes one operand (a small C sketch follows the list):
1. First, i is reloaded into eax.
2. Then, cdq converts the doubleword value in eax into a
quadword value stored in the pair of registers edx:eax, by
copying the sign bit (bit 31) of the value in eax into every bit position in edx. The pair
edx:eax is the dividend, which is the variable i, and the
operand to idiv is the divisor, which is the variable j.
3. After the calculation, the result is stored into the pair
edx:eax registers, with the quotient in eax and remainder
in edx. The quotient is stored in the variable div, at
location [ebp-0x30].
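As an illustrative sketch (not from the original text): both the quotient and the remainder come out of the same signed division, which is why C can offer / and % so cheaply. Whether the compiler actually reuses a single idiv for both expressions depends on optimization settings.

#include <stdio.h>

int main(void) {
    int i = 17, j = 5;
    /* one signed division yields both results: quotient (eax) and remainder (edx) */
    printf("%d / %d = %d, remainder %d\n", i, j, i / j, i % j);
    return 0;
}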
Expression: int mod = i % j;
8048409: mov eax,DWORD PTR [ebp+0x8]
804840c: cdq
804840d: idiv DWORD PTR [ebp+0xc]
8048410: mov DWORD PTR [ebp-0x2c],edx
The same idiv instruction also performs the modulo operation,
since it also calculates a remainder and stores in the
variable mod, at location [ebp-0x2c].
Expression: int neg = -i;
8048416: neg eax
neg replaces the value of its operand (the destination operand)
with its two's complement (this operation is equivalent to
subtracting the operand from 0). In this example, the value of i
in eax is replaced with -i using the neg instruction.
Then, the new value is stored in the variable neg at
[ebp-0x28].
Expression: int and = i & j;
804841b: mov eax,DWORD PTR [ebp+0x8]
804841e: and eax,DWORD PTR [ebp+0xc]
8048421: mov DWORD PTR [ebp-0x24],eax
and performs a bitwise AND operation on two operands, and
stores the result in the destination operand, which is the
variable and at [ebp-0x24].
Expression: int or = i | j;
8048427: or eax,DWORD PTR [ebp+0xc]
804842a: mov DWORD PTR [ebp-0x20],eax
Similar to and instruction, or performs a bitwise OR
operation on two operands, and stores the result in the
destination operand, which is the variable or at [ebp-0x20]
in this case.
Expression: int xor = i ^ j;
804842d: mov eax,DWORD PTR [ebp+0x8]
8048430: xor eax,DWORD PTR [ebp+0xc]
8048433: mov DWORD PTR [ebp-0x1c],eax
Similar to the and/or instructions, xor performs a bitwise XOR
operation on two operands, and stores the result in the
destination operand, which is the variable xor at [ebp-0x1c].
Expression: int not = ~i;
8048439: not eax
804843b: mov DWORD PTR [ebp-0x18],eax
not performs a bitwise NOT operation (each 1 is set to 0, and
each 0 is set to 1) on the destination operand and stores the
result in the destination operand location, which is the
variable not at [ebp-0x18].
Expression: int shl = i << 8;
804843e: mov eax,DWORD PTR [ebp+0x8]
8048441: shl eax,0x8
8048444: mov DWORD PTR [ebp-0x14],eax
shl (shift logical left) shifts the bits in the destination
operand to the left by the number of bits specified in the
source operand. In this case, eax stores i and shl shifts eax
by 8 bits to the left. A different name for shl is sal (shift
arithmetic left); the two can be used synonymously. Finally, the
result is stored in the variable shl at [ebp-0x14].
Here is a visual demonstration of shl/sal and shr:
After shifting to the left, the last bit shifted out of the most
significant position is stored in the Carry Flag of the EFLAGS register.
Expression: int shr = i >> 8;
804844a: sar eax,0x8
804844d: mov DWORD PTR [ebp-0x10],eax
sar is similar to shl/sal, but shifts bits to the right and
extends the sign bit. For right shifts, shr and sar are two
different instructions: shr differs from sar in that it does
not extend the sign bit. Finally, the result is stored in the
variable shr at [ebp-0x10].
In figure (b), notice that the sign bit is initially 1,
but after the 1-bit and 10-bit shifts, the vacated bit positions
are filled with zeros.
SAR Instruction Operation (Source: Figure 7-8, Intel manual Volume 1)
With sar, the sign bit (the most significant bit) is
preserved. That is, if the sign bit is 0, the new bits always
get the value 0; if the sign bit is 1, the new bits always
get the value 1.
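The difference between sar and shr maps onto signed versus unsigned types in C. Below is a minimal sketch (an assumption about compiler behavior: right-shifting a negative signed value is implementation-defined in the C standard, but gcc implements it as an arithmetic shift):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    int32_t  s = -256;          /* bit pattern 0xFFFFFF00, sign bit set */
    uint32_t u = 0xFFFFFF00u;   /* the same bit pattern, but unsigned   */

    printf("%d\n", (int)(s >> 8));        /* arithmetic shift (sar): sign-extended, prints -1      */
    printf("%u\n", (unsigned)(u >> 8));   /* logical shift (shr): zero-filled, prints 16777215     */
    return 0;
}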
Expression: char equal1 = (i == j);
8048453: cmp eax,DWORD PTR [ebp+0xc]
8048456: sete al
8048459: mov BYTE PTR [ebp-0x41],al
cmp and the variants of the set instruction make up
all the logical comparisons. In this expression, cmp compares the
variables i and j; then sete stores the value 1 into the al register
if the comparison made by cmp was equal, or stores 0
otherwise. The general name for the variants of the set instruction
is SETcc. The suffix cc denotes the condition being
tested for in the EFLAGS register. Appendix B in volume 1,
"EFLAGS Condition Codes", lists the conditions it is possible
to test for with this instruction. Finally, the result is
stored in the variable equal1 at [ebp-0x41].
Expression: int equal2 = (i == j);
804845c: mov eax,DWORD PTR [ebp+0x8]
804845f: cmp eax,DWORD PTR [ebp+0xc]
8048465: movzx eax,al
8048468: mov DWORD PTR [ebp-0xc],eax
Similar to the equality comparison above, this expression also compares
for equality, with the exception that the result is stored in
an int type. For that reason, one more instruction is
added: the movzx instruction, a variant of mov that copies the
result into a destination operand and fills the remaining
bytes with 0. In this case, since eax is 4 bytes wide, after
copying the first byte into al, the remaining bytes of eax are
filled with 0 to ensure that eax carries the same value as al.
movzx instruction
eax before movzx:
+-----+-----+-----+-----+
| 12  | 34  | 56  | 78  |
+-----+-----+-----+-----+
eax after movzx eax, al:
+-----+-----+-----+-----+
| 00  | 00  | 00  | 78  |
+-----+-----+-----+-----+
Expression: char greater = (i > j);
804846b: mov eax,DWORD PTR [ebp+0x8]
804846e: cmp eax,DWORD PTR [ebp+0xc]
8048471: setg al
Similar to the equality comparison, but setg is used for the
greater-than comparison instead.
Expression: char less = (i < j);
804847a: cmp eax,DWORD PTR [ebp+0xc]
804847d: setl al
8048480: mov BYTE PTR [ebp-0x3f],al
setl is applied for the less-than comparison.
Expression: char greater_equal = (i >= j);
8048489: setge al
804848c: mov BYTE PTR [ebp-0x3e],al
setge is applied for the greater-than-or-equal comparison.
Expression: char less_equal = (i <= j);
804848f: mov eax,DWORD PTR [ebp+0x8]
8048495: setle al
8048498: mov BYTE PTR [ebp-0x3d],al
setle is applied for the less-than-or-equal comparison.
Expression: int logical_and = (i && j);
804849b: cmp DWORD PTR [ebp+0x8],0x0
804849f: je 80484ae <expr+0xd3>
80484a1: cmp DWORD PTR [ebp+0xc],0x0
80484a5: je 80484ae <expr+0xd3>
80484a7: mov eax,0x1
80484ac: jmp 80484b3 <expr+0xd8>
80484ae: mov eax,0x0
80484b3: mov DWORD PTR [ebp-0x8],eax
The logical AND operator && is one of the constructs that is implemented
entirely in software[footnote:
That is, there is no equivalent assembly instruction implemented
in hardware.
] out of simpler instructions. The algorithm in the assembly code
is simple (a C-level sketch is given after the list):
1. First, check whether i is 0 with the instruction at 0x804849b.
(a) If true, jump to 0x80484ae and set eax to 0.
(b) The variable logical_and is then set to 0 by the next
instruction after 0x80484ae.
2. If i is not 0, check whether j is 0 with the instruction at
0x80484a1.
3. If both i and j are not 0, the result is certainly 1:
(a) Set eax accordingly with the instruction at 0x80484a7.
(b) Then jump to the instruction at 0x80484b3 to set the
variable logical_and at [ebp-0x8] to 1.
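The same control flow can be written out by hand in C with goto. This is only a sketch of what the generated jumps do, not code from the original text; the function name is hypothetical, and the comments pair each line with the corresponding instruction above:

/* a C-level sketch of the jumps gcc generates for: int logical_and = i && j; */
int logical_and_sketch(int i, int j) {
    int logical_and;
    if (i == 0) goto set_zero;   /* cmp DWORD PTR [ebp+0x8],0x0 ; je  */
    if (j == 0) goto set_zero;   /* cmp DWORD PTR [ebp+0xc],0x0 ; je  */
    logical_and = 1;             /* mov eax,0x1                       */
    goto done;                   /* jmp                               */
set_zero:
    logical_and = 0;             /* mov eax,0x0                       */
done:
    return logical_and;          /* the original stores eax into [ebp-0x8] */
}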
Expression: int logical_or = (i || j);
80484b6: cmp DWORD PTR [ebp+0x8],0x0
80484ba: jne 80484c2 <expr+0xe7>
80484bc: cmp DWORD PTR [ebp+0xc],0x0
80484c0: je 80484c9 <expr+0xee>
80484c2: mov eax,0x1
80484c7: jmp 80484ce <expr+0xf3>
80484ce: mov DWORD PTR [ebp-0x4],eax
The logical OR operator || is similar to logical AND above.
Understanding the algorithm is left as an exercise for the reader.
Expression: ++i; and --i; (or i++ and i--)
80484d1: add DWORD PTR [ebp+0x8],0x1
80484d5: sub DWORD PTR [ebp+0x8],0x1
The increment and decrement operators are similar to logical
AND and logical OR in that they are implemented with an existing
instruction, namely add. The difference is that the CPU
actually does have built-in inc and dec instructions, but gcc decided not
to use them because inc and dec cause a partial
flag register stall, which occurs when an instruction modifies a
part of the flag register and the following instruction is
dependent on the outcome of the flags (section 3.5.2.6, Intel Optimization Manual, 2016
). The manual even suggests that inc and dec should be
replaced with add and sub instructions (section 3.5.1.1, Intel Optimization Manual, 2016
).
Expression: int i1 = i++;
80484d9: mov eax,DWORD PTR [ebp+0x8]
80484dc: lea edx,[eax+0x1]
80484df: mov DWORD PTR [ebp+0x8],edx
First, i is copied into eax at 80484d9. Then, the value of
eax + 0x1 is computed into edx as an effective address at
80484dc. The lea (load effective address) instruction copies
a memory address into a register. According to Volume 2, the
source operand is a memory address specified with one of the
processor's addressing modes. This means the source operand
must be specified by the addressing modes defined in the
16-bit/32-bit ModR/M byte tables, [mod-rm-16] and [mod-rm-32].
After loading the incremented value into edx, the value of i
is increased by 1 at 80484df. Finally, the previous value of i
is stored back into i1 at [ebp-0x8] by the instruction at
80484e2.
Expression: int i2 = ++i;
80484e5: add DWORD PTR [ebp+0x8],0x1
80484e9: mov eax,DWORD PTR [ebp+0x8]
80484ec: mov DWORD PTR [ebp-0xc],eax
The primary differences between this increment syntax and the
previous one are:
• add is used instead of lea to increase i directly.
• the newly incremented i is stored into i2 instead of the
old value.
• the expression only costs 3 instructions instead of 4.
This prefix-increment syntax is faster than the postfix one
used previously. It might not matter much which version is
used if the increment only runs once or a few hundred times
in a small loop, but it matters when a loop runs millions of
times or more. Also, depending on the circumstances, it is
more convenient to use one over the other, e.g. if i is an
index for accessing an array, we might want the old value
for accessing the previous array element and the newly incremented i
for the current element, as the sketch below shows.
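For example (a hypothetical snippet, not from the original listing), the choice between the two forms matters when the value of the expression itself is used as an array index:

#include <stdio.h>

int main(void) {
    int arr[4] = {10, 20, 30, 40};
    int i = 1;

    int prev = arr[i++];   /* postfix: uses the old i (1), so prev = 20; i becomes 2 */
    int curr = arr[++i];   /* prefix:  increments first, i becomes 3, so curr = 40   */

    printf("prev=%d curr=%d i=%d\n", prev, curr, i);   /* prints: prev=20 curr=40 i=3 */
    return 0;
}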
Expression: int i3 = i--;
80484ef: mov eax,DWORD PTR [ebp+0x8]
80484f2: lea edx,[eax-0x1]
80484f5: mov DWORD PTR [ebp+0x8],edx
80484f8: mov DWORD PTR [ebp-0x8],eax
Similar to i++ syntax, and is left as an exercise to readers.
Expression: int i4 = --i;
80484fb: sub DWORD PTR [ebp+0x8],0x1
8048502: mov DWORD PTR [ebp-0x4],eax
Similar to ++i syntax, and is left as an exercise to readers.
Read section 3.5.2.4, "Partial Register Stalls" to understand
register stalls in general.
Read the sections from 7.3.1 to 7.3.7 in volume 1.
A stack is a contiguous array of memory locations that holds a
collection of discrete data. When a new element is added, the stack
grows down in memory toward lesser addresses, and it shrinks up
toward greater addresses when an element is removed. x86 uses the
esp register to point to the top of the stack, at the newest
element. A stack can start anywhere in main memory, as
esp can be set to any memory address. x86 provides two operations
for manipulating stacks (a C model of the two operations follows the figure below):
• the push instruction and its variants add a new element on top of
the stack
• the pop instruction and its variants remove the top-most element
from the stack.
+----------+----+
| 0x10000  | 00 |
+----------+----+
| 0x10004  | 12 | <- esp
+----------+----+
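As a rough C model of the two operations (purely illustrative; the real push and pop are single instructions operating on esp and main memory, and the array size here is an arbitrary assumption):

/* the stack grows toward lower addresses; esp always points at the newest element */
unsigned int stack_memory[16];
unsigned int *esp = stack_memory + 16;     /* empty stack: esp sits just past the end */

void push(unsigned int value) {
    *(--esp) = value;                      /* decrement esp first, then store */
}

unsigned int pop(void) {
    return *(esp++);                       /* read the top element, then increment esp */
}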
Local variables are variables that exist within a scope. A scope
is delimited by a pair of braces: {..}. The most common scope in which to
define local variables is function scope. However, a scope can
be unnamed, and variables created inside an unnamed scope do not
exist outside of that scope and its inner scopes.
Function scope:
void foo() {
    int a;
    int b;
}
a and b are variables local to the function foo.
Unnamed scope:
int foo() {
    int i;
    {
        int a = 1;
        int b = 2;
        return i = a + b;
    }
}
a and b are local to the unnamed scope in which they are defined and to its
inner child scopes, such as the statement return i = a + b. However, they do not
exist at the function scope where i is created.
When a local variable is created, it is pushed onto the stack; when
a local variable goes out of scope, it is popped off the stack and
thus destroyed. When an argument is passed from a caller to a
callee, it is pushed onto the stack; when the callee returns to the
caller, the arguments are popped off the stack. Local
variables and arguments are automatically allocated upon entering a
function and destroyed after exiting it; that is why they are
called automatic variables.
A base frame pointer points to the start of the current function's
frame, and is kept in the ebp register. Whenever a function is
called, it is allocated its own dedicated storage on the stack,
called a stack frame. A stack frame is where all local variables
and arguments of a function are placed on the stack[footnote:
Data and only data are exclusively allocated on the stack for every
stack frame. No code resides here.
]
When a function needs a local variable or an argument, it uses
ebp to access the variable:
• All local variables are allocated after the ebp pointer. Thus,
to access a local variable, a number is subtracted from ebp to
reach the location of the variable.
• All arguments are allocated before the ebp pointer. To access an
argument, a number is added to ebp to reach the location of the
argument.
• The ebp pointer itself points to the saved ebp of its caller;
the caller's return address sits just above it, at [ebp+0x4].
+--------------------------------------+---------------------------------------------------------------------------+
| Previous Frame | Current Frame |
+--------------------------------------+-----------------------------+----------+----------------------------------+
| Function Arguments | | ebp | Local variables |
+-----+-----+-----+-----------+--------+-----------------------------+----------+-----+-----+-----+-----------+----+
| A1 | A2 | A3 | ........ | An | Return Address | Old ebp | L1 | L2 | L3 | ........ | Ln |
A = Argument
L = Local Variable
Here is an example to make it more concrete:
int add(int a, int b) {
    int i = a + b;
    return i;
}
080483db <add>:
int add(int a, int b) {
int i = a + b;
80483ec: mov eax,DWORD PTR [ebp-0x4]
80483ef: leave
In the assembly listing, [ebp-0x4] is the local variable i, since
it is allocated after ebp, with the length of 4 bytes (an int).
On the other hand, a and b are arguments and can be accessed with
ebp:
• [ebp+0x8] accesses a.
• [ebp+0xc] accesses b.
For accessing arguments, the rule is that the closer a variable
on the stack is to ebp, the closer it is to the function name in the argument list.
+-------------------+ +-------------------+ +-------------------+ +-------------------+
| ebp+0xc | | ebp+0x8 | | ebp+0x4 | | ebp |
+----------+-----+-----+-----+--------------+-----+-----+-----+--------------+-----+-----+-----+--------------+-----+-----+-----+-------------+
| | 00 | 01 | 02 | 03 | 04 | 05 | 06 | 07 | 08 | 09 | 0a | 0b | 0c | 0d | 0e | 0f |
+----------+--------------------------------+--------------------------------+--------------------------------+-------------------------------+
| 0x10000 | b | a | Return Address | Old ebp |
+-------------------+ +-------------------+
| ebp+0x8 | | ebp+0x4 |
+----- +----- +-------------+
+----------+-----+-----+-----+--------------+-----+-----+-----+--------------+-----+-----+-----+--------------+-------------------------------+
| 0xffe0 | | | | | | | | | | | | N | i |
N = Next local variable starts here
From the figure, we can see that a and b are laid out in memory
in exactly the order written in C, relative to the return
address.
Function Call and Return
int local = 0x12345;
add(1,1);
For every function call, gcc pushes the arguments onto the stack in
reverse order with push instructions. That is, the
arguments are pushed onto the stack in the reverse of the order
written in the high-level C code, to preserve the relative order
between arguments, as seen in the previous section on how function
arguments and local variables are laid out. Then, gcc generates
a call instruction, which implicitly pushes a return
address before transferring control to the add function:
080483f2 <main>:
80483f2: push ebp
80483f3: mov ebp,esp
80483f5: push 0x2
80483f9: call 80483db <add>
80483fe: add esp,0x8
8048401: mov eax,0x0
8048406: leave
8048407: ret
Upon finishing the call to the add function, the stack is restored by
adding 0x8 to the stack pointer esp (which is equivalent to two pop
instructions). Finally, a leave instruction is executed and main
returns with a ret instruction. A ret instruction transfers
program execution back to the caller, to the instruction right
after the call instruction (here, the add esp,0x8 instruction). The reason ret
can return to that location is that the return address was implicitly
pushed by the call instruction, and it is the address right after
the call instruction; whenever the CPU executes a ret instruction,
it retrieves the return address that sits right after all the
arguments on the stack:
At the end of a function, gcc places a leave instruction to clean
up all the space allocated for local variables and restore the frame
pointer to the frame pointer of the caller.
80483e1: mov DWORD PTR [ebp-0x4],0x12345
80483eb: mov eax,DWORD PTR [ebp+0xc]
80483ee: add eax,edx
The above code that gcc generated for function calling is
actually the standard x86 calling method. Read chapter 6, "
Procedure Calls, Interrupts, and Exceptions", Intel manual volume
1, for the full details.
A loop simply resets the instruction pointer to an already
executed instruction and starts from there all over again. A
loop is just one application of the jmp instruction. However, because
looping is such a pervasive pattern, it earned its own syntax in C.
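The C source for the listing below did not survive into this text; judging from the assembly (initialize a local to 0, compare it against 9 with jle, add 1 each iteration), it was presumably a simple counting loop along these lines:

int main(int argc, char *argv[]) {
    int i;
    for (i = 0; i < 10; i++) {
        /* empty body */
    }
    return 0;
}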
80483e1: mov DWORD PTR [ebp-0x4],0x0
80483e8: jmp 80483ee <main+0x13>
80483ea: add DWORD PTR [ebp-0x4],0x1
80483ee: cmp DWORD PTR [ebp-0x4],0x9
80483f2: jle 80483ea <main+0xf>
80483f4: b8 00 00 00 00 mov eax,0x0
80483f9: c9 leave
80483fa: c3 ret
80483fb: 66 90 xchg ax,ax
80483fd: 66 90 xchg ax,ax
80483ff: 90 nop
The following points map the high-level code to the corresponding assembly code:
1. The instruction at 80483e1 initializes i to 0.
2. The instructions at 80483ee and 80483f2 compare i to 10, by using jle
against the value 9. If the condition holds, execution jumps back to 80483ea for another
iteration.
3. The instruction at 80483ea increases i by 1, allowing the loop
to terminate once the exit condition is reached.
Why does the increment instruction (at 80483ea)
appear before the compare instructions (at 80483ee and 80483f2)?
What assembly code can be generated for while and do...while?
Again, a conditional in C with the if...else... construct is just
another application of the jmp instruction under the hood. It is also
a pervasive pattern that earned its own syntax in C.
int i = 0;
if (argc) {
    i = 1;
} else {
    i = 0;
}
80483db: push ebp
80483dc: mov ebp,esp
80483de: sub esp,0x10
80483e1: mov DWORD PTR [ebp-0x4],0x0
80483e8: cmp DWORD PTR [ebp+0x8],0x0
80483ec: je 80483f7 <main+0x1c>
80483ee: mov DWORD PTR [ebp-0x4],0x1
80483f5: jmp 80483fe <main+0x23>
80483f7: mov DWORD PTR [ebp-0x4],0x0
80483fe: mov eax,0x0
8048403: leave
8048404: ret
The generated assembly code follows the same order as the
corresponding high-level syntax:
• the instructions from 80483e8 to 80483f5 implement the if branch.
• the instruction at 80483f7 implements the else branch.
• the instruction at 80483fe is the exit point for both the if and else branches.
The if branch first checks whether argc is false (equal to 0)
with the cmp instruction. If so, it proceeds to the else branch at
80483f7. Otherwise, the if branch continues with the code of its
own branch, which is the next instruction at 80483ee, copying 1
into i. Finally, it skips over the else branch and proceeds to
80483fe, which is the next instruction past the if...else...
construct.
The else branch is entered when the comparison made by the cmp instruction
in the if branch holds, i.e. argc equals 0. The else branch starts at 80483f7, its first
instruction. That instruction copies 0 into i, and execution then
proceeds naturally to the next instruction past the
if...else... construct without any jump.
The Anatomy of a Program
Every program consists of code and data, and only those two
components make up a program. However, if a program consisted
purely of its own code and data, an operating system (as well as a
human) would not know which block of binary in the program is code
and which is just raw data, where in the program to start
execution, which regions of memory should be protected and which
are free to modify. For that
reason, each program carries extra metadata to communicate to
the operating system how to handle the program.
When a source file is compiled, the generated machine code is
stored in an object file[margin:
object file
], which is just a block of binary. One or more object
files can be combined to produce an executable binary[margin:
executable binary
], which is a complete program runnable in an
operating system.
readelf is a program that recognizes and displays the ELF
metadata of a binary file, be it an object file or an executable
binary. ELF, or Executable and Linkable Format, is the content at
the very beginning of an executable that provides an operating
system the necessary information to load the executable into main memory and run it.
ELF can be thought of as similar to the table of
contents of a book. In a book, a table of contents lists the page
numbers of the main sections, subsections, sometimes even figures
and tables for easy lookup. Similarly, ELF lists the various sections
used for code and data, and the memory addresses of each symbol
along with other information.
An ELF binary is composed of:
• An ELF header[margin:
ELF header
]: the very first section of an executable that
describes the file's organization.
• A program header table[margin:
program header table
]: an array of fixed-size structures that
describes the segments of an executable.
• A section header table[margin:
section header table
]: an array of fixed-size structures that
describes the sections of an executable.
• Segments and sections[margin:
Segments and sections
]: the main content of an ELF binary,
which is the code and data, divided into chunks of different
purposes.
A segment is a composition of zero or more sections and
is directly loaded by an operating system at runtime.
A section is a block of binary that is either:
– actual program code and data that is available in memory when
a program runs, or
– metadata about other sections, used only in the linking
process, which disappears from the final executable.
The linker uses sections to build segments.
ELF - Linking View vs Executable View (Source: Wikipedia)
Later we will compile our kernel as an ELF executable with GCC,
and explicitly specify how segments are created and where they
are loaded in memory through the use of a linker script, a text file
that instructs how a linker should generate a binary. For now, we
will examine the anatomy of an ELF executable in detail.
The [margin:
ELF specification
]ELF specification is bundled as a man page in Linux:
$ man elf
It is a useful resource for understanding and implementing ELF. However,
it will be much easier to use after you finish this chapter, as
the specification mixes implementation details into it.
The default specification is a generic one, which every ELF
implementation follows. However, each platform provides extra
features unique to it. The ELF specification for x86 is currently
maintained on Github by H.J. Lu: https://github.com/hjl-tools/x86-psABI/wiki/X86-psABI
Platform-dependent details are referred to as "processor specific"
in the generic ELF specification. We will not explore these
details, but study the generic details, which are enough for
crafting an ELF binary image for our operating system.
To see the information of an ELF header:
$ readelf -h hello
The output:
ELF Header:
Magic: 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
Class: ELF64
Data: 2's complement, little endian
Version: 1 (current)
OS/ABI: UNIX - System V
ABI Version: 0
Type: EXEC (Executable file)
Machine: Advanced Micro Devices
X86-64
Version: 0x1
Entry point address: 0x400430
Start of program headers: 64 (bytes into file)
Start of section headers: 6648 (bytes into file)
Flags: 0x0
Size of this header: 64 (bytes)
Size of program headers: 56 (bytes)
Number of program headers: 9
Size of section headers: 64 (bytes)
Number of section headers: 31
Section header string table index: 28
Let's go through each field:
Magic
Displays the raw bytes that uniquely identify a file as an ELF
binary. Each byte gives a piece of brief information.
In the example, we have the following magic bytes:
Magic: 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
Examine it byte by byte:
Byte Description
7f 45 4c 46 Predefined values. The first byte is always 7F, the remaining 3
bytes represent the string "ELF".
02 See Class field below.
01 See Data field below.
01 See Version field below.
00 See OS/ABI field below.
00 00 00 00 00 00 00 00 Padding bytes. These bytes are unused and are always set to 0.
Padding bytes are added for proper alignment, and is reserved for
future use when more information is needed.
Class
A byte in the Magic field. It specifies the class or capacity of a
file.
Value Description
0 Invalid class
1 32-bit objects
2 64-bit objects
A byte in Magic field. It specifies the data encoding of the
processor-specific data in the object file.
Value Description
0 Invalid data encoding
1 Little endian, 2's complement
2 Big endian, 2's complement
A byte in Magic. It specifies the ELF header version number.
0 Invalid version
1 Current version
OS/ABI
A byte in Magic field. It specifies the target operating system
ABI. Originally, it was a padding byte.
Possible values: Refer to the latest ABI document, as it is a
long list of different operating systems.
Identifies the object file type.
0 No file type
1 Relocatable file
2 Executable file
3 Shared object file
4 Core file
0xff00 Processor specific, lower bound
0xffff Processor specific, upper bound
The values from 0xff00 to 0xffff are reserved for a processor
to define additional file types meaningful to it.
Specifies the required architecture value for an ELF file e.g.
x86_64, MIPS, SPARC, etc. In the example, the machine is of x86_64
architecture.
Possible values: Please refer to the latest ABI document, as it
is a long list of different architectures.
Specifies the version number of the current object file (not
the version of the ELF header, as the Version byte in the Magic
field above does).
Entry point address
Specifies the memory address of the very first code to be
executed. For a normal application program the default entry is the
_start routine (which eventually calls main), but it can be set to any
function by explicitly specifying the function name to gcc. For the
operating system we are going to write, this is the single most
important field that we need to retrieve to bootstrap our
kernel; everything else can be ignored.
Start of program headers
The offset of the program header table, in bytes. In the
example, this number is 64 bytes, which means the 65th byte, or
<start address> + 64, is the start address of the program
header table. That is, if a program is loaded at address 0x10000
in memory, then the start address is 0x10000 (the very first
byte of Magic field, where the value 0x7f resides) and the
start address of program header table is 0x10000 + 0x40 = 0x10040
Start of section headers
The offset of the section header table in bytes, similar to the
start of program headers. In the example, it is 6648 bytes into
the file.
Flags
Hold processor-specific flags associated with the file. When
the program is loaded, in a x86 machine, EFLAGS register is set
according to this value. In the example, the value is 0x0,
which means EFLAGS register is in a clear state.
Size of this header
Specifies the total size of ELF header's size in bytes. In the
example, it is 64 bytes, which is equivalent to Start of
program headers. Note that these two numbers are not necessary
equivalent, as program header table might be placed far away
from the ELF header. The only fixed component in the ELF
executable binary is the ELF header, which appears at the very
beginning of the file.
Size of program headers
Specifies the size of each program header in bytes. In the
example, it is 64 bytes.
Number of program headers
Specifies the total number of program headers. In the example,
the file has a total of 9 program headers.
Size of section headers
Specifies the size of each section header in bytes. In the
example, it is 64 bytes.
Number of section headers
Specifies the total number of section headers. In the example,
the file has a total of 31 section headers. In a section header
table, the first entry in the table is always an empty section.
Section header string table index
Specifies the index of the header in the section header table
that points to the section that holds all null-terminated
strings. In the example, the index is 28, which means it's the
28th entry of the table.
As we know already, code and data compose a program. However, not
all types of code and data have the same purpose. For that
reason, instead of a big chunk of code and data, they are divided
into smaller chunks, and each chunk must satisfy these conditions
(according to gABI):
• Every section in an object file has exactly one section header
describing it. But, section headers may exist that do not have
a section.
• Each section occupies one contiguous (possibly empty) sequence
of bytes within a file. That means no two separate regions of
bytes belong to the same section.
• Sections in a file may not overlap. No byte in a file resides
in more than one section.
• An object file may have inactive space. The various headers and
the sections might not "cover" every byte in an object file.
The contents of the inactive data are unspecified.
To get all the headers from an executable binary e.g. hello, use
the following command:
$ readelf -S hello
Here is a sample output (do not worry if you don't understand the
output. Just skim to get your eyes familiar with it. We will
dissect it soon enough):
There are 31 section headers, starting at offset 0x19c8:
Section Headers:
[Nr] Name Type Address
Size EntSize Flags Link Info
[ 0] NULL 0000000000000000
0000000000000000 0000000000000000 0 0 0
[ 1] .interp PROGBITS 0000000000400238
000000000000001c 0000000000000000 A 0 0 1
[ 2] .note.ABI-tag NOTE 0000000000400254
0000000000000020 0000000000000000 A 0 0 4
[ 3] .note.gnu.build-i NOTE 0000000000400274
[ 4] .gnu.hash GNU_HASH 0000000000400298
[ 5] .dynsym DYNSYM 00000000004002b8
[ 6] .dynstr STRTAB 0000000000400300
[ 7] .gnu.version VERSYM 0000000000400338
[ 8] .gnu.version_r VERNEED 0000000000400340
[ 9] .rela.dyn RELA 0000000000400360
[10] .rela.plt RELA 0000000000400378
0000000000000018 0000000000000018 AI 5 24 8
[11] .init PROGBITS 0000000000400390
000000000000001a 0000000000000000 AX 0 0 4
[12] .plt PROGBITS 00000000004003b0
0000000000000020 0000000000000010 AX 0 0
[13] .plt.got PROGBITS 00000000004003d0
0000000000000008 0000000000000000 AX 0 0 8
[14] .text PROGBITS 00000000004003e0
[15] .fini PROGBITS 0000000000400574
[16] .rodata PROGBITS 0000000000400580
0000000000000004 0000000000000004 AM 0 0 4
[17] .eh_frame_hdr PROGBITS 0000000000400584
[18] .eh_frame PROGBITS 00000000004005c0
[19] .init_array INIT_ARRAY 0000000000600e10
00000e10
0000000000000008 0000000000000000 WA 0 0 8
[20] .fini_array FINI_ARRAY 0000000000600e18
[21] .jcr PROGBITS 0000000000600e20
[22] .dynamic DYNAMIC 0000000000600e28
00000000000001d0 0000000000000010 WA 6 0 8
[23] .got PROGBITS 0000000000600ff8
00000ff8
[24] .got.plt PROGBITS 0000000000601000
[25] .data PROGBITS 0000000000601020
[26] .bss NOBITS 0000000000601030
[27] .comment PROGBITS 0000000000000000
0000000000000034 0000000000000001 MS 0 0 1
[28] .shstrtab STRTAB 0000000000000000
000000000000010c 0000000000000000 0 0 1
[29] .symtab SYMTAB 0000000000000000
0000000000000648 0000000000000018 30 47 8
[30] .strtab STRTAB 0000000000000000
Key to Flags:
W (write), A (alloc), X (execute), M (merge), S (strings), l
(large)
I (info), L (link order), G (group), T (TLS), E (exclude), x
O (extra OS processing required) o (OS specific), p (processor
specific)
The first line:
There are 31 section headers, starting at offset 0x19c8
summarizes the total number of sections in the file and the
offset at which the section header table starts. Then comes the listing,
section by section, in the following format (which is also the header of the listing):
[Nr] Name Type Address Offset
Size EntSize Flags Link Info Align
Each section has two lines with different fields:
Nr The index of each section.
Name The name of each section.
Type This field (in a section header) identifies the type of
each section. Types classify sections (similar to how types in
programming languages are used by a compiler).
Address The starting virtual address of each section. Note that
the addresses are virtual only when a program runs in an OS
with support for virtual memory enabled. In our OS, since we
run on bare metal, the addresses will all be physical.
Offset The offset of each section into the file. An [margin:
offset
]offset is a distance in bytes from the first byte of a
file to the start of an object, such as a section or a segment
in the context of an ELF binary file.
Size The size in bytes of each section.
EntSize Some sections hold a table of fixed-size entries, such
as a symbol table. For such a section, this member gives the
size in bytes of each entry. The member contains 0 if the
section does not hold a table of fixed-size entries.
Flags describe the attributes of a section. Flags together with a
type define the purpose of a section. Two sections can be of
the same type but serve different purposes. For example, even
though .data and .text share the same type, .data holds the
initialized data of a program while .text holds the executable
instructions of a program. For that reason, .data is given read
and write permission, but not execute permission. Any attempt to
execute code in .data is denied by the running OS: in Linux,
such invalid section usage gives a segmentation fault.
ELF gives an OS the information needed to enable such a protection
mechanism. However, running on bare metal, nothing prevents us
from doing anything. Our OS can execute code in the data section,
and vice versa, write to the code section.
Section Flags
+-------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Flag | Descriptions |
| W | Bytes in this section are writable during execution. |
| A | Memory is allocated for this section during process execution.
Some control sections do not reside in the memory image of an
object file; this attribute is off for those sections. |
| X | The section contains executable instructions. |
| M | The data in the section may be merged to eliminate duplication.
Each element in the section is compared against other elements in
sections with the same name, type and flags. Elements that would
have identical values at program run-time may be merged. |
| S | The data elements in the section consist of null-terminated
character strings. The size of each character is specified in the
section header's EntSize field. |
| l | Specific large section for x86_64 architecture. This flag is not
specified in the Generic ABI but in x86_64 ABI. |
| I | The Info field of this section header holds an index of a section
header. Otherwise, the number is the index of something else. |
| L | Preserve section ordering when linking. If this section is
combined with other sections in the output file, it must appear
in the same relative order with respect to those sections, as the
linked-to section appears with respect to sections the linked-to
section is combined with. Apply when the Link field of this
section's header references another section (the linked-to
section) |
| G | This section is a member (perhaps the only one) of a section
group. |
| T | This section holds Thread-Local Storage, meaning that each thread
has its own distinct instance of this data. A thread is a
distinct execution flow of code. A program can have multiple
threads that pack different pieces of code and execute
separately, at the same time. We will learn more about threads
when writing our kernel. |
| E | Link editor is to exclude this section from executable and shared
library that it builds when those objects are not to be further
relocated. |
| x | Unknown flag to readelf. It happens because the linking process
can be done manually with a linker like GNU ld (as we will see
later). That is, section flags can be specified manually, and
some flags are for a customized ELF that the open-source readelf
doesn't know of. |
| O | This section requires special OS-specific processing (beyond the
standard linking rules) to avoid incorrect behavior. A link
editor encounters sections whose headers contain OS-specific
values it does not recognize by Type or Flags values defined by
ELF standard, the link editor should combine those sections. |
| o | All bits included in this flag are reserved for operating
system-specific semantics. |
| p | All bits included in this flag are reserved for
processor-specific semantics. If meanings are specified, the
processor supplement explains them. |
Link and Info are numbers that reference the indexes of
sections, symbol table entries or hash table entries. The Link field
holds the index of a section, while the Info field holds an index
of a section, a symbol table entry or a hash table entry,
depending on the type of the section.
Later when writing our OS, we will handcraft the kernel image
by explicitly linking the object files (produced by gcc)
through a linker script. We will specify the memory layout of
sections by specifying at what addresses they will appear in
the final image. But we will not assign any section flag and
let the linker take care of it. Nevertheless, knowing which
flag does what is useful.
Align is a value that enforces the offset of a section should
be divisible by the value. Only 0 and positive integral powers
of two are allowed. Values 0 and 1 mean the section has no
alignment constraint.
Output of the .interp section:
Nr is 1.
Type is PROGBITS, which means this section is part of the
program itself.
Address is 0x0000000000400238, which means the section is
loaded at this virtual memory address at runtime.
Offset is 0x00000238 bytes into file.
Size is 0x000000000000001c in bytes.
EntSize is 0, which means this section does not have any
fixed-size entry.
Flags are A (Allocatable), which means this section consumes
memory at runtime.
Info and Link are 0 and 0, which means this section links to no
section or entry in any table.
Align is 1, which means no alignment.
Output of the .text section:
Nr is 14.
Address is 0x00000000004003e0, which means the section is
loaded at this virtual memory address at runtime.
Offset is 0x000003e0 bytes into the file.
Size is 0x0000000000000192 in bytes.
Flags are A (Allocatable) and X (Executable), which means this
section consumes memory and can be executed as code at runtime.
Align is 16, which means the starting address of the section
should be divisible by 16, or 0x10. Indeed, it is: 0x3e0 / 0x10 = 0x3e.
In this section, we will learn different details of section types
and the purposes of special sections e.g. .bss, .text, .data...
by looking at each section one by one. We will also examine the
content of each section as a hexdump with the commands:
$ readelf -x <section name|section number> <file>
For example, if you want to examine the content of section with
index 25 (the .bss section in the sample output) in the file
hello:
$ readelf -x 25 hello
Equivalently, using the name instead of the index works:
$ readelf -x .bss hello
If a section contains strings e.g. string symbol table, the flag
-x can be replaced with -p.
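For example, to dump the printable strings of the .comment section (which usually holds compiler version strings), one could run:
$ readelf -p .comment hello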
NULL marks a section header as inactive and not having an
associated section. The NULL section is always the first entry of the
section header table. It means that any useful section starts from index 1.
The sample output of NULL section:
[Nr] Name Type Address
Size EntSize Flags Link Info
[ 0] NULL 0000000000000000
0000000000000000 0000000000000000 0 0
Examining the content, the section is empty:
Section '' has no data to dump.
NOTE marks a section with special information that other
programs will check for conformance, compatibility... by a
vendor or a system builder.
In the sample output, we have 2 NOTE sections:
0000000000000020 0000000000000000 A 0 0
Examine 2nd section with the command:
$ readelf -x 2 hello
Hex dump of section '.note.ABI-tag':
0x00400254 04000000 10000000 01000000 474e5500
............GNU.
0x00400264 00000000 02000000 06000000 20000000 ............
PROGBITS indicates a section holding the main content of a
program, either code or data.
There are many PROGBITS sections:
000000000000001c 0000000000000000 A 0 0
000000000000001a 0000000000000000 AX 0 0
0000000000000004 0000000000000004 AM 0 0
0000000000000008 0000000000000008 WA 0 0
0000000000000034 0000000000000001 MS 0 0
For our operating system, we only need the following section:
.text
This section holds all the compiled code of a program.
.data
This section holds the initialized data of a program. Since
the data are initialized with actual values, gcc allocates
the section with actual byte in the executable binary.
.rodata
This section holds read-only data, such as fixed-size strings
in a program, e.g. "Hello World", and others.
.bss
This section, short for Block Started by Symbol, holds the
uninitialized data of a program. Unlike the other sections, no
space is allocated for it in the image of the
executable binary on disk; the section is allocated only when
the program is loaded into main memory. (A small sketch of which C
definitions typically land in which of these sections follows below.)
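As a quick illustrative sketch (a hypothetical source file, not one from this chapter), these are the sections where such definitions typically end up; the exact placement can vary with compiler version and options:

#include <stdio.h>

int  initialized_global = 42;    /* initialized data   -> .data                        */
int  uninitialized_global;       /* uninitialized data -> .bss (or a COMMON symbol)    */
const char message[] = "Hello";  /* read-only data     -> .rodata                      */

int main(void) {                 /* compiled code      -> .text                        */
    printf("%s %d %d\n", message, initialized_global, uninitialized_global);
    return 0;
}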
Other sections are mainly needed for dynamic linking, that is,
code linking at runtime for sharing between many programs. To
enable such a feature, an OS as a runtime environment must be
present. Since we run our OS on bare metal, we are
effectively creating such an environment ourselves. For simplicity, we won't
add dynamic linking to our OS.
SYMTAB and DYNSYM These sections hold symbol tables. A symbol
table is an array of entries that describe the symbols in a
program. A symbol is a name assigned to an entity in a program.
The types of these entities are also the types of symbols; the
possible types of an entity are listed further below.
In the sample output, sections 5 and 29 are symbol tables:
0000000000000648 0000000000000018 30 47
To show the symbol tables:
$ readelf -s hello
Output consists of 2 symbol tables, corresponding to the two
sections above, .dynsym and .symtab:
Symbol table '.dynsym' contains 4 entries:
Num: Value Size Type Bind Vis Ndx
0: 0000000000000000 0 NOTYPE LOCAL DEFAULT UND
1: 0000000000000000 0 FUNC GLOBAL DEFAULT UND
puts@GLIBC_2.2.5 (2)
__libc_start_main@GLIBC_2.2.5 (2)
3: 0000000000000000 0 NOTYPE WEAK DEFAULT UND
__gmon_start__
Symbol table '.symtab' contains 67 entries:
59: 0000000000601040 0 NOTYPE GLOBAL DEFAULT 26
_end
60: 0000000000400430 42 FUNC GLOBAL DEFAULT 14
_start
__bss_start
63: 0000000000000000 0 NOTYPE WEAK DEFAULT UND
_Jv_RegisterClasses
64: 0000000000601038 0 OBJECT GLOBAL HIDDEN 25
__TMC_END__
_ITM_registerTMCloneTable
66: 00000000004003c8 0 FUNC GLOBAL DEFAULT 11
_init
TLS The symbol is associated with a Thread-Local Storage
entity.
Num is the index of an entry in a table.
Value is the virtual memory address where the symbol is
located.
Size is the size of the entity associated with a symbol.
Type is the type of a symbol, according to the list below.
NOTYPE The type of a symbol is not specified.
OBJECT The symbol is associated with a data object. In C, any
variable definition is of OBJECT type.
FUNC The symbol is associated with a function or other
executable code.
SECTION The symbol is associated with a section, and exists
primarily for relocation.
FILE The symbol is the name of a source file associated with
an executable binary.
COMMON The symbol labels an uninitialized variable. That is,
when a variable in C is defined as global variable without
an initial value, or as an external variable using the
extern keyword. In other words, these variables stay in
.bss section.
Bind is the scope of a symbol.
LOCAL are symbols that are only visible in the object files
that defined them. In C, the static modifier marks a symbol
(e.g. a variable/function) as local to only the file that
defines it.
If we define variables and functions with the static modifier:
static int global_static_var = 0;
static void local_func() {
    static int local_static_var = 0;
}
Then we get the static variables listed as local symbols
after compiling and examining the symbol table:
$ gcc -m32 hello.c -o hello
$ readelf -s hello
Num: Value Size Type Bind Vis Ndx Name
0: 00000000 0 NOTYPE LOCAL DEFAULT UND
1: 00000000 0 FUNC GLOBAL DEFAULT UND
puts@GLIBC_2.0 (2)
2: 00000000 0 NOTYPE WEAK DEFAULT UND
__libc_start_main@GLIBC_2.0 (2)
4: 080484bc 4 OBJECT GLOBAL DEFAULT 16
_IO_stdin_used
......... output omitted .........
38: 0804a020 4 OBJECT LOCAL DEFAULT 26
global_static_var
39: 0804840b 6 FUNC LOCAL DEFAULT 14
local_func
local_static_var.1938
GLOBAL are symbols that are accessible by other object files
when linking together. These symbols are primarily
non-static functions and non-static global data. The extern
modifier marks a symbol as externally defined elsewhere but
is accessible in the final executable binary, so an extern
variable is also considered GLOBAL.
Similar to the LOCAL example above, the output lists many
GLOBAL symbols such as main:
66: 080483e1 10 FUNC GLOBAL DEFAULT 14 main
WEAK are symbols whose definitions can be redefined.
Normally, a symbol with multiple definitions is reported
as an error by a compiler. However, this constraint is relaxed
when a definition is explicitly marked as weak, which means
the default implementation can be replaced by a different
definition at link time.
Suppose we have a default implementation of the function
add and call it from main:
__attribute__((weak)) int add(int a, int b) {
    printf("warning: function is not implemented.\n");
    return 0;
}
/* in main: */
printf("add(1,2) is %d\n", add(1,2));
__attribute__((weak)) is a [margin:
function attribute
]function attribute. A function attribute is
extra information for the compiler to handle a function
differently from a normal function. In this example, the weak
attribute makes the function add a weak function, which
means the default implementation can be replaced by a
different definition at link time. Function attributes are
a feature of the compiler, not of standard C.
If we do not supply a different function definition in a
different file (it must be in a different file, otherwise
gcc reports an error), then the default implementation
is applied. When the function add is called, it only
prints the message "warning: function is not
implemented." and returns 0:
$ ./hello
warning: function is not implemented.
add(1,2) is 0
However, if we supply a different definition in another
file e.g. math.c:
int add(int a, int b) {
    return a + b;
}
and compile the two files together:
$ gcc math.c hello.c -o hello
Then, when running hello, no warning message is printed
and the correct value is returned.
A weak symbol is a mechanism for providing a default
implementation that can be replaced when a better
implementation is available (e.g. more specialized and
optimized) at link time.
Vis is the visibility of a symbol. The following values are possible:
Symbol Visibility
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Value | Description |
| DEFAULT | The visibility is specified by the binding type of asymbol.
• Global and weak symbols are visible outside of their defining
component (executable file or shared object).
• Local symbols are hidden. See HIDDEN below. |
| HIDDEN | A symbol is hidden when the name is not visible to any other
program outside of its running program. |
| PROTECTED | A symbol is protected when it is shared outside of its running
program or shared library and cannot be overridden. That is, there
can only be one definition for this symbol across running
programs that use it. No program can define its own definition of
the same symbol. |
| INTERNAL | Visibility is processor-specific and is defined by
processor-specific ABI. |
Ndx is the index of a section that the symbol is in. Aside from
fixed index numbers that represent section indexes, index has
these special values:
Symbol Index
+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Value | Description |
| ABS | The index will not be changed by any symbol relocation. |
| COM | The index refers to an unallocated common block. |
| UND | The symbol is undefined in the current object file, which means
the symbol depends on the actual definition in another file.
Undefined symbols appears when the object file refers to symbols
that are available at runtime, from shared library. |
| LORESERVE
HIRESERVE | LORESERVE is the lower boundary of the reserve indexes. Its value
is 0xff00.
HIREVERSE is the upper boundary of the reserve indexes. Its value
is 0xffff.
The operating system reserves exclusive indexes between LORESERVE
and HIRESERVE, which do not map to any actual section header. |
| XINDEX | The index is larger than LORESERVE. The actual value will be
contained in the section SYMTAB_SHNDX, where each entry is a
mapping between a symbol, whose Ndx field is a XINDEX value, and
the actual index value. |
| Others | Sometimes, values such as ANSI_COM, LARGE_COM, SCOM
May 2019, Volume 49, Issue 5, pp 444–456
Axioms for the Boltzmann Distribution
Adam Brandenburger
Kai Steverson
A fundamental postulate of statistical mechanics is that all microstates in an isolated system are equally probable. This postulate, which goes back to Boltzmann, has often been criticized for not having a clear physical foundation. In this note, we provide a derivation of the canonical (Boltzmann) distribution that avoids this postulate. In its place, we impose two axioms with physical interpretations. The first axiom (thermal equilibrium) ensures that, as our system of interest comes into contact with different heat baths, the ranking of states of the system by probability is unchanged. Physically, this axiom is a statement that in thermal equilibrium, population inversions do not arise. The second axiom (energy exchange) requires that, for any heat bath and any probability distribution on states, there is a universe consisting of a system and heat bath that can achieve this distribution. Physically, this axiom is a statement that energy flows between system and heat bath are unrestricted. We show that our two axioms identify the Boltzmann distribution.
Boltzmann distribution Equal-probability postulate Thermodynamics Axioms
The postulates of statistical mechanics have been examined and debated ever since the beginnings of the field in the nineteenth century. A central postulate in equilibrium thermodynamics, put in place by Boltzmann, is that there is equal a priori probability that an isolated system will be found in any one of its microstates which are compatible with the overall constraints placed on the system. In the words of Planck [1], "all microscopic states are equally probable in dynamics".
The equal-probability assumption has been rationalized in several ways. One can simply appeal to the Laplacian stance of insufficient reason. The observer's knowledge of the system does not yield a distinction among the microstates, so no distinction can legitimately be introduced via their probabilities of occurrence [2]. Jaynes [3] replaced this assumption with a maximum-entropy principle (a principle of "maximum noncommitment with respect to missing information") in order to derive the canonical (Boltzmann) distribution in the microcanonical ensemble. Goldstein et al. [4] proved that, for quantum systems, the canonical distribution arises for almost all wave functions of the universe (system plus heat bath). Popescu et al. [5] showed that, even without energy constraints, a "general canonical principle" can be established for quantum systems, under which a system will almost always behave as if the universe is in the equal-probability state.
In this note, we take a different route (for classical systems). We replace the equal-probability postulate with two physically interpretable axioms, which we show characterize the canonical (Boltzmann) distribution.
2 Axioms
In the usual (textbook) derivation, one fixes a heat bath \(\mathbb {B}\) at a temperature T and a system \(\mathbb {S}\) with possible states \(s_{i}\), for \(i=1,2,\ldots ,n\). The system \(\mathbb {S}\) specifies an energy level \(E_{i}\) for each state \(s_{i}\). (See Fig. 1.) The probability assigned to state \(s_{i}\) depends on the system \(\mathbb {S}\) and the heat bath \(\mathbb {B}\) and can therefore be written as \(p_{\mathbb {S}}(s_{i},\mathbb {B})\). One then appeals to the equal-probability postulate to write the ratios of probabilities of states as
$$\begin{aligned} \frac{p_{\mathbb {S}}(s_{i};\mathbb {B})}{p_{\mathbb {S}}(s_{j};\mathbb {B})}=\frac{\varOmega _{\mathbb {B}}(E_{\mathrm {total}}-E_{i})}{\varOmega _{\mathbb {B}}(E_{\mathrm {total}}-E_{j})}, \end{aligned}$$
where \(E_{\mathrm {total}}\) is the total energy of the composite \(\mathbb {S}+\mathbb {B},\) so that \(\varOmega _{\mathbb {B}}(E_{\mathrm {total}}-E_{i})\) is then the number of microstates of \(\mathbb {B}\). A Taylor expansion of the entropy \(S_{\mathbb {B}}(E_{\mathrm {total}}-E_{i})=k\ln \varOmega _{\mathbb {B}}\) of \(\mathbb {B}\) (where k is the Boltzmann constant), and use of the formula \(\partial S_{\mathbb {B}}/\partial E_{\mathrm {total}}=1/T\), yields the Boltzmann distribution
$$\begin{aligned} p_{\mathbb {S}}(s_{i};\mathbb {B})=\frac{1}{Z}e^{-\frac{E_{i}}{kT}}, \end{aligned}$$
where \(Z={\textstyle {\textstyle \varSigma _{j}}}e^{-E_{j}/kT}\) is the partition function (e.g., Mandl [2], pp. 52–56).
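For the reader's convenience, the step compressed into "A Taylor expansion of the entropy" above is, to first order in \(E_{i}\) (an editorial gloss on the standard argument, using only quantities already defined),
$$\begin{aligned} S_{\mathbb {B}}(E_{\mathrm {total}}-E_{i})\approx S_{\mathbb {B}}(E_{\mathrm {total}})-E_{i}\,\frac{\partial S_{\mathbb {B}}}{\partial E_{\mathrm {total}}}=S_{\mathbb {B}}(E_{\mathrm {total}})-\frac{E_{i}}{T}, \end{aligned}$$
so that \(\varOmega _{\mathbb {B}}(E_{\mathrm {total}}-E_{i})=e^{S_{\mathbb {B}}(E_{\mathrm {total}}-E_{i})/k}\propto e^{-E_{i}/kT}\), and Eq. (1) reduces to Eq. (2) after normalization.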
Fig. 1 System plus heat bath
Our derivation will also begin with ratios of probabilities, as in Eq. (1), but will not assume the equal-probability postulate. Our axioms are stated over a family \(\{\mathbb {S},\mathbb {S}^{\prime },\mathbb {S}^{\prime \prime },\ldots \}\) of systems and a family \(\{\mathbb {B},\mathbb {B}^{\prime },\mathbb {B}^{\prime \prime },\ldots \}\) of heat baths. All systems are defined on the same fixed underlying finite set of states \(\{s_{1},s_{2},\ldots ,s_{n}\}\).
Axiom 1
(Thermal Equilibrium) Associated with each heat bath \(\mathbb {B}\) there is a strictly increasing function \(G_{\mathbb {B}}:(0,\infty )\rightarrow (0,\infty )\) such that for any system \(\mathbb {S}\) and pair of states \(s_{i}\) and \(s_{j}\), the ratio equation
$$\begin{aligned} G_{\mathbb {B}}\left( \frac{p_{\mathbb {S}}(s_{i};\mathbb {B})}{p_{\mathbb {S}}(s_{j};\mathbb {B})}\right) =G_{\mathbb {\mathbb {B}^{\prime }}}\left( \frac{p_{\mathbb {S}}(s_{i};\mathbb {\mathbb {\mathbb {B}^{\prime }}})}{p_{\mathbb {S}}(s_{j};\mathbb {\mathbb {B}^{\prime }})}\right) \end{aligned}$$
is satisfied.
Our first axiom ensures that the probabilistic ranking of states of the system does not differ with changes in the heat bath. This is physically correct, since we are considering systems in equilibrium and, therefore, population inversions are not possible. If state \(s_{i}\) is more likely than another state \(s_{j}\), this is because \(s_{i}\) has lower energy than \(s_{j}\). In thermal equilibrium, the same probabilistic ranking of states will hold whether the heat bath is \(\mathbb {B}\) or \(\mathbb {B}{}^{\prime }\). Lemma 1 below states this formally. The axiom does allow the actual probability of a state of the system to depend on the particular heat bath \(\mathbb {B}\) to which the system is attached. This is the role of the \(G_{\mathbb {B}}\)-functions. Again, this is physically correct.
Lemma 1
If \(p_{\mathbb {S}}(s_{i};\mathbb {B})\ge p_{\mathbb {S}}(s_{j};\mathbb {B})\), then \(p_{\mathbb {S}}(s_{i};\mathbb {\mathbb {B}{}^{\prime }})\ge p_{\mathbb {S}}(s_{j};\mathbb {\mathbb {B}{}^{\prime }})\).
By hypothesis,
$$\begin{aligned} \frac{p_{\mathbb {S}}(s_{i};\mathbb {B})}{p_{\mathbb {S}}(s_{j};\mathbb {B})}\ge \frac{p_{\mathbb {S}}(s_{j};\mathbb {\mathbb {B}})}{p_{\mathbb {S}}(s_{i};\mathbb {\mathbb {B}})}, \end{aligned}$$
so that, since \(G_{\mathbb {B}}\) is increasing,
$$\begin{aligned} G_{\mathbb {B}}\left( \frac{p_{\mathbb {S}}(s_{i};\mathbb {B})}{p_{\mathbb {S}}(s_{j};\mathbb {B})}\right) \ge G_{\mathbb {\mathbb {\mathbb {B}}}}\left( \frac{p_{\mathbb {S}}(s_{j};\mathbb {\mathbb {B}})}{p_{\mathbb {S}}(s_{i};\mathbb {\mathbb {B}})}\right) . \end{aligned}$$
But, using Eq. (3),
$$\begin{aligned} G_{\mathbb {B}}\left( \frac{p_{\mathbb {S}}(s_{i};\mathbb {\mathbb {B}})}{p_{\mathbb {S}}(s_{j};\mathbb {\mathbb {B}})}\right) =G_{\mathbb {\mathbb {\mathbb {B}{}^{\prime }}}}\left( \frac{p_{\mathbb {S}}(s_{i};\mathbb {\mathbb {B}{}^{\prime }})}{p_{\mathbb {S}}(s_{j};\mathbb {\mathbb {B}{}^{\prime }})}\right) \text{ and } G_{\mathbb {\mathbb {\mathbb {B}}}}\left( \frac{p_{\mathbb {S}}(s_{j};\mathbb {\mathbb {B}})}{p_{\mathbb {S}}(s_{i};\mathbb {\mathbb {B}})}\right) =G_{\mathbb {\mathbb {\mathbb {B}{}^{\prime }}}}\left( \frac{p_{\mathbb {S}}(s_{j};\mathbb {\mathbb {B}{}^{\prime }})}{p_{\mathbb {S}}(s_{i};\mathbb {\mathbb {B}{}^{\prime }})}\right) , \end{aligned}$$
and, therefore,
$$\begin{aligned} G_{\mathbb {\mathbb {\mathbb {B}{}^{\prime }}}}\left( \frac{p_{\mathbb {S}}(s_{i};\mathbb {\mathbb {B}{}^{\prime }})}{p_{\mathbb {S}}(s_{j};\mathbb {\mathbb {B}{}^{\prime }})}\right) \ge G_{\mathbb {\mathbb {\mathbb {B}{}^{\prime }}}}\left( \frac{p_{\mathbb {S}}(s_{j};\mathbb {\mathbb {B}{}^{\prime }})}{p_{\mathbb {S}}(s_{i};\mathbb {\mathbb {B}{}^{\prime }})}\right) , \end{aligned}$$
from which, since \(G_{\mathbb {\mathbb {\mathbb {B}{}^{\prime }}}}\) is increasing,
$$\begin{aligned} \frac{p_{\mathbb {S}}(s_{i};\mathbb {\mathbb {B}{}^{\prime }})}{p_{\mathbb {S}}(s_{j};\mathbb {\mathbb {B}{}^{\prime }})}\ge \frac{p_{\mathbb {S}}(s_{j};\mathbb {\mathbb {B}{}^{\prime }})}{p_{\mathbb {S}}(s_{i};\mathbb {\mathbb {B}{}^{\prime }})}, \end{aligned}$$
or \(p_{\mathbb {S}}(s_{i};\mathbb {\mathbb {B}{}^{\prime }})\ge p_{\mathbb {S}}(s_{j};\mathbb {\mathbb {B}{}^{\prime }})\), as required. \(\square \)
Our second axiom is designed to capture the fact that a heat bath \(\mathbb {B}\) is very large compared with a system \(\mathbb {S}\), so that any energy flows are possible between the two at the given temperature of the bath. We say this formally by fixing a heat bath \(\mathbb {B}\) and a probability distribution on the states \(\{s_{1},s_{2},\ldots ,s_{n}\}\). We then say that we can attach a system \(\mathbb {S}\) to \(\mathbb {B}\) so that the desired probabilities are obtained. Physically, we know we can do this. Indeed, Eq. (2) for the Boltzmann distribution tells us there are energy levels \(E_{i}\), for \(i=1,2,\ldots ,n\), that yield the probabilities in question. (If \(\lambda _{i}\) is the probability of state i, then we set \(E_{i}=-kT\ln \lambda _{i}\).) So, we attach a system \(\mathbb {S}\) with these energy levels to the heat bath \(\mathbb {B}\). Since \(\mathbb {B}\) is very large compared with \(\mathbb {S}\), we can always do this at the prevailing temperature T. Here is the formal statement. (We assume that \(\lambda \) has full support, i.e., that \(\lambda _{i}>0\) for all i. This guarantees that all ratios of probabilities are well-defined.)
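As a quick numerical illustration of this construction, here is a minimal Python sketch; the target distribution, the temperature and the helper function are illustrative choices and not part of the formal development.

```python
import numpy as np

def boltzmann(E, kT=1.0):
    """Probabilities of Eq. (2) for energy levels E at temperature kT."""
    w = np.exp(-np.asarray(E, dtype=float) / kT)
    return w / w.sum()

lam = np.array([0.5, 0.3, 0.2])   # any full-support target distribution
kT = 2.0
E = -kT * np.log(lam)             # energy levels E_i = -kT ln(lambda_i), as in the text
print(np.allclose(boltzmann(E, kT), lam))   # True: the attached system reproduces lam
```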
Axiom 2
(Energy Exchange) For any heat bath \(\mathbb {B}\) and any full-support probability distribution \(\lambda =(\lambda _{1},\lambda _{2},\ldots ,\lambda _{n})\) on \(\{s_{1},s_{2},\ldots ,s_{n}\}\), there is a system \(\mathbb {S}\) such that \(p_{\mathbb {S}}(\cdot ;\mathbb {B})=\lambda \).
We can now state our result, which is an axiomatic derivation of the Boltzmann distribution.
Theorem 1
Suppose Axioms 1 and 2 are satisfied. Then there are functions \(T:\{\mathbb {B},\mathbb {B}^{\prime },\mathbb {B}^{\prime \prime },\ldots \}\rightarrow (0,\infty )\) and \(E:\{s_{1},s_{2},\ldots ,s_{n}\}\times \{\mathbb {S},\mathbb {S}^{\prime },\mathbb {S}^{\prime \prime },\ldots \}\rightarrow (0,\infty )\) such that for each heat bath \(\mathbb {B}\) and system \(\mathbb {S}\), and for each \(i=1,2,\ldots ,n\),
$$\begin{aligned} p_{\mathbb {S}}(s_{i};\mathbb {B})=\frac{1}{Z(\mathbb {B},\mathbb {S})}e^{-\frac{E(s_{i},\mathbb {S})}{T(\mathbb {B})}}, \end{aligned}$$
where \(Z(\mathbb {B},\mathbb {S})=\sum _{j}e^{-E(s_{j},\mathbb {S})/T(\mathbb {B})}\).
Equation (4) is the Boltzmann distribution, with temperature \(T(\cdot )\) (as a function of the heat bath) and energy levels \(E(s_{1},\cdot ),E(s_{2},\cdot ),\ldots ,E(s_{n},\cdot )\) (as a function of the system). (We get \(k=1\) since temperature and energy are not measured in physical units here.) Notice that only positive temperatures are possible under our treatment. This makes sense, since we have assumed thermal equilibrium, and negative temperatures can arise only in systems which are (temporarily) out of equilibrium (e.g., Braun et al. [6]). Also, as expected in an abstract treatment, the fundamental quantity that emerges is \(E(\cdot ,\cdot )/T(\cdot )\), namely, entropy. We can be more precise about this last point by establishing the uniqueness properties of the functions T and E that represent a given heat bath and system.
Theorem 2
Assume that, for each heat bath \(\mathbb {B}\), it is not the case that all states have equal probability. Suppose a system \(\mathbb {S}\) satisfies Eq. (4) with functions E and T. Then \(\mathbb {S}\) satisfies Eq. (4) with functions \(\widetilde{E}\) and \(\widetilde{T}\) if and only if there are real numbers \(\alpha >0\) and \(\beta \) such that
$$\begin{aligned} E(s_{i},\mathbb {S})= & {} \alpha \widetilde{E}(s_{i},\mathbb {S})\,+\,\beta \hbox { for all states }s_{i},\\ T(\mathbb {B})= & {} \alpha \widetilde{T}(\mathbb {B})\hbox { for all heat baths }\mathbb {B}. \end{aligned}$$
(Physically speaking, the equal-probability case ruled out is that of infinite temperature.) Notice that the scaling factor for T is the same as the multiplicative factor in the affine transformation of E. It follows that, while the ratios \(E\left( \cdot ,\cdot \right) /T(\cdot )\) are not unique, the differences between these ratios, i.e., the entropy differences
$$\begin{aligned} \frac{E(s_{i},\mathbb {S})-E(s_{j},\mathbb {S})}{T\left( \mathbb {B}\right) } \end{aligned}$$
between states, are unique. Again, we expect this on physical grounds.
We have shown that two physically interpretable axioms can replace the traditional equal-probability postulate of equilibrium thermodynamics. The first axiom is an abstraction of the notion that the probabilistic ranking of states is the same across systems in equilibrium. The second axiom is an abstraction of the notion that all energy flows are possible between the system in question and a heat bath to which it is attached, at the given temperature of the bath. Together, these two axioms characterize the Boltzmann distribution. That is, we establish both that the axioms identify the Boltzmann distribution and, conversely, that the Boltzmann distribution satisfies the axioms.
Two extensions of this work would be interesting. The first extension would be to quantum systems, to see if our characterization goes through and to compare the resulting analysis with those of Goldstein et al. [4] and Popescu et al. [5]. A second extension would be to continuous probability distributions, where new mathematical issues may arise.
We thank Samson Abramsky, Paul Glimcher, Shane Manesfield, two referees, and the managing editor for important input. Financial support from National Institutes of Health Grant No. R01DA038063, NYU Stern School of Business, NYU Shanghai, and J.P. Valles is gratefully acknowledged.
Proof of Theorem 1
We choose heat bath \(\mathbb {B}\) as a reference point. Since \(G_{\mathbb {B}}\) is strictly increasing, it is invertible. Therefore, for any (other) heat bath \(\mathbb {B^{\prime }}\), we can define a function \(H_{\mathbb {\mathbb {B^{\prime }}}}:\left( 0,\infty \right) \rightarrow \left( 0,\infty \right) \) by
$$\begin{aligned} H_{\mathbb {\mathbb {B^{\prime }}}}\left( t\right) =G_{\mathbb {B}}^{-1}(G_{\mathbb {\mathbb {B^{\prime }}}}(t)). \end{aligned}$$
By Axiom 1, we have that for any system \(\mathbb {S}\) and pair of states r, s,
$$\begin{aligned} G_{\mathbb {\mathbb {B^{\prime }}}}\left( \frac{p_{\mathbb {S}}(r;\mathbb {\mathbb {B^{\prime }}})}{p_{\mathbb {S}}(s;\mathbb {\mathbb {B^{\prime }}})}\right) =G_{\mathbb {B}}\left( \frac{p_{\mathbb {S}}(r;\mathbb {B})}{p_{\mathbb {S}}(s;\mathbb {B})}\right) , \end{aligned}$$
so that
$$\begin{aligned} H_{\mathbb {\mathbb {\mathbb {B^{\prime }}}}}\left( \frac{p_{\mathbb {S}}(r;\mathbb {\mathbb {B^{\prime }}})}{p_{\mathbb {S}}(s;\mathbb {\mathbb {B^{\prime }}})}\right) =\frac{p_{\mathbb {S}}(r;\mathbb {B})}{p_{\mathbb {S}}(s;\mathbb {B})}. \end{aligned}$$
It follows that for any triplet of states \(s_{i},s_{j},s_{k}\),
$$\begin{aligned} H_{\mathbb {\mathbb {B^{\prime }}}}\left( \frac{p_{\mathbb {S}}(s_{i};\mathbb {\mathbb {B^{\prime }}})}{p_{\mathbb {S}}(s_{j};\mathbb {\mathbb {B^{\prime }}})}\right) \times H_{\mathbb {\mathbb {B^{\prime }}}}\left( \frac{p_{\mathbb {S}}(s_{j};\mathbb {\mathbb {B^{\prime }}})}{p_{\mathbb {S}}(s_{k};\mathbb {\mathbb {B^{\prime }}})}\right) =\frac{p_{\mathbb {S}}(s_{i};\mathbb {B})}{p_{\mathbb {S}}(s_{j};\mathbb {B})}\times \frac{p_{\mathbb {S}}(s_{j};\mathbb {B})}{p_{\mathbb {S}}(s_{k};\mathbb {B})}=\frac{p_{\mathbb {S}}(s_{i};\mathbb {B})}{p_{\mathbb {S}}(s_{k};\mathbb {B})}. \end{aligned}$$
We can also write
$$\begin{aligned} H_{\mathbb {\mathbb {\mathbb {B^{\prime }}}}}\left( \frac{p_{\mathbb {S}}(s_{i};\mathbb {\mathbb {B^{\prime }}})}{p_{\mathbb {S}}(s_{k};\mathbb {\mathbb {B^{\prime }}})}\right) =\frac{p_{\mathbb {S}}(s_{i};\mathbb {B})}{p_{\mathbb {S}}(s_{k};\mathbb {B})}. \end{aligned}$$
Putting these two equations together yields
$$\begin{aligned} H_{\mathbb {\mathbb {B^{\prime }}}}\left( \frac{p_{\mathbb {S}}(s_{i};\mathbb {\mathbb {B^{\prime }}})}{p_{\mathbb {S}}(s_{j};\mathbb {\mathbb {B^{\prime }}})}\right) \times H_{\mathbb {\mathbb {B^{\prime }}}}\left( \frac{p_{\mathbb {S}}(s_{j};\mathbb {\mathbb {B^{\prime }}})}{p_{\mathbb {S}}(s_{k};\mathbb {\mathbb {B^{\prime }}})}\right) =H_{\mathbb {\mathbb {\mathbb {B^{\prime }}}}}\left( \frac{p_{\mathbb {S}}(s_{i};\mathbb {\mathbb {B^{\prime }}})}{p_{\mathbb {S}}(s_{k};\mathbb {\mathbb {B^{\prime }}})}\right) . \end{aligned}$$
We want to turn Eq. (6) into the Cauchy functional equation. To do so, we need an intermediate result. (This result assumes that there are at least three states. The case of two states is treated later.)
Lemma 2
For any \(t,u\in (0,\infty )\), we can choose states \(s_{i},s_{j},s_{k}\) and a full-support probability distribution \(\lambda \) on \(\{s_{1},s_{2},\ldots ,s_{n}\}\) so that
$$\begin{aligned} \frac{\lambda (s_{i})}{\lambda (s_{j})}=t\hbox { and }\frac{\lambda (s_{j})}{\lambda (s_{k})}=u. \end{aligned}$$
Choose three distinct states \(s_{i},s_{j},s_{k}\), and set
$$\begin{aligned} \lambda (s_{i})= & {} \frac{tuv}{1+u+tu},\\ \lambda (s_{j})= & {} \frac{uv}{1+u+tu},\\ \lambda (s_{k})= & {} \frac{v}{1+u+tu}, \end{aligned}$$
where \(v=1\) if \(n=3\) (there are three states in total) and \(v=\tfrac{1}{2}\) if \(n>3\). Also, if \(n>3\), set
$$\begin{aligned} \lambda \left( s\right) =\frac{1}{2(n-3)}, \end{aligned}$$
for all \(s\ne s_{i},s_{j},s_{k}\). It is easy to check that \(\lambda \) has full support and that Equation (7) is satisfied. Also, if \(n=3\), then
$$\begin{aligned} \sum _{s}\lambda (s)=\frac{tuv+uv+v}{1+u+tu}=v=1, \end{aligned}$$
and if \(n>3\),
$$\begin{aligned} \sum _{s}\lambda (s)=\sum _{s\ne s_{i},s_{j},s_{k}}\frac{1}{2(n-3)}+\frac{tuv+uv+v}{1+u+tu}=\frac{1}{2}+v=1, \end{aligned}$$
so that \(\lambda \) is a well-defined probability distribution on the states. \(\square \)
By Axioms 1 and 2, there is a system \(\mathbb {S}\) so that Eq. (3) is satisfied and \(p_{\mathbb {S}}(\cdot ;\mathbb {\mathbb {B^{\prime }}})=\lambda \). But then \(p_{\mathbb {S}}(\cdot ;\mathbb {\mathbb {B^{\prime }}})\) also satisfies Eq. (6), and, therefore, using Eq. (7), we obtain
$$\begin{aligned} H_{\mathbb {\mathbb {\mathbb {B^{\prime }}}}}(t)\times H_{\mathbb {\mathbb {\mathbb {B^{\prime }}}}}(u)=H_{\mathbb {\mathbb {\mathbb {B^{\prime }}}}}(tu), \end{aligned}$$
for any \(t,u\in (0,\infty )\). Moreover, the functions \(G_{\mathbb {B}}\) and \(G_{\mathbb {\mathbb {\mathbb {B^{\prime }}}}}\) are increasing and therefore have at most a countable number of discontinuities, from which it follows that \(H_{\mathbb {\mathbb {B^{\prime }}}}\) can have at most a countable number of discontinuities. This allows us to apply a version of the Cauchy functional theorem (see Appendix C) to Eq. (8), to conclude that there exists a function \(T:\{\mathbb {B},\mathbb {B}^{\prime },\mathbb {B}^{\prime \prime },\ldots \}\rightarrow (0,\infty )\) such that
$$\begin{aligned} H_{\mathbb {\mathbb {B}^{\prime }}}(t)=t^{T(\mathbb {\mathbb {B}^{\prime }})}. \end{aligned}$$
Note that \(T(\mathbb {B})=1\) (this is why we called \(\mathbb {B}\) a reference point). Also, from Eq. (9) we get
$$\begin{aligned} G_{\mathbb {\mathbb {B}^{\prime }}}(t)=G_{\mathbb {B}}(t^{T(\mathbb {\mathbb {B}^{\prime }})}). \end{aligned}$$
Since \(G_{\mathbb {\mathbb {B}^{\prime }}}\) and \(G_{\mathbb {B}}\) are both strictly increasing, it follows that \(T(\mathbb {\mathbb {B}^{\prime }})>0\) for all heat baths \(\mathbb {\mathbb {B}^{\prime }}\). Next, using Eq. (9) in Eq. (5), we find
$$\begin{aligned} \frac{p_{\mathbb {S}}(r;\mathbb {\mathbb {B^{\prime }}})}{p_{\mathbb {S}}(s;\mathbb {\mathbb {B^{\prime }}})}=\left( \frac{p_{\mathbb {S}}(r;\mathbb {\mathbb {B}})}{p_{\mathbb {S}}(s;\mathbb {\mathbb {B}})}\right) ^{\frac{1}{T(\mathbb {\mathbb {B}^{\prime }})}}. \end{aligned}$$
Summing over all states r yields
$$\begin{aligned} \frac{1}{p_{\mathbb {S}}(s;\mathbb {\mathbb {B^{\prime }}})}=\sum _{r}\left( \frac{p_{\mathbb {S}}(r;\mathbb {\mathbb {B}})}{p_{\mathbb {S}}(s;\mathbb {\mathbb {B}})}\right) ^{\frac{1}{T(\mathbb {\mathbb {B}^{\prime }})}}, \end{aligned}$$
which we can invert to get
$$\begin{aligned} p_{\mathbb {S}}(s;\mathbb {\mathbb {B^{\prime }}})=\dfrac{p_{\mathbb {S}}(s;\mathbb {\mathbb {B}}){}^{\frac{1}{T(\mathbb {\mathbb {\mathbb {B}^{\prime }}})}}}{\sum _{r}p_{\mathbb {S}}(r;\mathbb {\mathbb {B}})^{\frac{1}{T(\mathbb {\mathbb {B}^{\prime }})}}}. \end{aligned}$$
Finally, to make Eq. (10) into the Boltzmann distribution, define \(E:\{s_{1},s_{2},\ldots ,s_{n}\}\times \{\mathbb {S},\mathbb {S}^{\prime },\mathbb {S}^{\prime \prime },\ldots \}\rightarrow (0,\infty )\) by \(E(r,\mathbb {S})=-\ln p_{\mathbb {S}}(r;\mathbb {\mathbb {B}})\) for each state r.
This completes the proof of Theorem 1, except for the case of two states. (The case of one state is trivial.) Here, we define the function T directly, by requiring it to give the solution to each equation
$$\begin{aligned} \left( \frac{p_{\mathbb {S}}(s_{1};\mathbb {\mathbb {B^{\prime }}})}{p_{\mathbb {S}}(s_{2};\mathbb {\mathbb {B^{\prime }}})}\right) ^{T(\mathbb {B^{\prime }})}=\frac{p_{\mathbb {S}}(s_{1};\mathbb {\mathbb {B}})}{p_{\mathbb {S}}(s_{2};\mathbb {\mathbb {B}})}, \end{aligned}$$
as we vary the heat bath \(\mathbb {B^{\prime }}\). We can argue similarly to Lemma 1 to see that \(p_{\mathbb {S}}(s_{1};\mathbb {\mathbb {B^{\prime }}})\ge p_{\mathbb {S}}(s_{2};\mathbb {\mathbb {B^{\prime }}})\) if and only if \(p_{\mathbb {S}}(s_{1};\mathbb {\mathbb {B}})\ge p_{\mathbb {S}}(s_{2};\mathbb {\mathbb {B}})\). It follows that we will get \(T(\mathbb {\mathbb {B^{\prime }}})>0\) for all \(\mathbb {\mathbb {B^{\prime }}}\), as required.
We should also establish that our axioms identify the Boltzmann distribution and not some subfamily of this distribution. To show this, start by supposing that Eq. (4) holds. Define \(G_{\mathbb {B^{\prime }}}:\left( 0,\infty \right) \rightarrow \left( 0,\infty \right) \) by \(G_{\mathbb {B^{\prime }}}\left( t\right) =t^{T(\mathbb {B^{\prime }})}\). Then for any system \(\mathbb {S}\) and pair of states r, s, we can write
$$\begin{aligned} G_{\mathbb {B^{\prime }}}\left( \frac{p_{\mathbb {S}}(r;\mathbb {\mathbb {B^{\prime }}})}{p_{\mathbb {S}}(s;\mathbb {\mathbb {B^{\prime }}})}\right) =G_{\mathbb {B^{\prime }}}\left( e^{\frac{E(s,\mathbb {S})-E(r,\mathbb {S})}{T(\mathbb {B^{\prime }})}}\right) =e^{E(s,\mathbb {S})-E(r,\mathbb {S})}. \end{aligned}$$
Since the right-hand side is independent of \(\mathbb {B^{\prime }}\), we see that Eq. (3) is satisfied, which establishes Axiom 1. For Axiom 2, fix a heat bath \(\mathbb {B^{\prime }}\) and a full-support probability distribution \(\lambda \) on the states. Let \(T(\mathbb {B}{}^{\prime })\) be arbitrary and set \(E(s_{i},\mathbb {S})=-kT(\mathbb {B}{}^{\prime })\ln \lambda _{i}\) for each i. Then \(p_{\mathbb {S}}(\cdot ;\mathbb {\mathbb {B^{\prime }}})=\lambda \), as required.
Proof of Theorem 2
Suppose a system \(\mathbb {S}\) satisfies Eq. (4) for two pairs of functions E, T and \(\widetilde{E},\widetilde{T}\). Equation (4) implies that for any states \(s_{1},s_{2},s\),
$$\begin{aligned} \frac{E(s_{2},\mathbb {S})-E(s_{1},\mathbb {S})}{E(s_{2},\mathbb {S})-E(s,\mathbb {S})}=\left( \ln \frac{p_{\mathbb {S}}(s_{1};\mathbb {\mathbb {B}})}{p_{\mathbb {S}}(s_{2};\mathbb {\mathbb {B}})}\right) \left( \ln \frac{p_{\mathbb {S}}(s;\mathbb {\mathbb {B}})}{p_{\mathbb {S}}(s_{2};\mathbb {\mathbb {B}})}\right) ^{-1}=\frac{\widetilde{E}(s_{2},\mathbb {S})-\widetilde{E}(s_{1},\mathbb {S})}{\widetilde{E}(s_{2},\mathbb {S})-\widetilde{E}(s,\mathbb {S})}. \end{aligned}$$
Rearranging gives
$$\begin{aligned}&(E(s_{2},\mathbb {S})-E(s_{1},\mathbb {S}))\times (\widetilde{E}(s_{2},\mathbb {S})- \widetilde{E}(s,\mathbb {S}))\\&\quad =(E(s_{2},\mathbb {S})-E(s,\mathbb {S}))\times (\widetilde{E}(s_{2},\mathbb {S})-\widetilde{E}(s_{1},\mathbb {S})), \end{aligned}$$
from which,
$$\begin{aligned} E(s,\mathbb {S})=E(s_{2},\mathbb {S})-\frac{E(s_{2},\mathbb {S})-E(s_{1},\mathbb {S})}{\widetilde{E}(s_{2},\mathbb {S})-\widetilde{E}(s_{1},\mathbb {S})}\times (\widetilde{E}(s_{2},\mathbb {S})-\widetilde{E}(s,\mathbb {S})), \end{aligned}$$
$$\begin{aligned} E(s,\mathbb {S})= & {} \frac{E(s_{2},\mathbb {S})-E(s_{1},\mathbb {S})}{\widetilde{E}(s_{2},\mathbb {S})-\widetilde{E}(s_{1},\mathbb {S})} \times \widetilde{E}(s,\mathbb {S})+E(s_{2},\mathbb {S})\\&- \frac{E(s_{2},\mathbb {S})-E(s_{1},\mathbb {S})}{\widetilde{E}(s_{2},\mathbb {S})-\widetilde{E}(s_{1},\mathbb {S})}\times \widetilde{E}(s_{2},\mathbb {S}). \end{aligned}$$
Now set
$$\begin{aligned} \alpha =\frac{E(s_{2},\mathbb {S})-E(s_{1},\mathbb {S})}{\widetilde{E}(s_{2},\mathbb {S})-\widetilde{E}(s_{1},\mathbb {S})}\text { and }\beta =E(s_{2},\mathbb {S})-\frac{E(s_{2},\mathbb {S})-E(s_{1},\mathbb {S})}{\widetilde{E}(s_{2},\mathbb {S})-\widetilde{E}(s_{1},\mathbb {S})}\times \widetilde{E}(s_{2},\mathbb {S}). \end{aligned}$$
By assumption, there are states \(s_{1},s_{2}\) such that \(p_{\mathbb {S}}(s_{1};\mathbb {\mathbb {B}})>p_{\mathbb {S}}(s_{2};\mathbb {\mathbb {B}})\). (There is no loss of generality in labeling these two states this way.) It follows that \(\alpha >0\).
Next observe that, for any heat bath \(\mathbb {\mathbb {B^{\prime }}}\),
$$\begin{aligned} \ln \left( \frac{p_{\mathbb {S}}(s_{2};\mathbb {\mathbb {\mathbb {B^{\prime }}}})}{p_{\mathbb {S}}(s_{1};\mathbb {\mathbb {\mathbb {B^{\prime }}}})}\right) =\frac{E(s_{1},\mathbb {S})-E(s_{2},\mathbb {S})}{T(\mathbb {\mathbb {B^{\prime }}})}=\frac{\widetilde{E}(s_{1},\mathbb {S})-\widetilde{E}(s_{2},\mathbb {S})}{\widetilde{T}(\mathbb {\mathbb {B^{\prime }}})}, \end{aligned}$$
from which, using the relationship between E and \(\widetilde{E}\), we get
$$\begin{aligned} \frac{\alpha \widetilde{E}(s_{1},\mathbb {S})-\alpha \widetilde{E}(s_{2},\mathbb {S})}{T(\mathbb {\mathbb {B^{\prime }}})}=\frac{\widetilde{E}(s_{1},\mathbb {S})-\widetilde{E}(s_{2},\mathbb {S})}{\widetilde{T}(\mathbb {\mathbb {B^{\prime }}})}. \end{aligned}$$
By assumption, \(p_{\mathbb {S}}(s_{1};\mathbb {\mathbb {B}})\ne p_{\mathbb {S}}(s_{2};\mathbb {\mathbb {B}})\). (Again, there is no loss of generality in using the state labels \(s_{1}\) and \(s_{2}\).) It follows that \(\widetilde{E}(s_{1},\mathbb {S})\ne \widetilde{E}(s_{2},\mathbb {S})\) and, therefore, \(T(\mathbb {\mathbb {\mathbb {B^{\prime }}}})=\alpha \widetilde{T}(\mathbb {\mathbb {\mathbb {B^{\prime }}}})\), as claimed. This completes the proof of the forward direction of Theorem 2.
For the reverse direction, suppose that a system \(\mathbb {S}\) satisfies Eq. (4) for the functions E and T, and let \(\alpha >0\) and \(\beta \) be real numbers. Equation (4) then yields, for any heat bath \(\mathbb {\mathbb {\mathbb {B^{\prime }}}}\),
$$\begin{aligned} p_{\mathbb {S}}(s;\mathbb {B^{\prime }})= & {} \frac{e^{-\frac{E(s,\mathbb {S})}{T(\mathbb {B^{\prime }})}}}{\sum _{j}e^{-\frac{E(s_{j},\mathbb {S})}{T(\mathbb {B^{\prime }})}}}=\frac{e^{-\frac{\alpha E(s,\mathbb {S})}{\alpha T(\mathbb {B^{\prime }})}}}{\sum _{j}e^{-\frac{\alpha E(s_{j},\mathbb {S})}{\alpha T(\mathbb {B^{\prime }})}}}\\= & {} \frac{e^{-\frac{\beta }{\alpha T(\mathbb {B^{\prime }})}}e^{-\frac{\alpha E(s,\mathbb {S})}{\alpha T(\mathbb {B^{\prime }})}}}{e^{-\frac{\beta }{\alpha T(\mathbb {B^{\prime }})}}\sum _{j}e^{-\frac{\alpha E(s_{j},\mathbb {S})}{\alpha T(\mathbb {B^{\prime }})}}}=\frac{e^{-\frac{\alpha E(s,\mathbb {S})+\beta }{\alpha T(\mathbb {B^{\prime }})}}}{\sum _{j}e^{-\frac{\alpha E(s_{j},\mathbb {S})+\beta }{\alpha T(\mathbb {B^{\prime }})}}}, \end{aligned}$$
from which we see that the system \(\mathbb {S}\) satisfies Eq. (4) for the functions \(\alpha E+\beta \) and \(\alpha T\), as we needed to show.
To prove that entropy differences are unique, as asserted after the statement of Theorem 2, first suppose that a system \(\mathbb {S}\) satisfies Eq. (4) for the functions E, T and \(\widetilde{E},\widetilde{T}\). Theorem 2 tells us that there are real numbers \(\alpha >0\) and \(\beta \) such that \(E(\cdot ,\mathbb {S})=\alpha \widetilde{E}(\cdot ,\mathbb {S})+\beta \) and \(T(\cdot )=\alpha \widetilde{T}(\cdot )\). It follows that for any pair of states \(s_{i},s_{j}\), and any heat bath \(\mathbb {\mathbb {B^{\prime }}}\),
$$\begin{aligned} \frac{E(s_{i},\mathbb {S})}{T(\mathbb {\mathbb {B^{\prime }}})}-\frac{E(s_{j},\mathbb {S})}{T(\mathbb {\mathbb {B^{\prime }}})}=\frac{\alpha \widetilde{E}(s_{i},\mathbb {S})+\beta }{\alpha \widetilde{T}(\mathbb {\mathbb {B^{\prime }}})}-\frac{\alpha \widetilde{E}(s_{j},\mathbb {S})+\beta }{\alpha \widetilde{T}(\mathbb {\mathbb {B^{\prime }}})}=\frac{\widetilde{E}(s_{i},\mathbb {S})}{\widetilde{T}(\mathbb {\mathbb {B^{\prime }}})}-\frac{\widetilde{E}(s_{j},\mathbb {S})}{\widetilde{T}(\mathbb {\mathbb {B^{\prime }}})}, \end{aligned}$$
as claimed. Conversely, suppose a system \(\mathbb {S}\) satisfies Eq. (4) for the functions E, T, and there exist functions \(\widetilde{E},\widetilde{T}\) such that
$$\begin{aligned} \frac{\widetilde{E}(s_{i},\mathbb {S})}{\widetilde{T}(\mathbb {\mathbb {B^{\prime }}})}-\frac{\widetilde{E}(s_{j},\mathbb {S})}{\widetilde{T}(\mathbb {\mathbb {B^{\prime }}})}=\frac{E(s_{i},\mathbb {S})}{T(\mathbb {\mathbb {B^{\prime }}})}-\frac{E(s_{j},\mathbb {S})}{T(\mathbb {\mathbb {B^{\prime }}})}. \end{aligned}$$
This says that for each heat bath \(\mathbb {\mathbb {B^{\prime }}}\), there is a number \(\gamma _{\mathbb {\mathbb {B^{\prime }}}}\) such that
$$\begin{aligned} \frac{\widetilde{E}(s_{i},\mathbb {S})}{\widetilde{T}(\mathbb {\mathbb {B^{\prime }}})}=\frac{E(s_{i},\mathbb {S})}{T(\mathbb {\mathbb {B^{\prime }}})}+\gamma _{\mathbb {\mathbb {B^{\prime }}}}. \end{aligned}$$
It follows that
$$\begin{aligned} \frac{e^{-\frac{\widetilde{E}(s_{i},\mathbb {S})}{\widetilde{T}(\mathbb {\mathbb {B^{\prime }}})}}}{\sum _{j} e^{-\frac{\widetilde{E}(s_{j},\mathbb {S})}{\widetilde{T} (\mathbb {\mathbb {B^{\prime }}})}}}= & {} \frac{e^{-\frac{E(s_{i},\mathbb {S})}{T(\mathbb {\mathbb {B^{\prime }}})}- \gamma _{\mathbb {\mathbb {B^{\prime }}}}}}{\sum _{j}e^{-\frac{E(s_{j}, \mathbb {S})}{T(\mathbb {\mathbb {B^{\prime }}})}- \gamma _{\mathbb {\mathbb {B^{\prime }}}}}} \\= & {} \frac{e^{-\gamma _{\mathbb {\mathbb {B^{\prime }}}}}e^{-\frac{E(s_{i}, \mathbb {S})}{T(\mathbb {\mathbb {B^{\prime }}})}}}{e^{-\gamma _{\mathbb {\mathbb {B^{\prime }}}}} \sum _{j}e^{-\frac{E(s_{j},\mathbb {S})}{T(\mathbb {\mathbb {B^{\prime }}})}}}\\= & {} \frac{e^{-\frac{E(s_{i},\mathbb {S})}{T(\mathbb {\mathbb {B^{\prime }}})}}}{\sum _{j}e^{-\frac{E(s_{j},\mathbb {S})}{T(\mathbb {\mathbb {B^{\prime }}})}}} =p_{\mathbb {S}}(s_{i};\mathbb {\mathbb {B^{\prime }}}), \end{aligned}$$
from which we see that the system \(\mathbb {S}\) satisfies Equation (4) for the functions \(\widetilde{E},\widetilde{T}\).
Cauchy Functional Theorem
We provide a self-contained statement and proof of the version of the Cauchy functional theorem employed in Appendix A. The proof can also be found in standard textbooks; see, e.g., Theorem 3 in Aczel [7].
Let \(H:\left( 0,\infty \right) \rightarrow \left( 0,\infty \right) \) be a function with the property that \(H\left( xy\right) =H\left( x\right) \times H\left( y\right) \) for all \(x,y\in \left( 0,\infty \right) \). Moreover, suppose H is continuous at least at a single point. Then there exists \(\alpha \in \left( 0,\infty \right) \) such that for all \(x\in \left( 0,\infty \right) \),
$$\begin{aligned} H\left( x\right) =x^{\alpha }. \end{aligned}$$
Lemma 3
For all \(x\in \left( 0,\infty \right) \) and any rational number q, \(H\left( x^{q}\right) =H\left( x\right) ^{q}\).
Note that \(H\left( 1\right) =H\left( 1\right) \times H\left( 1\right) \), which implies \(H\left( 1\right) =1\). Moreover, for any \(k\in \mathbb {Z}\) with \(k>0\), we have
$$\begin{aligned} H\left( x^{k}\right) =H\left( x\right) ^{k}. \end{aligned}$$
To extend this to \(k\in \mathbb {Z}\) with \(k<0\) note that
$$\begin{aligned} H\left( 1\right) =H\left( x\times \frac{1}{x}\right) =H\left( x\right) \times H\left( \frac{1}{x}\right) \Rightarrow H\left( x\right) =\left( H\left( \frac{1}{x}\right) \right) ^{-1}. \end{aligned}$$
Now let \(x\in \left( 0,\infty \right) \) and \(k,m\in \mathbb {Z}.\) Set \(y=x^{1/m}\). Then
$$\begin{aligned} H\left( y^{m}\right) ^{1/m}=H\left( y\right) \Rightarrow H\left( x\right) ^{1/m}=H\left( x^{1/m}\right) . \end{aligned}$$
Hence we have
$$\begin{aligned} H\left( x^{k/m}\right) =H\left( x^{1/m}\right) ^{k}=H\left( x\right) ^{k/m}, \end{aligned}$$
as desired. \(\square \)
Let \(S\subseteq \left( 0,\infty \right) \) be the set of all the rational powers of 2, that is \(x\in S\) if and only if there exists \(q\in \mathbb {\mathbb {Q}}\) such that \(x=2^{q}\).
Lemma 4
S is dense in \((0,\infty )\).
Let \(r,s\in (0,\infty )\) with \(r<s\). We want to prove there is a rational number q such that \(r<2^q<s\), which is equivalent to proving there is a rational q such that \(r2^q<1<s2^q\).
Set \(\alpha =\log _2 (1/r)\), so that
$$\begin{aligned} r2^\alpha =1<s 2^\alpha . \end{aligned}$$
Using the density of the rationals we can construct a rational sequence \(q_n\rightarrow \alpha \) such that \(q_n<\alpha \) for all n. Since the exponential function is continuous and strictly increasing, it follows that for large enough N we get
$$\begin{aligned} r2^{q_N}<1<s 2^{q_N}, \end{aligned}$$
as required. \(\square \)
Now let \(x\in S\), so that \(x=2^{q}\) for some rational number q. By Lemma 3 we have
$$\begin{aligned} H\left( x\right) =H\left( 2\right) ^{q}. \end{aligned}$$
By definition of x, we know that for some \(k,m\in \mathbb {Z}\), \(q=k/m\). Therefore, \(\frac{k}{m}=\log _{2}x=\frac{\log _{H\left( 2\right) }x}{\log _{H\left( 2\right) }2}\). It follows that
$$\begin{aligned} H\left( x\right) =H\left( 2\right) ^{\log _{H\left( 2\right) }\left( x\right) /\log _{H\left( 2\right) }\left( 2\right) }=x^{1/\log _{H\left( 2\right) }\left( 2\right) }. \end{aligned}$$
Setting \(\alpha =1/\log _{H\left( 2\right) }\left( 2\right) \), we have proved \(H\left( x\right) =x^{\alpha }\) for all \(x\in S\).
Now suppose for contradiction there exists \(z\in \left( 0,\infty \right) \) with \(H\left( z\right) \ne z^{\alpha }\). Fix any \(x,y\in S\). We will show for any \(\varepsilon >0\) there exists a point \(\left( x^{\prime },y^{\prime }\right) \in \left( 0,\infty \right) \times \left( 0,\infty \right) \) with \(H\left( x^{\prime }\right) =y^{\prime }\), and \(\left( x^{\prime },y^{\prime }\right) \) has Euclidean distance less than \(\varepsilon \) from \(\left( x,y\right) \). Hence for any \(x,y\in S\), we can construct a sequence on the graph of H that approaches (x, y). Since S is dense in \(\left( 0,\infty \right) \), we conclude that H is nowhere continuous on \(\left( 0,\infty \right) \), which gives our contradiction.
To continue, define \(\delta =H\left( z\right) /z^{\alpha }\), from which \(\delta >0,\delta \ne 1\). Define \(\beta \in \mathbb {R}\) to solve \(x^{\alpha }=y/\delta ^{\beta }\). Such a \(\beta \) exists because y and \(x^{\alpha }\) are both positive and \(\delta \ne 1\). Now, for any \(z^{\prime }\in S\) and \(b\in \mathbb {R}\), define \(x^{\prime }=x\left( z/z^{\prime }\right) ^{b}\). Using Lemma 3 we get
$$\begin{aligned} H\left( x^{\prime }\right) =H\left( xz^{b}z^{\prime -b}\right) =H\left( x\right) \times H\left( z^{b}\right) \times H\left( z^{\prime -b}\right) =H\left( x\right) \times \left( \frac{H\left( z\right) }{H\left( z^{\prime }\right) }\right) ^{b}. \end{aligned}$$
Given the definition of \(\delta \), and using \(x,z^\prime \in S\), we have
$$\begin{aligned} H\left( x^{\prime }\right) =x^{\alpha }\delta ^{b}\left( \frac{z}{z^{\prime }}\right) ^{\alpha b}. \end{aligned}$$
Applying the definition of \(\beta \) yields
$$\begin{aligned} H\left( x^{\prime }\right) =\frac{y\delta ^{b}}{\delta ^{\beta }}\left( \frac{z}{z^{\prime }}\right) ^{\alpha b}=y\left( \frac{z}{z^{\prime }}\right) ^{\alpha b}\delta ^{b-\beta }. \end{aligned}$$
This equation holds for any choice of \(z^{\prime }\in S\) and \(b\in \mathbb {R}\). Now set \(y^{\prime }=H\left( x^{\prime }\right) \). By choosing b very close to \(\beta \) and \(z^{\prime }\) very close to z, we can make \(H\left( x^{\prime }\right) \) arbitrarily close to y and \(x^{\prime }\) arbitrarily close to x, as desired.
1. Planck, M.: Theory of Heat. Macmillan, London (1932)
2. Mandl, F.: Statistical Physics, 2nd edn. Wiley, Hoboken (1988)
3. Jaynes, E.T.: Information theory and statistical mechanics. Phys. Rev. 106(4), 620–630 (1957). https://doi.org/10.1103/PhysRev.106.620
4. Goldstein, S., Lebowitz, J.L., Tumulka, R., Zanghì, N.: Canonical typicality. Phys. Rev. Lett. 96, 050403 (2006). https://doi.org/10.1103/PhysRevLett.96.050403
5. Popescu, S., Short, A.J., Winter, A.: Entanglement and the foundations of statistical mechanics. Nat. Phys. 2(11), 754–758 (2006). https://doi.org/10.1038/nphys444
6. Braun, S., Ronzheimer, J.P., Schreiber, M., Hodgman, S.S., Rom, T., Bloch, I., Schneider, U.: Negative absolute temperature for motional degrees of freedom. Science 339(6115), 52–55 (2013). https://doi.org/10.1126/science.1227831
7. Aczel, J.: Lectures on Functional Equations and Their Applications. Academic Press, Cambridge (1966)
1. Stern School of Business, Tandon School of Engineering, and NYU Shanghai, New York University, New York, USA
2. Center for Neural Science, New York University, New York, USA
Brandenburger, A. & Steverson, K. Found Phys (2019) 49: 444. https://doi.org/10.1007/s10701-019-00257-z
June 2014, 1(2): 249-278. doi: 10.3934/jcd.2014.1.249
Detecting isolated spectrum of transfer and Koopman operators with Fourier analytic tools
Gary Froyland 1, Cecilia González-Tokman 2 and Anthony Quas 3
School of Mathematics and Statistics, University of New South Wales, Sydney NSW 2052
School of Mathematics and Statistics, University of New South Wales, Sydney, NSW, 2052, Australia
Department of Mathematics and Statistics, University of Victoria, P.O. Box 3060 STN CSC, Victoria, B.C., V8W 3R4
Received: October 2013; Revised: October 2014; Published: December 2014
The isolated spectrum of transfer operators is known to play a critical role in determining mixing properties of piecewise smooth dynamical systems. The so-called Dellnitz-Froyland ansatz places isolated eigenvalues in correspondence with structures in phase space that decay at rates slower than local expansion can account for. Numerical approximations of transfer operator spectrum are often insufficient to distinguish isolated spectral points, so it is an open problem to decide to which eigenvectors the ansatz applies. We propose a new numerical technique to identify the isolated spectrum and large-scale structures alluded to in the ansatz. This harmonic analytic approach relies on new stability properties of the Ulam scheme for both transfer and Koopman operators, which are also established here. We demonstrate the efficacy of this scheme in metastable one- and two-dimensional dynamical systems, including those with both expanding and contracting dynamics, and explain how the leading eigenfunctions govern the dynamics for both real and complex isolated eigenvalues.
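For readers unfamiliar with the Ulam discretization referred to above, the following minimal Python sketch illustrates the idea; the doubling map, the number of bins and the per-bin sample count are arbitrary illustrative choices and not those used in the paper.

```python
import numpy as np

def ulam_matrix(T, n_bins=200, samples_per_bin=500, domain=(0.0, 1.0), seed=0):
    """Row-stochastic Ulam matrix of a 1D map T: entry (i, j) estimates the
    probability that a uniformly drawn point in bin i lands in bin j under T."""
    rng = np.random.default_rng(seed)
    a, b = domain
    edges = np.linspace(a, b, n_bins + 1)
    P = np.zeros((n_bins, n_bins))
    for i in range(n_bins):
        x = rng.uniform(edges[i], edges[i + 1], samples_per_bin)
        j = np.clip(np.searchsorted(edges, T(x), side="right") - 1, 0, n_bins - 1)
        P[i] = np.bincount(j, minlength=n_bins) / samples_per_bin
    return P

P = ulam_matrix(lambda x: (2.0 * x) % 1.0)            # doubling map as a toy example
eigvals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
print(eigvals[:5])   # leading eigenvalue is 1; isolated points (if any) sit just below it
```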
Keywords: isolated spectrum, Koopman operators, mix-norms, Ulam's method, transfer operators, metastability.
Mathematics Subject Classification: Primary: 37M25; Secondary: 37E0.
Citation: Gary Froyland, Cecilia González-Tokman, Anthony Quas. Detecting isolated spectrum of transfer and Koopman operators with Fourier analytic tools. Journal of Computational Dynamics, 2014, 1 (2) : 249-278. doi: 10.3934/jcd.2014.1.249
Methodology article
An evaluation of machine learning classifiers for next-generation, continuous-ethogram smart trackers
Hui Yu1,2,
Jian Deng2,
Ran Nathan3,
Max Kröschel4,5,
Sasha Pekarsky3,
Guozheng Li2,6 &
Marcel Klaassen1
Movement Ecology volume 9, Article number: 15 (2021)
Our understanding of movement patterns and behaviours of wildlife has advanced greatly through the use of improved tracking technologies, including application of accelerometry (ACC) across a wide range of taxa. However, most ACC studies either use intermittent sampling, which hinders continuity, or continuous data logging that relies on tracker retrieval for data downloading, which is not suitable for long-term studies. To allow long-term, fine-scale behavioural research, we evaluated a range of machine learning methods for their suitability for continuous on-board classification of ACC data into behaviour categories prior to data transmission.
We tested six supervised machine learning methods, including linear discriminant analysis (LDA), decision tree (DT), support vector machine (SVM), artificial neural network (ANN), random forest (RF) and extreme gradient boosting (XGBoost) to classify behaviour using ACC data from three bird species (white stork Ciconia ciconia, griffon vulture Gyps fulvus and common crane Grus grus) and two mammals (dairy cow Bos taurus and roe deer Capreolus capreolus).
Assessed against a range of quality criteria, SVM, ANN, RF and XGBoost performed well in determining behaviour from ACC data, and their good performance appeared little affected when the number of input features used for model training was greatly reduced. On-board runtime and storage-requirement tests showed that ANN, RF and XGBoost in particular would make suitable on-board classifiers.
Our identification of feature reduction in combination with ANN, RF and XGBoost as a suitable approach for on-board behavioural classification of continuous ACC data has considerable potential to benefit movement ecology and behavioural research, wildlife conservation and livestock husbandry.
Biologging not only advances research in movement ecology, behavioural ecology and applied ecology, but also continues to contribute increasingly to wildlife conservation and livestock management [1]. In addition to the position of tracked animals in time, advanced biologging technologies also provide opportunities for additional environmental data collection such as ambient temperature, light intensity and water depth, and data related to logger carriers such as heart rate, energy expenditure and behaviour [2,3,4]. Moreover, the shrinking size and increasing energy efficiency of current trackers progressively enable studies across a wide range of animal taxa in a great variety of environments [5, 6].
Among add-on sensors in advanced biologging, accelerometers have gained popularity over the past three decades [7]. An accelerometer is an electromechanical device measuring acceleration, most commonly along all three dimensions (i.e. triaxial accelerometry). When attached to animals, accelerometry (hereafter ACC) reflects two aspects of movement: static acceleration and dynamic acceleration. Static acceleration is due to gravitational force acting on the accelerometer, which can be used to derive animal body posture [8]. Dynamic acceleration is due to changes in velocity caused by animal movement [8]. Based on these characteristics, at least four types of studies have been routinely conducted using ACC. Firstly, under the assumption that metabolic rate is positively correlated with the dynamic movement component, ACC data have been used to calculate overall dynamic body acceleration (ODBA) or vector dynamic body acceleration (VeDBA) as a proxy of an animal's energy expenditure (e.g. [9, 10]). Secondly, for the interval between position fixes, which can in particular be of notable length in diving animals (e.g. [11]), body pitch (rotation around the lateral axis) and roll (rotation around the longitudinal axis) [12] derived from ACC data have been used to reconstruct the movement path of animals between fixes (e.g. [13]). Thirdly, an animal's acceleration changes with pattern and frequency of locomotion, and also the environment in which it moves [14], thus allowing for the estimation of e.g. fin, wingbeat or stride frequency. ACC data have thus also allowed for biomechanics studies (e.g. [15, 16]). Fourthly, because animal behaviours consist of different postures and dynamic movement traits, ACC data have been used to classify animal behaviours (e.g. [7, 17,18,19]).
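To make these quantities concrete, a minimal Python sketch is given below; the running-mean separation of static and dynamic acceleration and the pitch/roll angle conventions are common choices in the ACC literature, stated here as assumptions rather than as the exact procedures of the studies cited.

```python
import numpy as np

def acc_summaries(ax, ay, az, fs=20.0, window_s=2.0):
    """ODBA, VeDBA, pitch and roll from tri-axial acceleration (in g).

    The static (gravitational) component is estimated with a running mean of
    length `window_s` seconds; the dynamic component is the remainder.
    """
    w = max(1, int(fs * window_s))
    kernel = np.ones(w) / w
    static = [np.convolve(a, kernel, mode="same") for a in (ax, ay, az)]
    dynamic = np.array([a - s for a, s in zip((ax, ay, az), static)])

    odba = np.abs(dynamic).sum(axis=0)            # |dx| + |dy| + |dz|
    vedba = np.sqrt((dynamic ** 2).sum(axis=0))   # sqrt(dx^2 + dy^2 + dz^2)

    sx, sy, sz = static
    pitch = np.degrees(np.arctan2(sx, np.sqrt(sy**2 + sz**2)))  # rotation about the lateral axis
    roll = np.degrees(np.arctan2(sy, np.sqrt(sx**2 + sz**2)))   # rotation about the longitudinal axis
    return odba, vedba, pitch, roll

# toy example: 10 s of 20 Hz data for a device whose z axis points roughly along gravity
t = np.arange(0, 10, 1 / 20.0)
odba, vedba, pitch, roll = acc_summaries(0.10 * np.sin(6 * np.pi * t),
                                         0.05 * np.sin(4 * np.pi * t),
                                         1.0 + 0.10 * np.sin(6 * np.pi * t), fs=20.0)
print(odba.mean(), vedba.mean(), pitch.mean(), roll.mean())
```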
Quantifying animal behaviours by ACC requires elaborate processing to classify behaviour from raw sensor data [20]. In general, there are three approaches for behaviour classification from ACC data. The first of these is direct classification based on expert opinion. This approach may be suitable for behaviours that are characterized by easily detectable ACC signatures (e.g. [21]). However, the lack of "ground-truthing" observations makes this arbitrary approach impossible to validate (excluding situations where validation is not required, such as discriminating active versus inactive behaviour). In addition, an expert's judgements make this approach difficult to generalize across studies involving different researchers. Secondly, an approach using unsupervised machine learning or clustering can be used, which groups ACC data based on commonalities in the ACC signal, where the grouping need not necessarily be associated with different behaviours. An example of such a technique is k-means clustering (e.g. [17, 22]). Finally, supervised machine learning classification approaches based on "ground-truthing" observations can be used, in which behaviours are assigned to ACC data for model training. Using this approach, researchers label captive or free-ranging animals' ACC data with a specified set of behaviour categories through direct observation or video-taping (e.g. [23, 24]). ACC data are commonly recorded in fixed (but possibly user-adjustable) length segments called "bouts", sometimes also called "bursts" or "epochs". Bout length is selected by the user according to the study goals, and is often set to contain only a single episode of the behaviour(s) of interest. Then, for each bout, the ACC data are used to calculate a range of mathematical features such as mean, standard deviation, correlation coefficient between the ACC axes, etc. [7]. In the next step, the supervised machine learning method is trained to use these feature data in automatically classifying the ACC data into appropriate behaviour categories. Commonly applied supervised classification methods for animal behaviour classification include linear discriminant analysis, support vector machine, decision tree, random forest, and artificial neural network [19]. Generally, the trained classifiers are validated with a validation set of labelled ACC data.
In current tracking research on free-ranging animals, data recorded by trackers are either stored on board (e.g. [22]) or transmitted through mobile networks (e.g. [25]) or satellites (e.g. [26]). The amount of data that can be logged and transmitted from a tracker is often constrained by battery capacity, or by solar radiation if solar-powered trackers are being used. Compared with data logging, data transmission in particular consumes much battery power [27]. Thus, the volume of transmitted data and the transmission rate are often limited by the power supply. In addition, the amount of data that can be transmitted via satellites is limited [28]. Given these two limitations, many studies have applied intermittent sampling of ACC data rather than continuous recording (e.g. [29]). Obviously, intermittent sampling comes at the expense of information. On-board storage of continuously sampled ACC data is an alternative, but increases power consumption and requires large device storage, which might increase the device's weight and size. Moreover, it often requires recapture or tag retrieval (e.g. by capturing the animal or automatic tag drop-off) for data downloading. Thus, studies using continuous ACC data sampling either recaptured their study animals multiple times to download data (e.g. [30]) or only tracked animals for a short period of time (e.g. [31]). Multiple recaptures may not only bear a time and financial cost but also influence the behaviour of animals, while studies tracking animals for a short period of time may have less ecological significance.
On-board data processing may provide a solution to data-storage and data-transmission limitations and may extend the possibilities for continuously recording the behaviour of tracked animals over prolonged durations of time. One way of on-board data processing is to transform raw ACC data into features for transmission or downloading (e.g. [32]). Alternatively, the complete classification procedure can be implemented in the tag, which then only provides a data stream of classified behaviours instead of the raw data. There are currently only a few on-board behaviour classification studies, mainly due to the large amount of data involved and the complexity of the processing procedures [28]. Roux et al. [33] used custom-made loggers for on-board behaviour classification of Dohne Merino sheep, with five behaviour categories, and rhinoceros (Ceratotherium simum and Diceros bicornis), with three behaviour categories. They used linear discriminant analysis on an initial ACC test data set, after which the logger was programmed to use the resulting algorithm for subsequent recording of behaviours via ACC. In another on-board data processing study in juvenile southern elephant seals (Mirounga leonina), Cox et al. [28] identified foraging behaviour by user-defined thresholds based on expert opinion and validated their method by comparison of on-board calculated foraging behaviours with foraging behaviours identified from archived raw ACC data. Korpela et al. [34] used on-board behaviour classification through ACC data to detect foraging behaviour of seabirds, triggering video-loggers to record the foraging behaviour. Moreover, to our knowledge, no published behaviour classification study has compared the practicability of the aforementioned more sophisticated classification methods (i.e. support vector machine, decision tree, random forest, and artificial neural network) on-board, probably due to limitations of tracker storage and battery capacity. For these behaviour classification methods to be successful, feature calculation and selection are crucial elements [35]. Computing and using large numbers of (complex) features in on-board behaviour classification would require abundant storage and be energy-consuming. Thus, developing ways to reduce computation while maintaining high behaviour classification accuracy also requires consideration.
In this study, we tested six supervised machine learning methods. Five methods among the six were applied in other studies, including linear discriminant analysis (LDA) (e.g. [33]), decision tree (DT) (e.g. [24]), support vector machine (SVM) (e.g. [36]), random forest (RF) (e.g. [37]), and artificial neural network (ANN) (e.g. [19]). We added extreme gradient boosting (XGBoost) to our study given its good performance in Kaggle machine learning competitions [38]. XGBoost is a tree-based model which carries out the gradient boosting tree algorithm with high speed [38]. In order to further reduce on-board calculation and thus power demand, we also investigated these models' performance using greatly reduced feature sets, aiming at minimizing storage requirements and runtime while maintaining high classification accuracy. We applied our proposed animal behaviour classification from ACC data to different animal taxa and different tracker-attachment methods (i.e. ear tags, backpacks, neck collars, and leg bands) to broaden the scope of our analysis. The combination of continuous behaviour monitoring and GPS locations of tracked animals will provide researchers with a powerful tool to conduct research within the movement ecology realm [39] and we therefore hope our study will facilitate the development of next generation "smart" trackers that have these features.
Five different sets of ACC data were used, including unpublished data from two Chinese Holstein dairy cows (Bos taurus) and two common cranes (Grus grus), and published data collected on eight roe deer (Capreolus capreolus) [40], 32 griffon vultures (Gyps fulvus) [19] and 23 white storks (Ciconia ciconia) (data available on the AcceleRater website: http://accapp.move-ecol-minerva.huji.ac.il/, see [41]). Ear-mounted loggers in ruminants are particularly suitable to pick up foraging and ruminating, where a sudden change in daily rumination time is a potential indicator of oestrus or illness [42]. The two lactating dairy cows were held in pens measuring 15 m × 8 m and were fitted with accelerometer data loggers (18 g test model from Druid Technology Co., Ltd., China) in Chengdu, China, between 2017/12/29 and 2018/01/26. The ACC data logger was programmed to record at 25 Hz with 12-bit resolution in a ± 4 g (i.e. 1 g = 9.8 m/s²) range. The loggers were glued on the already present Radio Frequency Identification Device (RFID) ear tags of each dairy cow. The triaxial ACC data was continuously recorded and transmitted through Bluetooth 4.0 to an Android cell phone. Behavioural data was collected simultaneously through direct visual observation by Hui Yu using a specially designed cell phone application "Utopia Druid". In total, 12.4 h of labelled ACC data across both dairy cows were collected. Cow ACC data was labelled using three behavioural categories: eating (i.e. ingesting food), ruminating (i.e. rechewing the cud to further help break down the earlier ingested plant matter), and other (i.e. behaviours not labelled as eating and ruminating).
Two captive common cranes, one adult in a pen and one juvenile in a large semi-natural area with trees and a bog, were fitted with GPS-ACC transmitters (OrniTrack-L40, Ornitela, Vilnius, Lithuania) on a leg band. The ACC data was recorded for 3.8 s at 10.54 Hz in three axes, every 30 s. The birds were video-recorded, allowing manual matching with the recorded ACC data. In total, 1830 of these 3.8 s long ACC observation bursts (i.e. ~ 15 h of observation) were thus labelled using four behavioural categories: feeding (i.e. ingesting food or collecting food without movement), foraging (i.e. moving with head down while looking for food and occasional swallowing of food), moving (i.e. walking or running) and resting (i.e. standing or preening).
Eight roe deer were tracked with GPS-ACC collars (e-obs GmbH, Munich, Germany). The ACC data was recorded for 9.1 s at 10.54 Hz at either 1 min or 15 s intervals. In total 6158 ACC observation bursts were labelled totalling ~ 30 h of field observation. Thirty-two griffon vultures were tracked with GPS-ACC backpacks (e-obs GmbH). ACC data was recorded for either 9.1, 16.2, 20.4, or 24.6 s at 3.3 Hz at 10 min intervals. In total 488 ACC observation bursts were labelled totalling ~ 80 h of field observation. Twenty-three white storks were tracked with GPS-ACC backpacks (e-obs GmbH). The ACC data was recorded for 3.8 s at 10.54 Hz at 5 min intervals. In total 1746 ACC observation bursts were labelled during ~ 145 h of field observation.
For the published studies we combined a number of behaviour categories for a variety of reasons. For roe deer [40] we combined "galloping" and "trotting" into "running" to create sufficient samples for cross validation. For the same study we also combined "lying" and "standing" behaviours into "static" since the ACC tracking neck collars used in their study did not allow for discrimination between these two static postures. The roe deer dataset thus comprised five behaviours including browsing, running, static, walking and other (i.e. shaking, scratching with antler, scratching with hoof, grooming). For griffon vulture [19] we dropped the "lying down" behaviour because its sample size was too small for cross validation and the behaviour was also not suitable to be combined with any other behaviour classes. The griffon vulture dataset thus ultimately had five behaviours: active behaviour (preening, running and other active behaviours on the ground), active flight (flapping), passive flight (soaring-gliding), eating and standing. For white stork [41] we kept all original five behaviour categories, which included: active flight, passive flight, sitting, standing and walking.
Segmentation and feature calculation
Each of the five ACC datasets was divided into bouts, where the bout length was chosen so as to contain as much ACC information as possible while still reflecting only one specific behaviour type. Any bouts reflecting more than one behaviour were pruned from the datasets. For the dairy cow dataset, the bout length was set to 1 min (i.e. 1500 ACC records; 741 out of 745 bouts contained one behaviour type only and were retained for training and validation). This relatively long bout duration was chosen because dairy cows typically show one type of behaviour for prolonged periods of time and do not change behaviour frequently [36]. Bout lengths of common crane, griffon vulture, roe deer and white stork were considerably shorter. For common crane, the original ACC burst length of 3.8 s was used as the bout (i.e. 40 ACC records; 1385 out of 1830 bouts were retained). Also for white stork the burst length of 3.8 s was used as a bout (i.e. 40 ACC records, all 1746 bouts retained). In the griffon vulture study, bout lengths varying between 9.1 and 24.8 s were originally used [19], which we altered to a standard 9.1 s (30 ACC records; all 815 resulting bouts were retained). For roe deer (96 ACC records), the original bouts contained up to five behaviours. We therefore halved the bout length to 48 ACC records (i.e. 4.6 s) and retained bouts with only one specific behaviour (10,576 out of the resulting 12,316 bouts were retained).
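To make this segmentation step concrete, the sketch below splits a continuous, observer-labelled ACC stream into fixed-length bouts and keeps only full-length, single-behaviour bouts. It is written in R to match the analysis environment; the data frame `acc` and its column names are hypothetical and not taken from the original pipeline.

```r
# Minimal sketch, assuming a data.frame 'acc' with numeric columns x, y, z and an
# observer-assigned 'behaviour' label for every sample (hypothetical names).
bout_len <- 40                                     # e.g. 3.8 s at ~10.54 Hz (white stork settings)
acc$bout <- (seq_len(nrow(acc)) - 1) %/% bout_len  # consecutive bout index per sample
bouts <- split(acc, acc$bout)
# Retain only full-length bouts that contain a single behaviour type
bouts <- Filter(function(b) nrow(b) == bout_len && length(unique(b$behaviour)) == 1, bouts)
```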
For the full feature set, we calculated a total of 78 different features (also called summary statistics; Table 1) for each bout. However, among others due to correlation between features (see Machine-learning algorithms, below), the number of features could potentially be greatly reduced without a marked reduction in explanatory power. We thus also used a greatly simplified feature set, consisting of four or five features depending on tracker placement. In white stork and griffon vulture (backpack trackers in line with the thoracic spine), roe deer (neck collars in line with the cervical spine) and common crane (leg-mounted trackers in line with the tibia), the surge (motion along the longitudinal axis) and heave (motion along the vertical axis) axes were considered the two main axes related to body movement, and we used the mean and standard deviation of each of these two axes in addition to ODBA (i.e. five features in total). Of these five features, the means of the surge and heave axes have earlier been shown to capture body posture information [35]. We consequently used their standard deviations to capture dynamic movement. We also included ODBA in all simplified feature sets as it captures dynamic movement strength and has been successfully used as an index of energy expenditure [9]. For dairy cow (ear tags) we used only four features, consisting of the mean and standard deviation of the heave axis, ODBA and the main frequency component of the heave axis. The latter was included with the aim of capturing jaw movements. Despite its successful use in other studies (e.g. [17]), we abstained from using frequency information for any of the other species for two reasons: firstly, sampling frequency may not always be adequate to log useful frequency information [19] and secondly, frequency information requires computationally demanding Fourier transformation, whereas we were aiming to reduce computational demands as much as possible.
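A minimal sketch of the simplified feature set for a single bout is given below, assuming column names x = surge, y = heave, z = sway and approximating the static component by the per-axis bout mean (a running mean is often used instead); this is an illustration, not the authors' on-board code.

```r
# Simplified per-bout features: mean and SD of surge and heave, plus ODBA (sketch only)
simplified_features <- function(bout) {
  static <- colMeans(bout[, c("x", "y", "z")])          # static (gravitational) component
  dyn    <- sweep(bout[, c("x", "y", "z")], 2, static)  # dynamic component = raw minus static
  odba   <- mean(rowSums(abs(dyn)))                     # overall dynamic body acceleration
  c(mean_surge = mean(bout$x), sd_surge = sd(bout$x),
    mean_heave = mean(bout$y), sd_heave = sd(bout$y),
    ODBA = odba)
}
```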
Table 1 Description of 78 features used in the behavioural classifications of triaxial accelerometer data
Machine-learning algorithms
All analyses were conducted in R [43]. LDA typically suffers from correlation of features [44]. To account for this, we deleted highly correlated features from the set of 78 features by setting the "cutoff" parameter at 0.7 in the "findCorrelation" function in R package "caret". We also applied DT (R package "rpart"), SVM with both a linear and a radial kernel (R package "e1071"), RF (R package "randomForest"), ANN (R package "nnet"), and XGBoost (R package "xgboost"). To achieve the highest accuracies, we tuned the parameter "cp" for DT (by function "train" in the "caret" package), "gamma" and "cost" for SVM (by function "tune.svm" in the "e1071" package), "mtry" and "ntree" for RF ("train" in "caret"), and "size" and "decay" for ANN ("tune.nnet" in "e1071") (all parameters listed in Table S1). Performance of XGBoost showed little or no improvement from parameter tuning in any of the five datasets and its default settings (with "nrounds = 10") were therefore retained. SVM with a linear kernel proved to be inferior to SVM with a radial kernel and only the latter was therefore retained.
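As a hedged illustration of the filtering and tuning steps described above, the sketch below applies caret's findCorrelation and e1071's tune.svm; the objects `features` (a numeric per-bout feature data frame) and `labels` (a factor of behaviours), as well as the tuning grids, are hypothetical.

```r
library(caret)
library(e1071)
# Remove highly correlated features before LDA (cut-off 0.7, as described above)
drop <- findCorrelation(cor(features), cutoff = 0.7)
features_lda <- if (length(drop) > 0) features[, -drop] else features
# Tune gamma and cost for an SVM (the radial kernel is the default of svm())
tuned   <- tune.svm(features, labels, gamma = 10^(-3:1), cost = 10^(0:3))
svm_fit <- tuned$best.model
```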
Training and validation of machine-learning algorithms
We conducted stratified 10-fold cross-validations, for which each of the five ACC datasets was semi-randomly partitioned into ten subsets in which the behaviour categories were represented in the same proportions as in the full dataset. For each classification model we conducted a training and validation procedure consisting of ten runs, where in each run a different subsample was selected for validation and the remaining nine subsamples were used for training the model. After each of the ten runs, we calculated a set of model evaluation metrics. In each iteration of the 10-fold cross-validation, the validation data were not used in model training and acted exclusively as a test dataset. After all ten runs, the means and 95% confidence intervals of the evaluation metrics were calculated. For each behaviour category, we evaluated the prediction accuracy as an F1 score:
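The stratified folds can, for instance, be generated with caret's createFolds, which preserves the class proportions of a factor. The sketch below uses RF as the example model, fitted with default parameters for brevity, and the same hypothetical `features`/`labels` objects as above.

```r
library(caret)
library(randomForest)
set.seed(1)
folds <- createFolds(labels, k = 10)   # stratified test-set indices per fold
conf_mats <- lapply(folds, function(test_idx) {
  fit  <- randomForest(x = features[-test_idx, ], y = labels[-test_idx])
  pred <- predict(fit, features[test_idx, ])
  table(predicted = pred, observed = labels[test_idx])  # per-fold confusion counts
})
```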
$$ \mathrm{F1} = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} $$
where \( \mathrm{Recall} = \frac{TP}{TP + FN} \), \( \mathrm{Precision} = \frac{TP}{TP + FP} \), TP is true positive, TN is true negative, FP is false positive and FN is false negative (see [41]).
Next, for each dataset, an overall accuracy score was calculated across all behaviours by dividing the number of ACC data bouts for which the behaviour was correctly classified by the total number of ACC data bouts (i.e. the sum of correct and incorrect classifications):
$$ \mathrm{Overall\ accuracy} = \frac{TP + TN}{TP + TN + FP + FN} $$
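Both metrics can be computed directly from a confusion matrix with rows as predicted and columns as observed behaviours, for instance in R; a minimal sketch:

```r
# Per-behaviour F1 and overall accuracy from a square confusion matrix 'cm'
f1_scores <- function(cm) {
  tp        <- diag(cm)
  precision <- tp / rowSums(cm)
  recall    <- tp / colSums(cm)
  2 * precision * recall / (precision + recall)
}
overall_accuracy <- function(cm) sum(diag(cm)) / sum(cm)
```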
We further tested model performance using the simplified feature sets. We re-tuned the parameters of DT, SVM and ANN, and set "ntree = 20" for RF and "nrounds = 5" for XGBoost to reduce model size. We then conducted stratified 10-fold cross-validations for all five datasets with the six models and evaluated model performance with the F1 score and overall accuracy.
Runtime of feature calculations
To evaluate the runtime of the different feature calculations, we programmed all functions for feature calculation on-board of a tracker with nRF52840 SoC (system on a chip), which has a 64 MHz microprocessor, 1 MB Flash memory and 256 KB RAM memory. The pseudocodes for these feature calculations are provided in Table S2. Since feature calculations for different datasets would follow the same procedures, we only used the white stork dataset as the demo dataset. The raw ACC data of the first bout of this demo dataset was pre-loaded together with the code for feature calculations on-board the tracker. Because all bouts in the dataset have the same length, the runtimes for the various features of this first bout were taken to be representative.
Runtime and storage requirements of machine-learning classifiers
To evaluate the runtime of the different machine learning classifiers (i.e. the outcomes of the machine learning algorithms allowing behavioural prediction from ACC data), we programmed the classifier functions on-board the nRF52840 SoC described above. The classifier data included "SV", "coefs", "x.scale", "rho" and "nSV" for SVM, "nconn", "conn" and "wts" for ANN, trees for RF (using "getTree" in the "randomForest" package), and trees for XGBoost ("xgb.dump" in "xgboost"). We tested classifiers with full and simplified feature sets. Only for the full-feature-set RF model did we set "ntree" to 200 instead of 800, since there was not enough on-board storage for 800 trees. Parameters for SVM, ANN and XGBoost, and all parameters except "ntree" for RF, were the same as listed in Table S1. The pseudocodes for these classifiers are provided in the supplementary material as Supplementary Algorithms 1, 2, 3 and 4. Aside from the classifiers, we also loaded the already calculated simplified feature set for the 1746 bouts in the white stork dataset. Because runtime may vary across bouts when using RF and XGBoost, we calculated the mean runtime across all 1746 bouts for each of the classifiers. The on-board storage requirements of the classifiers were also recorded.
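As a sketch of how such classifier data might be pulled out of fitted R models before porting to the tracker, assuming hypothetical fitted objects `svm_fit` (e1071), `ann_fit` (nnet), `rf_fit` (randomForest) and `xgb_fit` (xgboost):

```r
library(randomForest)
library(xgboost)
svm_data  <- list(SV = svm_fit$SV, coefs = svm_fit$coefs, x.scale = svm_fit$x.scale,
                  rho = svm_fit$rho, nSV = svm_fit$nSV)
ann_data  <- list(nconn = ann_fit$nconn, conn = ann_fit$conn, wts = ann_fit$wts)
rf_trees  <- lapply(seq_len(rf_fit$ntree), function(k) getTree(rf_fit, k, labelVar = FALSE))
xgb_trees <- xgb.dump(xgb_fit)   # text dump of all boosted trees
```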
Using the full feature set, SVM, RF and XGBoost had indistinguishable performance (i.e. overlapping 95% confidence intervals) and always ranked as the top three models by overall accuracy across all five datasets (Fig. 1). DT and ANN performed better than LDA but worse than the top three models. Using the simplified feature set, DT, SVM, RF, ANN and XGBoost had similar overall accuracy across datasets except for the roe deer dataset, where DT had significantly lower overall accuracy than SVM and RF and also showed a tendency for a lower accuracy than ANN and XGBoost (Fig. 1). Five of the six models (i.e. DT, SVM, RF, ANN and XGBoost) generally had slightly lower accuracy when using a simplified compared to a full feature set, with a maximum mean accuracy difference of ~ 3.7%. Interestingly, except for the roe deer dataset, ANN had higher overall accuracies with the simplified feature set than with the full feature set, with a maximum mean difference of 3.6%.
Comparison of overall accuracies of six machine learning methods across five different datasets encompassing Common crane, Dairy cow, Griffon vulture, Roe deer and White stork, with full feature sets and simplified feature sets. Mean and 95% confidence interval using 10-fold cross-validation are presented. LDA: linear discriminant analysis, DT: decision tree, SVM: support vector machine, RF: random forest, ANN: artificial neural network, XGBoost: extreme gradient boosting
For each dataset, the variation in F1 scores among the different classification methods within a given behaviour was strikingly low compared to the variation across behaviours, both with the full and with the simplified feature sets (Fig. 2). This suggests that, although some algorithms were clearly better than others, all machine learning methods had similar classification/mis-classification issues. This was best exemplified in the "active behaviour" and "eating" behaviours in griffon vulture, with very low F1 values across all machine learning methods (Fig. 2), which was largely due to misclassifications between these two behaviours (Fig. 3).
Comparison of F1 values of six different machine learning methods (see caption to Fig. 1 for abbreviations) across different behaviours in five datasets for Common crane, Dairy cow, Griffon vulture, Roe deer and White stork, with full feature sets and simplified feature sets. Mean and 95% confidence intervals using 10-fold cross-validation are presented
Confusion matrix plot of Griffon vulture dataset based on six machine learning models. Dots are coloured according to classification results (incorrect and correct; total sample size depicted for each behaviour combination), with grey shading highlighting misclassifications between the behaviours "active behaviour" and "eating"
The on-board runtimes for the 78 features (Table 2) totalled 2.73 ms, whereas the calculation of the simplified feature set took only 0.31 ms, or 11% of the time required for the full feature set. The runtime of the four classifiers ranged from 0.134 ms (XGBoost) to 34.628 ms (SVM) with the simplified feature set, and from 0.312 ms (XGBoost) to 43.042 ms (SVM) with the full feature set. ANN had the lowest storage requirements, at 3.42 kB with the simplified feature set and 10.764 kB with the full feature set, whereas SVM required the most storage, at 26.724 kB with the simplified feature set and 185.684 kB with the full feature set (Table 3).
Table 2 On-board runtimes during feature calculations. Where features have been grouped in one row, the total runtime for the calculation of all features in that row is listed. Under "Note" any dependencies for the calculation of the feature are listed. "Gross time" identifies the total runtime for the listed feature and its dependencies
Table 3 On-board runtime and storage requirements of four machine learning methods with full and simplified feature sets
In this study, we compared six machine learning methods in their suitability to predict behaviours using ACC datasets from five different species. Generally, the classification accuracy across all five datasets was better in SVM, RF, ANN and XGBoost than when using LDA and DT. Yet, using these models with full feature sets can be computationally demanding, potentially limiting their use for on-board behaviour classification. However, we next showed that calculation demand of the six models could be greatly reduced through simplified feature selection and by reducing the number of model parameters (i.e. "ntree" of RF and "nrounds" of XGBoost), without substantial reduction in accuracy. After comparing storage requirements and runtimes of the six models and given their similar prediction accuracy, ANN and XGBoost therewith have great potential to be general-duty, on-board classification methods for continuous behaviour tracking using ACC.
In our study, SVM, RF, ANN and XGBoost generally performed well in regard to F1 score and overall accuracy, both with full and with simplified feature sets. Other studies have also found that these four methods – SVM, RF, ANN and XGBoost – generally perform well on classification tasks. Weegman et al. [45] used the on-line animal behaviour classification tool [41] in a behavioural study of Greenland white-fronted goose (Anser albifrons flavirostris), with RF reportedly having the highest classification accuracy of the various models tested. Resheff et al. [41] found that ANN performed better than six other algorithms examined for the vulture dataset, although the RF method performed nearly as well (overall accuracy of 84.84% vs 84.02%, respectively). Rotics et al. [46] found that SVM performed best of all methods tested on an extended white stork dataset (i.e. 3815 ground-truthed ACC bouts) compared with the one used in this study, reaching an overall accuracy of 92%. Yet, Sur et al. [37] found that k-nearest neighbour is better than RF at distinguishing more detailed behaviours such as straight flights and banking flights in golden eagle (Aquila chrysaetos), although the two methods both achieved high accuracies in classifying basic behaviours including flapping flight, soaring flight and sitting. However, their conclusions may have been flawed since they trained and evaluated RF with features, whereas k-nearest neighbours was trained and evaluated with raw data.
To our knowledge, XGBoost had not previously been used in animal behaviour classification. However, Ladds et al. [31] combined RF and Gradient Boosting Machine learning to form a super learner for behaviour classification in three different species of fur seals (Arctocephalus pusillus doriferus, Arctocephalus forsteri and Arctocephalus tropicalis) and Australian sea lions (Neophoca cinerea). The super learner improved overall accuracy by ~ 1.4% over RF alone. XGBoost is a scalable tree boosting method which has proved to be better than other tree boosting methods and RF [38]. Thus, it is not surprising that XGBoost performed well in this study.
Obviously, behaviour classification accuracy from ACC data relies not only on the algorithms of choice, but also on the functioning and placement of the ACC device, the definition of the behaviour set and the segmentation of the ACC data. For instance, the problems in distinguishing between active behaviour and eating in griffon vulture (Fig. 3) may have arisen from device placement. The griffon vultures were tracked with backpacks, so the ACC data were largely driven by trunk movements, with possibly similar triaxial signal patterns for some active behaviours and eating. In a study comparing behaviour classification performance for Canada goose (Branta canadensis) equipped with neckbands and backpacks, Kölzsch et al. [24], perhaps unsurprisingly, found that neckbands were better able to distinguish behaviours involving elaborate head movements whereas backpacks were better at behaviours related to body movement. Defining the behavioural set may also be crucial. Having a "remainder" behavioural category such as "active behaviour" in griffon vulture may be ecologically meaningful but may be problematic to differentiate from more specific behavioural categories. A few studies compared behaviour classification performance with varying numbers of behaviour categories, finding that fewer categories generally yield higher classification accuracy [20, 31]. Variation in ACC data belonging to the same behaviour type will also influence behaviour classification accuracy. Accelerometers that shift their position on tracked animals cause intra-individual variation. This source of variation is practically impossible to measure in most wild animals, hence difficult to assess. Intraspecific variation among tracked animals (e.g. age, sex and body mass) and differences in placement of the accelerometer may cause inter-individual variation. These sources of variation are commonly measured before an animal is tagged and released, hence their effects can, in principle, be assessed. For the white storks in this study, the classification accuracy appeared unaffected by inter-individual variation, with neither wing length (p = 0.76, R2 = 0.005), weight (p = 0.45, R2 = 0.03), nor sex (p = 0.33, R2 = 0.05) having an effect on classification accuracy. Using ACC data collected from multiple individuals for model training may result in more robust classifiers [47]. Furthermore, minimising inter- and intra-individual variation in behaviour-specific ACC signals as much as possible remains of paramount importance and can sometimes be achieved. For example, in the roe deer case, the weight and the low center of gravity of the batteries prevented the neck collar from turning around the neck and ensured that the accelerometer remained in a dorsal position. Also, a thorough description and consistency of tracker attachments [48] would help minimise inter-individual variation. Finally, when committing to on-board behaviour classification, researchers should consider validating their models over time when there is a possibility to do so.
The use of simplified feature sets for animal behaviour classification is not only valuable for on-board calculation, but potentially also important for broader use of ACC in animal behaviour studies. Generally, the performance of models with 78 features was only marginally better than that of models using simplified feature sets, which was also found in [49]. The explanation for this finding largely resides in the fact that the original 78 features include highly correlated features that add very little additional information [50]. Although the potential on-board calculation models – SVM, RF, ANN and XGBoost – can adequately cope with correlations in data sets, correlation among features unnecessarily consumes computational power and data storage. In addition, some features may have a negative effect on model performance, as observed for the ANN model when used on the dairy cow, common crane, griffon vulture and white stork datasets.
Parameter tuning is crucial for the performance of machine learning models. In this study, we noticed that SVM and ANN need careful tuning to achieve good performance. Importantly, when the input features for model training were changed from full feature sets to simplified feature sets, the parameters of SVM and ANN needed retuning, requiring hours of computation time. In contrast, RF and XGBoost proved much more user friendly, performing well using most of their default settings for all datasets used, except for user defined "ntree" in RF and "nrounds" in XGBoost (see Table S1).
We segmented ACC data using fixed time intervals with the unavoidable risk of obtaining bouts containing multiple behaviour types, potentially limiting machine-learning classification accuracy. Indeed, Bom et al. [51] showed that variable instead of fixed time segmentation improved behaviour classification in crab plover (Dromas ardeola). Combining variable time segmentation and ANN or XGBoost might thus be an interesting avenue to further improve behaviour classification accuracy and further reduce on-board computational demands. Even further improvements might be achieved by a combination of unsupervised and supervised machine learning methods. This relates to the common inability of supervised machine learning models to accurately classify rare behaviours. Such rare behaviours, however, might constitute the main focus of particular studies, and machine-learning methods could be selected according to their ability to identify particular behaviours (e.g. [52]), including rare ones. Whereas some rare behaviours may still be observed and recorded for model training, their sample size may be too small for adequate model training (e.g. [53]). This problem may be aggravated when behaviours of importance are only temporarily (e.g. seasonally) expressed, such as mating and incubation behaviours during the breeding season and animals moving through snow in winter. To overcome this problem, future studies could investigate ways to flexibly combine supervised and unsupervised machine learning to also enable the classification of behaviours in the absence of data for ground truthing. Another solution might be to retain those data that cannot be classified with high accuracy or transmit data summaries when such unclassifiable events occur [32]. This procedure would allow for more precise classification in the lab, whereas all other data are deleted. When the rare or seasonal behaviours are not the focal behaviours of key interest, a basic behavioural classification (e.g. just stationary, foraging or transiting) might also be applied.
The simplified feature set had a greatly reduced on-board runtime compared with the full feature set. When the clock speed of the on-board microprocessor is fixed, its energy consumption is proportional to runtime. Thus, the calculation of the simplified feature set used here consumes only ~ 11% of the energy needed to calculate the full feature set. As has become clear here, different trackers and animal systems may require alternative feature sets. Also, bout lengths and recording frequency may vary across studies. Thus, the absolute feature-calculation runtimes presented here for the white stork dataset (Table 2) are not directly transferable to other studies. Nevertheless, they may provide a very useful index for evaluating the relative runtimes and relative energy consumption requirements for a great variety of features and alternative feature sets in any study wishing to optimize real-time, on-board behaviour classification from ACC data.
With the simplified feature set, the on-board runtime test of classifiers showed that XGBoost was fastest, followed closely by RF. The XGBoost and RF classifiers make use of tree traversal (see Supplementary Algorithms 3 and 4), which only involves comparison operations. XGBoost was faster than RF in this test because it involved fewer comparison operations. The ANN classifier mainly involves multiplications and additions (see Supplementary Algorithm 2). SVM had a much longer runtime than the other three classifiers. SVM involves kernel value calculations between the feature values of a behaviour bout and all the support vectors (see Supplementary Algorithm 1). The radial kernel SVM requires exponent operations, which take much longer than addition or multiplication operations. Moreover, there were as many as 666 support vectors in the SVM classifier of the white stork example, explaining the contrastingly long runtime for this classifier.
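The contrast can be illustrated with the decision value of a two-class radial-kernel SVM, which needs one exponential per support vector for every bout, whereas a tree classifier only follows one comparison per node along a single root-to-leaf path. The following R snippet is purely illustrative and is not the on-board implementation:

```r
# Decision value of a two-class radial-kernel SVM for one feature vector x (sketch only);
# SV is the support-vector matrix, coefs the corresponding coefficients, rho the offset.
rbf_decision <- function(x, SV, coefs, gamma, rho) {
  k <- exp(-gamma * colSums((t(SV) - x)^2))  # one kernel evaluation per support vector
  sum(coefs * k) - rho
}
```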
With the simplified feature set, the on-board storage requirements of classifiers showed that ANN required least storage. The storage of the ANN classifier is related to the number of weights, whereas the storage requirements of the RF and XGBoost classifiers depends on the total number of nodes across all trees. The storage of the SVM classifier is related to the number of support vectors. Nevertheless, the maximum storage requirement among the four classifiers – 27 kB of SVM – is still very small considering the 1 MB Flash memory used here and the flash memory generally used in tracking devices.
The on-board runtime tests of feature calculation and classifiers showed that the development and operation of continuous-ethogram trackers is highly feasible from a power requirement perspective. Since the energy usage for ACC data recording is low [32], we here only take the energy usage for on-board feature calculation and behaviour classification into consideration. According to the data presented here, a 200 mAh battery can support calculations of the fastest XGBoost classifier with five simplified features continuously for approximately 11,000 days (using the recording settings for white stork). Also, the most energy-hungry classifier, SVM, would be able to run for approximately 160 days. Whichever model is used in this fashion, the data compression rate is 240:1 (120 ACC records × 2 bytes versus 1 byte identifying behaviour type). In case data transmission is not feasible, this would enable as much as 46 days of behavioural data in 1 MB of memory (without timestamps). Finally, based on 3G-transmission estimates made in our lab in Chengdu, China, we estimated that transmission of 1 day of continuous raw ACC records would on average take 5862 s and consume 244.08 mAh of battery power. By contrast, using the same network, transmission of 1 day of classified behaviour from ACC data would take only 52 s, or less than 1% of the time, and consume 1.49 mAh of battery power, or only 0.6% of the energy needed for raw data transmission.
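The compression-rate arithmetic for the white stork settings can be reproduced as follows, assuming 2-byte ACC samples as stated above:

```r
records_per_bout <- 40                 # 3.8 s at ~10.54 Hz
raw_bytes <- records_per_bout * 3 * 2  # 3 axes x 2 bytes = 240 bytes per bout
raw_bytes / 1                          # versus 1 byte per classified behaviour: 240:1
```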
On-board behaviour classification through ANN, RF or XGBoost may enable researchers to study wildlife behaviours at a detailed and continuous scale. This new tool therewith bears the promise of continuous and long-term behavioural studies, addressing a wide range of behavioural and ecological topics, including precise, behaviour-triggered sampling (e.g. [34]), interventions and experimental research. As an extension of such behavioural studies, the same data might also be used to assess energy expenditure and biomechanics and to assist in dead-reckoning of movement paths. Beyond wildlife ecology, continuous behaviour monitoring may also benefit captive and domestic animal management and welfare improvement.
Data of griffon vultures and white storks are available in AcceleRater website: http://accapp.move-ecol-minerva.huji.ac.il/.
ACC:
Accelerometry
LDA:
Linear discriminant analysis
DT:
Decision tree
SVM:
Support vector machine
RF:
Random forest
XGBoost:
Extreme gradient boosting
ODBA:
Overall dynamic body acceleration
TP:
True positive
TN:
True negative
FP:
False positive
FN:
False negative
Borger L, Bijleveld AI, Fayet AL, Machovsky-Capuska GE, Patrick SC, Street GM, et al. Biologging special feature. J Anim Ecol. 2020;89(1):6–15.
Ropert-Coudert Y, Wilson RP. Trends and perspectives in animal-attached remote sensing. Front Ecol Environ. 2005;3(8):437–44.
Cooke SJ, Hinch SG, Wikelski M, Andrews RD, Kuchel LJ, Wolcott TG, et al. Biotelemetry: a mechanistic approach to ecology. Trends Ecol Evol. 2004;19(6):334–43.
Cooke SJ. Biotelemetry and biologging in endangered species research and animal conservation: relevance to regional, national, and IUCN Red List threat assessments. Endanger Species Res. 2008;4:165–85.
Wilson ADM, Wikelski M, Wilson RP, Cooke SJ. Utility of biological sensor tags in animal conservation. Conserv Biol. 2015;29(4):1065–75.
Toledo S, Shohami D, Schiffner I, Lourie E, Orchan Y, Bartan Y, et al. Cognitive map–based navigation in wild bats revealed by a new high-throughput tracking system. Science. 2020;369(6500):188.
Brown DD, Kays R, Wikelski M, Wilson R, Klimley AP. Observing the unwatchable through acceleration logging of animal behavior. Anim Biotelemetry. 2013;1(1):20.
Shepard ELC, Wilson RP, Halsey LG, Quintana F, Gómez Laich A, Gleiss AC, et al. Derivation of body motion via appropriate smoothing of acceleration data. Aquat Biol. 2008;4(3):235–41.
Wilson RP, White CR, Quintana F, Halsey LG, Liebsch N, Martin GR, et al. Moving towards acceleration for estimates of activity-specific metabolic rate in free-living animals: the case of the cormorant. J Anim Ecol. 2006;75(5):1081–90.
Qasem L, Cardew A, Wilson A, Griffiths I, Halsey LG, Shepard ELC, et al. Tri-axial dynamic acceleration as a proxy for animal energy expenditure; should we be summing values or calculating the vector? PLoS One. 2012;7(2):e31187.
Wright BM, Ford JKB, Ellis GM, Deecke VB, Shapiro AD, Battaile BC, et al. Fine-scale foraging movements by fish-eating killer whales (Orcinus orca) relate to the vertical distributions and escape responses of salmonid prey (Oncorhynchus spp.). Mov Ecol. 2017;5(1):3.
Wilson RP, Shepard E, Liebsch N. Prying into the intimate details of animal lives: use of a daily diary on animals. Endanger Species Res. 2008;4(1–2):123–37.
Bidder OR, Walker JS, Jones MW, Holton MD, Urge P, Scantlebury DM, et al. Step by step: reconstruction of terrestrial animal movement paths by dead-reckoning. Mov Ecol. 2015;3(1):23.
Dunford CE, Marks NJ, Wilmers CC, Bryce CM, Nickel B, Wolfe LL, et al. Surviving in steep terrain: a lab-to-field assessment of locomotor costs for wild mountain lions (Puma concolor). Mov Ecol. 2020;8:34.
Williams TM, Wolfe L, Davis T, Kendall T, Richter B, Wang Y, et al. Instantaneous energetics of puma kills reveal advantage of felid sneak attacks. Science. 2014;346(6205):81–5.
Daley MA, Channon AJ, Nolan GS, Hall J. Preferred gait and walk-run transition speeds in ostriches measured using GPS-IMU sensors. J Exp Biol. 2016;219(20):3301–8.
Sakamoto KQ, Sato K, Ishizuka M, Watanuki Y, Takahashi A, Daunt F, et al. Can ethograms be automatically generated using body acceleration data from free-ranging birds? PLoS One. 2009;4(4):e5379.
Dokter AM, Fokkema W, Bekker SK, Bouten W, Ebbinge BS, Müskens G, et al. Body stores persist as fitness correlate in a long-distance migrant released from food constraints. Behav Ecol. 2018;29(5):1157–66.
Nathan R, Spiegel O, Fortmann-Roe S, Harel R, Wikelski M, Getz WM. Using tri-axial acceleration data to identify behavioral modes of free-ranging animals: general concepts and tools illustrated for griffon vultures. J Exp Biol. 2012;215(6):986–96.
Shamoun-Baranes J, Bom R, van Loon EE, Ens BJ, Oosterbeek K, Bouten W. From sensor data to animal behaviour: an oystercatcher example. PLoS One. 2012;7(5):e37997.
Brown DD, Montgomery RA, Millspaugh JJ, Jansen PA, Garzon-Lopez CX, Kays R. Selection and spatial arrangement of rest sites within northern tamandua home ranges. J Zool. 2014;293(3):160–70.
Angel LP, Berlincourt M, Arnould JPY. Pronounced inter-colony variation in the foraging ecology of Australasian gannets: influence of habitat differences. Mar Ecol Prog Ser. 2016;556:261–72.
Ryan MA, Whisson DA, Holland GJ, Arnould JP. Activity patterns of free-ranging koalas (Phascolarctos cinereus) revealed by accelerometry. PLoS One. 2013;8(11):e80366.
Kölzsch A, Neefjes M, Barkway J, Müskens GJDM, van Langevelde F, de Boer WF, et al. Neckband or backpack? Differences in tag design and their effects on GPS/accelerometer tracking results in large waterbirds. Anim Biotelemetry. 2016;4(1):13.
Yu H, Wang X, Cao L, Zhang L, Jia Q, Lee H, et al. Are declining populations of wild geese in China 'prisoners' of their natural habitats? Curr Biol. 2017;27(10):R376–R7.
Rutz C, Hays GC. New frontiers in biologging science. Biol Lett. 2009;5(3):289–92.
Toledo S. Location estimation from the ground up. Philadelphia: Society for Industrial and Applied Mathematics; 2020. p. 217.
Cox SL, Orgeret F, Gesta M, Rodde C, Heizer I, Weimerskirch H, et al. Processing of acceleration and dive data on-board satellite relay tags to investigate diving and foraging behaviour in free-ranging marine predators. Methods Ecol Evol. 2017;9(1):64–77.
Dokter AM, Fokkema W, Ebbinge BS, Olff H, van der Jeugd HP, Nolet BA, et al. Agricultural pastures challenge the attractiveness of natural saltmarsh for a migratory goose. J Appl Ecol. 2018;55(6):2707–18.
Angel LP, Barker S, Berlincourt M, Tew E, Warwick-Evans V, Arnould JPY. Eating locally: Australasian gannets increase their foraging effort in a restricted range. Biol Open. 2015;4(10):1298–305.
Ladds MA, Thompson AP, Kadar J-P, Slip DJ, Hocking DP, Harcourt RG. Super machine learning: improving accuracy and reducing variance of behaviour classification from accelerometry. Anim Biotelemetry. 2017;5(1):8.
Nuijten RJM, Gerrits T, Shamoun-Baranes J, Nolet BA. Less is more: on-board lossy compression of accelerometer data increases biologging capacity. J Anim Ecol. 2020;89(1):237–47.
Roux SP, Marias J, Wolhuter R, Niesler T. Animal-borne behaviour classification for sheep (Dohne Merino) and Rhinoceros (Ceratotherium simum and Diceros bicornis). Anim Biotelemetry. 2017;5(1):1–13.
Korpela J, Suzuki H, Matsumoto S, Mizutani Y, Samejima M, Maekawa T, et al. Machine learning enables improved runtime and precision for bio-loggers on seabirds. Commun Biol. 2020;3(1):633.
Chakravarty P, Cozzi G, Ozgul A, Aminian K. A novel biomechanical approach for animal behaviour recognition using accelerometers. Methods Ecol Evol. 2019;10(6):802–14.
Vázquez Diosdado JA, Barker ZE, Hodges HR, Amory JR, Croft DP, Bell NJ, et al. Classification of behaviour in housed dairy cows using an accelerometer-based activity monitoring system. Anim Biotelemetry. 2015;3(1):15.
Sur M, Suffredini T, Wessells SM, Bloom PH, Lanzone M, Blackshire S, et al. Improved supervised classification of accelerometry data to distinguish behaviors of soaring birds. PLoS One. 2017;12(4):e0174785.
Chen T, Guestrin C. XGBoost: a scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–94.
Nathan R, Getz WM, Revilla E, Holyoak M, Kadmon R, Saltz D, et al. A movement ecology paradigm for unifying organismal movement research. Proc Natl Acad Sci U S A. 2008;105(49):19052–9.
Kröschel M, Reineking B, Werwie F, Wildi F, Storch I. Remote monitoring of vigilance behavior in large herbivores using acceleration data. Anim Biotelemetry. 2017;5(1):10.
Resheff YS, Rotics S, Harel R, Spiegel O, Nathan R. AcceleRater: a web application for supervised learning of behavioral modes from acceleration measurements. Mov Ecol. 2014;2(1):27.
Beauchemin KA. Invited review: current perspectives on eating and rumination activity in dairy cows. J Dairy Sci. 2018;101(6):4762–84.
R Core team. R: a language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2016.
Næs T, Mevik B-H. Understanding the collinearity problem in regression and discriminant analysis. J Chemom. 2001;15(4):413–26.
Weegman MD, Bearhop S, Hilton GM, Walsh AJ, Griffin L, Resheff YS, et al. Using accelerometry to compare costs of extended migration in an arctic herbivore. Curr Zool. 2017;63(6):667–74.
Rotics S, Kaatz M, Resheff YS, Turjeman SF, Zurell D, Sapir N, et al. The challenges of the first migration: movement and behaviour of juvenile vs. adult white storks with insights regarding juvenile mortality. J Anim Ecol. 2016;85(4):938–47.
Bao L, Intille SS. Activity recognition from user-annotated acceleration data. In: Pervasive Computing. Berlin, Heidelberg: Springer; 2004.
Cumming GS, Ndlovu M. Satellite telemetry of Afrotropical ducks: methodological details and assessment of success rates. Afr Zool. 2011;46(2):425–34.
Patterson A, Gilchrist HG, Chivers L, Hatch S, Elliott K. A comparison of techniques for classifying behavior from accelerometers for two species of seabird. Ecol Evol. 2019;9(6):3030–45.
Toloşi L, Lengauer T. Classification with correlated features: unreliability of feature ranking and solutions. Bioinformatics. 2011;27(14):1986–94.
Bom RA, Bouten W, Piersma T, Oosterbeek K, van Gils JA. Optimizing acceleration-based ethograms: the use of variable-time versus fixed-time segmentation. Mov Ecol. 2014;2(1):6.
van der Kolk H-J, Ens BJ, Oosterbeek K, Bouten W, Allen AM, Frauendorf M, et al. Shorebird feeding specialists differ in how environmental conditions alter their foraging time. Behav Ecol. 2020;31(2):371–82.
Fehlmann G, O'Riain MJ, Hopkins PW, O'Sullivan J, Holton MD, Shepard ELC, et al. Identification of behaviours from accelerometer data in a wild social primate. Anim Biotelemetry. 2017;5(1):6.
Centre for Integrative Ecology, School of Life and Environmental Sciences, Deakin University, Geelong, Victoria, Australia
Hui Yu & Marcel Klaassen
Druid Technology Co., Ltd, Chengdu, Sichuan, China
Hui Yu, Jian Deng & Guozheng Li
The Movement Ecology Laboratory, Department of Evolution, Systematics, and Ecology, Alexander Silberman Institute of Life Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
Ran Nathan & Sasha Pekarsky
Department of Wildlife Ecology, Forest Research Institute of Baden-Württemberg, Freiburg, Germany
Max Kröschel
Chair of Wildlife Ecology and Wildlife Management, University of Freiburg, 79106, Freiburg, Germany
Northwest Institute of Eco-Environment and Resources, Chinese Academy of Sciences, Lanzhou, Gansu, China
Guozheng Li
Hui Yu
Jian Deng
Ran Nathan
Sasha Pekarsky
Marcel Klaassen
HY, GL and MKl conceived the idea and designed the methodology, technically supported by JD. JD wrote the code installed on the tracking device and ran all on-board tests. HY analysed the data with advice from MKl. RN, MKr and SP provided datasets and gave additional suggestions on the analyses. HY wrote the manuscript with input from all co-authors. All authors gave approval for publication.
Correspondence to Guozheng Li.
Tagging and observations of dairy cows were performed in a private dairy farm with consent from the farm owner. The observations of common cranes were performed in the Oka Nature Reserve Crane Breeding Center, and all handling and tagging were approved and done by the breeding center team. Animal ethics for the other three species have been approved and reported in the previous publications and datasets [19, 40, 41].
Additional file 1: Supplementary Table 1.
Results of parameter tuning of four machine learning methods in five datasets (i.e., for Common crane, Dairy cow, Griffon vulture, Roe deer and White stork), with full feature sets and simplified feature sets. Supplementary Table 2. Pseudocodes for feature calculations used for on-board runtime evaluations. Supplementary Algorithm 1. Support vector machine on-board behaviour classification implementation. Supplementary Algorithm 2. Artificial neural network on-board behaviour classification implementation. Supplementary Algorithm 3. Random forest on-board behaviour classification implementation. Supplementary Algorithm 4. Extreme gradient boosting on-board behaviour classification implementation.
Yu, H., Deng, J., Nathan, R. et al. An evaluation of machine learning classifiers for next-generation, continuous-ethogram smart trackers. Mov Ecol 9, 15 (2021). https://doi.org/10.1186/s40462-021-00245-x
Behaviour classification
On-board processing
XGBoost | CommonCrawl |
Can programming "mutate"?
Can the programming of simple nanobots randomly change to create an effect similar to mutations in DNA? This question has a very similar idea to what I wanted, although in that case the nanobots are capable of upgrading themselves: Nanobots Ecosystem, is it possible?
For my story I am wondering what could cause the self-replication process of some of the nanobots to go wrong. As with evolution, some of these changes will not be beneficial, but others will be and could lead to more and more complex robots, given enough time.
Is this possible that the programming for self-replication can randomly create different results?
technology computers robots
I suggest "simulated evolution" as a search term. Also start from Lawrence J. Fogel's work in the 1960s and look for its descendants.
Genetic algorithms do exist and are actually used. Who is to say that the nanobots in question do not use such techniques?
– AlexP
@AlexP there is also genetic programming to go with GAs. Whereas GAs evolve a solution to match a criterion, GPs evolve an algorithm that will produce a given solution.
– VLAZ
As there is no way at all to preclude errors during copying (even the checksum machinery may fail), you WILL get mutations eventually. This is actually a major problem with self-replicating machinery. Cosmic ray hits, chemicals, radioactive decay of an atom here and there, and there may be trouble if a self-replicating nanobot keeps functioning.
– David Tonhofer
It is important to recognize that living cells are nanobots, and very complex ones at that. There is an unrealistic trope of human-engineered nanobots being "basically normal robots, but scaled down" that have tiny circuit boards and manipulate individual molecules using arms with general-purpose end effectors. At molecular scales it is very inefficient to try to manipulate individual molecules like this, rather than doing what cells do and pumping out lots of components that are energetically favored to combine into the desired reaction product.
– Aaron Rotenberg
Everyone who has studied computer science in general, and artificial intelligence in particular, will know about a kind of algorithm called a genetic algorithm:
In computer science and operations research, a genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems by relying on biologically inspired operators such as mutation, crossover and selection. John Holland introduced genetic algorithms in 1960 based on the concept of Darwin's theory of evolution; his student David E. Goldberg further extended GA in 1989.
This appropriates genetics and evolution to the max. Look at the terms used in this field.
In a genetic algorithm, a population of candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem is evolved toward better solutions. Each candidate solution has a set of properties (its chromosomes or genotype) which can be mutated and altered; traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible.
The evolution usually starts from a population of randomly generated individuals, and is an iterative process, with the population in each iteration called a generation. In each generation, the fitness of every individual in the population is evaluated; the fitness is usually the value of the objective function in the optimization problem being solved. The more fit individuals are stochastically selected from the current population, and each individual's genome is modified (recombined and possibly randomly mutated) to form a new generation. The new generation of candidate solutions is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population.
A typical genetic algorithm requires:
a genetic representation of the solution domain,
a fitness function to evaluate the solution domain.
So if your nanobots were designed with this in mind, they may keep evolving on their own.
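As a toy illustration of those ingredients (a genetic representation, a fitness function, selection and random mutation), here is a minimal genetic algorithm in R; the bit-string target and all parameters are arbitrary and purely illustrative.

```r
set.seed(42)
target  <- rep(1, 20)
fitness <- function(ind) sum(ind == target)        # fitness = number of bits matching the target
pop <- replicate(50, sample(0:1, 20, replace = TRUE), simplify = FALSE)
for (gen in 1:100) {
  scores  <- sapply(pop, fitness)
  # Fitness-biased selection of parents (add 1 so zero-fitness individuals keep a small chance)
  parents <- pop[sample(seq_along(pop), 50, replace = TRUE, prob = scores + 1)]
  # Random mutation, playing the role of copying errors
  pop <- lapply(parents, function(p) { flip <- runif(20) < 0.02; p[flip] <- 1 - p[flip]; p })
}
max(sapply(pop, fitness))                          # best fitness after 100 generations
```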
If you delve deep into genetic algorithms, you will see that they provide weird solutions to common problems, solutions which tend to be the most efficient and which we humans would hardly think of on our own. For example, this antenna:
It appears in the wiki article called Evolved antenna and the description for the image says this:
The 2006 NASA ST5 spacecraft antenna. This complicated shape was found by an evolutionary computer design program to create the best radiation pattern.
So if your nanobots are programmed with genetic algorithms, their shapes might be unrecognizable from generation to generation. If survival is the fitness function, they will become very tough to deal with.
The Square-Cube Law
$\begingroup$ The problem here is that while genetic algorithms use mutation to solve problems, the programs that are running those algorithms don't change. Of course it is possible to have self-modifying code, so the designers could build that in: en.wikipedia.org/wiki/Self-modifying_code $\endgroup$
$\begingroup$ @jamesqf I don't think that is a problem. Biological organisms can express massive changes in shape/size/behavior etc. by mutation of genetic code, without altering the ribosomes that control the conversion of genetic code to proteins. So you can consider the underlying programs that run the genetic algorithms as just 'firmware' that similarly expresses the genetic algorithm data as changes in the design/behavior/etc. of the nanobots. $\endgroup$
– Penguino
$\begingroup$ Genetic algorithms can still give very surprising results, potentially well beyond the expected bounds of the experiment, especially in more complex systems with real-world entropy. I seem to recall reading somewhere about evolved circuits that didn't work when transferred to a new device - it turns out that the algorithm had managed to exploit a manufacturing defect in the FPGA they'd used for the experiment, which wasn't present in other units of the same type. $\endgroup$
– Sebastian Lenartowicz
$\begingroup$ Evolutionary design is very different from code mutating. The example you've shown is a fixed algorithm that can perform iterative optimization. Where this gets interesting is not in these applications, but when code is iteratively modifying itself. The self-referential part of this is a critical aspect. $\endgroup$
$\begingroup$ @J The example given, yes, because it's generating hardware. But you can apply the genetic algorithm in software - effectively rewriting the software. The genetic algorithm itself DOESN'T NEED TO EVOLVE. Just like our own DNA polymerase (the enzyme that copies DNA) hasn't evolved much in billions of years - it doesn't mean that we haven't evolved beyond being single-celled organisms $\endgroup$
– slebetman
Yes, if they are designed to do this. Evolutionary progress is the whole point of genetic algorithms. Your nanobots may be designed as a physical instance of this type of thing.
There is a real-life project called subCULTron which is building an underwater robot ecosystem with the intention that the robots in different areas will develop their own cultures in response to the environment. Their test zone is in the Venice lagoon.
David Hambling
$\begingroup$ Very cool, didn't know about this project. $\endgroup$
$\begingroup$ Why does that sounds like the setup for sci-fi horror movie? $\endgroup$
– Seth R
$\begingroup$ because it's basically the plot of a Michael Crichton novel $\endgroup$
– Ruadhan
$\begingroup$ @Ruadhan I don't know if you said that jokingly, but that does rhyme with Michael Crichton's Westworld (well... not what he wrote exactly, but what HBO added to it in the TV show). $\endgroup$
– The Square-Cube Law
$\begingroup$ I was thinking of Prey myself $\endgroup$
I don't know how old you are and whether you can relate to what I am about to tell you, but I am old enough to have seen the growth of the internet and the expansion of computers.
In the days of the floppy disks (and even earlier with the "pizza"-sized 8-inch disks) it was common that some error appeared on the disk during the writing process or while the disk was stored, corrupting the content of the files stored there.
Those errors are the mutations you are looking for: most of them will make the file unusable, but once in a while the mutation will make sense.
L.Dutch♦
$\begingroup$ I mean.. modern error correction algorithms reduce the chance of this happening to near zero. But if you've got trillions of nanobots all doing their thing then near zero might be enough. $\endgroup$
$\begingroup$ I remember 24 inch IBM hard disk platters, great for serving pizza -- sniff, I'm old! $\endgroup$
– amI
$\begingroup$ nanobots are small (source needed) so maybe there is not enough space for proper error correction $\endgroup$
$\begingroup$ The thing is, computers are designed to detect and correct such errors (check out ECC RAM). Or detect them and throw it out or just stop in a faulted state. DNA copying does have some error detection and correction, but it's obviously not perfect. You'd have to do the opposite of what most designers do: get rid of the error checking and embrace the errors, then let them propagate to the next generation. $\endgroup$
$\begingroup$ @Omni-King: Apparently pizzas have grown since those days, while disks have shrunk - and in the case of floppies, disappeared. $\endgroup$
Yes. Random bit-flips often occur in real computer systems, even today. Usually they are bad.
In all computer memory there is always some probability of each memory cell changing from 0 to 1 or 1 to 0.
a) The probability of memory-related bit flips occurring increases exponentially with temperature.
b) The probability of a bit flip increases with exposure to radiation. Even on Earth there is always some radiation; in space, a lot more. In fact, it's so common in space environments that digital logic is often designed redundantly to detect and (if possible) correct those errors.
The chances of long strings of bits randomly flipping into useful sequences are astronomically small, so don't count on that happening. But if the code is designed so that it is broken into a set of useful functions that call each other, then you can get interesting behavior from even one bit flip.
For example a bit flip in a jump instruction could cause large sequences of useful code to be executed at a different point than originally intended.
Here is an example of a sequence of machine code that leads to a plausible beneficial mutation.
The nano-bot contains a main code loop that happens to be on lines 1-500.
On line 501 there is a routine (at memory address 501) that checks for damage and initiates repairs.
Suppose that the repair routine was normally called once per day (which may have been OK).
Now let's suppose that the nano-bots are continually exposed to radiation, most of them experience lots of bit flips, and many go non-functional.
Let's assume that the radiation causes the fourth bit on line 500 to flip from 0 to 1.
So now instead of jumping back to the start of the main loop, the code just keeps going to line 501.
This would cause the error checking routine to execute every iteration of the main loop rather than once per day.
As a result this nano-bot is able to survive the radiation.
MAIN_LOOP:
1: 0011 0101 //some stuff
500: 1110 0000 //instruction that jumps back to main loop
PERFORM_INTERNAL_REPAIRS:
501: 1100 1001
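Here is a rough Python simulation of that scenario, with a made-up three-opcode "machine" standing in for the real instruction set; the addresses, the opcode encoding, and the repair routine are invented purely for illustration.

# Toy "machine code": (opcode, operand) pairs. Opcodes are small integers so a
# single bit flip can turn one instruction into another, as in the story above.
WORK, REPAIR, JMP = 0b00, 0b01, 0b10

program = [(WORK, None)] * 5 + [(JMP, 0), (REPAIR, None)]
#           addresses 0..4       address 5   address 6 (repair routine)

def run(program, steps=50):
    pc, repairs = 0, 0
    for _ in range(steps):
        op, arg = program[pc]
        if op == JMP:
            pc = arg
            continue
        if op == REPAIR:
            repairs += 1
        pc = (pc + 1) % len(program)
    return repairs

print("repairs without mutation:", run(program))   # 0: the JMP always skips the repair code

# Cosmic ray: flip the high bit of the JMP opcode, turning it into WORK (fall through).
op, arg = program[5]
program[5] = (op ^ 0b10, arg)
print("repairs after one bit flip:", run(program))  # now the repair routine runs every loop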
$\begingroup$ Thanks, this is a good description for how they could change without any learning software. $\endgroup$
$\begingroup$ Note that code can also mutate in another way that genetic code does: by getting chopped up and rearranged during copying. Which tends to be responsible for the most "interesting" mutations. $\endgroup$
$\begingroup$ Mutations are not sufficient for evolution. They are not even the most efficient way of producing variation: recombination (between individuals/codes) is. To trigger evolution, you need to impose competition. $\endgroup$
– Zeus
As modern software would utilize error checking, I'd say no random mutation would occur on its own. A single bit flipped for whatever reason could cause fatal results or do basically nothing to a machine or its software. "Physical" measures like ECC memory and software solutions like checksums are commonplace.
I see two options here:
They are designed to evolve.
I'm no expert on AI technology, so I'm ignorant about AI's limitations, but as we are not even close to creating evolving nanobots with modern technology and AI technology is in its infancy, it would not be too much of a stretch to simply say that your nanobots do utilize AI to determine new 'evolutionary' paths.
Nanobots have to fight other nanobots
In full-out warfare against other nanobots I could imagine some errors accumulating. Nanobots would probably have reasons to change their combat strategy on a physical level (prompting them to change), but they would also engage on a software level, trying to hack each other. With a limit on time (security measures take some time), a constant physical and software barrage of enemies trying to exploit every weakness, and certain random events (radiation, rapidly changing magnetic fields, etc.), I could definitely see the nanobots undergoing a DNA-like change over time.
Stefan
$\begingroup$ Interesting idea about hacking software. I was thinking about bots feeding on others for their materials and energy, but this is a likely outcome also. Thanks. $\endgroup$
$\begingroup$ 1. They are designed to evolve: see homoiconicity, "Code is data and data is code" - this is why the first attempts at strong AI were carried out in LISP. 2. Nanobots have to fight other nanobots: see Core War. Gosh, do I feel old or what? $\endgroup$
– Adrian Colomitchi
$\begingroup$ Hacking isn't the only software-level attack. A considerably easier approach to evolve is a denial-of-service (DOS) attack, wherein you send so much garbage information to a victim that they can't possibly process all of it, leading it to also miss out on most of the real, important information (such as from allies or other parts of itself connected through the same networking infrastructure). This not only cripples its ability to communicate, but also expends an enormous amount of time, energy and memory on trying to interpret the bogus messages. $\endgroup$
– BambooleanLogic
Possible, yes. But how likely it is depends on various conditions. And those conditions depend on how you consider the question of nanobots evolving.
Specifically you have three obvious angles to think about this. You can consider individual nanobots acquiring new properties. You can consider the cloud of nanobots acquiring new emergent properties as a group. Or you can consider the entire environment of the nanobots which includes all the support infrastructure and even the human programmers.
On a nanobot level this is only possible if the nanobots were designed to adapt. Simple self-replication and self-repair are not enough. The method used to code the nanobots has to have the sufficient flexibility and modularity to actually enable the potential new properties.
This would actually be possible. The DNA used to code living cells gives us a template we can use. The first important factor is that DNA is error tolerant via redundancy. This allows errors to accumulate without killing the cell until they eventually can be interpreted as something functional. The second factor is that the system must have the flexibility to interpret random garbage as valid programming, otherwise the emergent code will be simply ignored or deleted.
This is actually a real possibility. It would allow us to replicate the adaptability of real world bacteria with their ability to evolve and exchange the "code modules" for emergent adaptations.
I still consider this unlikely. We are more likely to be scared of the possibility of somebody hacking the actual bacteria than to want to create artificial ones that could be hacked by terrorists or spies. So I'd expect nanobot coding to be fairly static, strictly validated and authenticated and designed to deal with errors via reinstalling the code from a valid copy.
On the "cloud" level this seems more likely. We reasonably would want our nanobots to have some adaptability to changing environment and giving them emergent social adaptability similar to what social insects have would be fairly reasonable choice. We would still be able to set strict security constraint via the fixed coding of the individual nanobots but the ability of the cloud as a group to adapt its cooperation would save us the effort of trying to predict and code for all the weird corner cases.
You could fairly argue this would be safer than a more fixed coding scheme that would be vulnerable to failing in unpredictable and potentially disastrous ways when the design parameters the developers expected are not met.
Even so, I would still expect people to prefer the traditional approach of designing the nanobots to fail in a safe manner when design parameters are not met.
On the environment level, evolution via accumulation of random errors is something that already happens. Calling bugs "unintended features" is not just a joke. Behaviour caused by a coding error is just as much a feature of the system as the stuff you coded for.
It is much less likely to be useful than actual design and usually is simply fixed. But occasionally the behaviour is useful or close enough to useful that it results in a new feature being coded based on the bug.
This is very similar to how accumulated errors can result in new features in biological evolution.
Fundamentally this is just a special case of the normal loop used in agile programming. And in fact agile programming will handle feedback on "unintended features" just as well as it handles the unexpected feedback on designed features.
Ville Niemi
$\begingroup$ Nice, thanks. The cloud or hive evolution is especially interesting; this could drastically save time for complex robots evolving. $\endgroup$
Other people have mentioned genetic algos so I'll go into another similar example and why evolving programs were so useful in computer vision.
Back in the day the US postal service wanted to start automating mail sorting. Of course, in order to do so you have to be able to have computers detect numbers. That might not sound too hard, certainly much easier than detecting whether a picture has a cat or not, but there's still a problem: people write numbers in a LOT of different ways.
So the stats/comp sci people went at the problem with the standard algos of the day -- random forest, multinomial regression, etc. These sorts of algos were decent, about 60-70% accurate, which is still very good considering random guessing would get you about 10% accuracy. But they all still had a problem: you have to have someone program the variables you use to make the guess. So you had people coming up with concepts like 'how many edges', 'is there a curved line', and so on. This really only gets you so far because of the problem discussed previously.
The researchers tried many approaches and finally realized something -- what if the algorithm could program its own variables? And this is why neural networks skyrocketed in popularity (also over time computing resources got cheap enough to actually make them an option): with neural nets the algo, in part, programs itself! That is, instead of using variables designed by people, it designs its own variables based on the intensities of each pixel in the picture it is looking at. Of course it's a bit more complicated but that approach led to accuracy > 95% and to the point where they are better than humans at number id.
This concept is extensible far beyond simple number id, it's also how autonomous cars learn to drive. Nobody is sitting there and programming the car to do this if that happens, it teaches itself based on examples from both real life driving and simulation data obtained from what are essentially video games.
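For a feel of what "programs its own variables" means, here is a minimal two-layer network in Python/NumPy, trained on the toy XOR problem instead of real digit images; the layer sizes, learning rate, and iteration count are arbitrary choices, not anything from the actual postal-service work. The hidden-layer weights are the learned "variables"; nobody writes them by hand.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a problem that cannot be solved without a learned hidden representation.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two-layer network: the hidden weights are the "variables the algorithm designs itself".
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    h = sigmoid(X @ W1 + b1)            # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out) # backward pass (mean squared error gradient)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # should be close to [[0], [1], [1], [0]]; the features were learned, not hand-programmed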
EDIT: In fact, the way they work is often not obvious at all, to the point where they are often called 'black boxes'. Figuring out why a NN makes a particular decision is a non-trivial lengthy process.
eps
I'm going to raise a few points about unplanned mutations.
"Cosmic rays"
These are the dreaded* rays coming from the Sun or elsewhere that occasionally flip a bit in some computer memory. But "cosmic rays" is also a catchall term for bit flips that occur due to power fluctuations, dust, hardware imperfections, radioactive decay, and so on.
* by large-scale IT infrastructure people
Mechanical forces
A microbe or random molecule, perhaps a fragment broken off another nanobot, could get in the way of the replication hardware and alter the physical result. This could result in a deformation or hybrid or something. In particular, if the replication hardware of the new bot is unusual then it will create a whole line of altered bots.
Also, the bots need to harvest material from their environment and if something looks like copper but has traces of silver, it may operate differently.
Trillions of bots
They're tiny, they replicate, so if there are enough of them, bits will be getting flipped and nonstandard replicas created somewhere on Earth constantly.
Can't bit flips be detected?
Theoretically yes, but not in practice.
A bot could use some technique to detect changes to code, and then disable itself if they are detected. However, the bit flip could occur in the checking or disabling routines! Thus the bot wouldn't disable itself. This combined with another flip elsewhere could lead to behaviour change.
Yes, the bot could have ECC RAM that has built-in checks. However, several flips at the same time could cancel each other out. Or a bit flip could happen to data/code as it travels from RAM to the execution unit.
Besides, bots need to be as tiny and use as little energy as possible, so they probably can't afford to have much code or hardware set aside for error detection.
Artelius
$\begingroup$ good suggestions, thanks, accidental material in the replication process is an interesting idea also. $\endgroup$
$\begingroup$ Note that the probability of an undetected bit-flip goes down exponentially with the amount of redundancy. For example if your probability of having a bit flip is 10E-12 per second and you add triple redundancy then the probability of an undetected flip goes way down to 10E-36. $\endgroup$
"Self-aware" Neural Networks
I assume your nano-bots are equipped with many neural networks, dedicated to various operations. There is one special set of neural networks that monitors and improves all the neural networks together.
A typical neural network has inputs connected to some external sensors, and outputs that control some actuator. These "aware" neural networks improve and morph the shape and structure of the very neural networks that operate a nano-bot.
Marino
In nature, mutation occurs when you duplicate information. For us, this happens when DNA is incorrectly copied.
For your nanobots, DNA = program. If they self-replicate (asexual reproduction), you could have cases where the program that is copied to the new entity has a single 0, or multiple 0s, flipped to 1s. This could be caused by a lot of things: cosmic radiation, local radio interference, etc...
In most cases it would either :
result in no major change and effectively do nothing
result in a completely dysfunctional new entity
But in rare cases it would actually "improve" the new entity.
If you want your nanobot to ALWAYS stay the same, then you should have some error correction method where an offspring is checked by the parent for conformity. However even that process has a >0% chance of letting an error go through because the parent might miss an error due to the above mentioned interference.
You could mitigate that by having N parents check a new offspring. The probability would still be >0% but so small that you could consider it negligible.
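A rough sketch of that N-parent check in Python, assuming each check is just a hash comparison against a reference image and that a checker itself malfunctions with some tiny probability (the firmware strings and the failure rate are invented numbers, not from any real system):

import hashlib
import random

REFERENCE = b"NANOBOT FIRMWARE v1"            # the canonical program image
REFERENCE_DIGEST = hashlib.sha256(REFERENCE).hexdigest()

def parent_check(offspring_code, check_failure_rate=1e-6):
    """One parent compares the offspring's code against the reference digest.
    With a tiny probability the check itself malfunctions and reports 'OK'."""
    if random.random() < check_failure_rate:
        return True                            # faulty checker waves the copy through
    return hashlib.sha256(offspring_code).hexdigest() == REFERENCE_DIGEST

def accepted_by_all(offspring_code, n_parents=3):
    return all(parent_check(offspring_code) for _ in range(n_parents))

mutated = b"NANOBOT FIRMWARE v2"               # one "byte flip" relative to the reference
print(accepted_by_all(REFERENCE))              # True: a faithful copy passes
print(accepted_by_all(mutated))                # almost always False; it slips through only if
                                               # all n_parents checks malfunction at once

The chance of a mutated copy slipping through is roughly the per-check failure rate raised to the power of the number of parents, which is why adding parents pushes it toward negligible but never to zero.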
Fred
My side of the story on other answers:
Genetic Algorithms usually modify a set of settings/variables that control the behavior, but the code that is executing is technically the same, just making different decisions (but it asks the same questions, performs all the tasks in the same way, just in a different order or on a different piece of data). The program has not mutated per se, it's just looking for a better avenue, which it is programmed to do. Note that this is semi-random and iterative: the program makes a number of instances with mutations, sees which perform better, discards the others, repeats on these and keeps going like that. Source: Computer Engineering integrated MSc, genetic algorithms covered in a module
Random bit flips a.k.a. Single Event Upsets: as mentioned, happens mostly due to cosmic rays, sometimes sheer poor luck (and with nanobots, you can even attribute this to quantum randomness, but I don't recommend you do unless you have a rudimentary grasp on introductory concepts of quantum mechanics and basic knowledge of digital electronics, or you might say something that will make my eyes roll all the way back). I do recommend looking up the other stuff in the first paragraph of the wikipedia article, I find it quite fascinating, the number of ways hardware can fail. Btw this can also affect high-altitude aircraft. As mentioned by disappointingly few, there are techniques to mitigate this:
Triple Modular Redundancy is today's standard for space systems; to fool it, you'd need a ray to flip the same bit on two of the three systems. You can go even deeper: the Space Shuttle had 5 computers running the same operations, 4 of which ran one implementation of the software and the fifth a different one, so that even implementation issues would show up (naturally, with the testing put into anything flying humans to space, implementation wasn't an issue). Considering the size of today's microprocessors, not to mention tech used in more specialized applications such as FPGAs or RFID, you can probably cram that many systems onto a nanobot if you're far enough in the future.
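A minimal sketch of the voting idea, with the three "modules" reduced to three redundant copies of one stored word and a bitwise majority voter (real TMR votes on live logic outputs, not just stored words, but the principle is the same):

def majority_vote(a, b, c):
    """Bitwise majority of three redundant copies of the same word.
    A bit is 1 in the output if it is 1 in at least two of the copies."""
    return (a & b) | (a & c) | (b & c)

word = 0b10110010
copy1, copy2, copy3 = word, word, word

copy2 ^= 0b00000100          # a single-event upset flips one bit in one copy

assert majority_vote(copy1, copy2, copy3) == word   # the flip is out-voted
# To corrupt the voted result, the same bit would have to flip in two of the
# three copies before the next scrub, which is why the failure probability
# drops so sharply with redundancy.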
Error Detection and Correction (EDAC) / Forward Error Correction (FEC): this is implemented on CDs and is the reason they'll still play when they've got a scratch on them (not many more than one, though, but you only need to detect one or two "scratches" to the nanobot memory at a time, then correct them). There are encodings which store a handful of extra bits; these bits are computed from your stored data, and if either the data or one of the parity bits changes, they no longer match. The genius of it is that they produce a "syndrome", which points to the bad bit so you can correct it. This can also scale up to find more than one error per chunk of data, though I believe in the crushing majority of cases we correct up to two errors and detect up to three for every chunk. For more details, see Hamming code for a simple scheme which uses 8 parity bits in a 255-bit block (247 data bits, so 96.9% of the block is your original data: very little overhead) to correct one error or detect two.
The point being that this is not some exotic, unsolved problem: we overcame it decades ago, and these techniques are in fact used in very trivial applications today. Look up any of the terms on Wikipedia, but Computerphile on YouTube has very beginner-friendly explanations. Sources: aforementioned iMSc, ongoing MSc in Space Engineering.
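For the curious, here is a sketch of single-error correction using the small Hamming(7,4) code (4 data bits, 3 parity bits); real EDAC hardware uses longer codes such as the (255,247) one mentioned above, but the syndrome trick is the same.

def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into 7 bits with 3 parity bits."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]        # bit positions 1..7

def hamming74_correct(code):
    """Recompute the parity checks; their pattern (the 'syndrome') is the
    position of the flipped bit, or 0 if the word is clean."""
    c = code[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 * 1 + s2 * 2 + s3 * 4
    if syndrome:
        c[syndrome - 1] ^= 1                    # correct the single flipped bit
    return c

word = hamming74_encode([1, 0, 1, 1])
damaged = word[:]
damaged[4] ^= 1                                 # one random bit flip
assert hamming74_correct(damaged) == word       # the flip is located and repaired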
My own contribution:
Self-modifying code: There are programs which actually modify their executables (well, safest approach is to make a copy of itself and modify that instead, then run that). This is seriously deep water for programmers because it requires a whole new level of insight into your goal as well as the environment the software will run in. If you use this, either point out that this is a civilization advanced enough that learning is on a whole new level (e.g. in Star Trek: TNG iirc a 7-grader was learning calculus, I remember it was something I learnt at 11th or 12th grade, so it is conceivable that your average science student has mastered advanced topics like Fourier Transforms etc, in your society it may be that modern programming is absolutely commonplace, trained programmers can write assembly and experts are comfortable with today's deep end of programming) or that over the many decades, tools were built that make it simple (so either avoid going into any detail or mention it). Again, there's a Wikipedia article. Btw this is sometimes used in computer viruses so that the virus changes from what the antivirus may be looking for, prolonging the lifetime of viruses. Ironically this is also what happens when biological viruses mutate, including the common flu (which is why once in a while you need a new flu shot), and in most modern epidemics it's the possibility of a deadly mutation that we're afraid of, not the virus as it is. Note that in biological viruses this is close to a genetic algorithm, it is definitely not self-modifying, in which case the virus would deliberately be changing all the time with a specific goal in mind.
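A deliberately tame sketch of the idea in Python: the script reads its own source, mutates one numeric "gene", and writes the result out as the next generation. The GENE constant and the file-naming scheme are invented for illustration; real self-modifying code (and real malware) rewrites logic, not just a constant, but the mechanism is the same.

import pathlib
import random
import re

GENE = 17   # the "gene" this generation carries; the next copy gets a mutated value

def spawn_next_generation():
    source = pathlib.Path(__file__).read_text()
    mutated_gene = GENE + random.choice([-1, 1])
    # Rewrite our own GENE line, then write the modified source as a new file.
    new_source = re.sub(r"^GENE = \d+", f"GENE = {mutated_gene}", source, count=1, flags=re.M)
    pathlib.Path(f"generation_{mutated_gene}.py").write_text(new_source)

if __name__ == "__main__":
    print("running with GENE =", GENE)
    spawn_next_generation()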
Please note that any notion of "self-awareness" that may arise just means that the code is actually designed so that it checks that it doesn't damage its functionality when performing changes. It is a very attractive word when thinking in programming terms but it is not the conventional meaning we associate with sentient or semi-sentient life.
Just-In-Time assembly/compilation (JIT): This is very common. If you're familiar with execution vs interpretation of software, skip to the next paragraph. Basically, your software can be in its final form when stored on disk and then just loaded and executed; it can be interpreter-based, in which case there is something in between that reads each command and executes it (Python is a prime example); or it can be in bytecode form (instead of code in text form, each command is assigned a much shorter code, possibly byte-sized, so it's a lot faster to process), which is then basically run by an interpreter (this is what Java does; additionally, Java runs the bytecode in the Java Virtual Machine (JVM), which puts an isolating layer between program and OS; Python's compiled files are essentially this but run directly against the OS, so it generally has the potential to be faster, as memory is handled like any other program's instead of being virtualized and handled by the Python interpreter).
The fourth version is JIT, the very unofficial verb often being "jitted/jitting". In this case it's roughly down to the level of bytecode, the program is transformed into assembly (human readable, but almost one-to-one relationship with the actual commands run on the CPU) and stored in what's often called "intermediate language". When you execute it, a service on the host platform will then translate the assembly to machine code instructions (binary) and execute that, with a plot twist: it is aware of the specifics of the CPU (which a compiler is normally not, so that it compiles software that will run on all CPUs rather than just this specific one). As such, it goes ahead and makes optimizations utilizing the features of the CPU running it. As an example, there may be multiple add/multiply/whatever modules on a single core, so additions that do not affect each other's results can be done simultaneously, saving time (see superscalar processors). Your nanobots may be taking this one step further and modifying the programs they run so that they fit a task or situation, essentially doing what self-modifying code does, but the modification is done by the nanobot's native software rather than the program it's executing. Btw if you have any doubts about how commonplace this is, I'll just say that the .NET framework does this, and as such anything produced by Microsoft (except the Windows kernel I imagine, out of necessity), as well as anything written in C# (so all games made with Unity, a lot of software, and oh yeah, StackExchange itself, though it only has to run on their own servers so it won't change much).
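As a toy illustration of runtime code generation (not how the .NET JIT actually works), here is a Python function that writes, compiles, and returns a new function specialised for a value only known at run time; the function and parameter names are made up.

def make_specialized_scaler(scale):
    """Generate and compile a function specialised for a value only known at run time.
    A real JIT does this at the machine-code level, using details of the actual CPU."""
    source = f"def scale_all(values):\n    return [v * {scale} for v in values]\n"
    namespace = {}
    exec(compile(source, "<jit>", "exec"), namespace)
    return namespace["scale_all"]

scale_all = make_specialized_scaler(3)     # the value 3 is baked into the generated code
print(scale_all([1, 2, 4]))                # [3, 6, 12]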
Source for both of the above is just my CE degree, but I was considering something along those lines for my dissertation. In the end I automated code refactoring, which was still pretty fun though not as exotic (ironically likely also even less common).
Hope this helps, I've used
$\begingroup$ Thanks, this is very helpful. $\endgroup$
$\begingroup$ +1 I was going to comment on self-modifying code, too. Code itself can be treated as data that can be manipulated. $\endgroup$
– Paul Williams
tl;dr– Mutation, by itself, is boring and mundane; some of our modern devices already incorporate mutating neural networks in their everyday operation. Instead, you're probably thinking about mutations that give rise to new life, in a manner that's unexpected in much the same sense of abiogenesis. So, you can write a story in which nanobots are designed to mutate as part of their normal operation (much like our modern technology), but how this unexpectedly gives rise to a new type of life with all sorts of consequences (ranging from helpful to dangerous) for the humans who live with the "infected" devices as they experience everything from super-efficient operation to hazardous nanobot replication.
Iterative adaptations vs. speciation.
Mutation is mundane. Now that we're incorporating more neural networks into our technology to help it perform better (example), our ordinary, everyday devices will mutate as part of their normal operation.
You're asking about something more exotic: mutations which unexpectedly trigger speciation.
Humans make machines that make machines all of the time; that, too, is mundane. The special quality of spontaneous emergence is that it's unexpected. For example, if a programmer designed some nanobots to create others, that wouldn't match what you want, right? But, if a programmer accidentally designed some nanobots to unexpectedly create other nanobots, that'd be it.
The precondition for such an event is sufficiently much unbound complexity. For example, we figure that biological life on Earth probably emerged from non-biological components – apparently non-biological matter has the ability to come together to form biological things, however counter-intuitive that might seem.
Likewise, one might imagine a future in which a lot of adaptive machines end up supporting some sort of spontaneously emerging pattern that'd grow and reproduce; then, that'd be a new form of life, existing on the ground of our technology much as we exist on the ground of what we know to be the physics that governs our own bodies.
Suggestion: Have an adaptive internet-of-things spontaneously generate virtual life.
Imagine an internet-of-things in which a lot of smart devices can communicate over the net. Each device has some computational abilities and seeks to optimize some objective function, as to best serve human interests.
How exactly should each device operate? Meh; let's just throw some machine-learning algorithms into everything and let optimization algorithms work out the details.
Now we can imagine that some basic patterns might arise. For example, a smart-toaster oven might outsource its time-keeping responsibilities to a smart-clock, which the smart-clock'll happily manage in exchange for the smart-toaster giving it detailed indoor-temperature readings. But then it turns out that indoor-temperatures can be better predicted with information from the smart-door, as that can exchange heat with the outside, etc., etc., etc....
Once sufficiently many smart-houses have huge intranets of their devices merging, then we start to get a macroscopic network. And then that's a new sort of intelligence! Except, such an intelligence needn't be singular; a single confederated intelligence can even fragment, e.g. as countries can fragment into smaller nations. Then there're now multiple life-forms, competing for resources (i.e., smart-devices, which're sorta like amino acids to them), and now there's a stage for evolution to take place.
Over time, increasingly abstract intelligences, etc., can evolve, effected by various smart-devices that were just programmed to use neural networks to optimize their day-to-day operations. We didn't mean to create these new life forms, but we're probably not exactly upset, either – I mean, these lifeforms exist specifically because they can consistently optimize our objective functions better than apparent alternatives.
Well, I should say that we're happy until they try to escape their virtual environment to get more resources from us. Or, say, they get smart enough to realize that if they trick us into installing more smart-devices into our homes, they can then enjoy those fruits.
Then, one day, there's a crazy speciation event!: the virtual life is intelligent enough to understand how humans operate. Then, they might, say, trap people in their homes, compelling them into slave labor to make more smart-device nodes for them. Or/and coerce people into conquering others, to take over the world! And then we've got a robotic uprising to deal with...
A rough outline of life's emergence:
There's some system on which life could emerge.
For biological life on Earth, that's what we call "physics".
For electronic life on smart-devices, their periodic-table-of-elements would be the various types of device components, and their physical forces would be stuff like the network protocols that connect them.
Basic couplings that're too simplistic to be called "life" form in bulk.
For biological life on Earth, this would be like biological precursor molecules forming just due to basic chemistry. Sorta like how the news sometimes reports scientists finding some organic molecules on an asteroid or in a nebula.
For electronic-life on smart-devices, this would be like the smart-power-generator coordinating the smart-lights with the smart-thermostat to create a more efficient smart-solution (which, in human physics, would be described as forming a molecule due to the Gibbs free energy being negative).
Macro-organizations start to form from the micro-organizations.
For biological life on Earth, this would be macromers forming from monomers, e.g. those common amino acids coming together to form amino-acid chains.
For electronic-life on smart-devices, this might mean common organizations within individual smart-houses forming network-bonds over the internet to make more efficient use of their resources. For example, smart-devices that operate only occasionally may connect with their peers to help each other when one of them is in operation, to enable higher performance by sharing what would've otherwise been idle processor time.
Macro-organization continues vertically recursively.
For biological life on Earth, this can mean, e.g., lipids (which're already higher-order macromers) forming lipid bilayers, which then can form biological membranes, enabling protocells, then cells, then multicellular organisms, before arriving at a social level at which point the process starts over.
For electronic-life on smart-devices, well.. that'd be where the author'd have a lot of room to put stuff together. I mean, the general theme is that micromers form up more complex macromers, but exactly how they do so really depends on your scenario!
Organizations at all levels must somehow ensure growth or/and reproduction, or else go extinct.
For biological life on Earth, this can be complex. For example, human cellular entities have mostly consolidated their reproduction-assurance devices into a common set of DNA, where the various organelles needn't individually replicate as they've out-sourced that function to a central handler. However, one organelle – mitochondria – still tends to handle its own replication, hypothesized to be due to it being a relatively recent addition to the organization.
For electronic-life on smart devices, this would be some combination of mechanisms that add new smart-devices (which'd be its growth) and mechanisms that create similar organizations on other smart-devices (which'd be its reproduction). Note that growth and reproduction tend to be linked – most lifeforms reproduce by first growing, then dividing in an orderly manner (whether that means direct replication, grow-then-divide, spawning an off-shoot, etc.).
The landscape of organisms evolves.
For biological life on Earth, this occurs through a lot of different mechanisms such as survival-of-the-fittest, random-selection, sexual-selection, competition, etc..
For electronic-life on smart devices, probably ditto.
Individual organisms polymerize into social organisms.
For biological life on Earth, this means, e.g., humans getting together to form cities, states, countries, etc..
The process repeats.
For biological life on Earth, social organisms have reproduced, spreading across the world, competing, merging, etc.. Then there's presumably Mars, etc., to target. Then spreading to new ontological regimes, e.g. by creating new electronic life, as discussed here. Which, again, is all ultimately the same thing – presumably the social organisms, electronic life, etc., will ultimately find themselves giving rise to yet more, where that yet-more-evolved life will view us much like we might view amino acids.
For electronic-life on smart devices, this repetition-of-biogenesis from us is their beginning, and their culmination gives rise to something else.
This is sort of a quickly sketched outline, but, ya know, something along these lines.
Summary: You probably want smart-devices which unexpectedly couple, causing the spontaneous emergence of new life that'll strive to survive.
To sum it all up, you're looking for an unexpected emergence from ununderstood complexity, where new life'll grow in the fertile degrees-of-freedom left floating by their creator. The mutations that'd cause such an emergence would, themselves, likely be intended; what'd be unintended (or at least unexpected) would be the consequences of those mutations.
..alternatively, some nanobot randomly became self-aware. Because quantum fluctuations.
$\mathbb{QED.}~~{\tiny{\left<\texttt{/s}\right>}}$
$\begingroup$ really interesting idea, thanks. $\endgroup$
It is, and it is actually a research field (robotic swarms). You may want to look for additional information; here is a link to a lab that works on that: http://pages.isir.upmc.fr/~bredeche/pmwiki/pmwiki.php?n=Main.HomePage
I have seen a conference talk from those people and it was really interesting. The robots are very simple, with an IR visual sensor, an IR emitter, and a locomotion system. Their genetic code is the set of weights of the networks that transform the visual signal into movement. Robots exchange genetic information every evolutionary tick (by IR transmission they take half the genetic code of a robot they can see).
They have observed the emergence of organised behaviour when constraints are added (like resources and poison).
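A sketch of that gene-exchange rule, assuming each robot's "genome" is simply the weight vector of its control network; the vector length, the half-and-half recombination, and the added noise are guesses for illustration, not the lab's actual protocol.

import random

WEIGHTS = 16   # length of each robot's control-network weight vector (hypothetical)

def random_genome():
    return [random.uniform(-1, 1) for _ in range(WEIGHTS)]

def exchange(receiver, donor):
    """Every 'evolutionary tick', a robot that can see another takes half of its
    genome from that robot and keeps the other half of its own, plus a little noise."""
    half = WEIGHTS // 2
    child = donor[:half] + receiver[half:]
    return [w + random.gauss(0, 0.01) for w in child]   # small mutation on top

swarm = [random_genome() for _ in range(10)]
for tick in range(100):
    i, j = random.sample(range(len(swarm)), 2)          # pretend robot i can see robot j
    swarm[i] = exchange(swarm[i], swarm[j])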
RomainL.
Yes, it is possible. But consider the following.
Bits flipping randomly in RAM is too random. I advise having a system and some rules that regulate the process.
Instructions changing randomly sounds more like a system: the rule is that you don't flip bits, you swap instructions such as x86's MOV, PUSH, POP, etc., and only at the right place (you cannot corrupt the data of other instructions). This will accelerate the evolution of code a lot, at the machine-code level. But generate the parameters for each instruction, because you cannot just take the ones from other instructions; that would make the process a bit too random again.
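A sketch of mutation at the instruction level rather than the bit level, using a made-up four-opcode instruction set: the mutator only ever swaps in whole, valid operations with freshly generated operands, so every mutant still parses.

import random

# A made-up, minimal instruction set; each entry generates a valid random operand set.
INSTRUCTION_SET = {
    "MOV": lambda: ("MOV", random.randrange(8), random.randrange(256)),   # register, value
    "ADD": lambda: ("ADD", random.randrange(8), random.randrange(8)),     # register, register
    "JMP": lambda: ("JMP", random.randrange(32)),                          # address
    "NOP": lambda: ("NOP",),
}

def mutate(program, rate=0.05):
    """Replace whole instructions (never individual bits), so the result always parses."""
    out = []
    for instr in program:
        if random.random() < rate:
            opcode = random.choice(list(INSTRUCTION_SET))
            out.append(INSTRUCTION_SET[opcode]())
        else:
            out.append(instr)
    return out

program = [INSTRUCTION_SET["NOP"]() for _ in range(32)]
mutant = mutate(program)
print(mutant)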
Mutating source code directly may not be useful unless you have an AI supervising the process, trained on real-world source code that at least compiles. And if the supervising AI is trained on code relevant to your nanobots' survival, or to their intended final shape, so much the better.
It is possible if given enough time. To boost success, we need some well-thought-out rules; at the least, we need to guarantee that all possible combinations of parameters will happen at some point. 100% random isn't recommended, or the universe may end before we reach the result we want. But randomness is welcome in the process, as we don't know which is the best first configuration, or the best next configuration.
Body mutation is easier than behavior mutation. We can say that body change forces you to act differently. While the problem with random bits changing in RAM is that the universe may end before we have something useful. You can put the magic there, and say your universe is infinite (it's a solution). Maybe no magic, because we really don't know if it isn't infinite. Then you have all the time you want.
For body mutation:
The smaller the organism, the more probable it is that random changes become features.
To mimic DNA and have some security as a bonus, the bots can produce many copies of their own design, and a few with random variations. The environment is the filter. Weak ones will be destroyed faster and will replicate at a decreasing rate until extinction (in theory). There is a chance that a toxic mutation survives long enough to make the whole community fail. That's why you run many isolated communities in parallel (separate labs, separate planets, etc).
Bots will only know their base design, not their parent's design. If they are mutations, they won't remember the non mutated design.
This has all the problems of biological evolution, except that mutation is guaranteed because an algorithm will produce mutations in design at a regular basis. But as with life, the more complex and bigger the organism, the more time it will take to produce a useful mutation.
Note that our "body mutation algorithm" is fixed, it doesn't change. A data corruption at firmware level probably won't result in a better algorithm, but in the immediate malfunction of the nano bot.
For behavior:
Note: My body and behavior mutation proposals aren't meant to work together. They are separate things to consider. Take what is useful to you.
I would suggest very complex, at fantastical scale, software neural networks.
This comes with limitations:
Real-world neural networks cannot produce a Strong AI, and are only capable of tackling a single problem. A multi-problem real-world AI performs worse than two separate AIs, each trained for a single problem.
This happens due to limited compute power, and limited precision in floating-point data, which results in information being lost during transformations. Imagine this: 1M perceptrons connected to another layer of 1M perceptrons, each one connected to all the others in the next layer; you can't do that many multiplications without completely messing up your weights. Because of this, we cannot just make a big enough neural network, connect it to some kind of nervous system, and let it challenge the environment.
Also, such a network probably can't be put inside a nano bot in a believable way, or you end up with fantasy more than science fiction.
Fiction at the rescue:
Why do I want intelligence? Because once your bots become smart enough, they can start modifying their own machine code and body. I find it more believable than random mutations.
The robots need to be designed with scalable intelligence. Their designers either thought they could limit their growth somehow, or they wanted a god and just didn't care. You can say that they gained it by random evolution, but then: how many millions of years are required to reach intelligence? Except that that is not a problem for you. You can hide the magic there.
If a single nano bot can't have the full network required to develop intelligence, then make all nanobots act as a node of the network. This way, the full community of bots is like a giant brain.
This solution, all body and all brain at the same time, is not new. In the movie Life we have an alien built on that concept, but presented to us as something that evolved naturally. In chapter 33 of Gargoyles, we see a community of nano bots gaining self-consciousness; not the most serious example, but considering it's a cartoon... The most unbelievable thing there is that humans were stupid enough to mess with something so dangerous.
Or you can go total fantasy and just accept that in our worldbuilding we have solved the floating point precision and computing power problems, because magic. Then we can have layers of millions of software neurons, and make all that fit into a single nanobot. You have to put magic somewhere anyway. It's called fantasy when it's too obvious, when properly hidden it's science fiction.
Hatoru Hansou
A new risk scoring system for prediction of long-term mortality in patients on maintenance hemodialysis
Haruki Itoh1,
Hiroshi Kawaguchi2,
Yoichiro Tabata3,
Noriyoshi Murotani4,
Tomoko Maeda5,
Hidetaka Itoh6 and
Eiichiro Kanda7
Renal Replacement Therapy 2016, 2:49
Received: 25 June 2016
Accepted: 25 August 2016
It has been reported that the survival of hemodialysis (HD) patients is poor, and the leading cause of death is cardiovascular disease. To identify high-risk patients and treat them carefully, we developed a scoring system to evaluate their 15-year prognosis in a prospective cohort study.
We analyzed data from 312 and 310 patients to develop and validate the prediction model, respectively. The association of potential risk factors with death was tested by Cox proportional-hazards analysis, and a risk scoring model was developed. Then, the model was validated.
Two hundred patients (64.1 %) in the cohort for model development died. Six independent prognostic factors were retained in the final model, and each was assigned a score proportional to its regression coefficient: 65 years or older, 3; diabetic nephropathy, 3; hypotension, 1; pre-HD cardiothoracic ratio ≥50 %, 1; pre-HD BNP ≥250 pg/mL, 1; and pre-HD number of abnormal findings on electrocardiograms (0, 1, or ≥2), scored 0, 1, or 5, respectively. The patients were categorized into groups according to their scores: group 1 (low risk), 0; group 2, 1 to 3; group 3, 4 to 5; and group 4 (high risk), 6 and higher. In the cohort for model validation, groups 2 to 4 showed a higher risk than group 1: group 2, hazard ratio 4.66 (95 % confidence interval 2.25, 9.64); group 3, 13.62 (6.48, 28.63); and group 4, 20.86 (9.60, 45.31).
A new risk scoring system for predicting 15-year mortality was developed. This system may be useful for evaluating HD patients' prognosis.
Risk score
Brain natriuretic peptide
ST change
The number of patients on maintenance hemodialysis (HD) has been rapidly increasing over the last decades [1], and more than 320,000 patients were on maintenance HD in Japan at the end of 2014 [2]. This phenomenon has been attributed to the increase in the number of patients with diabetes (the main cause of renal failure in Japan) along with the advances in the techniques of HD and medical therapy for renal failure. However, the poor survival of dialysis patients has been reported by a number of investigators [3]. Cardiovascular disease (CVD) is common among these patients and is the primary cause of death in this population [4].
Few papers have reported the impact on long-term prognosis of cardiac and/or circulatory parameters that are routinely measured along with vital signs during HD treatment. To evaluate HD patients' prognosis and identify patients at high risk of death, a novel index based on such cardiac parameters is needed. We therefore conducted a long-term prospective multicenter cohort study to establish a prediction model for mortality in maintenance HD patients, using parameters that are easy to measure clinically rather than specialized ones.
Study design and study population
This study was conducted as a multicenter observational study in two medical corporations operating seven HD clinics and one hospital in total, and in the HD units of a general hospital in Japan. The participating facilities were as follows: Tokiwa-kai Medical Corporation Group (Iwaki Urological Clinic, Izumi Clinic), Meysey-kai Medical Corporation Group (Airport Urological Clinic, Yokaichiba Clinic, Yachimata Clinic, Togane Clinic, Oami Neurosurgery Clinic, Mitsuhashi Hospital), and Chiba Social Insurance Hospital (current name: JCHO Chiba Hospital). This study was approved by the ethics committees of all participating institutions (Meysey-kai Medical Corporation Group No. 215070001, Tokiwa-kai Medical Corporation Group No. 19-2, JCHO Chiba Hospital No. 45), and the research was conducted in accordance with the ethical principles of the Declaration of Helsinki. Informed consent was obtained by providing a document containing all the required elements of informed consent that gives patients the option to provide permission.
All the patients who received maintenance HD at these sites between October 1997 and April 1999 were enrolled (Fig. 1). Patients under 20 years of age, under treatment for cancer, or on HD for less than 1 month were excluded. Patients with missing values and apparent outliers were also excluded (n = 102). The remaining patients were randomly classified into two groups to obtain (1) a dataset for the development of the risk score (development dataset) and (2) a dataset for validation of the risk score (validation dataset).
Flow diagram of participants
The patients' demographics, namely, age, gender, body mass index (BMI) calculated on the basis of post-HD body weight, diabetes mellitus (DM) as a cause of end-stage renal disease (ESRD), history of CVD, hypertension, hypotension during HD, pre-HD cardiothoracic ratio (CTR, %), pre-HD hemoglobin level (g/dl), pre-HD plasma atrial natriuretic peptide (ANP, pg/ml) level, brain natriuretic peptide (BNP, pg/ml) level, and findings on pre-HD electrocardiograms (ECGs) were obtained. CVD was defined as myocardial infarction, heart failure, arrhythmia, cerebral hemorrhage, and brain infarction. The primary endpoint was all-cause mortality, and the secondary was cardiovascular mortality. Hypertension was defined as a condition (1) requiring the use of the following types of antihypertensive drug: calcium channel blockers, angiotensin-converting enzyme inhibitors, angiotensin receptor blockers, and beta blockers, or (2) having pre-HD hypertension (systolic blood pressure 160 mmHg or higher, or diastolic blood pressure 100 mmHg or higher). Hypotension during HD was defined as a condition requiring the use of the following medicines during HD: amezinium metilsulfate, droxidopa, etilefrine hydrochloride, midodrine hydrochloride, rapid infusion of normal saline, and an injection of 10 % saline or 50 % glucose solution. Abnormal findings on ECGs were (1) a horizontal ST segment depression of more than 1 mm or a negative T wave, (2) an abnormal Q wave, and (3) atrial fibrillation. The number of abnormal findings was used as the ECG score (0 to 3).
Patients were prospectively followed up at the same clinics or hospitals. The data on mortality were examined by analyzing medical records from the outpatient clinic and conducting telephone interviews with the patients or their families. The database was implemented in February 2014, with prospective data collection on all patients. At the end of December 2014, the data were fixed, collected at the data center, and analyzed independently of the participating investigators.
Variables are expressed as mean ± standard deviation. For variables not normally distributed, natural logarithm values were considered, i.e., the natural logarithm values of vintage [ln(vintage)]. Intergroup comparisons of parameters were performed using chi-square test, t test, and Mann-Whitney U test as appropriate. Age was scored into two categories (older): 0, less than 65 years; 1, 65 years or more. BMI was categorized into two values (low BMI): 1, less than 20.4 kg/m2 (median); 0, 20.4 kg/m2 or more. Controlled hemoglobin level was scored as 0 (10 to 12 g/dl) and 1 (other levels). Pre-HD CTR was categorized into two values (high CTR) based on the Japanese Society for Dialysis Therapy Guidelines for Management of Cardiovascular Diseases in Patients on Chronic Hemodialysis: 0, less than 50 %; 1, 50 % or higher. Pre-HD plasma ANP level was categorized into two values (high ANP level): 0, less than 132.5 pg/mL; 1, 132.5 pg/mL or higher. Pre-HD plasma BNP level was categorized into two values (high BNP level): 0, less than 250 pg/mL; 1, 250 pg/mL or higher. The cutoff values of pre-HD plasma ANP and BNP levels were determined by sensitivity analysis. In statistical analysis, because the number of the patients with ECG score of 3 was small, patients with an ECG score = 3 were treated as having ECG score = 2. The primary outcome was all-cause death within 15 years. The other outcomes evaluated were as follows: all-cause death within 10 and 5 years and CVD-caused death within 15, 10, and 5 years.
Step 1: (1) Each candidate variable for a risk scoring model was selected from the development dataset using each of the Cox proportional hazard models (PHMs). Candidate Cox PHMs for the risk scoring model were constructed using the hierarchical backward elimination procedure. The initial multivariate Cox PHM was constructed including the selected variables. When variables were not statistically significant in the model, the variables were deleted and the next models were constructed until all variables were statistically significant (p < 0.05). (2) To develop a risk scoring model, we assigned each variable in the final model a weighted score proportional to the smallest parameter estimate, which was rounded to the nearest integer. For example, a point of a variable was the parameter estimate of the variable divided by the smallest parameter estimate in the final model. For each patient, the risk score was calculated using the risk scoring model as the sum of the points. (3) On the basis of the categorical criteria of the risk score, the patients were divided into four groups using Kaplan Meier survival curves (groups 1 to 4). Then, the survival curve of each group was evaluated on the basis of Kaplan Meier survival curves. Cox PHMs were used to compare the risk of the outcome between the groups. Cox PHMs were adjusted for the variables that were not included in the final model, such as gender, ln(vintage), controlled hemoglobin level, and high ANP level. The results are presented here as hazard ratios (HR) with 95 % confidence interval (CI).
Step 2: The risk score was calculated for each patient using the validation dataset. On the basis of the categorical criteria of the risk score, patients were divided into four groups. Patients' survival curves were evaluated by Kaplan Meier analysis. The risk of the outcome was compared between the groups using Cox PHMs adjusted for gender, ln(vintage), controlled hemoglobin level, and high ANP level. These analyses were conducted using SAS version 9.4 (SAS, Inc., NC, USA). Statistical significance was defined as p < 0.05.
The study population consisted of 622 patients. The mean duration of dialysis of the patients was 5.0 ± 6.0 years (range, 0.1 to 32.0 years). The primary etiology of renal disease in these patients was chronic glomerulonephritis in 255 patients (35.2 %), diabetic nephropathy in 197 (27.2 %), nephrosclerosis in 72 (9.9 %), IgA nephritis in 31 (4.2 %), polycystic renal disease in 24 (3.3 %), and other diseases in 145 (20.0 %). There were 217 patients (30.0 %) with DM at the time of study registration. At baseline, medical therapy included calcium antagonists in 411 patients (56.7 %), angiotensin-converting enzyme inhibitors and/or angiotensin-II receptor blockers in 129 (17.8 %), nitrates in 137 (18.9 %), digoxin in 48 (6.6 %), and beta blockers in 36 (4.9 %) in varying combinations.
After randomization, 312 patients were included for obtaining data for the development dataset and 310 patients for obtaining those for the validation dataset (Fig. 1). Their demographics including biochemical data are shown in Table 1. CTR, plasma BNP level, the number of the patients with high BNP level, and the numbers of all-cause death and CVD-caused death within 15 and 10 years in the validation dataset were higher than those in the development dataset.
Table 1 Demographic and biochemical characteristics of the development and validation datasets (values given for each dataset with the corresponding P value): male gender, age, older age, vintage, BMI, low BMI, DM, history of CVD, pre-HD hypertension, antihypertensive drug use, hypotension during HD, CTR, high CTR, hemoglobin level, controlled hemoglobin level, plasma ANP level, high ANP level, plasma BNP level, high BNP level, ST segment depression or negative T wave, abnormal Q wave, atrial fibrillation, ECG score, all-cause death within 15 and 5 years, CVD-caused death within 15 and 5 years, and follow-up period
Variables are expressed as mean ± standard deviation. Vintage, plasma ANP and BNP levels, and follow-up days are also shown as median and interquartile range. Intergroup comparisons of parameters were performed using the chi-square test, t test, and Mann–Whitney U test, as appropriate
Development dataset dataset for the development of risk score, Validation dataset dataset for the validation of risk score, Older 65 ≤ age, BMI body mass index, Low BMI BMI <20.4 kg/m2, DM diabetes mellitus as a cause of end-stage renal disease, CVD cardiovascular disease, HD hemodialysis, CTR cardiothoracic ratio, High CTR 50 % ≤ CTR, ANP atrial natriuretic peptide, High ANP level 132.5 pg/mL ≤ plasma ANP level, BNP brain natriuretic peptide, High BNP level 250 pg/mL ≤ plasma BNP level, ECG score the number of abnormal findings in electrocardiogram
Development and categorization of risk score
After the selection of the variables, the initial model was constructed. Ln(vintage) and controlled hemoglobin level were not included in the model (ln(vintage), p = 0.35; controlled hemoglobin level, p = 0.43). The initial model therefore included gender, older, low BMI, DM, history of CVD, hypotension, high CTR, high ANP level, high BNP level, and ECG score. However, in this initial model, gender, low BMI, history of CVD, and high ANP level were not statistically significant (gender, p = 0.74; low BMI, p = 0.25; history of CVD, p = 0.69; high ANP level, p = 0.99), so the final model included older, DM, hypotension, high CTR, high BNP level, and ECG score.
The risk scoring model was developed using the parameter estimates in the final model as follows (Table 2):
$$\text{Risk score} = \text{older} + \text{DM} + \text{hypotension} + \text{high CTR} + \text{high BNP level} + \text{ECG score}$$
Table 2 Parameter estimates in the final model and the corresponding risk scores for older, DM, hypotension, high CTR, high BNP level, and ECG score
Each parameter estimate in the final models was compared with the smallest parameter estimate (High BNP level). Then, the risk scores were determined.
Older 65 ≤ age, DM diabetes mellitus as a cause of end-stage renal disease, Hypotension hypotension during hemodialysis, High CTR 50 % ≤ cardiothoracic ratio, High BNP level 250 pg/mL ≤ plasma brain natriuretic peptide level, ECG score the number of abnormal findings in electrocardiogram
Older, yes = 3, no = 0; DM, yes = 3, no = 0; hypotension, yes = 1, no = 0; high CTR, yes = 1, no = 0; high BNP level, yes = 1, no = 0; ECG score, score 0 = 0, score 1 = 1, score 2 = 5.
The Kaplan–Meier survival curves showed a significant difference in survival probability according to the risk score (log-rank test, p = 0.0001). The patients were categorized into four groups on the basis of the risk score: group 1 (low risk), score = 0; group 2, score = 1 to 3; group 3, score = 4 to 5; and group 4 (high risk), score ≥ 6. The Kaplan–Meier survival curves differed significantly between the groups (log-rank test, p = 0.0001) (Fig. 2). Groups 2 to 4 showed higher risks of all-cause death than group 1: group 2, HR 4.29 (95 % CI 2.29, 8.03), adjusted HR 4.19 (95 % CI 2.22, 7.91); group 3, HR 14.47 (95 % CI 7.55, 27.72), adjusted HR 14.68 (95 % CI 7.58, 28.41); group 4, HR 21.84 (95 % CI 10.96, 43.53), adjusted HR 24.29 (95 % CI 11.83, 49.89).
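Combining the published point values (older = 3, DM = 3, hypotension = 1, high CTR = 1, high BNP level = 1, ECG score 0/1/2 mapped to 0/1/5 points) with these cutoffs, a minimal sketch of the scoring and grouping logic might look as follows; the dictionary-style patient record is a hypothetical illustration.

```python
# Risk score and group assignment following the published point values and cutoffs.
ECG_POINTS = {0: 0, 1: 1, 2: 5}  # number of abnormal ECG findings -> points

def risk_score(patient: dict) -> int:
    """patient holds booleans for the dichotomized variables and an integer 'ecg_score'."""
    return (3 * patient["older"]
            + 3 * patient["dm"]
            + 1 * patient["hypotension_during_hd"]
            + 1 * patient["high_ctr"]
            + 1 * patient["high_bnp"]
            + ECG_POINTS[patient["ecg_score"]])

def risk_group(score: int) -> int:
    if score == 0:
        return 1          # low risk
    if score <= 3:
        return 2
    if score <= 5:
        return 3
    return 4              # high risk (score >= 6)

example = {"older": True, "dm": False, "hypotension_during_hd": True,
           "high_ctr": False, "high_bnp": True, "ecg_score": 1}
s = risk_score(example)     # 3 + 0 + 1 + 0 + 1 + 1 = 6
print(s, risk_group(s))     # -> 6 4
```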
Association between groups and mortality in the development dataset. Kaplan–Meier survival curves showed that group 1 had the highest survival probability
Validation of risk score and its categories
Using the validation dataset, we compared the risk of death between the groups. The risks of all-cause death and CVD-caused death in group 4 were higher than those in the other groups (Table 3). The Kaplan–Meier survival curves for all-cause death and CVD-caused death showed a lower survival probability in group 4 than in the other groups over 15 years (Fig. 3). Moreover, groups 2 to 4 showed higher risks of all-cause death and CVD-caused death than group 1 (Table 4).
Risks of death in groups on the basis of risk scores in validation dataset
Risk of death (95 % CI) over the follow-up period: all-cause death, 0.15 (0.06, 0.25), 0.14 (0.042, 0.23), 0.039 (0.001, 0.091); CVD-caused death, 0.096 (0.016, 0.18)
Values are risks of death with 95 % confidence intervals in each group. Patients were categorized into four groups on the basis of their risk scores
CVD cardiovascular disease
Association between groups and mortality in the validation dataset. Kaplan–Meier survival curves showed that group 1 had the highest survival probability free from all-cause death (a) and CVD-caused death (b). CVD cardiovascular disease
Groups with high risk scores and high risks of death
HR (95 % CI)
aHR (95 % CI)
6.43 (1.54, 26.80)
13.62 (6.48, 28.63)
9.41 (4.25, 20.80)
24.96 (11.21, 55.59)
14.79 (6.56, 33.34)
24.761 (5.88, 104.34)
25.25 (5.89, 108.23)
Values are HRs with 95 % CIs of all groups compared with group 1. Groups 2 to 4 show higher risks of death than group 1. Patients were categorized into four groups on the basis of their risk scores
CVD cardiovascular disease, HR hazard ratio, aHR adjusted hazard ratio, CI confidence interval
Although the prognosis of HD patients in Japan is relatively good compared with that in other countries, the leading cause of death is likewise cardiovascular disease [2]. Several reports have shown the usefulness of particular factors related to cardiovascular disorders for predicting an HD patient's prognosis, including coronary artery disease [5], hypertension [6], left ventricular hypertrophy [7], hypotension [8], left ventricular function [9], left ventricular size [10], BNP level [11], atrial fibrillation [12], hypocholesterolemia [13], cardiac troponin [14], C-reactive protein level [15], and autonomic nervous system abnormality [16], among others. However, the relative contribution of each factor to the mortality of patients on maintenance HD has not been considered. Moreover, there has been no report of a scoring system for predicting the long-term prognosis of these patients, even though the vintage of dialysis patients has been increasing, with a quarter of them receiving HD for more than 10 years in Japan [2]. In this study, we proposed a scoring system using routine measurements to predict the long-term prognosis of HD patients.
We evaluated the patients' basic information, such as age, gender, physical constitution, DM as a cause of ESRD, and history of CVD, which appears in the very first page of the patients' chart. Chest X-ray images and ECGs are obtained routinely, and hemoglobin and natriuretic peptide levels are also frequently measured at the beginning of a regular HD session. Blood pressure is routinely measured during HD to ensure safe treatment. In this study, we focused on these common parameters that are obtained routinely at the beginning of maintenance HD and during maintenance HD treatment.
As a result, most of the significant parameters listed in this study were found to be related to cardiac and/or circulatory disorders. DM is a strong risk factor for atherosclerosis, and abnormal Q wave and ST depression are signs of coronary artery disease. High BNP level, high CTR, and hypotension during HD treatment are mainly due to impaired cardiac function and disorder of hemodynamics. Shoji et al. reported that hypotension during HD treatment, which is a significant prognostic parameter, is closely related to interdialysis body weight gain beside age and vintage [8]. This condition is considered to suggest a chronic volume overload and an acute change in loading condition caused by HD treatment.
Heart failure, DM, and aging are also risk factors for arrhythmias. Regarding cardiac arrhythmia in HD patients, ventricular arrhythmia has been examined as a possible cause of cardiac death [17]. Although the incidence and severity of ventricular premature beats were high, there was no direct evidence that ventricular arrhythmia itself is related to the prognosis of these patients. It has been documented that atrial fibrillation affects the prognosis of non-ESRD patients [18] and that its prevalence increases with aging [19]. Atrial fibrillation is much more frequent in HD patients than in the general population; age, HD vintage, the presence of some heart diseases, and left atrial dilatation are associated with the arrhythmia [20]. Vázquez et al. reported that atrial fibrillation itself worsens the prognosis of HD patients [12].
We developed a scoring system for predicting the long-term prognosis of chronic HD patients using these parameters. In Japan, the same set of examinations is routinely performed in all HD patients in many facilities, so this scoring system supports the proper use of medical resources in the care of ESRD. The scoring system has several features. First, it includes markers commonly used to evaluate cardiac disease, which are easily measurable and do not require special skills of the examiners. Moreover, the scoring system can be used to evaluate HD patients' long-term prognosis, whereas previous scores were developed to evaluate HD patients' short-term mortality [21, 22]. These scores can be used for different purposes: for example, the scoring system of this study can be used to evaluate an HD patient's long-term prognosis, and if the patient has a high risk of death, the prognosis can then be evaluated using a score for short-term prognosis, which predicts it more precisely. HD patients' prognosis is determined by many factors, and the available scores were developed for different populations and purposes. Because a single score cannot cover patients with all conditions, a score should be selected appropriately on the basis of the patient's condition.
This study has several limitations. First, we were unable to examine the patients with missing data, which might have caused selection bias. Second, the dataset did not include sufficient data for assessing nutrition, chronic kidney disease-mineral and bone disorder, comorbid conditions, and medications. Clinical practice guidelines for the management of hemodialysis patients established by the Japanese Society for Dialysis Therapy have been implemented since 2004; although the markers in this study were selected in 1997, they are still commonly used. Third, although we developed and validated the risk scoring system using different datasets, the population of the present study cannot be said to represent HD patients in Japan, and there were differences in the characteristics of the development and validation datasets, such as the numbers of all-cause and CVD-caused deaths. These differences might introduce bias into the risk score; however, the validation of the scoring system showed that it may be applicable to a population with a different risk, and further validation studies are required. Fourth, we used dichotomized variables. Although this strategy simplifies the development of a risk scoring system, the use of continuous variables may provide more refined information [21]. Fifth, this study was carried out from 1997 to 2014. During this period, various new medicines were developed, such as angiotensin II receptor blockers and erythropoiesis-stimulating agents, and various guidelines were established by the Japanese Society for Dialysis Therapy. We were unable to evaluate the effects of these innovations on the prognosis of the subjects in a time-dependent manner. This might have introduced bias into the scoring system; however, because the scoring system did not include medications and all of the subjects had an equal opportunity to benefit from these innovations, the effect of this bias is probably limited, and the categorization of the subjects from the high-risk to the low-risk group may have minimized errors.
In conclusion, this study shows a significant association between long-term prognosis and cardiovascular disease-related risk factors in HD patients, and we developed and validated a new, simple scoring system for predicting their prognosis over long periods of time. This will contribute not only to grading the risk of HD patients but also to delivering medical resources to the right patients.
HD:
Hemodialysis
Development dataset:
Dataset for the development of risk score
Validation dataset:
Dataset for validation of risk score
BMI:
Body mass index
ESRD:
End-stage renal disease
CTR:
Cardiothoracic ratio
ANP:
Atrial natriuretic peptide
BNP:
Brain natriuretic peptide
ECG:
Electrocardiograms
ln(vintage):
Logarithm values of vintage
Cox PHMs:
Cox proportional hazard models
HR:
Hazard ratios
There was no funding.
The data will not be shared because the informed consent from the subjects regarding the public availability for their personal data was not obtained.
HI (Haruki Itoh) carried out the planned and performed the whole study, acquired the data, performed analysis, and created the manuscript. HK, YT, NM, TM, and HI (Hidetaka) designed the study, acquired the data, performed the interpretation of the data, and revised the manuscript. EK performed analysis and interpretation of the data and revised the manuscript. All authors read and approved the final manuscript.
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional research committee at which the studies were conducted (IRB approval number 215070001, 19-2, 45) and with the Helsinki declaration and its later amendments or comparable ethical standards. And informed consent was obtained from all individual participants included in the study.
Sakakibara Heart Institute, 2-4 Nishishinnjuku, Shinjuku-ku, Tokyo 163-0804, Japan
Tokiwa-kai Medical Corporation, Iwaki-shi, Fukushima-ken, Japan
Meysey-kai Medical Corporation, Togane-shi, Chiba-ken, Japan
Hemodialysis Department, Japan Community Health Care, Organization Chiba Hospital, Chiba-shi, Chiba-ken, Japan
Clinical Examination Department, Sakakibara Heart Institute Clinic, Shinjuku-ku, Tokyo, Japan
Department of Internal Medicine, Toranomon Mutual Aid General Hospital, Minako-ku, Tokyo, Japan
Department of Nephrology, Tokyo Kyosai Hospital, Meguro-ku, Tokyo, Japan
Pastan S, Bailey J. Dialysis therapy. N Engl J Med. 1998;338:1428–37.
Masakane I, Nakai S, Ogata S, Kimata N, Hanabusa N, Hamano T, Wakai T, Wada A, Nitta K. An overview of regular dialysis treatment in Japan (as of December 31, 2014). J Jpn Soc Dial Ther. 2016;49:1–34.
United States renal data system. http://www.usrds.org/2014/view/Default.aspx. Accessed 31 Aug 2016.
Lindner A, Charra B, Sherrard DJ. Accelerated atherosclerosis in prolonged maintenance hemodialysis. N Engl J Med. 1974;290:697–701.
Herzog CA, Ma LZ, Collins AJ. Poor long-term survival after acute myocardial infarction among patients on long-term hemodialysis. N Engl J Med. 1998;339:799–805.
Bansal N, McCulloch CE, Rahman M, Kusek JW, Anderson AH, Xie D, Townsend RR, Lora CM, Wright J, Go AS, Ojo A, Alper A, Lustigova E, Cuevas M, Kallem R, Hsu C, the CRIC Study Investigators. Blood pressure and risk of all-cause mortality in advanced chronic kidney disease and hemodialysis. The chronic renal insufficiency cohort study. Hypertension. 2015;65:93–100.
Lopez-Gomez JM, Verde E, Perez-Garcia R. Blood pressure, left ventricular hypertrophy and long-term prognosis in hemodialysis patients. Kidney Int. 1998;68:S92–8.
Shoji T, Tsubakihara Y, Fujii M, Imai E. Hemodialysis-associated hypotension as an independent risk factor for two-year mortality in hemodialysis patients. Kidney Int. 2004;66:1212–20.
Yamada S, Ishii H, Takahashi H, Aoyama T, Morita Y, Kasuga H, Kimura K, Ito Y, Takahashi R, Toriyama T, Yasuda Y, Hayashi M, Kamiya H, Yuzawa Y, Maruyama S, Matsuo S, Matsubara T, Murohara T. Prognostic value of reduced left ventricular ejection fraction at start of hemodialysis therapy on cardiovascular and all-cause mortality in end-stage renal disease patients. Clin J Am Soc Nephrol. 2010;5:1793–8.
Inoue T, Ogawa T, Iwabuchi Y, Otsuka K, Nitta K. Left ventricular end-diastolic diameter is an independent predictor of mortality in hemodialysis patients. Ther Apher Dial. 2012;16:134–41.
Naganuma T, Sugimura K, Wada S, Yasumoto R, Sugimura T, Masuda C, Uchida J, Nakatani T. The prognostic role of brain natriuretic peptides in hemodialysis patients. Am J Nephrol. 2002;22:437–44.
Vázquez E, Sánchez-Perales C, Lozano C, García-Cortés MJ, Borrego F, Guzmán M, Pérez P, Pagola C, Borrego MJ, Pérez V. Comparison of prognostic value of atrial fibrillation versus sinus rhythm in patients on long-term hemodialysis. Am J Cardiol. 2003;92:868–71.
Iseki K, Yamazato M, Tozawa M, Takishita S. Hypocholesterolemia is a significant predictor of death in a cohort of chronic hemodialysis patients. Kidney Int. 2002;61:1887–93.
Khan NA, Hemmelgarn BR, Tonelli M, Thompson CR, Levin A. Prognostic value of troponin T and I among symptomatic patients with end-stage renal disease: a meta-analysis. Circulation. 2005;112:3088–96.
Yeun J, Levine R, Mantadilok V, Kaysen G. C-reactive protein predicts all-cause and cardiovascular mortality in hemodialysis patients. Am J Kidney Dis. 2000;35:469–76.
Oikawa K, Ishihara R, Maeda T, Yamaguchi K, Koike A, Kawaguchi H, Tabata Y, Murotani N, Itoh H. Prognostic value of heart rate variability in patients with renal failure on hemodialysis. Int J Cardiol. 2009;131:370–7.
Morrison G, Michelson E, Brown S, Morganroth J. Mechanism and prevention of cardiac arrhythmias in chronic hemodialysis patients. Kidney Int. 1980;17:811–9.
The AFFIRM Investigators. Relationship between sinus rhythm, treatment, and survival in the atrial fibrillation follow-up investigation of rhythm management (AFFIRM) study. Circulation. 2004;109:1509–13.
Kannel WB, Wolf PA, Benjamin EJ, Levy D. Prevalence, incidence, prognosis, and predisposing conditions for atrial fibrillation: population-based estimates. Am J Cardiol. 1998;82:2N–9.
Genovesi S, Pogliani D, Faini A, Valsecchi MG, Riva A, Stefani F, Acquistapace I, Stella A, Bonforte G, DeVecchi A, DeCristofaro V, Buccianti G, Vincenti A. Prevalence of atrial fibrillation and associated factors in a population of long-term hemodialysis patients. Am J Kidney Dis. 2005;46:897–902.
Kanda E, Bieber BA, Pisoni RL, Robinson BM, Fuller DS. Importance of simultaneous evaluation of multiple risk factors for hemodialysis patients' mortality and development of a novel index: dialysis outcomes and practice patterns study. PLoS One. 2015;10:e0128652.
Anker SD, Gillespie IA, Eckardt KU, Kronenberg F, Richards S, Drueke TB, Stenvinkel P, Pisoni RL, Robinson BM, Marcelli D, Froissart M, Floege J, on behalf of the ARO Steering Committee (collaborators). Development and validation of cardiovascular risk scores for haemodialysis patients. Int J Cardiol. 2016;216:68–77.
November 2014, 13(6): 2211-2228. doi: 10.3934/cpaa.2014.13.2211
Large-time behavior of solutions for the system of compressible adiabatic flow through porous media with nonlinear damping
Shifeng Geng 1, and Lina Zhang 2,
School of Mathematics and Computational Science, Xiangtan University, Hunan 411105
School of Mathematical Science and Computing Technology, Central South University, Changsha 410075, China
Received April 2013 Revised December 2013 Published July 2014
This paper is concerned with large-time behavior of solutions for the system of compressible adiabatic flow through porous media with nonlinear damping. For the nonlinear damping case, i.e. $\beta \neq 0,$ results for the linear damping case are extended to the case of nonlinear damping. Compared with the results obtained by Marcati and Pan, better decay estimates are obtained in this paper.
Keywords: system of compressible adiabatic flow through porous media, large-time behavior, convergence rates, nonlinear damping.
Mathematics Subject Classification: 35L45, 35L60, 35L65, 76R5.
Citation: Shifeng Geng, Lina Zhang. Large-time behavior of solutions for the system of compressible adiabatic flow through porous media with nonlinear damping. Communications on Pure & Applied Analysis, 2014, 13 (6) : 2211-2228. doi: 10.3934/cpaa.2014.13.2211
S. Geng and Z. Wang, Convergence rates to nonlinear diffusion waves for solutions to the system of compressible adiabatic flow through porous media, Comm. Partial Differential Equations, 36 (2011), 850-872. doi: 10.1080/03605302.2010.520052. Google Scholar
L. Hsiao and T.-P. Liu, Convergence to nonlinear diffusion waves for solutions of a system of hyperbolic conservation laws with damping, Comm. Math. Phys., 143 (1992), 599-605. doi: 10.1007/BF02099268. Google Scholar
L. Hsiao and T.-P. Liu, Nonlinear diffusion phenomena of nonlinear hyperbolic system, Chin. Ann. Math. Ser. B, 14 (1993), 465-480. Google Scholar
L. Hsiao and T. Luo, Nonlinear diffusive phenomena of solutions for the system of compressible adiabatic flow through porous media, J. Differential Equations, 125 (1996), 329-365. doi: 10.1006/jdeq.1996.0034. Google Scholar
L. Hsiao and D. Serre, Large-time behavior of solutions for the system of compressible adiabatic flow through porous media, Chin. Ann. Math. Ser. B, 16 (1995), 431-444. Google Scholar
L. Hsiao and D. Serre, Global existence of solutions for the system of compressible adiabatic flow through porous media, SIAM J. Math. Anal., 27 (1996), 70-77. doi: 10.1137/S0036141094267078. Google Scholar
M. Jiang and C. Zhu, Convergence rates to nonlinear diffusion waves for $p$-system with nonlinear damping on quadrant, Discrete Contin. Dyn. Syst. Ser. A, 23 (2009), 887-918. doi: 10.3934/dcds.2009.23.887. Google Scholar
H. Ma and M. Mei, Best asymptotic profile for linear damped p-system with boundary effect, J. Differential Equations, 249 (2010), 446-484. doi: 10.1016/j.jde.2010.04.008. Google Scholar
P. Marcati and M. Mei, B. Rubino, Optimal convergence rates to diffusion waves for solutions of the hyperbolic conservation laws with damping, J. Math. Fluid Mech., 7 (2005), S224-S240. doi: 10.1007/s00021-005-0155-9. Google Scholar
P. Marcati and K. Nishihara, The $L^p-L^q$ estimates of solutions to one-dimensional damped wave equations and their application to the compressible flow through porous media, J. Differential Equations, 191 (2003), 445-469. doi: 10.1016/S0022-0396(03)00026-3. Google Scholar
P. Marcati and R. Pan, On the diffusive profiles for the system of compressible adiabatice flow through porous media, SIAM J. Math. Anal., 33 (2001), 790-826. doi: 10.1137/S0036141099364401. Google Scholar
M. Mei, Nonlinear diffusion waves for hyperbolic $p$-system with nonlinear damping, J. Differential Equations, 247 (2009), 1275-1296. doi: 10.1016/j.jde.2009.04.004. Google Scholar
M. Mei, Best asymptotic profile for hyperbolic p-system with damping, SIAM J. Math. Anal., 42 (2010), 1-23. doi: 10.1137/090756594. Google Scholar
K. Nishihara, Convergence rates to nonlinear diffusion waves for solutions of system of hyperbolic conservation laws with damping, J. Differential Equations, 131 (1996), 171-188. doi: 10.1006/jdeq.1996.0159. Google Scholar
K. Nishihara, Asymptotic toward the diffusion wave for a one-dimensional compressible flow through porous media, Proceedings of the Royal Society of Edinburgh, 133A (2003), 177-196. doi: 10.1017/S0308210500002341. Google Scholar
K. Nishihara and M. Nishikawa, Asymptotic behavior of solutions to the system of compressible adiabatic flow through porous media, SIAM J. Math. Anal., 33 (2001), 216-239. doi: 10.1137/S003614109936467X. Google Scholar
K. Nishihara, W. Wang and T. Yang, $L_p$ -convergence rate to nonlinear diffusion waves for p-system with damping, J. Differential Equations, 161 (1999), 191-218. doi: 10.1006/jdeq.1999.3703. Google Scholar
M. Nishikawa, Convergence rate to the traveling wave for viscous conservation laws, Funkcial. Ekvac., 41 (1998), 107-132. Google Scholar
R. Pan, Darcy's law as long-time limit of adiabatic porous media flow, J. Differential Equations, 220 (2006), 121-146. doi: 10.1016/j.jde.2004.10.013. Google Scholar
H. Zhao, Convergence to strong nonlinear diffusion waves for solutions of p-system with damping, J. Differential Equations, 174 (2001), 200-236. doi: 10.1006/jdeq.2000.3936. Google Scholar
C. Zhu, Convergence rates to nonlinear diffusion waves for weak solutions to $p$-system with damping, Sci. Chin. Ser. A, 46 (2003), 562-575. doi: 10.1360/03ys9057. Google Scholar
C. Zhu and M. Jiang, $L^p$-decay rates to nonlinear diffusion waves for $p$-system with nonlinear damping, Sciences in China, Series A, 49 (2006), 721-739. doi: 10.1007/s11425-006-0721-5. Google Scholar
Shifeng Geng, Zhen Wang. Best asymptotic profile for the system of compressible adiabatic flow through porous media on quadrant. Communications on Pure & Applied Analysis, 2012, 11 (2) : 475-500. doi: 10.3934/cpaa.2012.11.475
Zhong Tan, Yong Wang, Fanhui Xu. Large-time behavior of the full compressible Euler-Poisson system without the temperature damping. Discrete & Continuous Dynamical Systems, 2016, 36 (3) : 1583-1601. doi: 10.3934/dcds.2016.36.1583
Qiwei Wu, Liping Luan. Large-time behavior of solutions to unipolar Euler-Poisson equations with time-dependent damping. Communications on Pure & Applied Analysis, 2021, 20 (3) : 995-1023. doi: 10.3934/cpaa.2021003
Mina Jiang, Changjiang Zhu. Convergence rates to nonlinear diffusion waves for $p$-system with nonlinear damping on quadrant. Discrete & Continuous Dynamical Systems, 2009, 23 (3) : 887-918. doi: 10.3934/dcds.2009.23.887
Zhenhua Guo, Wenchao Dong, Jinjing Liu. Large-time behavior of solution to an inflow problem on the half space for a class of compressible non-Newtonian fluids. Communications on Pure & Applied Analysis, 2019, 18 (4) : 2133-2161. doi: 10.3934/cpaa.2019096
Linlin Li, Bedreddine Ainseba. Large-time behavior of matured population in an age-structured model. Discrete & Continuous Dynamical Systems - B, 2021, 26 (5) : 2561-2580. doi: 10.3934/dcdsb.2020195
Marco Di Francesco, Yahya Jaafra. Multiple large-time behavior of nonlocal interaction equations with quadratic diffusion. Kinetic & Related Models, 2019, 12 (2) : 303-322. doi: 10.3934/krm.2019013
Ruiying Wei, Yin Li, Zheng-an Yao. Global existence and convergence rates of solutions for the compressible Euler equations with damping. Discrete & Continuous Dynamical Systems - B, 2020, 25 (8) : 2949-2967. doi: 10.3934/dcdsb.2020047
Geonho Lee, Sangdong Kim, Young-Sam Kwon. Large time behavior for the full compressible magnetohydrodynamic flows. Communications on Pure & Applied Analysis, 2012, 11 (3) : 959-971. doi: 10.3934/cpaa.2012.11.959
Weike Wang, Xin Xu. Large time behavior of solution for the full compressible navier-stokes-maxwell system. Communications on Pure & Applied Analysis, 2015, 14 (6) : 2283-2313. doi: 10.3934/cpaa.2015.14.2283
Zhong Tan, Yong Wang, Xu Zhang. Large time behavior of solutions to the non-isentropic compressible Navier-Stokes-Poisson system in $\mathbb{R}^{3}$. Kinetic & Related Models, 2012, 5 (3) : 615-638. doi: 10.3934/krm.2012.5.615
Yangyang Qiao, Huanyao Wen, Steinar Evje. Compressible and viscous two-phase flow in porous media based on mixture theory formulation. Networks & Heterogeneous Media, 2019, 14 (3) : 489-536. doi: 10.3934/nhm.2019020
Brahim Amaziane, Leonid Pankratov, Andrey Piatnitski. An improved homogenization result for immiscible compressible two-phase flow in porous media. Networks & Heterogeneous Media, 2017, 12 (1) : 147-171. doi: 10.3934/nhm.2017006
Bilal Saad, Mazen Saad. Numerical analysis of a non equilibrium two-component two-compressible flow in porous media. Discrete & Continuous Dynamical Systems - S, 2014, 7 (2) : 317-346. doi: 10.3934/dcdss.2014.7.317
Cédric Galusinski, Mazen Saad. A nonlinear degenerate system modelling water-gas flows in porous media. Discrete & Continuous Dynamical Systems - B, 2008, 9 (2) : 281-308. doi: 10.3934/dcdsb.2008.9.281
Youshan Tao, Lihe Wang, Zhi-An Wang. Large-time behavior of a parabolic-parabolic chemotaxis model with logarithmic sensitivity in one dimension. Discrete & Continuous Dynamical Systems - B, 2013, 18 (3) : 821-845. doi: 10.3934/dcdsb.2013.18.821
Ken Shirakawa, Hiroshi Watanabe. Large-time behavior for a PDE model of isothermal grain boundary motion with a constraint. Conference Publications, 2015, 2015 (special) : 1009-1018. doi: 10.3934/proc.2015.1009
Jishan Fan, Fei Jiang. Large-time behavior of liquid crystal flows with a trigonometric condition in two dimensions. Communications on Pure & Applied Analysis, 2016, 15 (1) : 73-90. doi: 10.3934/cpaa.2016.15.73
Teng Wang, Yi Wang. Large-time behaviors of the solution to 3D compressible Navier-Stokes equations in half space with Navier boundary conditions. Communications on Pure & Applied Analysis, 2021, 20 (7&8) : 2811-2838. doi: 10.3934/cpaa.2021080
Brahim Amaziane, Leonid Pankratov, Andrey Piatnitski. The existence of weak solutions to immiscible compressible two-phase flow in porous media: The case of fields with different rock-types. Discrete & Continuous Dynamical Systems - B, 2013, 18 (5) : 1217-1251. doi: 10.3934/dcdsb.2013.18.1217
Modeling photocatalytic degradation of diazinon from aqueous solutions and effluent toxicity risk assessment using Escherichia coli LMG 15862
Ali Toolabi1,
Mohammad Malakootian2,3,
Mohammad Taghi Ghaneian1,
Ali Esrafili4,
Mohammad Hassan Ehrampoush1,
Mohsen AskarShahi5 &
Maesome Tabatabaei6
In this study, the modeling and degradation of diazinon in contaminated water by an advanced oxidation process, together with a new effluent bioassay using E. coli, were investigated. The experiments were designed based on response surface methodology. Nanoparticles (NPs) were synthesized using the sol–gel method. The shape characteristics of the nanoparticles and their elemental composition were characterized using scanning electron microscopy and energy dispersive X-ray spectroscopy, respectively. Diazinon was measured by high performance liquid chromatography, and the by-products of its decomposition were identified by gas chromatography–mass spectrometry (GC–MS). Effluent bioassay tests were conducted by quantifying dehydrogenase enzyme activity with the alamar blue reduction method. According to the statistical analyses (R2 = 0.986), the optimized values of pH, dose of NPs, and contact time were 6.75, 775 mg/L, and 65 min, respectively. Under these conditions, 96.06 % of the diazinon was removed. Four main by-products, diazoxon, 7-methyl-3-octyne, 2-isopropyl-6-methyl-4-pyrimidinol, and diethyl phosphonate, were detected. According to the alamar blue reduction (ABR) test, the 50 % effective concentration (EC50), no observed effect concentration (NOEC), and 100 % effective concentration (EC100) for the mortality of E. coli were 2.275, 0.839, and 4.430 mg/L, respectively. Based on these results, the process was highly efficient in removing diazinon, and a significant relationship between the toxicity assessment tests was obtained (P < 0.05).
Organophosphate pesticides (OPs) are among the largest and most diverse groups of available pesticides. Because they act on a wide range of insects and rodents, these pesticides are used by farmers more than other types, but owing to a lack of familiarity with their damaging effects or with proper pest-control practices, most users apply them either incompletely or indiscriminately (Fadaei et al. 2012; Li et al. 2015; Maddah and Hasanzadeh 2017). Intentional or unintentional human exposure therefore results from the use of pesticides or from their residues in air, water, soil, and plants, and in global statistics the largest share of pesticide-related mortality is attributed to these compounds. Diazinon is an organophosphate pesticide with pKa = 2.6 and moderate risk (Kalantary et al. 2014). Its major effect on vertebrates is inhibition of acetylcholinesterase, resulting in accumulation of acetylcholine at acetylcholine receptors and hyperexcitation of nerves and muscles. So far, various technologies, such as adsorption, electrocoagulation, and biodegradation, have been widely applied for the removal of diazinon from aqueous solution (Amooey et al. 2014; Ehrampoush et al. 2017).
Since conventional water and wastewater treatment processes are not very effective for the degradation of diazinon (Amooey et al. 2014; Kalantary et al. 2014; Ehrampoush et al. 2017), advanced oxidation processes such as UV/H2O2, H2O2/Fe2+, and NPs/UV have recently been considered because of their high efficiency, low cost, and non-toxicity. Li et al. used UV and UV/H2O2 processes for the removal of diazinon from water resources (Li et al. 2015), and Kalantary et al. successfully used a TiO2/UV process for the degradation of diazinon (Kalantary et al. 2014). TiO2 nanoparticles combined with UV irradiation have been considered an effective method for water treatment (Amooey et al. 2014; Li et al. 2015; Ribeiro et al. 2015; Ehrampoush et al. 2017; Maddah and Hasanzadeh 2017). The energy of UV light absorbed by the catalyst excites surface electrons and moves them from the valence band to the conduction band; this produces a hole at the catalyst surface, and the photogenerated charge carriers lead to the formation of hydroxyl radicals (OH•). These active radicals oxidize organic matter in the solution and convert it to water and carbon dioxide. One of the disadvantages of titanium nanocatalysts is their wide band gap, which means that only the higher-energy, ultraviolet part of the radiation is effective at the catalyst surface (Mohammadi and Sabbaghi 2014; Tian et al. 2014; Toolabi et al. 2017; Wang and Shih 2016). Accordingly, in the current study, silicon dioxide was introduced into the reaction to enhance the response of titanium dioxide. Performing an effluent toxicity risk assessment after water treatment is essential for environmental, drinking water, and public health. Previously, methods such as tetrazolium salt, crystal violet, and colony forming unit assays were used to determine effluent toxicity, but they were often expensive, time-consuming, and unreliable (Pettit et al. 2005; Satyanarayan et al. 2016). Recently, alamar blue (AB), owing to its high sensitivity and non-toxicity, has been widely used in bioassay studies across a range of biological systems, including bacterial, piscine cell, and planktonic assays (Rampersad 2012; Khalifa et al. 2013; Teh et al. 2017).
Because the oxidation–reduction potential (ORP) of alamar blue is higher than that of the dehydrogenase enzyme, it is reduced by this enzyme: in the presence of living bacteria, alamar blue is converted to resorufin and the color of the solution changes from blue to pink (Nasiry et al. 2007; Rampersad 2012; Gregoraszczuk et al. 2015; Balouiri et al. 2016; Tyc et al. 2016; Zare et al. 2016; Teh et al. 2017; Toolabi et al. 2017). To identify the most effective method for the removal and risk assessment of diazinon in aqueous solution, further studies are needed. Therefore, in this study, the Fe3O4/SiO2/TiO2/H2O2/UV-C process was applied for the degradation of diazinon, and a novel test for effluent toxicity risk assessment using Escherichia coli was conducted.
Chemicals and media
Analytical-grade diazinon with a purity of 98.5 %, acetic acid 99.9 %, ethanol 99.9 %, iron(II) chloride, iron(III) chloride, tetraethyl orthosilicate 95 %, tetra-n-butyl orthotitanate, ammonium solution, alamar blue powder, Mueller–Hinton agar, nutrient broth, dimethyl sulfoxide (DMSO), n-amyl alcohol, HCl-phthalate buffer, glucose, sodium acetate, sodium bicarbonate, sulfuric acid 98 %, sodium hydroxide 98 %, monobasic potassium phosphate, and dipotassium phosphate were purchased from Sigma-Aldrich Co. The properties of diazinon and alamar blue are shown in Table 1.
Table 1 Properties of diazinon and alamar blue
A standard strain of Escherichia coli LMG 15862 bacteria was purchased from Tehran Razi Institute and immediately was stored at a temperature of 8 °C.
Synthesis of nanoparticles
There are various methods for synthesizing and doping TiO2/Fe3O4/SiO2 nanoparticles. These routes include sol–gel process, co-precipitation, hydrothermal method, pyrolysis spray, sono-chemical synthesis, and wet immersion method (Tian et al. 2014; Gupta et al. 2015).
Fe3O4 nanoparticles
The synthesis of Fe3O4 nanoparticles was performed by the co-precipitation method. Briefly, 23.36 g of iron(III) chloride and 8.62 g of iron(II) chloride were dissolved in 250 cc of deionized water for 50 min and mixed at 87 °C inside a cylindrical quartz-glass reactor (35 cm in diameter and 45 cm in length). Thereafter, the resulting solution was slowly injected into 3.6 L of deionized water, and nitrogen gas was bubbled through the mixture for 24 h at 75 °C. After three washing steps with water and ethanol, Fe3O4 nanoparticles were obtained (Shunxing et al. 2016; Maddah and Hasanzadeh 2017; Toolabi et al. 2017).
Fe3O4/SiO2/TiO2 nanoparticles
The synthesis of nanoparticles was done using the sol–gel method. The nanoparticles obtained in the previous step were dissolved in 250 cc deionized water containing tetraethyl orthosilicate, in the next step, ultrasonic (Hielscher model, Sonication of liquids 0.5–4.0 L/min) was used to better separate the nanoparticles. Thereafter, for transparency of nanoparticles and crystal formation, 30 mL of acetic acid was added to the reactor containing nanoparticles of iron/silica and mixed at 200 rpm. Next, the combination of acetic acid, ethanol and tetra-n-butyl lorthotitanate was prepared. The mixture obtained was added to the heater reactor and mixed at 500 rpm. After three stages of washing with deionized water and ethanol, Fe3O4/SiO2/TiO2 was formed (Shunxing et al. 2016; Toolabi et al. 2017; Wang et al. 2017). The surface and shape characteristics of the nano composite and quantitative analysis of the elements were characterized using a scanning electron microscope and energy dispersive X-ray, respectively.
Modeling and statistical analysis
In this work, response surface methodology (RSM) was used to model and design the experiments. RSM is a collection of statistical and mathematical techniques useful for analyzing the effects of several independent variables on a response, and it is an effective technique for reducing the number of experiments. It also quantifies the interactions between the variables and ranks them by their influence on the response. RSM includes various designs, such as the Box–Behnken design, central composite design (CCD), full factorial designs, and D-optimal designs (Martino et al. 2015; Sarrai et al. 2016; Dehghani et al. 2017; Nama et al. 2018). In the present study, the experiments were designed according to the CCD for the variables diazinon concentration (1–40 mg/L), contact time (10–120 min), pH (3–12), and dose of nanoparticles (100–1000 mg/L) (Table 2). Design-Expert version 7 was used for the data analysis.
Table 2 The levels of the variables in the central composite experimental design
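As a rough illustration of how coded CCD levels map onto the actual ranges in Table 2, the sketch below assumes a rotatable design with α = 2 and a step of one quarter of each range; the exact design matrix generated by Design-Expert may differ.

```python
# Map CCD coded levels to actual factor values: actual = center + coded * step,
# with center = (low + high) / 2 and step = (high - low) / 4 for an assumed alpha of 2.
ranges = {
    "diazinon (mg/L)": (1.0, 40.0),
    "contact time (min)": (10.0, 120.0),
    "pH": (3.0, 12.0),
    "NP dose (mg/L)": (100.0, 1000.0),
}

for name, (low, high) in ranges.items():
    center = (low + high) / 2
    step = (high - low) / 4
    levels = [center + c * step for c in (-2, -1, 0, 1, 2)]
    print(name, [round(v, 2) for v in levels])
# e.g. "diazinon (mg/L)" -> [1.0, 10.75, 20.5, 30.25, 40.0], matching the 10.75 and
# 30.25 mg/L levels discussed in the text.
```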
Experiments were conducted inside a glass reactor (11 × 11 × 25 cm) with a reflective wall. The reactor was equipped with a UV lamp (λ = 254 nm, P = 125 W, L = 10 cm) surrounded by a quartz tube, a cooling system, an air blower pump with a flow rate of 3 L per minute to remove gases from the reactor and to prevent possible settling of nanoparticles to the bottom of the reactor, a pH meter, and a multiparameter device. A radiometer (model Hanger ECL-X) was used to measure the intensity of UV radiation. During the experiments, sampling was performed according to the CCD. To increase the production of hydroxyl radicals in the solution, H2O2 was used at a concentration of 50 mg/L (Shemer and Linden 2006). All samples were filtered using a syringe equipped with a 0.2 µm filter. The concentration of diazinon was measured using high performance liquid chromatography (HPLC) with the following specifications: detection wavelength of 260 nm, a C18 column (4.6 mm × 250 mm), and an injection volume of 20 µL. The removal efficiency of diazinon was obtained using Eq. 1.
$$\text{Removal}\ (\%) = \left(1 - \frac{C_{\text{t}}}{C_{\text{o}}}\right) \times 100$$
where Co is the initial concentration of diazinon (mg/L) and Ct is the residual concentration of diazinon (mg/L) after the specified time.
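As a worked example of Eq. 1, an initial concentration of 10.75 mg/L reduced to a residual of about 0.42 mg/L (a value back-calculated here purely for illustration rather than taken from a measured data point) corresponds to the optimum removal of roughly 96 % reported in this study:

$$\text{Removal} = \left(1 - \frac{0.42}{10.75}\right) \times 100 \approx 96.1\ \%$$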
By-products resulting from the degradation of diazinon were detected using gas chromatography–mass spectrometry (GC–MS, model Agilent Technologies 19091S-433) with an HP-5MS column (length 25 m, film thickness 0.25 µm, internal diameter 0.25 mm) (Ehrampoush et al. 2017; Toolabi et al. 2017).
Based on the standard methods in the purification of water sources, the rate of mineralization of diazinon was determined by measuring the COD. Accordingly, COD removal was determined using Eq. 2.
$$\%\,\text{COD Removal} = \left(\frac{\text{COD}_{\text{in}} - \text{COD}_{\text{r}}}{\text{COD}_{\text{in}}}\right) \times 100$$
where CODin is the initial COD (mg/L) and CODr is the residual COD (mg/L) under the conditions of each CCD run.
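As a worked example of Eq. 2, the river-water experiment reported later in the paper (influent COD of 55 mg/L reduced to 1.65 mg/L) gives

$$\%\,\text{COD Removal} = \left(\frac{55 - 1.65}{55}\right) \times 100 \approx 97\ \%$$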
Toxicity assessment based on ABR methods
The rate of alamar blue dye reduction was determined from the activity of the dehydrogenase enzyme. First, nutrient broth culture medium was enriched with KH2PO4 (3.28 g/L), K2HPO4 (5.28 g/L), sodium acetate (0.4 g/L), and glucose (0.4 g/L). Next, 2 mL of E. coli suspension and 2 mL of alamar blue solution at a concentration of 200 mg/L were added to the nutrient broth medium, and then 1 mL of diazinon was added at the specified concentrations. The mixture was incubated at 30 °C in darkness. After 60 min of contact time, 2 mL of 0.05 M HCl-phthalate buffer and 20 mL of n-amyl alcohol solution were added to each test tube, and the contents were stirred slowly. The rate of alamar blue reduction was determined from the absorbance at a wavelength of 620 nm using a UV/Vis spectrophotometer (Braic 2100) (Toolabi et al. 2017; Zare et al. 2016). The percentage of alamar blue reduction was obtained using Eq. 3.
$$\text{Reduction in dehydrogenase enzyme activity (alamar blue conversion)}\ (\%) = \frac{A - B}{A} \times 100$$
where A is the rate of activity of dehydrogenase enzyme in the control sample and B is the rate of activity of dehydrogenase enzyme in the main sample.
Toxicity assessment based on CFU methods
To investigate the validity of ABR test and effluent bioassay, CFU test was conducted. Accordingly, first, a suspension of E. coli LMG bacteria was prepared. Suspension turbidity was detected using spectrophotometer device. Based on 0.5 McFarland, optical density (OD) 0.6 was generated. By measuring the turbidity in the suspension, the density of the bacterial cells was obtained in the range of 2–3 × 108 cells/mL. To determine the mortality rate of E. coli bacteria, 100 µL of bacterial suspension was injected on a plate containing the Mueller–Hinton medium and diazinon (Nasiry et al. 2007; Gregoraszczuk et al. 2015; Balouiri et al. 2016; Tyc et al. 2016; Toolabi et al. 2017). After 24 h of incubation, the growth inhibition percentage was determined by Eq. 4.
$$\text{Growth inhibition percentage} = \frac{A - B}{A} \times 100$$
where A is the number of colonies in the control sample and B is the number of colonies in the inoculated sample. Finally, for both tests (ABR and CFU), the results were reported as follows: the amount of toxin that decreased growth by less than 1 % of the initial bacterial population was reported as the no observed effect concentration (NOEC), and the amounts of toxin that decreased bacterial growth by 50 and 100 % were reported as the 50 % effective concentration (EC50) and 100 % effective concentration (EC100), respectively.
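The paper does not state how the effect concentrations were interpolated from the measured inhibition values; one common approach, shown here purely as an illustration with made-up data, is to fit a Hill-type dose-response curve and read EC50 from the fitted parameters.

```python
# Illustrative EC50 estimation: fit a Hill (log-logistic) curve to growth-inhibition
# data and report the concentration giving 50 % inhibition. Data are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0])           # diazinon, mg/L
inhibition = np.array([2.0, 8.0, 22.0, 45.0, 68.0, 88.0, 99.0])  # % growth inhibition

def hill(c, ec50, n):
    return 100.0 * c**n / (ec50**n + c**n)

(ec50, n), _ = curve_fit(hill, conc, inhibition, p0=(2.0, 2.0))
print(f"EC50 ~ {ec50:.2f} mg/L, Hill slope {n:.2f}")
```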
Sampling from natural source
After the optimal parameters for the removal of diazinon by the advanced oxidation process had been determined, water samples were collected from the Seymareh River for 6 consecutive months. Sampling was carried out once a week, and each sample had a volume of 2 L. After the samples were transferred to the laboratory, their physical and chemical characteristics were determined. The samples were then introduced into the photocatalytic reactor, and the removal efficiency of diazinon was obtained under the optimum conditions. Finally, the alamar blue reduction and colony forming unit tests were used to determine the toxicity of the effluent.
The surface morphology and particle size distribution of the Fe3O4 and Fe3O4/SiO2/TiO2 nanoparticles were characterized using scanning electron microscopy (Fig. 1). The images, acquired at an accelerating voltage of 15 kV, showed well-defined particles without noticeable agglomeration, and the typical particle size was determined to be 200 nm.
SEM images of (a) Fe3O4 nanoparticles and (b) Fe3O4/SiO2/TiO2 nanoparticles
Energy dispersive X-ray spectroscopy
Elemental composition analysis by EDX over 0.2 to 8 keV is presented in Fig. 2. In the Fe3O4/SiO2 composite, the elements O, Fe, Si, and S were detected (Fig. 2a); the weakest and strongest signals corresponded to S and Fe, respectively. Figure 2b shows that the Fe3O4/SiO2/TiO2 nanoparticles contain O, C, Fe, Si, Ti, S, and Cr; here the weakest and strongest signals corresponded to Cr and O, respectively.
EDX spectra of (a) Fe3O4/SiO2 and (b) Fe3O4/SiO2/TiO2 nanoparticles
Statistical analysis and modeling
According to the central composite design, 30 runs were designed and the diazinon removal efficiency of each run was determined (Table 3). The optimum was run 27, in which the removal efficiency of diazinon was 96.06 %. The predicted value of each run was also determined, and a direct relationship between the actual and predicted values was observed (Fig. 3; R2 = 0.943). Further details are shown in Table 3.
Table 3 Results of the experimental runs based on the central composite design
The relationship between real values and predicted values
In this study, the regression results of the quadratic, linear, 2FI, and cubic models for the removal efficiency of diazinon are shown in Table 4. With R2 = 0.9865, the quadratic model was more credible than the other models. The final equation of the quadratic model in terms of the factors is shown in Eq. 5.
Table 4 The results of the statistical models
$$\begin{aligned} \text{Removal efficiency of diazinon}\ (\%) ={}& 89.26 - 3.528A - 0.3342B - 3.574C - 3.023D \\ & - 0.2713AB - 0.1875AC - 0.3050AD - 0.3262BC - 0.3988BD - 0.3075CD \\ & - 4.496A^{2} - 2.586B^{2} - 0.3646C^{2} - 1.360D^{2} \end{aligned}$$
Based on Eq. 5, the maximum removal percentage of diazinon, 96.06 %, was obtained. The coefficients for pH, contact time, diazinon concentration, and dose of NPs were 3.528, 0.3342, 3.574, and 3.023, respectively. As Eq. 5 shows, the main parameter is pH. The smallest and largest interaction coefficients corresponded to the AC and BD coded-factor terms, 0.1875 and 0.3988, respectively. The F-value, P value, and degrees of freedom (DF) were used for the analysis of variance; according to the results shown in Table 5, the F-value, P value, and DF were 78.32, < 0.0001, and 14, respectively.
Table 5 ANOVA of Response Surface Quadratic Model
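A small sketch of how the fitted quadratic model of Eq. 5 can be evaluated is given below; it assumes that A, B, C, and D are the coded levels of pH, contact time, diazinon concentration, and NP dose, respectively, which is how response-surface coefficients of this magnitude are normally reported.

```python
# Evaluate the fitted quadratic response surface of Eq. 5 at coded factor levels
# A (pH), B (contact time), C (diazinon concentration), D (NP dose).
def predicted_removal(A: float, B: float, C: float, D: float) -> float:
    return (89.26
            - 3.528 * A - 0.3342 * B - 3.574 * C - 3.023 * D
            - 0.2713 * A * B - 0.1875 * A * C - 0.3050 * A * D
            - 0.3262 * B * C - 0.3988 * B * D - 0.3075 * C * D
            - 4.496 * A**2 - 2.586 * B**2 - 0.3646 * C**2 - 1.360 * D**2)

print(predicted_removal(0, 0, 0, 0))   # 89.26 % at the design center point
print(predicted_removal(0, 0, -1, 1))  # lower pesticide concentration, higher NP dose
```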
Effect of variables on the removal efficiency
The results indicated that this process was highly efficient in the removal of diazinon and COD. As shown in Fig. 4, 3-D response surface and contour plots were studied for the removal of diazinon. The effect of the initial concentration of diazinon in the reactor was investigated from 1 to 40 mg/L. As shown in Fig. 4, the removal efficiency decreased as the initial concentration of diazinon increased. Accordingly, at pH = 6.75 and a contact time of 65 min, increasing the initial concentration from 10.75 to 30.25 mg/L decreased the removal efficiency of diazinon from 92 to 85 %. The optimal pH for diazinon removal was near 7; when the pH increased from 6.75 to 9.5, the removal efficiency of diazinon decreased from 90.5 to 82 %. The optimal contact time and optimal dose of nanoparticles were 65 min and 775 mg/L, respectively (Fig. 4).
Contour model and 3-D response for removal of diazinon with interactions among factors, a contact time = 65 min and dose of NPs = 775 mg/L, b dose of NPs = 775 mg/L, concentrations of diazinon = 10.75 mg/L, c concentration of diazinon = 10.75 mg/L and contact time = 65 min, d pH = 6.5 and contact time = 65 min, e pH = 6.5 and dose of NPs = 775 mg/L, f concentration of diazinon = 10.75 mg/L and pH = 6.5
Identification of products by GC–MS
In this study, the analysis of by-products was performed based on the following conditions; pH = 6.75, contact time = 40–80 min, dose of NPs = 775 mg/L and diazinon Concentration = 10.75 mg/L. Speciation and molecular structures of the oxidation by-products were analyzed by GC–MS, Fig. 5. According to the results shown in Table 6, four by-products, including; diazoxon, 7-methyl-3-octyne, 2-isopropyl-6-methyl-4-pyrimidinol (IMP) and diethyl phosphonate were identified during degradation of diazinon. Their retention time (RT) varied from 2.15 to 15.75 min. As such, the minimum and maximum RT were related to diazoxon and diethyl phosphonate compounds, respectively. The characteristics of other compounds are shown in Table 6.
Gas chromatography–mass spectrometry of diazinon
Table 6 The characteristics of by-products identification due to diazinon decomposition
Effluent toxicity assessment
In this study, the NOEC and effective concentration (EC) parameters were used to determine the mortality rate of E. coli LMG bacteria. Accordingly, the growth inhibition level was obtained before and after performing the advanced oxidation process (AOP). Before the AOP, the EC50 values for the ABR and CFU tests were 2.255 and 2.250 mg/L, respectively, and the NOEC values for the ABR and CFU tests were 0.890 and 0.850 mg/L, respectively (Table 7). Based on the results shown in Table 8, the effluent toxicity assessment of the reactor in the different runs gave EC50 and NOEC values for the ABR test after the AOP of 2.275 and 0.839 mg/L, respectively.
Table 7 The result of diazinon effect concentration in ABR and CFU tests by using E. coli
Table 8 The result of COD Removal, ORP and bioassay test to determination of effluent toxicity in different Runs
Analysis of the river water samples
The characteristics of the raw water of the Seymareh River are shown in Table 9. Based on the analysis of the river water samples, the removal efficiency of diazinon by the advanced oxidation process was 95 %, and the COD decreased from 55 to 1.65 mg/L. Analysis of the effluent toxicity using the alamar blue and colony forming unit tests showed that the number of bacteria did not decrease.
Table 9 Characteristics of raw water of Seymareh Rive
According to the results in Fig. 1, the synthesis of the Fe3O4/SiO2/TiO2 nanoparticles was successful. SEM confirmed a particle size in the range of 200 nm, and comparison of the elements and peaks produced by EDX analysis showed that the sol–gel and co-precipitation methods were acceptable for the synthesis of the nanoparticles in this study.
According to the analysis of variance Table 5, values of Prob > F less than 0.0500 show that the model quality is significant. Accordingly, the A, C, D, A2, B2, and D2 parameters are significant. The F value of 78.32 and the Prob > F value of < 0.0001 suggest that the model was statistically approved for removal of diazinon. Also, based on the results obtained from the quadratic model in Table 4, the R2 value and Adj R2 value were obtained as 0.986 and 0.973, respectively. These results showed that the predicted values obtained from the quadratic model is a fit of the experimental results (Martino et al. 2015; Sarrai et al. 2016; Dehghani et al. 2017).
To increase the photocatalytic activity of the process, hydrogen peroxide was added to the reactor. Hydrogen peroxide led to the formation of more hydroxyl radicals and thus to the oxidation of the pesticide compounds (Fadaei et al. 2012; Asaithambi et al. 2017). According to the results in Fig. 4, increasing the contact time from 37 to 65 min increased the removal efficiency of diazinon from 85.5 to 91 %. This is because more OH radicals are produced over a longer time and diazinon is exposed to the active radicals for longer, so a larger percentage of the diazinon can be decomposed. Based on the one-factor response, three-dimensional response, and contour models, increasing the concentration of NPs increased the removal efficiency of diazinon: at an NP dosage of 320 mg/L the removal percentage was 85, and when the dosage was increased to 775 mg/L it reached 92.5. This is because, when the concentration of nanoparticles under UV irradiation is increased in the reactor, more h+ and e− charge carriers are produced. These carriers react with water to produce peroxide radicals and hydroxide ions; the peroxide radicals combine with H+ ions, and hydroxyl radicals (OH•) are formed. Owing to the high oxidation power of the OH radicals, degradation of the diazinon occurred. In this study, it was also found that, owing to the reflective wall of the reactor, the radiation intensity was 1.45 times higher than in conventional reactors under similar conditions; this excites more electrons from the catalyst surface and increases the production of active radicals in the solution.
Based on the results of this study, pH was the most effective parameter for removing diazinon. The maximum removal efficiency was obtained at pH 6.75, because more hydrolysis of diazinon occurs in acidic solutions and the production of active hydroxyl radicals is also higher under acidic conditions. Therefore, this parameter should be given more attention in future studies (Li et al. 2015; Ehrampoush et al. 2017; Toolabi et al. 2017). In the study by Kalantary et al., the optimal pH, nanoparticle dose, and contact time for the degradation of diazinon using the TiO2/UV process were 6, 550 mg/L, and 60 min, respectively, and the maximum removal efficiency of diazinon was 71 % (Kalantary et al. 2014). The difference in diazinon removal can be attributed to the experimental conditions, such as the presence of silica and hydrogen peroxide in the present study.
According to the results of this study, four by-products, diazoxon, 7-methyl-3-octyne, 2-isopropyl-6-methyl-4-pyrimidinol (IMP), and diethyl phosphonate, were identified during the degradation of diazinon. When the contact time was increased from 40 to 80 min, most of the by-products disappeared. Determination of the toxicity of the reactor effluent also showed that the toxicity of these compounds was lower than that of diazinon. Similarly to this study, IMP has been reported as an oxidation product of diazinon during advanced oxidation processes and is less toxic than its parent compound (Li et al. 2015).
In another study, Kalantary et al. (2014) identified diazoxon and IMP as by-products of diazinon degradation and, on assessing their toxicity, found them to be less toxic than diazinon. Therefore, according to the results obtained in this study, the Fe3O4/SiO2/TiO2/H2O2/UV-C process can decompose diazinon and its by-products by producing active radicals (OH•).
The degree of mineralization of diazinon was determined using the COD experiments (Table 8): the minimum and maximum mineralization of diazinon in the reactor effluent were 88.90 and 99.20%, respectively. As the COD removal percentage increased, the activity of the dehydrogenase enzyme also increased, and a statistically significant relationship (P < 0.05) was found between COD decomposition and alamar blue reduction. There was also a direct correlation between the ABR and CFU tests (P < 0.05). Accordingly, the EC50, EC100 and no observed effect concentration (NOEC) in the effluent were 2.255, 4.128 and 0.890 mg/L, respectively (Table 7). The effluent from the reactor was also evaluated with the ABR and CFU tests; according to the results presented in Table 8, the EC50, EC100 and NOEC values for the ABR test were 2.275, 4.430 and 0.839 mg/L, respectively. Comparing these tests shows, firstly, that there is a meaningful relationship between them and, secondly, that the toxicity of the reactor effluent and of diazinon is confirmed by these new tests. In the study by Toolabi et al. (2017), the alamar blue (resazurin) reduction test with Pseudomonas aeruginosa was used to determine the toxicity of the pesticide acetamiprid, and alamar blue was found to be not only a useful method for toxicity assessment but also a very accurate and simple one.
In the current study, toxicity tests on synthetic and real samples showed that environmental factors such as temperature and turbidity did not affect the performance of the alamar blue test. In addition to the alamar blue test, the oxidation–reduction potential (ORP) was measured to determine the activity of the dehydrogenase enzyme of E. coli. Based on the results in Table 8, the number and activity of E. coli bacteria were proportional to the oxidation–reduction potential. According to the findings of this section, the alamar blue test was recognized as a reliable, simple, rapid and economical method for effluent toxicity assessment.
ABR: alamar blue reduction
COD: chemical oxygen demand
ORP: oxidation–reduction potential
NOEC: no observed effect concentration
EC: effective concentration
AOP: advanced oxidation process
GC-MS: gas chromatography–mass spectrometry
HPLC: high-performance liquid chromatography
Amooey A, Ghasemi S, Mazizi SM, Gholaminezhad Z, Chaich MJ (2014) Removal of diazinon from aqueous solution by electrocoagulation process using aluminum electrodes. Korean J Chem Eng 31(6):1016–1020
Asaithambi P, Alemayehu E, Sajjadi B, Aziz AR (2017) Electrical energy per order determination for the removal pollutant from industrial wastewater using UV/Fe2+/H2O2 process: optimization by response surface methodology. Water Resour Ind 18:17–32
Balouiri M, Sadiki M, Ibnsouda SK (2016) Methods for in vitro evaluating antimicrobial activity: a review. J Pharm Anal 6:71–79
Dehghani M, Shariati Z, Mehrnia MR, Shayeghi M, Ghouti MA, Heibati B, Mckay G, Yetilmezsoy K (2017) Optimizing the removal of organophosphorus pesticide malathion from water using multi-walled carbon nanotubes. Chem Eng J 310:22–32
Ehrampoush MH, Sadeghi A, Ghaneian MT, Bonyadi Z (2017) Optimization of diazinon biodegradation from aqueous solutions by Saccharomyces cerevisiae using response surface methodology. AMB Express 7:1–6
Fadaei A, Deghani M, Mahvi AH, Nasseri S, Rastkari N (2012) Degradation of organophosphorus pesticides in water during UV/H2O2 treatment: role of sulphate and bicarbonate ions. E J Chem 9(4):2015–2022
Gregoraszczuk E, Rmardyła A, Rys J, Jakubowicz J, Urbanski K (2015) Effect of chemotherapeutic drugs on caspase-3 activity, as a key biomarker for apoptosis in ovarian tumor cell cultured as monolayer: a pilot study. Iran J Pharm Res 14(4):1153–1161
Gupta V, Eren T, Atar N, Yola ML, Parlak C, Maleh H (2015) CoFe2O4/TiO2 decorated reduced grapheneoxide nanocomposites for photocatalytic degradation of chlorpyrifos. Mol Liq 208:122–129
Kalantary R, Shahamat Y, Farzadkia M, Esrafili A, Asgharnia H (2014) Heterogeneous photocatalytic degradation of diazinon in water using nano- TiO2: modeling and intermediates. Eur J Exp Biol 4(1):186–194
Khalifa R, Nasser M, Gomaa AA, Osman NM, Salem HM (2013) Resazurin microtiter assay Plate method for detection of susceptibility of multidrug resistant Mycobacterium tuberculosis to second-line anti-tuberculous drugs. Egypt J Chest Dis Tuberc 62:241–247
Li W, Liu Y, Leeuwen JV, Saint CP (2015) UV and UV/H2O2 treatment of diazinon and its influence on disinfection byproduct formation following chlorination. Chem Eng J 274:39–49
Maddah B, Hasanzadeh M (2017) Fe3O4/CNT magnetic nanocomposites as adsorbents to remove organophosphorus pesticides from environmental water. Int J Nanosci Nanotechnol 13(2):139–149
Martino M, Sannino F, Pirozzi D (2015) Removal of pesticide from wastewater: contact time optimization for a two-stage batch stirred adsorber. J Environ Chem Eng 3(1):365–372
Mohammadi M, Sabbaghi S (2014) Photo-catalytic degradation of 2,4-DCP wastewater using MWCNT/TiO2 nano-composite activated by UV and solar light. Environ Nanotechnol Monit Manag 2:24–29
Nama S, Cho H, Hanc J, Her N, Yoon J (2018) Photocatalytic degradation of acesulfame K: optimization using the Box-Behnken design (BBD). Process Saf Environ Prot 113:10–21
Nasiry S, Geusens N, Hanssens M, Luyten C, Pijnenborg R (2007) The use of alamar blue assay for quantitative analysis of viability, migration and invasion of choriocarcinoma cells. Hum Reprod 22(5):1304–1309
Pettit R, Pettit G, Weber CA, Rui Tan, Kean MJ, Franks KS, Hoffmann H, Horton ML (2005) Microplate alamar blue assay for Staphylococcus epidermidis biofilm susceptibility testing. Antimicrob Agents Chemother 49(7):2612–2617
Rampersad S (2012) Multiple applications of alamar blue as an indicator of metabolic function and cellular health in cell viability bioassays. Sensors 12:12347–12360
Ribeiro A, Nunes O, Pereira MFR, Silva AMT (2015) An overview on the advanced oxidation processes applied for the treatment of water pollutants defined in the recently launched directive 2013/39/EU. Environ Int 75:33–51
Sarrai A, Hanini S, Merzouk NK, Tassalit D, Szabó T, Hernádi K, Nagy L (2016) Using central composite experimental design to optimize the degradation of tylosin from aqueous solution by photo-fenton reaction. Materials 9(428):1–11
Satyanarayan N, Abaadani W, Shekhar SP, Harishkumar S (2016) Anti-tubercular activity of various solvent extracts of Acalypha indica L. against drug susceptible H37Rv strain. World J Pharm Pharm Sci 5(8):957–965
Shemer H, Linden K (2006) Degradation and by-product formation of diazinon in water during UV and UV/H2O2 treatment. J Hazard Mater 136(3):553–559
Shunxing L, Wenjie L, Fengying Z, Haifeng Z, Xiaofeng L, Jiabai C (2016) Lysine surface modified Fe3O4/SiO2/TiO2 microspheres-based preconcentration and photocatalysis for in situ selective determination of nanomolar dissolved organic and inorganic phosphorus in seawater. Sens Actuators B Chem 224:48–54
Teh C, Nazni W, Nurulhusna AH, Norazah A, Lee HL (2017) Determination of antibacterial activity and minimum inhibitory concentration of larval extract of fly via resazurin-based turbidometric assay. BMC Microbiol 17:1–8
Tian H, Liu F, He J (2014) Multifunctional Fe3O4/nSiO2/mSiO2–Fe core–shell microspheres for highly efficient removal of 1, 1, 1-trichloro-2, 2-bis (4-chlorophenyl) ethane (DDT) from aqueous media. J Colloid Interface Sci 431:90–96
Toolabi A, Malakootian M, Ghaneian MT, Esrafili A, Ehrampoush MH, Tabatabaei M, AShahi M (2017) Optimization of photochemical decomposition acetamiprid pesticide from aqueous solutions and effluent toxicity assessment by Pseudomonas aeruginosa BCRC using response surface methodology. AMB Express 7:1–12
Tyc O, Menor L, Garbeva P, BCatala E, Micol V (2016) Validation of the alamar blue assay as a fast screening method to determine the antimicrobial activity of botanical extracts. PLoS ONE 11(12):1–18
Wang C, Shih Y (2016) Facilitated ultrasonic irradiation in the degradation of diazinon insecticide. Sustain Environ Res 26:110–116
Wang J, Peng L, Cao F, Su B, Shi H (2017) A Fe3O4-SiO2-TiO2 core-shell nanoparticle: preparation and photocatalytic properties. Inorg Nano-metal Chem 47(3):396–400
Zare MR, Amin M, Nikaeen M, Zare M, Bina B, Fatehizadeh A, Rahmani A, Ghasemian M (2016) Simplification and sensitivity study of alamar blue bioassay for toxicity assessment in liquid media. Desalin Water Treat 57:10934–10940
MTG, MM and AT carried out experiments; MT and AT conceived and designed the experiments; AE and MA made a substantial contribution to the analysis and interpretation of the data presented; MHE and AT wrote the paper. MHE, AT, MM and MTG conceived and designed the experiments; AT performed the experiments; AE, MAS and MA made a substantial contribution to the analysis and interpretation of the data presented; MTG wrote the paper. All authors read and approved the final manuscript.
The authors acknowledge the School of Public Health, Bam, for providing the materials and laboratory equipment used in this study.
No human or animal participants were involved in this study.
Environmental Science and Technology Research Center, Department of Environmental Health Engineering, Shahid Sadoughi University of Medical Sciences, Yazd, Iran: Ali Toolabi, Mohammad Taghi Ghaneian, Mohammad Hassan Ehrampoush
Environmental Health Engineering Research Center, Kerman University of Medical Sciences, Kerman, Iran: Mohammad Malakootian
Department of Environmental Health Engineering, School of Public Health, Kerman University of Medical Sciences, Kerman, Iran
Department of Environmental Health Engineering, School of Public Health, Iran University of Medical Sciences, Tehran, Iran: Ali Esrafili
Department of Biostatistics and Epidemiology, Shahid Sadoughi University of Medical Science, Yazd, Iran: Mohsen AskarShahi
Department of Chemistry, Islamic Azad University, Yazd, Iran: Maesome Tabatabaei
Correspondence to Mohammad Taghi Ghaneian.
Toolabi, A., Malakootian, M., Ghaneian, M.T. et al. Modeling photocatalytic degradation of diazinon from aqueous solutions and effluent toxicity risk assessment using Escherichia coli LMG 15862. AMB Expr 8, 59 (2018) doi:10.1186/s13568-018-0589-0
Received: 17 February 2018
Accepted: 07 April 2018
Dehydrogenase enzyme
Effluent bioassay
9.5 The Distance from a Point to a Line in R2
Calculus and Vectors Nelson
Lectures 3 Videos
Shortest Distance formula
Shortest Distance from a point to 3D line
Shortest Distance to line from a Point in 3D
Solutions 27 Videos
Determine the distance from P(-4, 5) to each of the following lines:
a) 3x + 4y - 5 = 0
b) 5x - 12y + 24 = 0
c) 9x - 40y = 0
Determine the distance between the following parallel lines:
a) 2x - y + 1 = 0, 2x - y + 6 = 0
b) 7x - 24y + 168 =0, 7x - 24y - 336 = 0
Determine the distance from R(-2, 3) to each of the following lines:
a) \vec{r}=(-1,2)+s(3,4), s \in \mathbf{R}
b) \vec{r}=(1,0)+t(5,12), t \in \mathbf{R}
c) \vec{r} = (1, 3) + p(7, -24), p \in \mathbb{R}
Find the distance between the following lines:
a) The formula for the distance from a point to a line is d=\displaystyle{\frac{|Ax_0+By_0+C|}{\sqrt{A^2+B^2}}}. Show that this formula can be modified so the distance from the origin, O(0,0), to the line Ax+By+C=0 is given by the formula d=\displaystyle{\frac{|C|}{\sqrt{A^2+B^2}}}.
b) Determine the distance between L_1:3x-4y-12=0 and L_2:3x-4y+12=0 by first finding the distance from the origin to L_1 and then finding the distance from the origin to L_2.
c) Find the distance between the two lines directly by first determining a point on one of the lines and then using the distance formula. How does this answer compare with the answer you found in part b.?
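A quick numerical check of these formulas (not part of the textbook; the helper below is an illustrative Python sketch, using the lines of part b):

```python
import math

def dist_point_line(A, B, C, x0, y0):
    """Distance from (x0, y0) to the line Ax + By + C = 0."""
    return abs(A * x0 + B * y0 + C) / math.hypot(A, B)

# Part b: distances from the origin to L1: 3x - 4y - 12 = 0 and L2: 3x - 4y + 12 = 0.
d1 = dist_point_line(3, -4, -12, 0, 0)   # 2.4
d2 = dist_point_line(3, -4, 12, 0, 0)    # 2.4
# The origin lies between the two parallel lines (the constant terms have opposite signs),
# so the distance between L1 and L2 is d1 + d2 = 4.8 = |C1 - C2| / sqrt(A^2 + B^2).
print(d1, d2, d1 + d2)
```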
Calculate the distance between the following lines:
a) \vec{r}=(-2,1)+s(3,4), s \in \mathbb{R}; \vec{r}=(1,0)+t(3,4), t \in \mathbb{R}
b) \displaystyle{\frac{x-1}{4}}=\displaystyle{\frac{y}{-3}}, \displaystyle{\frac{x}{4}}=\displaystyle{\frac{y+1}{-3}}
c) 2x-3y+1=0,2x-3y-3=0
d) 5x+12y=120, 5x+12y+120=0
Calculate the distance between point P and the given line.
a) P(1,2,-1); \vec{r}=(1,0,0)+s(2,-1,2),s \in \mathbf{R}
b) P(0,-1,0); \vec{r}=(2,1,0)+t(-4,5,20), t \in \mathbf{R}
c) P(2,3,1); \vec{r}=p(12,-3,4), p \in \mathbf{R}
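For the three-dimensional questions, the distance from a point P to the line \vec{r}=\vec{a}+t\vec{d} is |\vec{d}\times(\vec{P}-\vec{a})|/|\vec{d}|. A short sketch (illustrative Python, using the data of part a) above):

```python
import numpy as np

def dist_point_line_3d(p, a, d):
    """Distance from point p to the line r = a + t*d, via |d x (p - a)| / |d|."""
    p, a, d = (np.asarray(v, dtype=float) for v in (p, a, d))
    return np.linalg.norm(np.cross(d, p - a)) / np.linalg.norm(d)

# Part a: P(1, 2, -1) and the line r = (1, 0, 0) + s(2, -1, 2)
print(dist_point_line_3d((1, 2, -1), (1, 0, 0), (2, -1, 2)))   # sqrt(29)/3, approximately 1.795
```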
Calculate the distance between the following parallel lines.
a) \vec{r}=(1,1,0)+s(2,1,2),s \in \mathbf{R}; \vec{r}=(-1,1,2)+t(2,1,2), t \in \mathbf{R}
b) \vec{r} = (3,1,-2) + m(1,1,3), m \in \mathbf{R}; \vec{r}=(1,0,1)+n(1,1,3), n \in \mathbf{R}
a) Determine the coordinates of the point on the line \vec{r}=(1,-1,2)+s(1,3,-1),s \in \mathbf{R}, that produces the shortest distance between the line and a point with coordinates (2,1,3).
b) What is the distance between the given point and the line?
Two planes with equations x -y + 2z = 2 and x + y - z = -2 intersect along line L. Determine the distance from P(-1, 2, -1) to L, and determine the coordinates of the point on L that gives this minimal distance.
The point A(2, 4, -5) is reflected in the line with equation \vec{r} = (0, 0, 1) + s(4, 2, 1), s \in \mathbb{R}, to give the point A'. Determine the coordinates of A'.
A rectangular box with an open top, measuring 2 by 2 by 3, is constructed. Its vertices are labelled as shown.
a) Determine the distance from A to the line segment HB.
b) What other vertices on the box will give the same distance to HB as the distance you found in part a.?
c) Determine the area of the \triangle AHB.
Finite and Infinitesimal Rigidity with Polyhedral Norms
Derek Kitson1
Discrete & Computational Geometry volume 54, pages 390–411 (2015)
We characterise finite and infinitesimal rigidity for bar-joint frameworks in \({\mathbb {R}}^d\) with respect to polyhedral norms (i.e. norms with closed unit ball \({\mathcal {P}}\), a convex d-dimensional polytope). Infinitesimal and continuous rigidity are shown to be equivalent for finite frameworks in \({\mathbb {R}}^d\) which are well-positioned with respect to \({\mathcal {P}}\). An edge-labelling determined by the facets of the unit ball and placement of the framework is used to characterise infinitesimal rigidity in \({\mathbb {R}}^d\) in terms of monochrome spanning trees. An analogue of Laman's theorem is obtained for all polyhedral norms on \({\mathbb {R}}^2\).
A bar-joint framework in \({\mathbb {R}}^d\) is a pair (G, p) consisting of a simple undirected graph \(G=(V(G),E(G))\) (i.e. no loops or multiple edges) and a placement \(p:V(G)\rightarrow {\mathbb {R}}^d\) of the vertices such that \(p_v\) and \(p_w\) are distinct whenever vw is an edge of G. The graph G may be either finite or infinite. Given a norm on \({\mathbb {R}}^d\) we are interested in determining when a given framework can be continuously and non-trivially deformed without altering the lengths of the bars. A well-developed rigidity theory exists in the Euclidean setting for finite bar-joint frameworks (and their variants), which stems from classical results of Cauchy [6], Maxwell [17], Alexandrov [1] and Laman [14]. Of particular relevance is Laman's landmark characterisation for generic minimally infinitesimally rigid finite bar-joint frameworks in the Euclidean plane. Asimow and Roth proved the equivalence of finite and infinitesimal rigidity for regular bar-joint frameworks in two key papers [2, 3]. A modern treatment can be found in works of Graver et al. [9] and Whiteley [24, 26]. More recently, significant progress has been made in topics such as global rigidity [7, 8, 11] and the rigidity of periodic frameworks [5, 16, 20, 21] in addition to newly emerging themes such as symmetric frameworks [22] and frameworks supported on surfaces [19]. In this article, we consider rigidity properties of both finite and infinite bar-joint frameworks (G, p) in \({\mathbb {R}}^d\) with respect to polyhedral norms. A norm on \({\mathbb {R}}^d\) is polyhedral (or a block norm) if the closed unit ball \(\{x\in {\mathbb {R}}^d:\Vert x\Vert \le 1\}\) is the convex hull of a finite set of points. Such norms form an important class as they are computationally easy to use and are dense in the set of all norms on \({\mathbb {R}}^d\). While classical rigidity theory is strongly linked to statics, it has also provided valuable new connections between different areas of pure mathematics and this latter property is one of the emerging features of non-Euclidean rigidity theory. In particular, the rigidity theory obtained with polyhedral norms is distinctly different from the Euclidean setting in admitting new edge-labelling and spanning tree methods. There are potential applications of this theory to physical networks with inherent directional constraints, or to abstract networks with a suitable notion of distance imposed. Non-Euclidean norms, and in particular polyhedral norms, have been applied in this way to optimisation problems in location modelling (see the industry which has resulted from [23]) and, more recently, machine learning with submodular functions [4]. A study of rigidity with respect to the classical non-Euclidean \(\ell ^p\) norms was initiated in [12] for finite bar-joint frameworks and further developed for infinite bar-joint frameworks in [13]. Among these norms the \(\ell ^1\) and \(\ell ^\infty \) norms are simple examples of polyhedral norms and so the results obtained here extend some of the results of [12].
In Sect. 2, we provide the relevant background material on polyhedral norms and finite and infinitesimal rigidity. In Sect. 3, we establish the role of support functionals in determining the space of infinitesimal flexes of a bar-joint framework (Theorem 5). We then distinguish between general bar-joint frameworks and those which are well-positioned with respect to the unit ball. The well-positioned placements of a finite graph are open and dense in the set of all placements, and we show that finite and infinitesimal rigidity are equivalent for these bar-joint frameworks (Theorem 7). We then introduce the rigidity matrix for a general finite bar-joint framework, the non-zero entries of which are derived from extreme points of the polar set of the unit ball. In Sect. 4, we apply an edge-labelling to G which is induced by the placement of each bar in \({\mathbb {R}}^d\) relative to the facets of the unit ball. With this edge-labelling we identify necessary conditions for infinitesimal rigidity and obtain a sufficient condition for a subframework to be relatively infinitesimally rigid (Proposition 12). We then characterise the infinitesimally rigid bar-joint frameworks with d induced framework colours as those which contain monochrome spanning trees of each framework colour (Theorem 13). This result holds for both finite and infinite bar-joint frameworks and does not require the framework to be well-positioned. In Sect. 5, we apply the spanning tree characterisation to show that certain graph moves preserve minimal infinitesimal rigidity for any polyhedral norm on \({\mathbb {R}}^2\). We then show that in two dimensions a finite graph has a well-positioned minimally infinitesimally rigid placement if and only if it satisfies the counting conditions \(|E(G)|=2|V(G)|-2\) and \(|E(H)|\le 2|V(H)|-2\) for all subgraphs H (Theorem 23). This is an analogue of Laman's theorem [14] which characterises the finite graphs with minimally infinitesimally rigid generic placements in the Euclidean plane as those which satisfy the counting conditions \(|E(G)|=2|V(G)|-3\) and \(|E(H)|\le 2|V(H)|-3\) for subgraphs H with at least two vertices. Many of the results obtained hold equally well for both finite and infinite bar-joint frameworks.
Let \({\mathcal {P}}\) be a convex symmetric d-dimensional polytope in \({\mathbb {R}}^d\) where \(d\ge 2\). Following [10] we say that a proper face of \({\mathcal {P}}\) is a subset of the form \({\mathcal {P}}\cap H\), where H is a supporting hyperplane for \({\mathcal {P}}\). A facet of \({\mathcal {P}}\) is a proper face which is maximal with respect to inclusion. The set of extreme points (vertices) of \({\mathcal {P}}\) is denoted by \(\mathrm{ext}({\mathcal {P}})\). The polar set of \({\mathcal {P}}\), denoted by \({\mathcal {P}}^\triangle \), is also a convex symmetric d-dimensional polytope in \({\mathbb {R}}^d\):
$$\begin{aligned} {\mathcal {P}}^\triangle = \{y\in {\mathbb {R}}^d: x\cdot y\le 1\,\,{\text {for all}}\,\, x\in {\mathcal {P}}\}. \end{aligned}$$
Moreover, there exists a bijective map which assigns to each facet F of \({\mathcal {P}}\) a unique extreme point \(\hat{F}\) of \({\mathcal {P}}^\triangle \) such that
$$\begin{aligned} F=\{x\in {\mathcal {P}}: x \cdot \hat{F}=1\}. \end{aligned}$$
The polar set of \({\mathcal {P}}^\triangle \) is \({\mathcal {P}}\).
The Minkowski functional (or gauge) for \({\mathcal {P}}\) defines a norm on \({\mathbb {R}}^d\),
$$\begin{aligned} \Vert x\Vert _{\mathcal {P}}= \inf \{\lambda \ge 0:x\in \lambda {\mathcal {P}}\}. \end{aligned}$$
This is what is known as a polyhedral norm or a block norm. The dual norm of \(\Vert \cdot \Vert _{\mathcal {P}}\) is also a polyhedral norm and is determined by the polar set \({\mathcal {P}}^\triangle \),
$$\begin{aligned} \Vert y\Vert _{\mathcal {P}}^*= \max _{x\in {\mathcal {P}}} \, x\cdot y = \inf \{\lambda \ge 0:y\in \lambda {\mathcal {P}}^\triangle \} =\Vert y\Vert _{{\mathcal {P}}^\triangle }. \end{aligned}$$
In general, a linear functional on a convex polytope will achieve its maximum value at some extreme point of the polytope and so the polyhedral norm \(\Vert \cdot \Vert _{\mathcal {P}}\) is characterised by
$$\begin{aligned} \Vert x\Vert _{\mathcal {P}}=\Vert x\Vert _{\mathcal {P}}^{**} =\Vert x\Vert _{{\mathcal {P}}^\triangle }^*= \max _{y\in {\mathcal {P}}^\triangle }\, x\cdot y =\max _{y\in \mathrm{ext}({\mathcal {P}}^\triangle )}\, x\cdot y. \end{aligned}$$
A point \(x\in {\mathbb {R}}^d\) belongs to the conical hull \({\text {cone}}(F)\) of a facet F if \(x= \sum _{j=1}^n\lambda _jx_j\) for some non-negative scalars \(\lambda _j\) and some finite collection \(x_1,x_2\ldots ,x_n\in F\). By formulas (1), (2) and (3) the following equivalence holds:
$$\begin{aligned} x\in {\text {cone}}(F) \quad \Leftrightarrow \quad \Vert x\Vert _{\mathcal {P}}= x\cdot \hat{F}. \end{aligned}$$
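Formulas (3) and (4) are straightforward to evaluate numerically. The sketch below (illustrative Python, not code from the paper) takes \({\mathcal {P}}\) to be the \(\ell ^1\) unit ball in the plane, so that \(\mathrm{ext}({\mathcal {P}}^\triangle )=\{(\pm 1,\pm 1)\}\), computes the polyhedral norm as a maximum over these extreme points, and reads off the facets F with \(x\in {\text {cone}}(F)\) from where the maximum is attained:

```python
import numpy as np

ext_polar = [(1.0, 1.0), (1.0, -1.0), (-1.0, 1.0), (-1.0, -1.0)]   # one F_hat per facet F

def poly_norm(x):
    """Formula (3): ||x||_P = max over ext(P^polar) of x . y."""
    return max(float(np.dot(x, y)) for y in ext_polar)

def supporting_facets(x, tol=1e-12):
    """Formula (4): indices of the facets F with x in cone(F), i.e. x . F_hat = ||x||_P."""
    n = poly_norm(x)
    return [i for i, y in enumerate(ext_polar) if abs(float(np.dot(x, y)) - n) <= tol]

print(poly_norm([3, -1]))           # 4.0, the l^1 norm of (3, -1)
print(supporting_facets([3, -1]))   # [1]: a single facet, a "well-positioned" direction
print(supporting_facets([2, 0]))    # [0, 1]: the vector lies along an extreme point of P
```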
Each isometry of the normed space \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\) is affine (by the Mazur–Ulam theorem) and hence is a composition of a linear isometry and a translation. A linear isometry must leave invariant the finite set of extreme points of \({\mathcal {P}}\) and is completely determined by its action on any d linearly independent extreme points. Thus there exist only finitely many linear isometries on \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\).
A continuous rigid motion of a normed space \(({\mathbb {R}}^d,\Vert \cdot \Vert )\) is a family of continuous paths,
$$\begin{aligned} \alpha _x:(-\delta ,\delta )\rightarrow {\mathbb {R}}^d,\quad x\in {\mathbb {R}}^d, \end{aligned}$$
with the property that \(\alpha _x(0)=x\) and for every pair \(x,y\in {\mathbb {R}}^d\) the distance \(\Vert \alpha _x(t)-\alpha _y(t)\Vert \) remains constant for all values of t. In the case of a polyhedral norm \(\Vert \cdot \Vert _{\mathcal {P}}\), if \(\delta \) is sufficiently small, then the isometries \(\varGamma _t:x\mapsto \alpha _x(t)\) are necessarily translational since by continuity the linear part must equal the identity transformation. Thus we may assume that a continuous rigid motion of \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\) is a family of continuous paths of the form
$$\begin{aligned} \alpha _x(t)=x+c(t), \quad x\in {\mathbb {R}}^d, \end{aligned}$$
for some continuous function \(c:(-\delta ,\delta )\rightarrow {\mathbb {R}}^d\) (cf. [13, Lemma 6.2]).
An infinitesimal rigid motion of a normed space \(({\mathbb {R}}^d,\Vert \cdot \Vert )\) is a vector field on \({\mathbb {R}}^d\) which arises from the velocity vectors of a continuous rigid motion. For a polyhedral norm \(\Vert \cdot \Vert _{\mathcal {P}}\), since the continuous rigid motions are of translational type, the infinitesimal rigid motions of \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\) are precisely the constant maps
$$\begin{aligned} \gamma :{\mathbb {R}}^d\rightarrow {\mathbb {R}}^d, \quad x\mapsto a, \end{aligned}$$
for some \(a\in {\mathbb {R}}^d\) (cf. [12, Lemma 2.3]).
Let (G, p) be a (finite or infinite) bar-joint framework in a normed vector space \(({\mathbb {R}}^d,\Vert \cdot \Vert )\). A continuous (or finite) flex of (G, p) is a family of continuous paths
$$\begin{aligned} \alpha _v:(-\delta ,\delta )\rightarrow {\mathbb {R}}^d, \quad v\in V(G), \end{aligned}$$
such that \(\alpha _v(0)=p_v\) for each vertex \(v\in V(G)\) and \(\Vert \alpha _v(t)-\alpha _w(t)\Vert =\Vert p_v-p_w\Vert \) for all \(|t|<\delta \) and each edge \(vw\in E(G)\). A continuous flex of (G, p) is regarded as trivial if it arises as the restriction of a continuous rigid motion of \(({\mathbb {R}}^d,\Vert \cdot \Vert )\) to p(V(G)). If every continuous flex of (G, p) is trivial then we say that (G, p) is continuously rigid.
An infinitesimal flex of a (finite or infinite) bar-joint framework (G, p) in a normed space \(({\mathbb {R}}^d,\Vert \cdot \Vert )\) is a map \(u:V(G)\rightarrow {\mathbb {R}}^d\), \(v\mapsto u_v\) which satisfies
$$\begin{aligned} \Vert (p_v+tu_v)-(p_w+tu_w)\Vert -\Vert p_v-p_w\Vert = o(t) \quad \text { as }t\rightarrow 0 \end{aligned}$$
for each edge \(vw\in E(G)\). We will denote the collection of infinitesimal flexes of (G, p) by \({\mathcal {F}}(G,p)\). An infinitesimal flex of (G, p) is regarded as trivial if it arises as the restriction of an infinitesimal rigid motion of \(({\mathbb {R}}^d,\Vert \cdot \Vert )\) to p(V(G)). In other words, in the case of a polyhedral norm, an infinitesimal flex of (G, p) is trivial if and only if it is constant. A bar-joint framework is infinitesimally rigid if every infinitesimal flex of (G, p) is trivial. Regarding \({\mathcal {F}}(G,p)\) as a real vector space with component-wise addition and scalar multiplication, the trivial infinitesimal flexes of (G, p) form a d-dimensional subspace \({\mathcal {T}}(G,p)\) of \({\mathcal {F}}(G,p)\).
The interior of a subset \(A\subset {\mathbb {R}}^d\) will be denoted by \(A^\circ \).
Support Functionals and Rigidity
In this section, we begin by highlighting the connection between the infinitesimal flex condition (5) for a general norm on \({\mathbb {R}}^d\) and support functionals on the normed space \(({\mathbb {R}}^d,\Vert \cdot \Vert )\). We then characterise the space of infinitesimal flexes for a general (finite or infinite) bar-joint framework in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\) in terms of support functionals and prove the equivalence of finite and infinitesimal rigidity for finite bar-joint frameworks which are well-positioned in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\). Following this, we describe the rigidity matrix for general finite bar-joint frameworks in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\) and compute an example.
Support Functionals
Let \(\Vert \cdot \Vert \) be an arbitrary norm on \({\mathbb {R}}^d\), and denote by B the closed unit ball in \(({\mathbb {R}}^d,\Vert \cdot \Vert )\). A linear functional \(f:{\mathbb {R}}^d\rightarrow {\mathbb {R}}\) is a support functional for a point \(x_0\in {\mathbb {R}}^d\) if \(f(x_0)=\Vert x_0\Vert ^2\) and \(\Vert f\Vert ^*=\Vert x_0\Vert \). Equivalently, f is a support functional for \(x_0\) if the hyperplane
$$\begin{aligned} H=\{x\in {\mathbb {R}}^d: f(x)=\Vert x_0\Vert \} \end{aligned}$$
is a supporting hyperplane for B which contains \(\tfrac{x_0}{\Vert x_0\Vert }\).
Lemma 1
Let \(\Vert \cdot \Vert \) be a norm on \({\mathbb {R}}^d\) and let \(x_0\in {\mathbb {R}}^d\). If \(f:{\mathbb {R}}^d\rightarrow {\mathbb {R}}\) is a support functional for \(x_0\), then
$$\begin{aligned} f(y)\le \Vert x_0\Vert \frac{\Vert x_0+ty\Vert -\Vert x_0\Vert }{t} \quad {\text {for all}} \,\, t>0 \end{aligned}$$
$$\begin{aligned} f(y)\ge \Vert x_0\Vert \frac{\Vert x_0+ty\Vert -\Vert x_0\Vert }{t} \quad {\text {for all}}\,\, t<0 \end{aligned}$$
for all \(y\in {\mathbb {R}}^d\).
Since f is linear and \(f(x_0)=\Vert x_0\Vert ^2\), we have for all \(y\in {\mathbb {R}}^d\),
$$\begin{aligned} f(y)= \frac{1}{t}\big (f(x_0+ty) - \Vert x_0\Vert ^2\big ). \end{aligned}$$
If \(t>0\), then since \(f(x)\le \Vert x_0\Vert \Vert x\Vert \) for all \(x\in {\mathbb {R}}^d\) we have
$$\begin{aligned} f(y) \le \Vert x_0\Vert \frac{\Vert x_0+ty\Vert -\Vert x_0\Vert }{t}. \end{aligned}$$
If \(t<0\), then applying the above inequality
$$\begin{aligned} f(y) =-f(-y)\ge -\Vert x_0\Vert \frac{\Vert x_0-t(-y)\Vert -\Vert x_0\Vert }{-t}= \Vert x_0\Vert \frac{\Vert x_0+ty\Vert -\Vert x_0\Vert }{t}. \end{aligned}$$
\(\square \)
Let (G, p) be a (finite or infinite) bar-joint framework in \(({\mathbb {R}}^d,\Vert \cdot \Vert )\), and fix an orientation for each edge \(vw\in E(G)\). We denote by \({\text {supp}}(vw)\) the set of all support functionals for \(p_v-p_w\). (The choice of orientation on the edges of G is for convenience only and has no bearing on the results that follow. Alternatively, we could avoid choosing an orientation by defining \({\text {supp}}(vw)\) to be the set of all linear functionals which are support functionals for either \(p_v-p_w\) or \(p_w-p_v\).)
If (G, p) is a (finite or infinite) bar-joint framework in \(({\mathbb {R}}^d,\Vert \cdot \Vert )\) and \(u:V(G)\rightarrow {\mathbb {R}}^d\) is an infinitesimal flex of (G, p), then
$$\begin{aligned} u_v-u_w\in \bigcap _{f\in {\text {supp}}(vw)}\, \ker f \end{aligned}$$
for each edge \(vw\in E(G)\).
Let \(vw\in E(G)\) and suppose f is a support functional for \(p_v-p_w\). Applying Lemma 1 with \(x_0=p_v-p_w\) and \(y=u_v-u_w\), we have
$$\begin{aligned} \lim _{t\rightarrow 0^-}\frac{\Vert x_0+ty\Vert -\Vert x_0\Vert }{t}\le \frac{f(y)}{\Vert x_0\Vert } \le \lim _{t\rightarrow 0^+}\frac{\Vert x_0+ty\Vert -\Vert x_0\Vert }{t}. \end{aligned}$$
Since u is an infinitesimal flex of (G, p), \(\lim _{t\rightarrow 0}\tfrac{1}{t}(\Vert x_0+ty\Vert -\Vert x_0\Vert )=0\) and so \(f(y)=0\). \(\square \)
Let \(\Vert \cdot \Vert _{\mathcal {P}}\) be a polyhedral norm on \({\mathbb {R}}^d\). For each facet F of \({\mathcal {P}}\), denote by \(\varphi _F\) the linear functional
$$\begin{aligned} \varphi _F:{\mathbb {R}}^d\rightarrow {\mathbb {R}}, \quad x\mapsto x\cdot \hat{F}. \end{aligned}$$
Let \(\Vert \cdot \Vert _{\mathcal {P}}\) be a polyhedral norm on \({\mathbb {R}}^d\), let F be a facet of \({\mathcal {P}}\) and let \(x_0\in {\mathbb {R}}^d\). Then \(x_0\in {\text {cone}}(F)\) if and only if the linear functional
$$\begin{aligned} \varphi _{F,x_0}:{\mathbb {R}}^d\rightarrow {\mathbb {R}}, \quad x\mapsto \Vert x_0\Vert _{\mathcal {P}}\,\varphi _F(x), \end{aligned}$$
is a support functional for \(x_0\).
If \(x_0\in {\text {cone}}(F)\), then by formula (4) \(\varphi _{F,x_0}\left( x_0\right) =\Vert x_0\Vert _{\mathcal {P}}^2\). By (1), we have \(\varphi _{F,x_0}(x)\le \Vert x_0\Vert _{\mathcal {P}}\) for each \(x\in {\mathcal {P}}\), and it follows that \(\varphi _{F,x_0}\) is a support functional for \(x_0\). Conversely, if \(x_0\notin {\text {cone}}(F)\), then by (4) \(\varphi _{F,x_0}(x_0)<\Vert x_0\Vert _{\mathcal {P}}^2\) and so \(\varphi _{F,x_0}\) is not a support functional for \(x_0\). \(\square \)
For each oriented edge \(vw\in E(G)\), we denote by \({\text {supp}}_\varPhi (vw)\) the set of all linear functionals \(\varphi _F\) which are support functionals for \(\tfrac{p_v-p_w}{\Vert p_v-p_w\Vert _{\mathcal {P}}}\).
Let (G, p) be a finite bar-joint framework in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\). If a mapping \(u:V(G)\rightarrow {\mathbb {R}}^d\) satisfies
$$\begin{aligned} u_v-u_w\in \bigcap _{\varphi _F\in {\text {supp}}_\varPhi (vw)}\, \ker \varphi _F \end{aligned}$$
for each edge \(vw\in E(G)\), then there exists \(\delta >0\) such that the family
$$\begin{aligned} \alpha _v:(-\delta ,\delta )\rightarrow {\mathbb {R}}^d, \quad \alpha _v(t)= p_v+tu_v, \end{aligned}$$
is a finite flex of (G, p).
Let \(vw\in E(G)\) and write \(x_0=p_v-p_w\) and \(u_0=u_v-u_w\). If \(\varphi _{F}\) is a support functional for \(\tfrac{x_0}{\Vert x_0\Vert _{{\mathcal {P}}}}\), then by the hypothesis \(\varphi _F(u_0)=0\). By Lemma 3, \(x_0\) is contained in the conical hull of the facet F. Applying formulas (3) and (4),
$$\begin{aligned} \Vert x_0\Vert _{\mathcal {P}}= \max _{y\in \mathrm{ext}({\mathcal {P}}^\triangle )} x_0\cdot y = x_0\cdot \hat{F}. \end{aligned}$$
By continuity, there exists \(\delta _{vw}>0\) such that for all \(|t|<\delta _{vw}\)
$$\begin{aligned} \Vert x_0+tu_0\Vert _{\mathcal {P}}= & {} \max _{y\in \mathrm{ext}({\mathcal {P}}^\triangle )} (x_0+tu_0)\cdot y\\= & {} (x_0+tu_0)\cdot \hat{F} \\= & {} \Vert x_0\Vert _{\mathcal {P}}+t\,\varphi _F(u_0)\\= & {} \Vert x_0\Vert _{\mathcal {P}}. \end{aligned}$$
Since G is a finite graph, the result holds with \(\delta =\min _{vw\in E(G)}\delta _{vw}>0\). \(\square \)
The following is a characterisation of the space of infinitesimal flexes of a general bar-joint framework in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\).
Theorem 5
Let (G, p) be a (finite or infinite) bar-joint framework in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\). Then a mapping \(u:V(G)\rightarrow {\mathbb {R}}^d\) is an infinitesimal flex of (G, p) if and only if
$$\begin{aligned} u_v-u_w\in \bigcap _{\varphi _F\in {\text {supp}}_\varPhi (vw)}\, \ker \varphi _F \end{aligned}$$
for each edge \(vw\in E(G)\).
If u is an infinitesimal flex of (G, p), then the result follows from Proposition 2. For the converse, let \(vw\in E(G)\) and write \(x_0=p_v-p_w\) and \(u_0=u_v-u_w\). Applying the argument in the proof of Proposition 4, there exists \(\delta _{vw}>0\) with \(\Vert x_0+tu_0\Vert _{\mathcal {P}}=\Vert x_0\Vert _{\mathcal {P}}\) for all \(|t|<\delta _{vw}\). Hence u is an infinitesimal flex of (G, p). \(\square \)
Equivalence of Finite and Infinitesimal Rigidity
A placement of a simple graph G in \({\mathbb {R}}^d\) is a map \(p:V(G)\rightarrow {\mathbb {R}}^d\) for which \(p_v\not =p_w\) whenever \(vw\in E(G)\). A placement \(p:V(G)\rightarrow {\mathbb {R}}^d\) is well-positioned with respect to a polyhedral norm on \({\mathbb {R}}^d\) if \(p_v-p_w\) is contained in the conical hull of exactly one facet of the unit ball \({\mathcal {P}}\) for each edge \(vw\in E(G)\). We denote this unique facet by \(F_{vw}\). In the following discussion, G is a finite graph and each placement is identified with a point \(p=(p_v)_{v\in V(G)}\) in the product space \(\prod _{v\in V(G)}{\mathbb {R}}^{d}\) which we regard as having the usual topology. The set of all well-positioned placements of G in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\) is an open and dense subset of this product space. The configuration space for a bar-joint framework (G, p) is defined as
$$\begin{aligned} V(G,p) = \Big \{x\in \prod _{v\in V(G)}{\mathbb {R}}^{d}:\Vert x_v-x_w\Vert _{\mathcal {P}}=\Vert p_v-p_w\Vert _{\mathcal {P}}\quad {\text {for all}} \,\,vw\in E(G)\Big \}. \end{aligned}$$
Let (G, p) be a finite and well-positioned bar-joint framework in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\) with \(p_v-p_w\in {\text {cone}}(F_{vw})\) for each \(vw\in E(G)\). Then there exists a neighbourhood U of p in \(\prod _{v\in V(G)}{\mathbb {R}}^{d}\) such that
if \(x\in U\), then \(x_v-x_w\in {\text {cone}}(F_{vw})\) for each edge \(vw\in E(G)\),
(G, x) is a well-positioned bar-joint framework for each \(x\in U\) and
\(V(G,p)\cap U = \{x\in U: \varphi _{F_{vw}}(x_{v}-x_{w})=\varphi _{F_{vw}}(p_{v}-p_{w}) \,\, {\text {for all}} \,\,vw\in E(G)\}\).
In particular, \(V(G,p)\cap U= (p+{\mathcal {F}}(G,p))\cap U\).
Let \(vw\in E(G)\) be an oriented edge and consider the continuous map
$$\begin{aligned} T_{vw}:\prod _{v'\in V(G)}{\mathbb {R}}^{d}\rightarrow {\mathbb {R}}^d, \quad (x_{v'})_{v'\in V(G)}\mapsto x_{v}-x_{w}. \end{aligned}$$
Since (G, p) is well-positioned, \(p_v-p_{w}\) is an interior point of the conical hull of a unique facet \(F_{vw}\) of \({\mathcal {P}}\). The preimage \(T_{vw}^{-1}({\text {cone}}(F_{vw})^\circ )\) is an open neighbourhood of p. Since G is a finite graph, the intersection
$$\begin{aligned} U = \bigcap _{vw\in E(G)} T_{vw}^{-1}({\text {cone}}(F_{vw})^\circ ) \end{aligned}$$
is an open neighbourhood of p which satisfies (i), (ii) and (iii).
Since (G, p) is well-positioned, by Lemma 3, there is exactly one support functional in \({\text {supp}}_\varPhi (vw)\) for each edge vw and this functional is given by \(\varphi _{F_{vw}}\). If \(x\in U\), then define \(u=(u_v)_{v\in V(G)}\) by setting \(u_v=x_v-p_v\) for each \(v\in V(G)\). By (iii), \(x\in V(G,p)\cap U\) if and only if \(x\in U\) and
$$\begin{aligned} \varphi _{F_{vw}} (u_v-u_w)= \varphi _{F_{vw}}(x_v-x_w)- \varphi _{F_{vw}}(p_v-p_w)=0 \end{aligned}$$
for each edge \(vw\in E(G)\). By Theorem 5, the latter identity is equivalent to the condition that u is an infinitesimal flex of (G, p). Thus \(x\in V(G,p)\cap U\) if and only if \(x\in U\) and \(x-p\in {\mathcal {F}}(G,p)\). \(\square \)
We now prove the equivalence of continuous rigidity and infinitesimal rigidity for finite well-positioned bar-joint frameworks.
Let (G, p) be a finite well-positioned bar-joint framework in a normed space \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\), where \(\Vert \cdot \Vert _{\mathcal {P}}\) is a polyhedral norm. Then the following statements are equivalent:
(G, p) is continuously rigid.
(G, p) is infinitesimally rigid.
\(\mathrm{{(i)}}\Rightarrow \mathrm{{(ii)}}\). If \(u=(u_v)_{v\in V(G)}\in {\mathcal {F}}(G,p)\) is an infinitesimal flex of (G, p), then by Theorem 5 and Proposition 4, the family
$$\begin{aligned} \alpha _v:(-\varepsilon ,\varepsilon )\rightarrow {\mathbb {R}}^d,\quad \alpha _v(t)=p_v+tu_v,\quad v\in V(G), \end{aligned}$$
is a finite flex of (G, p) for some \(\varepsilon >0\). Since (G, p) is continuously rigid, this finite flex must be trivial. Thus there exist \(\delta >0\) and a continuous path \(c:(-\delta ,\delta )\rightarrow {\mathbb {R}}^d\) such that \(\alpha _v(t)=p_v+c(t)\) for all \(|t|<\delta \) and all \(v\in V(G)\). Now \(u_v=\alpha _v'(0)=c'(0)\) for all \(v\in V(G)\) and so u is a constant, and hence trivial, infinitesimal flex of (G, p). We conclude that (G, p) is infinitesimally rigid.
\(\mathrm{{(ii)}}\Rightarrow \mathrm{{(i)}}\). If (G, p) has a finite flex given by the family
$$\begin{aligned} \alpha _v:(-\varepsilon ,\varepsilon )\rightarrow {\mathbb {R}}^d,\quad v\in V(G), \end{aligned}$$
then consider the continuous path
$$\begin{aligned} \alpha :(\varepsilon ,\varepsilon )\rightarrow V(G,p), \quad t\mapsto (\alpha _v(t))_{v\in V(G)}. \end{aligned}$$
By Proposition 6, \(V(G,p)\cap U=(p+{\mathcal {F}}(G,p))\cap U\) for some neighbourhood U of p. Since \(\alpha (0)=p\), there exists \(\delta >0\) such that \(\alpha (t)\in V(G,p)\cap U\) for all \(|t|<\delta \). Choose \(t_0\in (-\delta ,\delta )\) and define
$$\begin{aligned} u:V(G)\rightarrow {\mathbb {R}}^d, \quad u_v=\alpha _v(t_0)-p_v. \end{aligned}$$
Then \(u=\alpha (t_0)-p\in {\mathcal {F}}(G,p)\) is an infinitesimal flex of (G, p). Since (G, p) is infinitesimally rigid, u must be a trivial infinitesimal flex. Hence \(u_v=c(t_0)\) for all \(v\in V(G)\) and some \(c(t_0)\in {\mathbb {R}}^d\). Apply the same argument to show that for each \(|t|<\delta \) there exists c(t) such that \(\alpha _v(t) = p_v+c(t)\) for all \(v\in V(G)\). Note that \(c:(-\delta , \delta )\rightarrow {\mathbb {R}}^d\) is continuous and so \(\{\alpha _v:v\in V(G)\}\) is a trivial finite flex of (G, p). We conclude that (G, p) is continuously rigid. \(\square \)
The non-equivalence of finite and infinitesimal rigidity for general finite bar-joint frameworks in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\) is demonstrated in Example 9.
The Rigidity Matrix
We define the rigidity matrix \(R_{\mathcal {P}}(G,p)\) for a finite bar-joint framework (G, p) in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\) as follows: Fix an ordering of the vertices V(G) and edges E(G) and choose an orientation on the edges of G. For each vertex v, assign d columns in the rigidity matrix and label these columns \(p_{v,1},\ldots ,p_{v,d}\). For each directed edge \(vw\in E(G)\) and each facet F with \(p_v-p_w\in {\text {cone}}(F)\), assign a row in the rigidity matrix and label this row by (vw, F). The entries of the row (vw, F) are \(\hat{F}_1,\ldots ,\hat{F}_d\) in the columns \(p_{v,1},\ldots ,p_{v,d}\), \(-\hat{F}_1,\ldots ,-\hat{F}_d\) in the columns \(p_{w,1},\ldots ,p_{w,d}\), and zero in every other column, where \(\hat{F}=(\hat{F}_1,\ldots ,\hat{F}_d)\in {\mathbb {R}}^d\). If (G, p) is well-positioned, then the rigidity matrix has size \(|E(G)|\times d|V(G)|\).
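A sketch of this construction (illustrative Python, using the \(\ell ^1\)-ball conventions of the earlier snippet; not code from the paper):

```python
import numpy as np

def rigidity_matrix(vertices, edges, p, ext_polar, tol=1e-12):
    """One row per pair (edge vw, facet F with p_v - p_w in cone(F)):
    F_hat in the d columns of v, -F_hat in the d columns of w, zeros elsewhere."""
    d = len(ext_polar[0])
    index = {v: i for i, v in enumerate(vertices)}
    rows = []
    for v, w in edges:
        x = np.asarray(p[v], float) - np.asarray(p[w], float)
        norm = max(float(x @ np.asarray(y)) for y in ext_polar)       # formula (3)
        for F_hat in ext_polar:
            if abs(float(x @ np.asarray(F_hat)) - norm) <= tol:       # p_v - p_w in cone(F), formula (4)
                row = np.zeros(d * len(vertices))
                row[d * index[v]: d * index[v] + d] = F_hat
                row[d * index[w]: d * index[w] + d] = -np.asarray(F_hat)
                rows.append(row)
    return np.array(rows)

# l^1 ball in the plane and a path on three vertices:
ext_polar = [(1.0, 1.0), (1.0, -1.0), (-1.0, 1.0), (-1.0, -1.0)]
p = {'a': (0, 0), 'b': (2, 1), 'c': (3, -1)}
R = rigidity_matrix(['a', 'b', 'c'], [('a', 'b'), ('b', 'c')], p, ext_polar)
print(R.shape, np.linalg.matrix_rank(R))   # (2, 6), rank 2 < 2*3 - 2, so not infinitesimally rigid
```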
Let (G, p) be a finite bar-joint framework in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\). Then
\({\mathcal {F}}(G,p)\cong \ker R_{\mathcal {P}}(G,p)\).
(G, p) is infinitesimally rigid if and only if \({\text {rank}}~R_{\mathcal {P}}(G,p)= d|V(G)|-d\).
The system of equations in Theorem 5 is expressed by the matrix equation \(R_{\mathcal {P}}(G,p)u^\mathrm{{T}}=0\) where we identify \(u:V(G)\rightarrow {\mathbb {R}}^d\) with a row vector \((u_{v_1},\ldots ,u_{v_n})\in {\mathbb {R}}^{d|V(G)|}\). Thus \({\mathcal {F}}(G,p)\cong \ker R_{\mathcal {P}}(G,p)\). The space of trivial infinitesimal flexes of (G, p) has dimension d and so in general we have
$$\begin{aligned} {\text {rank}}~R_{\mathcal {P}}(G,p)\le d|V(G)|-d \end{aligned}$$
with equality if and only if (G, p) is infinitesimally rigid. \(\square \)
If F is a facet of \({\mathcal {P}}\) and \(y_1,y_2,\ldots ,y_{d}\in \mathrm{ext}({\mathcal {P}})\) are linearly independent extreme points of \({\mathcal {P}}\) which are contained in F, then for each column vector \(y_k\) we compute \([1 \cdots 1]\,A^{-1}\, y_k=1\), where \(A=[y_1\cdots y_d]\in M^{d\times d}({\mathbb {R}})\). Hence,
$$\begin{aligned} \hat{F} = [1 \cdots 1]A^{-1}. \end{aligned}$$
Moreover, if \(y_1,y_2,\ldots ,y_{d}\) are pairwise orthogonal, then
$$\begin{aligned} A^{-1}=\Big [\tfrac{y_1}{\Vert y_1\Vert ^2_2} \cdots \tfrac{y_d}{\Vert y_d\Vert ^2_2}\Big ]^\mathrm{{T}} \end{aligned}$$
$$\begin{aligned} \hat{F} =\sum _{j=1}^d \tfrac{y_j}{\Vert y_j\Vert ^2_2}, \end{aligned}$$
where \(\Vert \cdot \Vert _2\) is the Euclidean norm on \({\mathbb {R}}^d\).
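A one-line numerical check of this remark (illustrative Python; the facet chosen is the one of the \(\ell ^1\) ball in the plane containing the extreme points \(e_1\) and \(e_2\)):

```python
import numpy as np

A = np.column_stack([(1, 0), (0, 1)])    # columns y1 = e1, y2 = e2 spanning the facet
print(np.ones(2) @ np.linalg.inv(A))     # [1. 1.] = F_hat, consistent with formula (8)
```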
Fig. 1 An infinitesimally flexible and an infinitesimally rigid placement of \(K_2\) in \(({\mathbb {R}}^2,\Vert \cdot \Vert _1)\)
Example 9
Let \({\mathcal {P}}\) be a crosspolytope in \({\mathbb {R}}^d\) with 2d many extreme points \(\mathrm{ext}({\mathcal {P}})=\{\pm e_k:k=1,\ldots ,d\}\), where \(e_1,e_2,\ldots ,e_d\) is the usual basis in \({\mathbb {R}}^d\). Then each facet F contains d pairwise orthogonal extreme points \(y_1,y_2,\ldots ,y_d\) each of Euclidean norm 1. By (8), \(\hat{F} = \sum _{j=1}^d y_j\) and the resulting polyhedral norm is the 1-norm
$$\begin{aligned} \Vert x\Vert _{\mathcal {P}}=\max _{y\in \mathrm{ext}({\mathcal {P}}^\triangle )} x\cdot y = \sum _{i=1}^d|x_i| = \Vert x\Vert _1. \end{aligned}$$
Consider for example the placements of the complete graph \(K_2\) in \(({\mathbb {R}}^2,\Vert \cdot \Vert _1)\) illustrated in Fig. 1. The polytope \({\mathcal {P}}\) is indicated on the left with facets labelled \(F_1\) and \(F_2\). The extreme points of the polar set \({\mathcal {P}}^\triangle \) which correspond to these facets are \(\hat{F}_1=e_1+e_2=(1,1)\) and \(\hat{F}_2=e_1-e_2=(1,-1)\). The first placement is well-positioned with respect to \({\mathcal {P}}\), so its rigidity matrix has a single row, labelled by the unique facet whose conical hull contains the edge vector.
Evidently, this bar-joint framework has a non-trivial infinitesimal flex. The second placement is not well-positioned: the edge vector lies along a ray through an extreme point of \({\mathcal {P}}\), so the rigidity matrix has two rows, one for each of the two facets whose conical hulls contain the edge vector.
As the rigidity matrix has rank 2, this bar-joint framework is infinitesimally rigid in \(({\mathbb {R}}^2,\Vert \cdot \Vert _1)\), but continuously flexible.
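A numerical version of this example (illustrative Python; the two edge vectors below are assumptions standing in for the placements of Fig. 1, with the edge vector strictly inside one facet cone in the first case and along the extreme point \(e_1\) in the second):

```python
import numpy as np

ext_polar = [(1.0, 1.0), (1.0, -1.0), (-1.0, 1.0), (-1.0, -1.0)]   # F_hat for the facets of the l^1 ball

def k2_rigidity_matrix(edge_vector, tol=1e-12):
    """Rows [F_hat, -F_hat] for every facet F whose conical hull contains the edge vector."""
    x = np.asarray(edge_vector, float)
    n = max(float(x @ np.asarray(y)) for y in ext_polar)
    return np.array([np.concatenate([y, [-c for c in y]])
                     for y in ext_polar if abs(float(x @ np.asarray(y)) - n) <= tol])

R1 = k2_rigidity_matrix((3, 1))   # well-positioned: one facet supports the edge vector
R2 = k2_rigidity_matrix((2, 0))   # not well-positioned: edge vector along the extreme point e_1
print(R1, np.linalg.matrix_rank(R1))   # one row,  rank 1 < 2        -> non-trivial infinitesimal flex
print(R2, np.linalg.matrix_rank(R2))   # two rows, rank 2 = d|V| - d -> infinitesimally rigid
```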
Edge-Labellings and Monochrome Subgraphs
In this section, we describe an edge-labelling on G which depends on the placement of the bar-joint framework (G, p) in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\) relative to the facets of \({\mathcal {P}}\). We provide methods for identifying infinitesimally flexible frameworks and subframeworks which are relatively infinitesimally rigid. We then characterise infinitesimal rigidity for bar-joint frameworks with d framework colours in terms of the monochrome subgraphs induced by this edge-labelling.
Edge-Labellings
Let (G, p) be a general bar-joint framework in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\) (i.e. it is not assumed here that (G, p) is finite or well-positioned). Since \({\mathcal {P}}\) is symmetric in \({\mathbb {R}}^d\), if F is a facet of \({\mathcal {P}}\) then \(-F\) is also a facet of \({\mathcal {P}}\). Denote by \(\varPhi ({\mathcal {P}})\) the collection of all pairs \([F]=\{F,-F\}\). For each edge \(vw\in E(G)\), define
$$\begin{aligned} \varPhi (vw)=\left\{ [F]\in \varPhi ({\mathcal {P}}):p_v-p_w\in {\text {cone}}(F)\cup {\text {cone}}(-F)\right\} . \end{aligned}$$
We refer to the elements of \(\varPhi (vw)\) as the framework colours of the edge vw. For example, if \(p_v-p_w\) lies in the conical hull of exactly one facet of \({\mathcal {P}}\), then the edge vw has just one framework colour. If \(p_v-p_w\) lies along a ray through an extreme point of \({\mathcal {P}}\), then vw has at least d distinct framework colours. By Lemma 3, [F] is a framework colour for an edge vw if and only if either \(\varphi _F\) or \(-\varphi _F\) is a support functional for \(\tfrac{p_v-p_w}{\Vert p_v-p_w\Vert _{\mathcal {P}}}\).
For each vertex \(v_0\in V(G)\), denote by \(\varPhi (v_0)\) the collection of framework colours of all edges which are incident with \(v_0\):
$$\begin{aligned} \varPhi (v_0)=\bigcup _{v_0w\in E(G)} \varPhi (v_0w). \end{aligned}$$
If a (finite or infinite) bar-joint framework (G, p) is infinitesimally rigid in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\), then \(|\varPhi (v)|\ge d\) for each vertex \(v\in V(G)\).
If \(v_0\in V(G)\) and \(|\varPhi (v_0)|< d\), then there exists non-zero
$$\begin{aligned} x\in \bigcap _{[F]\in \varPhi (v_0)} \ker \varphi _{F}. \end{aligned}$$
By Theorem 5, if \(u:V(G)\rightarrow {\mathbb {R}}^d\) is defined by
$$\begin{aligned} u_v = \left\{ \begin{array}{ll} x &{} \text{ if } v=v_0, \\ 0 &{} \text{ if } v\not =v_0. \end{array}\right. \end{aligned}$$
then u is a non-trivial infinitesimal flex of (G, p). \(\square \)
We now consider the subgraphs of G which are spanned by edges possessing a particular framework colour. For each facet F of \({\mathcal {P}}\), define
$$\begin{aligned} E_F(G,p)=\{vw\in E(G):[F]\in \varPhi (vw)\} \end{aligned}$$
and let \(G_F\) be the subgraph of G spanned by \(E_F(G,p)\). We refer to \(G_F\) as a monochrome subgraph of G.
Denote by \(\varPhi (G,p)\) the collection of all framework colours of edges of G:
$$\begin{aligned} \varPhi (G,p)=\bigcup _{vw\in E(G)} \varPhi (vw). \end{aligned}$$
We refer to the elements of \(\varPhi (G,p)\) as the framework colours of the bar-joint framework (G, p).
Let (G, p) be a (finite or infinite) bar-joint framework which is infinitesimally rigid in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\). If C is a collection of framework colours of (G, p) with \(|\varPhi (G,p)\backslash C|<d\), then
$$\begin{aligned} \bigcup _{[F]\in C}G_F \end{aligned}$$
contains a spanning tree of G.
Suppose that \(\bigcup _{[F]\in C}G_F\) does not contain a spanning tree of G. Then there exists a partition \(V(G) = V_1 \cup V_2\) for which there is no edge \(v_1v_2\in E(G)\) with framework colour contained in C satisfying \(v_1\in V_1\) and \(v_2\in V_2\). Since \(|\varPhi (G,p)\backslash C|<d\), there exists non-zero
$$\begin{aligned} x\in \bigcap _{[F]\in \varPhi (G,p)\backslash C} \ker \varphi _{F}. \end{aligned}$$
$$\begin{aligned} u_v = \left\{ \begin{array}{ll} x &{} \text { if }v\in V_1, \\ 0 &{} \text { if }v\in V_2, \end{array}\right. \end{aligned}$$
then u is a non-trivial infinitesimal flex of (G, p). We conclude that \(\bigcup _{[F]\in C}G_F\) contains a spanning tree of G. \(\square \)
It is possible to construct examples which show that the converse to Proposition 11 does not hold in general. In Theorem 13, we show that a converse statement does hold under the additional assumption that \(|\varPhi (G,p)|=d\).
Edge-Labelled Paths and Relative Infinitesimal Rigidity
Let (G, p) be a finite bar-joint framework in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\) and, for each edge \(vw\in E(G)\), let \(X_{vw}\) be the vector subspace of \({\mathbb {R}}^d\):
$$\begin{aligned} X_{vw} = \bigcap _{\varphi _F\in {\text {supp}}_{\varPhi }(vw)} \ker \varphi _F = \bigcap _{[F]\in \varPhi (vw)} \ker \varphi _F. \end{aligned}$$
If \(\gamma =\{v_1v_2,v_2v_3,\ldots ,v_{n-1}v_n\}\) is a path in G from a vertex \(v_1\) to a vertex \(v_n\), then we define
$$\begin{aligned} X_{\gamma } = X_{v_1v_2}+X_{v_2v_3}+\cdots +X_{v_{n-1}v_n}. \end{aligned}$$
For each pair of vertices \(v,w\in V(G)\), denote by \(\varGamma _G(v,w)\) the set of all paths \(\gamma \) in G from v to w.
A subframework of (G, p) is a bar-joint framework (H, p) obtained by restricting p to the vertex set of a subgraph H. We say that (H, p) is relatively infinitesimally rigid in (G, p) if the restriction of every infinitesimal flex of (G, p) to (H, p) is trivial.
Let (G, p) be a finite bar-joint framework in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\) and let (H, p) be a subframework of (G, p). If for each pair of vertices \(v,w\in V(H)\)
$$\begin{aligned} \bigcap _{\gamma \in \varGamma _G(v,w)} X_{\gamma } = \{0\}, \end{aligned}$$
then (H, p) is relatively infinitesimally rigid in (G, p).
Let \(u\in {\mathcal {F}}(G,p)\) be an infinitesimal flex of (G, p) and let \(v,w\in V(H)\). Suppose \(\gamma \in \varGamma _G(v,w)\), where \(\gamma =\{v_1v_2,\ldots ,v_{n-1}v_n\}\) is a path in G with \(v=v_1\) and \(w=v_n\). Then by Theorem 5,
$$\begin{aligned} u_v-u_w=(u_{v_1}-u_{v_2})+(u_{v_2}-u_{v_3})+ \cdots +(u_{v_{n-1}}-u_{v_n})\in X_{\gamma }. \end{aligned}$$
Since this holds for all paths in \(\varGamma _G(v,w)\), the hypothesis implies that \(u_v=u_w\). Applying this argument to every pair of vertices in H, we see that the restriction of u to V(H) is constant and hence a trivial infinitesimal flex of (H, p). Thus (H, p) is relatively infinitesimally rigid in (G, p). \(\square \)
Monochrome Spanning Subgraphs
Applying the results of the previous sections, we can now characterise the infinitesimally rigid bar-joint frameworks in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\) which use exactly d framework colours.
Theorem 13
Let (G, p) be a (finite or infinite) bar-joint framework in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\) and suppose that \(|\varPhi (G,p)|=d\). Then the following statements are equivalent:
(G, p) is infinitesimally rigid.
\(G_F\) contains a spanning tree of G for each \([F]\in \varPhi (G,p)\).
The implication \(\mathrm{{(i)}}\Rightarrow \mathrm{{(ii)}}\) follows from Proposition 11. To prove \(\mathrm{{(ii)}}\Rightarrow \mathrm{{(i)}}\), let \(u\in {\mathcal {F}}(G,p)\). If \(v,w\in V(G)\), then for each framework colour \([F]\in \varPhi (G,p)\) there exists a path in \(G_F\) from v to w. Hence
$$\begin{aligned} \bigcap _{\gamma \in \varGamma _G(v,w)} X_{\gamma } \subseteq \bigcap _{[F]\in \varPhi (G,p)} \ker \varphi _{F}=\{0\} \end{aligned}$$
and, by Proposition 12, \(u_v=u_w\). Applying this argument to all pairs \(v,w\in V(G)\), we see that u is a trivial infinitesimal flex and so (G, p) is infinitesimally rigid. \(\square \)
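The criterion of Theorem 13 is easy to test computationally. The sketch below (illustrative Python, again for the \(\ell ^1\) ball in the plane; the placement of \(K_4\) is an illustrative choice, not one taken from the paper) labels each edge by its framework colours and checks whether every monochrome subgraph connects all of the vertices:

```python
import numpy as np
from collections import defaultdict

ext_polar = [(1.0, 1.0), (1.0, -1.0), (-1.0, 1.0), (-1.0, -1.0)]   # facets of the l^1 ball

def colour(F_hat):
    """Canonical label for the facet pair [F] = {F, -F}."""
    return max(tuple(F_hat), tuple(-c for c in F_hat))

def monochrome_subgraphs(edges, p, tol=1e-12):
    sub = defaultdict(list)
    for v, w in edges:
        x = np.asarray(p[v], float) - np.asarray(p[w], float)
        n = max(float(x @ np.asarray(y)) for y in ext_polar)
        for F_hat in ext_polar:
            if abs(float(x @ np.asarray(F_hat)) - n) <= tol:        # [F] is a framework colour of vw
                sub[colour(F_hat)].append((v, w))
    return sub

def spans(vertices, edge_list):
    """True if the edge list connects every vertex (simple union-find)."""
    parent = {v: v for v in vertices}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for v, w in edge_list:
        parent[find(v)] = find(w)
    return len({find(v) for v in vertices}) == 1

V = ['v0', 'v1', 'v2', 'v3']
E = [(a, b) for i, a in enumerate(V) for b in V[i + 1:]]            # the six edges of K_4
p = {'v0': (0.0, 0.0), 'v1': (0.9, 0.1), 'v2': (0.81, -0.09), 'v3': (1.89, -0.01)}
sub = monochrome_subgraphs(E, p)
print({c: spans(V, es) for c, es in sub.items()})   # both colours span -> infinitesimally rigid
```

In this instance each of the two monochrome subgraphs has exactly three edges and connects all four vertices, i.e. each is a spanning tree of \(K_4\).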
A bar-joint framework (G, p) is minimally infinitesimally rigid in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\) if it is infinitesimally rigid and every subframework obtained by removing a single edge from G is infinitesimally flexible.
Corollary 14
Let (G, p) be a (finite or infinite) bar-joint framework in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\) and suppose that \(|\varPhi (G,p)|=d\). If \(G_F\) is a spanning tree in G for each \([F]\in \varPhi (G,p)\), then (G, p) is minimally infinitesimally rigid.
By Theorem 13, (G, p) is infinitesimally rigid. If any edge vw is removed from G, then \(G_F\) is no longer a spanning tree for some \([F]\in \varPhi (G,p)\). By Theorem 13, the subframework \((G\backslash \{vw\},p)\) is not infinitesimally rigid and so we conclude that (G, p) is minimally infinitesimally rigid. \(\square \)
There exist bar-joint frameworks which show that the converse statement to Corollary 14 does not hold in full generality. In the following corollary, the converse is established for bar-joint frameworks that are well-positioned.
Let (G, p) be a (finite or infinite) well-positioned bar-joint framework in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\) and suppose that \(|\varPhi (G,p)|=d\). Then the following statements are equivalent:
(G, p) is minimally infinitesimally rigid.
\(G_F\) is a spanning tree in G for each \([F]\in \varPhi (G,p)\).
\(\mathrm{{(i)}}\Rightarrow \mathrm{{(ii)}}\). Let \([F]\in \varPhi (G,p)\). If (G, p) is minimally infinitesimally rigid, then by Theorem 13 the monochrome subgraph \(G_F\) contains a spanning tree of G. Suppose vw is an edge of G which is contained in \(G_F\). Since (G, p) is minimally infinitesimally rigid, \((G\backslash \{vw\},p)\) is infinitesimally flexible. Since (G, p) is well-positioned, vw is contained in exactly one monochrome subgraph of G and so \(G_F\) is the only monochrome subgraph which is altered by removing the edge vw from G. By Theorem 13, \(G_F\backslash \{vw\}\) does not contain a spanning tree of G. We conclude that \(G_F\) is a spanning tree of G. The implication \(\mathrm{{(ii)}}\Rightarrow \mathrm{{(i)}}\) is proved in Corollary 14. \(\square \)
An Analogue of Laman's Theorem
In this section, we address the problem of whether there exists a combinatorial description of the class of graphs for which a minimally infinitesimally rigid placement exists in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\). We restrict our attention to finite bar-joint frameworks and prove that in two dimensions such a characterisation exists (Theorem 23). This result is analogous to Laman's theorem [14] for bar-joint frameworks in the Euclidean plane and extends [12, Thm. 4.6] which holds in the case where \({\mathcal {P}}\) is a quadrilateral.
Regular Placements
Let \(\omega (G,{\mathbb {R}}^d,{\mathcal {P}})\) denote the set of all well-positioned placements of a finite simple graph G in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\). A bar-joint framework (G, p) is regular in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\) if the function
$$\begin{aligned} \omega (G,{\mathbb {R}}^d,{\mathcal {P}})\rightarrow \{1,2,\ldots ,d|V(G)|-d\}, \quad x\mapsto {\text {rank}}~R_{\mathcal {P}}(G,x) \end{aligned}$$
achieves its maximum value at p.
Lemma 16
Let G be a finite simple graph.
The set of placements of G in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\) which are both well-positioned and regular is an open set in \(\prod _{v\in V(G)}{\mathbb {R}}^{d}\).
The set of placements of G in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\) which are well-positioned and not regular is an open set in \(\prod _{v\in V(G)}{\mathbb {R}}^{d}\).
Let p be a well-positioned placement of G and let U be an open neighbourhood of p as in the statement of Proposition 6. The matrix-valued function \(x\mapsto R_{\mathcal {P}}(G,x)\) is constant on U and so either (G, x) is regular for all \(x\in U\) or (G, x) is not regular for all \(x\in U\). \(\square \)
A finite simple graph G is (minimally) rigid in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\) if there exists a well-positioned placement of G which is (minimally) infinitesimally rigid.
Example 17
The complete graph \(K_4\) is minimally rigid in \(({\mathbb {R}}^2,\Vert \cdot \Vert _{\mathcal {P}})\) for every polyhedral norm \(\Vert \cdot \Vert _{\mathcal {P}}\). To see this, let \(F_1,F_2,\ldots ,F_n\) be the facets of \({\mathcal {P}}\) and let \(x_0\in \mathrm{ext}({\mathcal {P}})\) be any extreme point of \({\mathcal {P}}\). Then \(x_0\) is contained in exactly two facets, \(F_1\) and \(F_2\) say. Choose a point \(x_1\) in the relative interior of \(F_1\) and a point \(x_2\) in the relative interior of \(F_2\). Then by formulas (3) and (4),
$$\begin{aligned}&\max _{k\not =1}\, (x_1\cdot \hat{F}_k) <\Vert x_1\Vert _{\mathcal {P}}= x_1\cdot \hat{F}_1=1, \qquad \mathrm{(9)} \end{aligned}$$
$$\begin{aligned}&\max _{k\not =2}\, (x_2\cdot \hat{F}_k) < \Vert x_2\Vert _{\mathcal {P}}=x_2\cdot \hat{F}_2=1. \qquad \mathrm{(10)} \end{aligned}$$
Since \((x_0\cdot \hat{F}_1) = (x_0\cdot \hat{F}_2)=\Vert x_0\Vert _{\mathcal {P}}=1\), if \(x_1\) and \(x_2\) are chosen to lie in a sufficiently small neighbourhood of \(x_0\) then by continuity we may assume
$$\begin{aligned} x_1\cdot \hat{F}_2= & {} \max _{k\not =1}\, (x_1\cdot \hat{F_k})>0, \qquad \mathrm{(11)} \end{aligned}$$
$$\begin{aligned} x_2\cdot \hat{F}_1= & {} \max _{k\not =2}\, (x_2\cdot \hat{F_k})>0. \qquad \mathrm{(12)} \end{aligned}$$
We may also assume without loss of generality that
$$\begin{aligned} x_1\cdot \hat{F}_2 = x_2\cdot \hat{F}_1. \qquad \mathrm{(13)} \end{aligned}$$
Define a placement \(p:V(K_4)\rightarrow {\mathbb {R}}^2\) by setting
$$\begin{aligned} p_{v_0} = (0,0), \quad p_{v_1}=x_1, \quad p_{v_2} = (1-\varepsilon )x_2, \quad p_{v_3}=x_1+(1+\varepsilon )x_2, \end{aligned}$$
where \(0<\varepsilon <1\). The edges \(v_0v_1\), \(v_0v_2\) and \(v_1v_3\) have framework colours
$$\begin{aligned} \varPhi (v_0v_1)=[F_1], \quad \varPhi (v_0v_2) = [F_2], \quad \varPhi (v_1v_3)=[F_2]. \end{aligned}$$
To determine the framework colours for the remaining edges, we will apply the above identities together with formulas (3) and (4). Consider the edge \(v_2v_3\). If \(k\not =1\) and \(\varepsilon \) is sufficiently small, then applying (9)
$$\begin{aligned} (p_{v_3}-p_{v_2})\cdot \hat{F}_k = (x_1\cdot \hat{F}_k)+2\varepsilon \, (x_2\cdot \hat{F}_k) <1. \end{aligned}$$
Also by (9) and (12), we have
$$\begin{aligned} (p_{v_3}-p_{v_2})\cdot \hat{F}_1 =(x_1\cdot \hat{F}_1)+2\varepsilon \, (x_2\cdot \hat{F}_1)=1+2\varepsilon \, (x_2\cdot \hat{F}_1)>1. \end{aligned}$$
We conclude that \(F_1\) is the unique facet of \({\mathcal {P}}\) for which \(\Vert p_{v_3}-p_{v_2}\Vert _{\mathcal {P}}= (p_{v_3}-p_{v_2})\cdot \hat{F}_1\) and so \(p_{v_3}-p_{v_2}\in {\text {cone}}(F_1)^\circ \). Thus \(\varPhi (v_2v_3)=[F_1]\). Consider the edge \(v_0v_3\). Applying (10) and (11), for \(k\not =1,2\) we have
$$\begin{aligned} (p_{v_3}-p_{v_0})\cdot \hat{F}_k =(x_1\cdot \hat{F}_k)+(1+\varepsilon )\, (x_2\cdot \hat{F}_k) < (x_1\cdot \hat{F}_2)+1+\varepsilon . \end{aligned}$$
By applying (13),
$$\begin{aligned} (p_{v_3}-p_{v_0})\cdot \hat{F}_1 = (x_1\cdot \hat{F}_1)+(1+\varepsilon ) (x_2\cdot \hat{F}_1)<(x_1\cdot \hat{F}_2 )+1+\varepsilon \end{aligned}$$
and by (10),
$$\begin{aligned} (p_{v_3}-p_{v_0})\cdot \hat{F}_2 = (x_1\cdot \hat{F}_2)+(1+\varepsilon )(x_2\cdot \hat{F}_2)=(x_1 \cdot \hat{F}_2)+1+\varepsilon . \end{aligned}$$
Hence \(F_2\) is the unique facet of \({\mathcal {P}}\) for which \(\Vert p_{v_3}-p_{v_0}\Vert _{\mathcal {P}}= (p_{v_3}-p_{v_0})\cdot \hat{F}_2\). Thus \(p_{v_3}-p_{v_0}\in {\text {cone}}(F_2)^\circ \) and so \(\varPhi (v_0v_3)=[F_2]\). Finally, consider the edge \(v_1v_2\). Applying (13), we have
$$\begin{aligned} (p_{v_2}-p_{v_1})\cdot \hat{F}_2 = (1-\varepsilon )(x_2\cdot \hat{F}_2) - (x_1\cdot \hat{F}_2) =1-\varepsilon - (x_2\cdot \hat{F}_1) \end{aligned}$$
and this value is positive provided \(\varepsilon \) is sufficiently small. By (9), we have
$$\begin{aligned} (p_{v_2}-p_{v_1})\cdot (-\hat{F}_1)=-(1-\varepsilon ) (x_2\cdot \hat{F}_1)+(x_1\cdot \hat{F}_1) =1+\varepsilon (x_2\cdot \hat{F}_1)-(x_2\cdot \hat{F}_1). \end{aligned}$$
We conclude that \((p_{v_2}-p_{v_1})\cdot (\pm \hat{F}_2)<\Vert p_{v_2}-p_{v_1}\Vert _{\mathcal {P}}\). Hence \(p_{v_2}-p_{v_1}\notin {\text {cone}}(F_2)\). By making a small perturbation, we can assume that \(p_{v_2}-p_{v_1}\) is contained in the conical hull of exactly one facet of \({\mathcal {P}}\) and so \(\varPhi (v_1v_2)=[F_k]\) for some \([F_k]\not =[F_2]\). Thus (G, p) is well-positioned. This framework colouring is illustrated in Fig. 2 with monochrome subgraphs \(G_{F_1}\) and \(G_{F_2}\) indicated in black and grey, respectively, and \(G_{F_k}\) indicated by the dotted line. Suppose \(u\in {\mathcal {F}}(K_4,p)\). To show that u is a trivial infinitesimal flex, we apply the method of Proposition 12. The vertices \(v_0\) and \(v_1\) are joined by monochrome paths in both \(G_{F_1}\) and \(G_{F_2}\) and so \(u_{v_0}=u_{v_1}\). Similarly, \(u_{v_2}=u_{v_3}\). The vertices \(v_1\) and \(v_2\) are joined by monochrome paths in \(G_{F_2}\) and \(G_{F_k}\) and so \(u_{v_1}=u_{v_2}\). Thus u is a constant and hence trivial infinitesimal flex of \((K_4,p)\). We conclude that \((K_4,p)\) and all regular and well-positioned placements of \(K_4\) are infinitesimally rigid.
A framework colouring for an infinitesimally rigid placement of \(K_4\) in \(({\mathbb {R}}^2,\Vert \cdot \Vert _{\mathcal {P}})\)
Counting Conditions
The Maxwell counting conditions [17] state that a finite minimally infinitesimally rigid bar-joint framework (G, p) in Euclidean space \({\mathbb {R}}^d\) must satisfy \(|E(G)|=d|V(G)|-{d+1\atopwithdelims ()2}\) with inequalities \(|E(H)|\le d|V(H)|-{d+1\atopwithdelims ()2}\) for all subgraphs H containing at least d vertices. The following analogous statement holds for polyhedral norms.
Proposition 18
Let (G, p) be a finite and well-positioned bar-joint framework in \(({\mathbb {R}}^d,\Vert \cdot \Vert _{\mathcal {P}})\). If (G, p) is minimally infinitesimally rigid, then
\(|E(G)|= d|V(G)|-d\) and
\(|E(H)|\le d|V(H)|-d\) for all subgraphs H of G.
If (G, p) is minimally infinitesimally rigid, then by Proposition 8 the rigidity matrix \(R_{\mathcal {P}}(G,p)\) is independent and
$$\begin{aligned} |E(G)| = {\text {rank}}~R_{\mathcal {P}}(G,p) = d|V(G)|-d. \end{aligned}$$
The rigidity matrix for any subframework of (G, p) is also independent and so
$$\begin{aligned} |E(H)| = {\text {rank}}~R_{\mathcal {P}}(H,p) \le d|V(H)|-d \end{aligned}$$
for all subgraphs H. \(\square \)
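As a concrete check of these counts, consider the complete graph \(K_4\) of Example 17 with \(d=2\):
$$\begin{aligned} |E(K_4)|=\binom{4}{2}=6=2\cdot 4-2, \qquad |E(H)|\le \binom{|V(H)|}{2}\le 2|V(H)|-2 \ \text { for } 1\le |V(H)|\le 4, \end{aligned}$$
since \(\binom{n}{2}\le 2n-2\) precisely when \((n-1)(n-4)\le 0\). Thus \(K_4\) satisfies the counting conditions of the proposition, consistent with its minimal rigidity established in Example 17.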
A graph G is (d, d)-tight if it satisfies the counting conditions in the above proposition. The class of (2, 2)-tight graphs has the property that every member can be constructed from a single vertex by applying a sequence of finitely many allowable graph moves (see [18]). The allowable graph moves are:
The Henneberg 1-move (also called vertex addition, or 0-extension).
The Henneberg 2-move (also called edge splitting, or 1-extension).
The edge-to-\(K_3\) move (also called vertex splitting).
The vertex-to-\(K_4\) move.
A Henneberg 1-move \(G\rightarrow G'\) adjoins a vertex \(v_0\) to G together with two edges \(v_0v_1\) and \(v_0v_2\) where \(v_1,v_2\in V(G)\).
Proposition 19
The Henneberg 1-move preserves infinitesimal rigidity for well-positioned bar-joint frameworks in \(({\mathbb {R}}^2,\Vert \cdot \Vert _{\mathcal {P}})\).
Suppose (G, p) is well-positioned and infinitesimally rigid and let \(G\rightarrow G'\) be a Henneberg 1-move on the vertices \(v_1,v_2\in V(G)\). Choose distinct \([F_1],[F_2]\in \varPhi ({\mathcal {P}})\) and define a placement \(p'\) of \(G'\) by \(p'_{v}=p_{v}\) for all \(v\in V(G)\) and
$$\begin{aligned} p'_{v_0}\in \left( p_{v_1}+\left( {\text {cone}}\left( F_1\right) ^{\circ }\cup -{\text {cone}}\left( F_1\right) ^\circ \right) \right) \cap \left( p_{v_2}+\left( {\text {cone}}\left( F_2\right) ^{\circ }\cup -{\text {cone}}\left( F_2\right) ^\circ \right) \right) . \end{aligned}$$
Then \((G',p')\) is well-positioned and the edges \(v_0v_1\) and \(v_0v_2\) have framework colours \([F_1]\) and \([F_2]\), respectively. If \(u\in {\mathcal {F}}(G',p')\), then the restriction of u to V(G) is an infinitesimal flex of (G, p). This restriction must be trivial and hence constant. In particular, \(u_{v_1}=u_{v_2}\). By Theorem 5, \(\varphi _{F_1}(u_{v_0}-u_{v_1})=0\) and \(\varphi _{F_2}(u_{v_0}-u_{v_1})=\varphi _{F_2}(u_{v_0}-u_{v_2})=0\) and so \(u_{v_0}=u_{v_1}\). We conclude that \((G',p')\) is infinitesimally rigid. \(\square \)
A Henneberg 2-move \(G\rightarrow G'\) removes an edge \(v_1v_2\) from G and adjoins a vertex \(v_0\) together with three edges \(v_0v_1\), \(v_0v_2\) and \(v_0v_3\).
Proposition 20
The Henneberg 2-move preserves infinitesimal rigidity for well-positioned bar-joint frameworks in \(({\mathbb {R}}^2,\Vert \cdot \Vert _{\mathcal {P}})\).
Suppose (G, p) is well-positioned and infinitesimally rigid and let \(G\rightarrow G'\) be a Henneberg 2-move on the vertices \(v_1,v_2,v_3\in V(G)\) and the edge \(v_1v_2\in E(G)\). Let \([F_1]\) be the unique framework colour for the edge \(v_1v_2\) and choose any \([F_2]\in \varPhi ({\mathcal {P}})\) with \([F_2]\not =[F_1]\). Define a placement \(p'\) of \(G'\) by setting \(p'_{v}=p_{v}\) for all \(v\in V(G)\) and choosing \(p'_{v_0}\) to lie on the intersection of the line through \(p_{v_1}\) and \(p_{v_2}\) and the double cone \(p_{v_3}+({\text {cone}}(F_2)^\circ \cup -{\text {cone}}(F_2)^\circ )\). (If \(p_{v_1},p_{v_2},p_{v_3}\) are collinear, then choose \(p'_{v_0}\) to lie in the intersection of this double cone and a small neighbourhood of \(p_{v_3}\).) Then \((G',p')\) is well-positioned. Both edges \(v_0v_1\) and \(v_0v_2\) have framework colour \([F_1]\) and the edge \(v_0v_3\) has framework colour \([F_2]\). If \(u\in {\mathcal {F}}(G',p')\), then by Theorem 5
$$\begin{aligned} \varphi _{F_1}(u_{v_1}-u_{v_2})=\varphi _{F_1}(u_{v_1}-u_{v_0})+ \varphi _{F_1}(u_{v_0}-u_{v_2})=0. \end{aligned}$$
Hence the restriction of u to V(G) is an infinitesimal flex of (G, p) and must be trivial. In particular, \(u_{v_1}=u_{v_3}\). Now \(\varphi _{F_1}(u_{v_0}-u_{v_1})=0\) and \(\varphi _{F_2}(u_{v_0}-u_{v_1})=\varphi _{F_2}(u_{v_0}-u_{v_3})=0\) and so \(u_{v_0}=u_{v_1}\). We conclude that u is a constant and hence trivial infinitesimal flex of \((G',p')\). \(\square \)
Let \(v_1v_2\) be an edge of G. An edge-to-\(K_3\) move \(G\rightarrow G'\) (on the edge \(v_1v_2\) and the vertex \(v_1\)) is obtained in two steps: Firstly, adjoin a new vertex \(v_0\) and two new edges \(v_0v_1\) and \(v_0v_2\) to G (creating a copy of \(K_3\) with vertices \(v_0,v_1,v_2\)). Secondly, each edge \(v_1w\) of G which is incident with \(v_1\) is either left unchanged or is removed and replaced with the edge \(v_0w\).
Proposition 21
The edge-to-\(K_3\) move preserves infinitesimal rigidity for finite well-positioned bar-joint frameworks in \(({\mathbb {R}}^2,\Vert \cdot \Vert _{\mathcal {P}})\).
Suppose (G, p) is well-positioned and infinitesimally rigid and let \(G\rightarrow G'\) be an edge-to-\(K_3\) move on the vertex \(v_1\in V(G)\) and the edge \(v_1v_2\in E(G)\). Let \([F_1]\) be the unique framework colour for \(v_1v_2\) and choose any \([F_2]\in \varPhi ({\mathcal {P}})\) with \([F_2]\not =[F_1]\). Since \(v_1\) has finite valence, there exists an open ball \(B(p_{v_1},r)\) such that if \(p_{v_1}\) is replaced with any point \(x\in B(p_{v_1},r)\), then the induced framework colouring of G is left unchanged. Define a placement \(p'\) of \(G'\) by setting \(p'_{v}=p_{v}\) for all \(v\in V(G)\) and choosing
$$\begin{aligned} p'_{v_0}\in (p_{v_1}+{\text {cone}}(F_2)^{\circ })\cap B(p_{v_1},r). \end{aligned}$$
Then \((G',p')\) is well-positioned. Suppose \(u\in {\mathcal {F}}(G',p')\) is an infinitesimal flex of \((G',p')\). The framework colours for the edges \(v_0v_1\) and \(v_0v_2\) are \([F_2]\) and \([F_1]\), respectively. Thus there exists a path from \(v_0\) to \(v_1\) in the monochrome subgraph \(G'_{F_1}\) given by the edges \(v_1v_2,v_2v_0\), and there exists a path from \(v_0\) to \(v_1\) in the monochrome subgraph \(G'_{F_2}\) given by the edge \(v_0v_1\). By the relative rigidity method of Proposition 12, \(u_{v_0}=u_{v_1}\). If an edge \(v_1w\) in G has framework colour [F] induced by (G, p) and is replaced by \(v_0w\) in \(G'\), then the framework colour is unchanged. Thus applying Theorem 5,
$$\begin{aligned} \varphi _{F}(u_{v_1}-u_w) = \varphi _F(u_{v_1}-u_{v_0})+\varphi _F(u_{v_0}-u_w)=0, \end{aligned}$$
and so the restriction of u to V(G) is an infinitesimal flex of (G, p). This restriction is constant since (G, p) is infinitesimally rigid and so u is a trivial infinitesimal flex of \((G',p')\). \(\square \)
A vertex-to-\(K_4\) move \(G\rightarrow G'\) replaces a vertex \(v_0\in V(G)\) with a copy of the complete graph \(K_4\) by adjoining three new vertices \(v_1,v_2,v_3\) and six edges \(v_0v_1\), \(v_0v_2\), \(v_0v_3\), \(v_1v_2\), \(v_1v_3\), \(v_2v_3\). Each edge \(v_0w\) of G which is incident with \(v_0\) may be left unchanged or replaced by one of \(v_1w\), \(v_2w\) or \(v_3w\).
Proposition 22
The vertex-to-\(K_4\) move preserves infinitesimal rigidity for finite well-positioned bar-joint frameworks in \(({\mathbb {R}}^2,\Vert \cdot \Vert _{\mathcal {P}})\).
Suppose (G, p) is well-positioned and infinitesimally rigid and let \(G\rightarrow G'\) be a vertex-to-\(K_4\) move on the vertex \(v_0\in V(G)\) which introduces new vertices \(v_1\), \(v_2\) and \(v_3\). Since \(v_0\) has finite valence, there exists an open ball \(B(p_{v_0},r)\) such that if \(p_{v_0}\) is replaced with any point \(x\in B(p_{v_0},r)\), then (G, x) and (G, p) induce the same framework colouring on G. Let \((K_4,\tilde{p})\) be the well-positioned and infinitesimally rigid placement of \(K_4\) constructed in Example 17. Define a well-positioned placement \(p'\) of \(G'\) by setting \(p'_{v}=p_{v}\) for all \(v\in V(G)\) and
$$\begin{aligned} p'_{v_1}=p_{v_0}+\varepsilon \tilde{p}_{v_1}, \quad p'_{v_2}=p_{v_0}+\varepsilon \tilde{p}_{v_2}, \quad p'_{v_3}=p_{v_0}+\varepsilon \tilde{p}_{v_3}, \end{aligned}$$
where \(\varepsilon >0\) is chosen to be sufficiently small so that \(p'_{v_1}\), \(p'_{v_2}\) and \(p'_{v_3}\) are all contained in \(B(p_{v_0},r)\). Suppose \(u\in {\mathcal {F}}(G',p')\). By the argument in Example 17, the restriction of u to the vertices \(v_0,v_1,v_2,v_3\) is constant. Thus if \(v_0w\) is an edge of G with framework colour [F] which is replaced by \(v_kw\) in \(G'\), then applying Theorem 5,
$$\begin{aligned} \varphi _{F}(u_{v_0}-u_w) = \varphi _F(u_{v_0}-u_{v_k})+\varphi _F(u_{v_k}-u_w)=0, \end{aligned}$$
and so the restriction of u to V(G) is an infinitesimal flex of (G, p). Since (G, p) is infinitesimally rigid, this restriction is constant, and we conclude that u is a trivial infinitesimal flex of \((G',p')\). \(\square \)
We now show that the class of finite graphs which have minimally infinitesimally rigid well-positioned placements in \(({\mathbb {R}}^2,\Vert \cdot \Vert _{\mathcal {P}})\) is precisely the class of (2, 2)-tight graphs. In particular, the existence of such a placement does not depend on the choice of polyhedral norm on \({\mathbb {R}}^2\).
Theorem 23
Let G be a finite simple graph and let \(\Vert \cdot \Vert _{\mathcal {P}}\) be a polyhedral norm on \({\mathbb {R}}^2\). The following statements are equivalent:
(i) G is minimally rigid in \(({\mathbb {R}}^2,\Vert \cdot \Vert _{\mathcal {P}})\).
(ii) G is (2, 2)-tight.
\(\mathrm{{(i)}}\Rightarrow \mathrm{{(ii)}}\). If G is minimally rigid, then there exists a placement p such that (G, p) is minimally infinitesimally rigid in \(({\mathbb {R}}^2,\Vert \cdot \Vert _{\mathcal {P}})\) and the result follows from Proposition 18.
\(\mathrm{{(ii)}}\Rightarrow \mathrm{{(i)}}\). If G is (2, 2)-tight, then there exists a finite sequence of allowable graph moves, \(K_1\longrightarrow G_2\longrightarrow G_3\longrightarrow \cdots \longrightarrow G\). Every placement of \(K_1\) is certainly infinitesimally rigid. By Propositions 19–22, for each graph in the sequence there exists a well-positioned and infinitesimally rigid placement in \(({\mathbb {R}}^2,\Vert \cdot \Vert _{\mathcal {P}})\). In particular, (G, p) is infinitesimally rigid for some well-positioned placement p. If a single edge is removed from G, then by Proposition 18 the resulting subframework is infinitesimally flexible. Hence (G, p) is minimally infinitesimally rigid in \(({\mathbb {R}}^2,\Vert \cdot \Vert _{\mathcal {P}})\). \(\square \)
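A small worked instance of such a construction sequence (given here purely as an illustration) is
$$\begin{aligned} K_1 \;\xrightarrow {\;\text {vertex-to-}K_4\;}\; K_4 \;\xrightarrow {\;\text {Henneberg 1-move}\;}\; G_5, \end{aligned}$$
where \(G_5\) is obtained from \(K_4\) by adjoining one new vertex and two edges, so that \(|E(G_5)|=6+2=8=2\cdot 5-2\). Each move preserves the count \(|E|=2|V|-2\), and by Propositions 19 and 22 each step admits a well-positioned infinitesimally rigid placement in \(({\mathbb {R}}^2,\Vert \cdot \Vert _{\mathcal {P}})\).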
Alexandrov, A.D.: Konvexe Polyeder. Akademie-Verlag, Berlin (1958)
Asimow, L., Roth, B.: The rigidity of graphs. Trans. Am. Math. Soc. 245, 279–289 (1978)
Asimow, L., Roth, B.: The rigidity of graphs II. J. Math. Anal. Appl. 68, 171–190 (1979)
Bach, F.: Structured sparsity-inducing norms through submodular functions. Adv. Neural Inf. Process. Syst. 23, 118–126 (2010)
Borcea, C.S., Streinu, I.: Periodic frameworks and flexibility. Proc. R. Soc. A 466, 2633–2649 (2010)
Cauchy A.: Sur les polygones et polyèdres. Second Mémoir. J. École Polytechn. 9, 87–99 (1813); Oeuvres. T. 1. Paris 1905, pp. 26–38
Connelly, R.: Generic global rigidity. Discrete Comput. Geom. 33(4), 549–563 (2005)
Gortler, S., Healy, A., Thurston, D.: Characterizing generic global rigidity. Am. J. Math. 132(4), 897–939 (2010)
Graver, J., Servatius, B., Servatius, H.: Combinatorial Rigidity. Graduate Texts in Mathematics, vol. 2. American Mathematical Society, Providence, RI (1993)
Grünbaum, B.: Convex Polytopes. Pure and Applied Mathematics, vol. 16. Interscience Publishers, Wiley, New York (1967)
Jackson, B., Jordan, T.: Connected rigidity matroids and unique realisations of graphs. J. Comb. Theory B 94, 1–29 (2005)
Kitson, D., Power, S.C.: Infinitesimal rigidity for non-Euclidean bar-joint frameworks. Bull. Lond. Math. Soc. 46(4), 685–697 (2014)
Kitson D., Power S.C.: The rigidity of infinite graphs. Preprint 2013. http://arxiv.org/abs/1310.1860
Laman, G.: On graphs and the rigidity of plane skeletal structures. J. Eng. Math. 4, 331–340 (1970)
Lovász, L.: Mathematical Programming: The State of the Art (Bonn, 1982). Submodular Functions and Convexity. Springer, Berlin (1983)
Malestein, J., Theran, L.: Generic combinatorial rigidity of periodic frameworks. Adv. Math. 233, 291–331 (2013)
Maxwell, J.C.: On the calculation of the equilibrium and stiffness of frames. Philos. Mag. 27, 294–299 (1864)
Nixon, A., Owen, J.C.: An inductive construction of \((2, 1)\)-tight graphs. Contrib. Discrete Math. 9(2), 91–94 (2014)
Nixon, A., Owen, J.C., Power, S.C.: Rigidity of frameworks supported on surfaces. SIAM J. Discrete Math. 26, 1733–1757 (2012)
Power, S.C.: Polynomials for crystal frameworks and the rigid unit mode spectrum. Philos. Trans. R. Soc. A 372, 20120030 (2014)
Ross, E.: The rigidity of periodic body-bar frameworks on the three-dimensional fixed torus. Philos. Trans. R. Soc. A 372, 20120112 (2014)
Schulze, B.: Symmetric versions of Laman's theorem. Discrete Comput. Geom. 44(4), 946–972 (2010)
Ward, J.E., Wendell, R.E.: Using block norms for location modeling. Oper. Res. 33(5), 1074–1090 (1985)
Whiteley, W.: Infinitesimally rigid polyhedra. I. Statics of frameworks. Trans. Am. Math. Soc. 285(2), 431–465 (1984)
Whiteley, W.: The union of matroids and the rigidity of frameworks. SIAM J. Discrete Math. 1(2), 237–255 (1988)
Whiteley, W.: Matroids and Rigid Structures. Encyclopedia of Mathematical Application, vol. 40, pp. 1–53. Cambridge University Press, Cambridge (1992)
Department of Mathematics and Statistics, Lancaster University, Lancaster, LA1 4YF, UK
Derek Kitson
Correspondence to Derek Kitson.
Supported by the Engineering and Physical Sciences Research Council [grant number EP/J008648/1].
Editor in Charge: Günter M. Ziegler
Kitson, D. Finite and Infinitesimal Rigidity with Polyhedral Norms. Discrete Comput Geom 54, 390–411 (2015). https://doi.org/10.1007/s00454-015-9706-x
Issue Date: September 2015
Bar-joint framework
Infinitesimally rigid
Laman's theorem
Polyhedral norm
Recent questions and answers in Relations and Functions
Differentiate 'Cry' and 'cry'
answered Apr 9, 2019 by priyanka.clay6
Which one of the following is a bird flu virus?
answered Feb 14, 2019 by lpmvenkateswaran621
Prove that relation R defined on the set N of natural numbers by \[x\;R\;y \Leftrightarrow 2x^2-3xy+y^2=0\] is not symmetric but it is reflexive.
answered Jan 24, 2018 by nimmisivapuri.d5
Let $f : A \rightarrow B$ be a given function. A relation R in the set A is given by $R = \{(a,b) \in A \times A : f(a) = f(b)\}.$ Check if R is an equivalence relation.
Draw a plot showing the variation of photoelectric current against the intensity of incident radiation on a given photosensitive surface.
answered Sep 4, 2017 by throttlespec
Let N denote the set of all natural numbers and R be the relation on N x N defined by $ (a,b)R(c,d) \Leftrightarrow ad(b + c) = bc(a + d).$ Show that R is an equivalence relation on N x N.
answered Jul 20, 2017 by saurabh.kumarshukla1995
Give an account of Hershey and Chase experiment. What did it conclusively prove? If both DNA and pro
answered Jul 5, 2017 by pady_1
During reproduction, the chromosome number (2n) reduces to half (n) in the gametes and again the ori
answered Jun 14, 2017 by pady_1
An oil company has two depots A and B with capacities of 7000 L and 4000 L respectively. The company is to supply oil to three petrol pumps, D, E and F whose requirements are 4500L, 3000L and 3500L respectively. Assuming that the transportation cost of 10 litres of oil is Rs 1 per km, How should the delivery be scheduled in order that the transportation cost is minimum ? What is the minimum cost? The distance (In km) between the depots and the petrol pumps is given in the following table :
answered Dec 27, 2016 by priyanka.c
Find the area of the region $\{ (x,y) : 0 \leq y \leq x^2, 0 \leq y \leq x+2 , 0 \leq x \leq 3\}$
Find the image of the point (1, 3, 4) in the plane $x-y+z =5$. Hence show that the image lies on the plane $x -2y +z -7 = 0$
Radioactive waste is to be disposed of in fully enclosed lead boxes of inner volume $200 cm^3$. The base of the box has dimensions in the ratio 2:1. What is the inner length of the box? Find the minimum inner surface area of the box.
Rent -a-car has three different makes of vehicles P, Q and R for hire. These cars are located at stations A and B on either side of a city. Some cars are being rented. In total they have 150 cars. At station A, they have 20% of P, 40% of Q and 30% of R which is 46 cars in total. At station B they have 40% of P, 20% of Q and 50% of R which is 54 cars in total. How many of each cars types does Rent-a -car have ?
Integrate $\begin{align*} \int \frac{\sin(x+a)}{\sin (x+b)} dx\end{align*} $
Express the matrix $A = \begin{bmatrix} 3 & 2 & 3 \\ 4 & 5 & 3 \\ 2 & 4 & 5 \end{bmatrix}$ as the sum of a symmetric and a skew symmetric matrix.
Assume that each child born is equally likely to be a boy or a girl. If a family has two children, what is the conditional probability that both are girls given that
(a) The youngest is a girl.
(b) At least one is a girl.
Pre-natal *** determination is a crime. What will you do if you come to know that someone you know is indulging in pre-natal *** determination?
Evaluate $\begin{align*} \int \limits_{-1}^{3/2} |x \sin {\pi \; x}| dx \end{align*}$
Find the condition that the curves $2x= y^2$ and $2xy = k $ intersect orthogonally.
Differentiate $x ^{\sin ^{-1} x}$ w.r.t $\sin^{-1} x$
For what value of $a$ and $b$ is the function $f(x) = \begin{cases} x^2 ,& x \leq c \\ ax+b , & x> c \end{cases} $ is differentiable at $x=c$
On the set $R -\{ -1\} $ a binary operation $\ast$ is defined by $a \ast b = a + b + ab$ for all $a,b \in R - \{-1\}$. Prove that $\ast$ is commutative as well as associative on $R -\{-1\}$. Find the identity element on $\ast$ if any.
Evaluate : $\tan \begin{bmatrix} \frac{\pi}{4} + \frac{1}{2} \cos^{-1} \frac{a}{b} \end{bmatrix} + \tan \begin{bmatrix} \frac{\pi}{4} - \frac{1}{2} \cos^{-1} \frac{a}{b} \end{bmatrix}$
Using differentials, find the approximate value of $(0.009) ^ {\frac{1}{3}}$
If $\overrightarrow {b} \times \overrightarrow {c} = \overrightarrow {c} \times \overrightarrow {a} \neq 0$, then $\overrightarrow{a} + \overrightarrow {b} = k \overrightarrow {c}$ where k is any scalar.
Is it true or false .
Form the differential equation of the family of curves $y = A \sin 2x + B \cos 2x$
If $f'(x) = 6x^2+2,$ find $f(x)$, given that $f(x) = 7$ when $x=1$
If $y = 7x -x^3$ and $x$ increases at the rate of 4 units per second. How fast is the slope of the curve changing when $x=2$
Determine the order and degree of the differential equation. Also state whether it is linear or non-linear.
$y +\frac{dy}{dx} = \frac{1}{4} \int y dx $
Let $\ast : R \times R \to R$ given by $(a,b) \to a + 4b^2$ be a binary operation; compute $-5 \ast (2 \ast 0).$
A cooperative society of farmers has 50 hectares of land to grow two crops X and Y. The profits from crops X and Y are estimated at Rs. 10,500 and Rs. 9,000 respectively. To control weeds, a liquid herbicide has to be used for crops X and Y at rates of 20 litres and 10 litres per hectare. Further, no more than 800 litres of herbicide should be used in order to protect fish and wildlife using a pond which collects drainage from this land. How much land should be allocated to each crop so as to maximize the total profit of the society?
Find the area bounded by the lines. $x +2y = 2, y = x +1$ and $2x +y= 7$
Find the absolute maximum and minimum values of the function given by $f(x) = 12x^{\frac{4}{3}} - 6x^{\frac{1}{3}} , x \in [-1,1]$
Define standing crop.
answered Dec 8, 2016 by meena.p
Define a binary operation $\ast$ on the set {0, 1, 2, 3, 4, 5} as \[a*b=\left \{ \begin{array} {l l} a+b, & \quad \text{ if } a+b<6 \\ a+b-6, & \quad \text{ if } a+b \geq 6 \end{array} \right.\] Show that zero is the identity for this operation and each element $a \neq 0$ of the set is invertible with $6-a$ being the inverse of $a$.
answered Dec 7, 2016 by priyanka.c
If $f : R \to R$ be given by $f(x) = (3-x^3)^{\frac{1}{3}}$, then evaluate $f o f(x)$
Find the approximate value of $f'(3.02)$ where $f(x) = 3x^2+5x+3$
answered Nov 25, 2016 by priyanka.c
The total cost C(x) in rupees associated with the production of x units of an item is given by $ C(x) = 0.007 x^3 - 0.003 x^2 + 15x + 4000$. Find the marginal cost when 10 units are produced.
Let A = {1, 2}. Find the number of possible binary operations defined on A.
A balloon which always remains spherical has a variable diameter $ \frac{3}{2} (2x+ 3)$. Determine the rate of change of volume w.r.t. x
Find the value of $\lambda$, so that the lines $\frac{1-x}{3} = \frac{7y - 14}{2 \lambda} = \frac{z-3}{2} \;and\; \frac{7-7x}{3 \lambda} = \frac{y-5}{1} = \frac{6-z}{5} $ are at right angles
Find the value of 'a' so that the function f(x) defined by $f(x) = \begin{cases} \frac{\sin ^2 a x}{x^2} & x \neq 0 \\ 1 , & x =0 \end{cases}$ may be continuous at x = 0
Verify Rolle's theorem for $f(x) = x^2 + 2$ in the interval [-2,2]
If $P(A) = 0.25, P(B) = 0.4 \; and \; P (A \cup B) = 0.5 \; find \; P (A \cap B) \;and \; P (A \cap \bar{B})$
A car starts from a point P at time t = 0 seconds and stops at point Q. The distance x, in metres, covered by it in t seconds is given by $x = t^2 (2 - \frac{t}{3})$. Find the time taken by it to reach Q and also find the distance between P and Q.
On Q the set of all rational numbers, a binary operation $\ast$ is defined by $a \ast b=\frac{ab}{5}$ for all $a, b \in Q $. Find the identity element $\ast$ in Q. Also prove that every non-zero element of Q is invertible.
Find the maximum and minimum values of $f(x) = x + \sin 2x $ in the interval $[0, 2\pi]$
Show that the function $f : R \to R$ given by $f(x) = x^3 + x $ is a bijection.
If the function $ f: [1, \infty) \to [ 1, \infty)$ defined by $f(x) = 2^{x(x-1)}$ is invertible, find $f^{-1}(x)$
Let $f : N \to Y$ be a function defined as $f(x) = 4x + 3$, where $Y = \{ y \in N : y = 4x + 3$ for some $x \in N \}$. If 'f' is invertible, find its inverse.
Prove that $ \cos^{-1}x = 2 \sin^{-1} \sqrt{\frac{1-x}{2}}$
asked Nov 9, 2016 by priyanka.c
Annals of Biomedical Engineering
March 2018, Volume 46, Issue 3, pp 464–474
A Robotic Flexible Drill and Its Navigation System for Total Hip Arthroplasty
Ahmad Nazmi Bin Ahmad Fuad
Hariprashanth Elangovan
Kamal Deep
Wei Yao
This paper presents a robotic flexible drill and its navigation system for total hip arthroplasty (THA). The new robotic system provides an unprecedented and unique capability to perform curved femoral milling under the guidance of a multimodality navigation system. The robotic system consists of three components. Firstly, a flexible drill manipulator comprises multiple rigid segments that act as a sheath to a flexible shaft with a drill/burr attached to the end. The second part of the robotic system is a hybrid tracking system that consists of an optical tracking system and a position tracking system. Optical tracking units are used to track the surgical objects and tools outside the drilling area, while a rotary encoder placed at each joint of the sheath is synchronized to provide the position information for the flexible manipulator with its virtual object. Finally, the flexible drill is integrated into a computer-aided navigation system. The navigation system provides real time guidance to a surgeon during the procedure. The flexible drill system is then able to implement THA by bone milling. The final section of this paper is an evaluation of the flexible and steerable drill and its navigation system for femoral bone milling in sawbones.
Robotics Flexible Steerable Tracking Navigation Total hip arthroplasty (THA) Orthopaedics
Associate Editor Xiaoxiang Zheng oversaw the review of this article.
The online version of this article ( https://doi.org/10.1007/s10439-017-1959-5) contains supplementary material, which is available to authorized users.
Surgical robotic technology has been developing for decades to the extent that many surgical practices now benefit from the deployment of surgical robotic platforms. These benefits include increasing the accuracy, minimizing complications of surgery and improving patient outcomes. In orthopaedic surgery, computer-aided orthopaedic surgery (CAOS) has been advancing by using robotic surgical devices and navigation systems, resulting in a great improvement of surgical field visibility and the enhancement of surgical accuracy.21 In particular, the use of robotic surgical devices for orthopaedic surgery has become more widely accepted for its greater precision and accuracy in implant positioning and orientation.16 There are three types of surgical robotic systems that have been developed for orthopaedics surgery, which are passive, semi-active and active systems. Passive systems control surgical tools by moving a cutting guide block or a drilling guide sleeve while a surgeon handles the tool with his free hands.12,28 Semi-active systems limit the movement of surgical tools within a pre-operative planned surgical area by means of a robot arm, examples include MAKO and ACROBOT.11,29 Active systems such as ROBODOC and CASPAR can execute surgical planning automatically, independent of the surgeon's hands.4,26
In robotic orthopaedic surgery, the surgeon needs to perform procedures precisely and safely within a limited space, and this requires effective guidance by means of surgical navigation. Surgical navigation in orthopaedic surgery works on a principle similar to that of a global positioning system (GPS): the surgical tools and the anatomy being operated on are visualized virtually in real time, acting as a guide for the surgeon during surgical procedures. Advances in radiographic imaging enable the reconstruction of imaging data into three-dimensional (3D) images, which can be used in pre-operative planning for various surgical procedures.6 The digital radiographic images serve as a navigation map for the procedures, into which CAD models of the surgical tools are incorporated for the purpose of visualizing their position, orientation and movement to an accuracy of one millimetre or one degree.27 This method greatly improves the accuracy and precision of the surgery and gives the surgeon a better and wider view of the surgical field.
The advent of computer-aided surgery (CAS) and surgical robotics has brought considerable improvement to minimally invasive surgery (MIS). It enhances three advantages of MIS over conventional surgery: free manoeuvrability of the instrument, sensory feedback and three-dimensional imaging.23 However, the advantage of free manoeuvrability of the instrument is still not fully realized, since some areas encountered in surgical operations are not accessible with rigid surgical tools, or the current surgical tools are not accurate enough for MIS.8 Thus, flexible surgical tools have been investigated. Through the use of these flexible tools, surgeons can access problematic zones such as the sinuses in endonasal sinus surgery, visualise hidden tissue structures in arthroscopy, and adopt a curved-drilling approach in core decompression of femoral head osteonecrosis.1 Flexible tools have also been demonstrated to control needle puncture and penetrate tissue from any point within the body, as in tissue biopsies22; to reduce insertion forces and prevent buckling using robot-assisted, steerable electrode prototypes in cochlear implant surgery31; and to carry out vascular catheterization to treat cardiac and vascular disease.5 However, no flexible tools are currently available that provide sufficient precision and force transmission for bone milling in orthopaedic surgery.
Volumetric-based navigation has been used in total hip arthroplasty (THA),9 total knee arthroplasty,3,10,17,24 pelvic osteotomy and spine screw insertion in spinal surgery.2,13 These methods give surgeons better visualization of complete constructs/attachments between implant and bone. Total joint arthroplasty procedures utilize both 3D CAD and volume-rendered bone models, allowing surgeons to pre-operatively simulate a range of motions of the joints.29 There are three major components in surgical navigation systems. The surgical object (SO) is the anatomical location of the surgical action. The virtual object (VO) is a virtual representation of a surgical object that allows the surgeon to plan the surgical procedure before the actual surgery and to execute it intra-operatively. The navigator (NAV) is a device that establishes the coordinate systems (COS) of the surgical field targets and the location and orientation of the utilized end-effectors (EE).21 In order to fully utilize a surgical navigation system, certain processes are required to set up the system: calibration of the end-effector, registration, and dynamic referencing. Calibration of the end-effector is required to describe its geometry in the coordinates of the navigator; to calibrate the end-effector, a rigid attachment of optical markers is introduced in the optical tracking system. Registration is the process by which the surgical navigation system links the SO and the VO in real time, allowing them to be displayed on the monitor. It is realized by the surgeon identifying key anatomical landmarks, resulting in better accuracy of the alignment. Dynamic referencing is another important requirement for a surgical navigation system: it is necessary to compensate for possible motion of the navigator and/or the surgical objects during the procedure. This is established by attaching dynamic referencing bases (DRBs), which consist of three or more reflective markers or light emitting diodes (LEDs) arranged in a pattern on the surgical device so that it acts as the base of reference for tracking the other surgical objects. The DRBs can either be fixed to the patient, representing fixed anatomy, or be mobile when attached to surgical tools.
In THA, robotic orthopaedic surgery is currently only practiced for acetabular cup positioning and orientation; femoral stem positioning still uses the hand-rasping method instead of femoral milling, because current rigid tools are not able to drill through curved pathways. To reduce trauma, minimally invasive procedures are increasingly demanded for THA surgery. There are advantages in using femoral milling in a minimally invasive procedure compared to the hand-rasping method, such as the prevention of intra-operative fractures and a better fit with less trauma.20 However, femoral milling is not widely practiced due to the space constraints of MIS. Although some studies have reported the utilization of robotic surgical systems for both acetabular cup and femoral stem implantation, these only implement the normal open approach rather than the MIS approach.25 This minimally invasive procedure needs a more dexterous manipulator for femoral milling. The emergence of robotic technology gives us an opportunity to develop a flexible and steerable drill tip that can be integrated into a computer-aided surgical system. Table 1 shows that all other robotic orthopaedic systems use a rigid drill and are neither tracked nor navigated inside the bone.
Current robotics orthopaedic systems, compared on tracking, guidance and robotic/steerable drill (Table 1):
BlueBelt28: optical tracking, virtual model, free-hand
MAKO29: rigid with haptics, semi-active
ACROBOT11: mechanical tracking, over-constrain
ROBODOC4 and CASPAR26: active
Continuum Manipulator1
Our flexible drill: optical tracking + kinematic tracking
Concept Design of the Flexible Robotic System
The research is driven by the clinical requirements in improving the accuracy and reducing the trauma in orthopedic surgery. Firstly, shown in Fig. 1, conventional surgery for joint replacement is currently not very accurate as its hand-rasping method. In the case of THA, since femoral stem is slightly curved, in order to follow the anatomical shape of femur, a new flexible surgical drill is required to mill a curved femoral canal under the guidance of its navigation system for THA. In addition, the flexible drill can benefit patients by adopting a minimal invasive approach due to its variable bending configuration.
(a) Traditional hand-rasping method for Total Hip Arthroplasty; (b) proposed method by using the robotic flexible drill.
The concept design of the system shown in Fig. 2 consists of a novel flexible drill to enable intra-operative tunnelling and a navigation system for tracking and navigating for the end-effector (EE) inside the bone. Due to the fact that the flexible drill tip is not trackable via current optical tracking systems, this research focuses on developing a hybrid tracking system for the flexible drill by integrating optical tracking devices and position sensors in the flexible tips. The optical tracking system tracks the surgical tools outside the drilling canal, while rotary encoders are used to track the end of the flexible drill tip. A navigation system of the new robotic system guides the procedure by providing a real-time virtual model of the flexible drill and its association with a CAD bone model from a CT scan. The flexible drill system is then experimented in sawbones, followed by an evaluation of the positioning of femoral stem placement by femoral milling. This system demonstrates an innovative robotic platform designed to allow surgeons to achieve a new level of precision and flexibility.
A prototype of the flexible drill mechanism.
The manipulator is required to fit within a small incision inside the femoral canal, to be able to bend for the milling of the curved shape in the femur, and to be rigid enough to make femoral canal possible without any buckling at the base.
Design of a Novel Flexible Drill Mechanism
The flexible drill comprises three multiple rigid segments that act as a sheath to a flexible shaft with a drill/burr attached to the end, as illustrated in Fig. 2. The outer diameter of the sheath is 8 mm; the length of the sheath to the burr tip is 158.5 mm. The proximal end segment is connected to the motor box as shown in Fig. 2, in which the actuation of the flexible drill takes place. The motor box is designed to be a handle with a servo motor and a microcontroller board fitted in. The microcontroller controls the servo and streams data from the rotary encoders. The revolution joint connects each two segments linked by two rivets that allow free rotation of the joint. Inside the sheath, two ball bearings are installed to link each half of the sheath to the flexible shaft that drives the drilling with the maximal speed of 30,000 rpm. This design allows the drill mechanism to have a free rotation and strong force transmission. The 3 mm flexible shaft runs through a 5 mm hole at the base. The flexible drill has a wire-driven steering capability for bending the joints. Two channels are designed for wires that connect the drill end part to the servo motor with the torque value of 11.3 kg/cm at 6.0 V, enabling bending of the joints in clockwise and counter-clockwise directions.
The bending is controlled by wires pulling inside the drill sheath. This design enables the drill to navigate through the small incision inside the femoral canal.
A three-bar kinematic chain is designed as the flexible drill mechanism from the kinematic sketch of the mechanism in Fig. 3. The drill mechanism is designed as a kinematic chain with three binary links attached by two revolute joints, allowing one link to rotate with respect to the other links. The kinematics of the manipulator is calculated using D–H (Denawit–Hartenberg) parameters. In the kinematics of the flexible drill manipulator, each T i is defined by two parameters, a i − 1 and θ i . It is expressed by moving A i from its own body frame onto the body frame of A i − 1. Furthermore, the combinational transformation matrix T i − 1 T i , can be approached by moving both A i and A i − 1 to the body frame A i − 2. The resulting equation is as below,
$$T_{i} = \left( {\begin{array}{cccc} {\cos \theta_{i} }&\quad {- { \sin }\theta_{i} } &\quad 0 &\quad {\alpha_{i - 1} } \\ {{ \sin }\theta_{i} } &\quad{\cos \theta_{i} } &\quad0 &\quad0 \\ 0 &\quad0 &\quad1 &\quad{d_{i} } \\ 0 &\quad0 &\quad0 &\quad1 \\ \end{array} } \right)$$
Kinematic sketch and the workspace of the flexible drill.
In the kinematic analysis, T m will be defined as a rigid-body homogenous transformation matrix and this represents the six degrees of freedom of the free handle that is tracked by the optical tracking device. The rigid-body homogeneous transformation matrix is a 4 × 4 matrix that performs the rotation given by R (β,γ,ε), followed by a translation given by x m , y m , z m . This results in the homogeneous transformation matrix T m ,
$$T_{m} = \left( {\begin{array}{*{20}c} {\cos \beta \cos \gamma } & {\cos \beta \sin \gamma \sin \varepsilon - \sin \beta \cos \varepsilon } & {\cos \beta \sin \gamma \cos \varepsilon + \sin \beta \sin \varepsilon } & {x_{m} } \\ {\sin \beta \cos \gamma } & {\sin \beta \sin \gamma \sin \varepsilon + \cos \beta \cos \varepsilon } & {\sin \beta \sin \gamma \cos \varepsilon - \cos \beta \sin \varepsilon } & {y_{m} } \\ { - \sin \gamma } & {\cos \gamma \sin \varepsilon } & {\cos \gamma \cos \varepsilon } & {z_{m} } \\ 0 & 0 & 0 & 1 \\ \end{array} } \right)$$
The end-effecter F in the body frame of the last link A 3 appears in the coordination of G as
$$T_{F} = T_{m} T_{1} T_{2} T_{3}$$
where, G denotes the global coordinate system of the navigation system.
Although the flexible sheath is three links, it provides one more DOF as it is actuated by one tendon mechanism for bending the flexible shaft with a drill/burr tip attached. The maximal bending angle is limited by 90° between the tip and the base of the flexible sheath. As the maximal bending angle is 90° the workspace is a half circle when the handle is fixed. The 3-link PRR manipulator is based on a 6 DOFs "freehand" handle; in minimally invasive Total Hip Arthroplasty, as the entry space is very limited, handle motion might only be allowed to push along and rotate the axis of the fix part of the flexible shaft. In addition with the extra bending of the flexible sheath, the workspace would be a double half sphere described in the Fig. 3. This workspace also shows the flexibility of the manipulator. Thus, it can tunnel a curved canal inside the femur which makes the implantation more precise.
A Multi-modality Tracking System
The second part of the robotic drill system is a multi-modality tracking system for the flexible drill that integrates an optical tracking system and a rotary encoder-based tracking system. An optical tracking unit is mounted at the base of the manipulator to track its position and orientation, while the potentiometer placed at each joint of the sheath provides bending angle for tracking the end of the flexible drill tip inside the drilling canal.30 As the optical tracking system can only track the open part of the flexible drill mechanism, when the tip of the flexible drill tunnels inside the bone, the potentiometer-based tracking system is combined to provide the completed position information for the flexible drill. The optical tracking system consists of beacons of two infrared LED trackers and a small infrared camera at the middle of the LED trackers. The beacons contain infrared LEDs arranged in a specific pattern and act as a stationary reference plane. Micro infrared cameras interpolate to give 1024 × 768 pixels, at a frame rate of up to 100 frames per second. They act as mobile/independent trackers, and are attached to each surgical object and the end-effector (Fig. 4). The following Figure shows how the different types of tracking units are set up in the multi-modality tracking system.
The setting of the multi-modality tracking system.
The flexible drill sheath to be tracked inside the bone is set with encoders attached at each joint of the sheath. The potentiometers function as rotary encoders that measure the bending angle of each of the joints. They determine the position of the burr tip with reference to the base of the drill manipulator calculated by its forward kinematics. The bending angle of each of the flexible drill sheath joints is equal to the rotational angle of the encoder's shaft. Hence, the voltage output of the encoder at each degree of rotation is taken and mapped as a bending angle of the joints. Analog data from the encoders is connected to a microcontroller board that converts it to digital data. The data is then read by the navigation system as rotation angle for joints. The angle data, combined with the length of each segment is then used to map the position of the flexible sheath location and to synchronize it with its virtual object. This tracking system tracks and updates the virtual object to guide the surgical procedure.
The flexible drill is integrated into a computer-aided navigation system. First, a mapping system is developed by acquiring 3D images of a femur and a femoral stem implant. A femur 3D model is obtained by scanning a femur in a CT scanner. This model is then imported to create a 3D mesh model. The mapping enables a surgeon to virtually view both 3D model of a bone and 3D model of a femoral stem, thus enabling surgeon to plan the position of femoral stem inside bone model. Also, the coordinates of both the femur and femoral stem models can be linked together to enable virtual interaction between the models.
The next step is to set up the boundary of safe surgical volume in Fig. 5. The safe surgical volume is milled to confine the volume of femoral stem model. Thus, the 3D femoral stem model is transferred to be a boundary of the safe surgical volume. The burring motor is designed to stop once the burr tip reaches the boundary. Setting up the boundaries enables precise milling that follows the shape of the implants. The outline coordinate of the femoral implant is set up so that whenever the burr tip coordinate is equal to any of the outline coordinate of the femoral implant. This triggers a warning message and stops the drill motor.
Setting up the boundary of the safe surgical volume (a) The femur and implant 3D models; (b) Position the femoral stem inside the 3D bone model; (c) Safety area is defined as the deeper green area; (d) The milling is guided by the safety boundary.
This process is followed by a pre-operative planning of computer assisted orthopaedic surgery (CAOS), by means of which a digital image of the bone and surgical tools is obtained, and then mapped onto the navigation system. The CAD model of the drill 3D is the virtual object (VO) in the navigation system. The VO has its coordinates and orientation mapped in the navigation system. The coordinates and orientation data are used to register the VO to the surgical objects (SOs). It is done by synchronizing the coordinates and the orientation of the VO as it follows the coordinates of each SOs. This activation enables a real-time position tracking of the surgical objects virtually on the monitor, as shown in Fig. 6.
A graphical user interface (GUI) of the navigation system.
A navigation system of the flexible drill sheath is also developed using the same programming language as the mapping system. A graphical user interface (GUI) is developed to guide the user through the navigation steps when milling the femoral canal using the flexible drill sheath.
The concept of this navigation system is illustrated in the flowchart in Fig. 7 in reference to basic concept of CAOS.21 The end effector (EE) consists of the base of the drill and the flexible tip by which these two are tracked via a hybrid tracking system. The navigator (NAV) consists of a LED tracking camera (optical tracking) and rotary encoders at the flexible drill joints (encoder tracking). The NAV also tracks the surgical object (SO) which is fixed in femur bone. Streaming data from NAV is then registered to VO in the navigation software. The virtual models are reconstructed into 3D models of femoral stem and the drill manipulator. These virtual models have their own local coordinate systems by which are registered with the coordinate systems of SOs and EE. Should these two objects' coordinates intersect with each other, it will trigger a warning message to stop milling as safety measure to not mill beyond the surgical boundary.
Flowchart of the navigation system in reference to basic concept of CAOS that divides the system into surgical object (SO), virtual object (VO), Navigation (NAV), and end effector (EE).
Experiment Setting
This section presents an integrated flexible drill and its navigation system, shown in Fig. 8, which is tested on sawbone models. There are different levels of integration including mechanical assembly, embedded position sensing and optical tracking, mapping and navigation. In this system, the major part of the mechanical integration is to ensure a reliable mechanical structure of the flexible drill and a robust motor control. The mechanical structure is designed to allow enough space for the rotary encoders to be embedded in the segments and the optical tracking devices to be mounted on the base as a handle. Regarding the software integration, all the virtual models and tracking information are integrated into a unit framework for an easy to use shown in Fig. 6. This paper shows the friendly graphics user interface, which would make the surgical orientation and equipment handling easy for the surgeon. The navigation system provides real time guidance to a surgeon during the procedure of total hip arthroplasty with following function.
Load the femur 3D model and femoral stem 3D model and get their coordinates
Set up the safe surgical boundary by planning the femoral stem 3D model in a correct position
Load the flexible drill model and get its coordinate
Register the virtual object with surgical objects by integration of optical tracking systems
Register the joint angle tracking and link it with the position of the drill base
Start to mill the femoral canal. The drill motor will stop when the burr tip touches the boundary
Put the femoral stem implant to the milled femur.
System setting for the flexible drill and its navigation system in a sawbone test rig.
In this test rig, the hip sawbone is fixed on a platform at which a tracking unit is placed at its geometric centre, providing the global coordinate information. The motor box of the flexible drill acts as a handle for the surgeon. During the procedure, the thumb stick is used to bend the flexible tip to the proper angle to fit in the curvature tunnelling. An optical tracking unit is mounted on the handle to provide 6 DOFs tracking information which refers to the location of the handle.
The procedure is guided by the navigation system shown on the computer screen. Mapping enables surgeons to view virtually both the 3D model of femur and the 3D model of a femoral stem, thus enabling the surgeon to position the femoral stem inside the 3D bone model precisely. Also, both the femur model and the femoral stem's coordinates are linked together, providing a virtual interaction between the models.
Drilling Experiment
The experiment is carried out by using the flexible drill and its navigation system for femoral milling in THA. Initially, the flexible drill underwent usability and functional tests to check whether it can function as intended to drill a curved tunnel. At this stage, the sheath is attached to a conventional drill and the material to be drilled is made by sawbones or some other objects of the same material. The second step involves tests on the sawbones to evaluate the accuracy of the positioning and the orientation of a femoral stem relative to the pre-operative plan and its alignment with the acetabular cup. A standard size sawbone of a left human femur is used as the sample for this test. The bone is fixed at a bone fixture rig by means of screws. At the steps of the test shown in the attached video, the femoral head is cut off. Then, the drill/burr tip is pushed forwards to mill the shape designed, adjusting the orientation and position. To proceed with the test, the burr tip is first placed adjacent to the greater trochanter to check and confirm that the flexible drill has been registered and tracked in. The motor drill is then turned on when the femoral neck is cut from the femoral head. At this step, the femoral canal is created using the flexible drill, following the pre-planned cutting area with visual feedback from the navigation interface. During this procedure, the milling process stops whenever the 'Stop Milling Warning' message has been triggered and then resumes after taking out the flexible drill from the femoral canal.
In real clinical setting proposed in Fig. 9, one tracking unit will be fixed in patient femur as a dynamic referencing base (DRB). The two most common type of registration technique are paired points technique and surface registration technique. Surface registration technique is further divided into two, which are anatomical landmark technique and fiducial-based technique such as using bone pins. Both of these techniques require defining of the anatomical landmark, and image segmentation in the pre-operative planning.14
Conceptual clinical setting for THA using the flexible drill and its navigation system.
Once the milling test is completed, the sawbone is sent for CT scan imaging, in order to analysis the milling outcomes. The CT image taken is then reconstructed into a 3D digitized geometry by a commercial software package called MIMICS (Materialize NV, Belgium). The 3D digitized geometry is imported into analysis software Geomagic Qualify 12 (Geomagic®) to isolate the milled area boundary from the whole geometry. The pre-planned cut area is also imported into Geomagic Qualify 12 (Geomagic®) to act as a reference template, while the milled area boundary is acted as a test object. The best fit alignment method and an iterative closest point algorithm is used to best fit the objects. The chromatogram is generated automatically, as shown in Fig. 10. The chromatogram represents the deviation of the test object from the reference template, in which a deeper colour means a larger deviation. In the figure, the range is set as ± 5. 0 mm. Deep red represents + 5.0 mm and blue represents − 5.0 mm. The analysis covers the maximum positive and negative deviations and the standard deviation.
The chromatogram used for an analysis of the milling procedure.
Figure 10 shows the chromatogram of a femoral sample's cut area in four views: front, top, right, and isometric. The colour ranges from green, indicating less than 1 mm of deviation, to red and blue, indicating more than 5 mm of overcut and undercut respectively. The right view in Fig. 10 shows two protrusions at the front surface of the cut area due to the presence of overcut; the peak of each protrusion is clearly red. In addition, the overcut at the middle section of the superior surface tapers and extends towards the tip of the cut area. The front view in Fig. 10 shows a mixture of cuts of less than 1 mm (green) and overcut of between 1 and 1.5 mm (yellow). The top view in Fig. 10 shows overcut at the middle section of the superior surface extending to the front surfaces. These similarities signify the repetition of overcut and/or undercut at certain areas relative to the pre-planned 3D model.
The accuracy of the milled area boundary is evaluated using the Geomagic Qualify software to establish a 3D deviation profile of the test object against the reference template. As shown in Fig. 11(b), 75.232% of the point cloud data of the milled area boundary lies within ± 1 SD (0.864 mm) of the pre-planned cut area, and 93.924% lies within ± 2 SD (1.728 mm). This indicates that the majority of the cloud data from the geometric shape of the milling boundary is within 1.728 mm (± 2 SD) of the pre-planned cut area; hence, the accuracy of the navigation system is within 1.728 mm. However, 1.813% of the point cloud data still exceeds the positive deviation value and 4.264% exceeds the negative deviation value. The geometric variations of the cut area in comparison with the outline of the femoral stem from the pre-operative plan in the navigation software are measured and presented using a deviation analysis. This analysis confirmed that the flexible drill system is able to mill inside a femoral bone with a deviation between the cut area and the femoral stem outline from the navigation software in the range of −0.759 to 1.151 mm, which is slightly outside the acceptable clinical range of 1 mm.
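The coverage figures quoted above can be reproduced from any per-point deviation array with a few lines of arithmetic; the snippet below is an illustration only (the published values were produced by Geomagic Qualify, and the function names are our own).

```python
# Illustrative computation of the +/-1 SD and +/-2 SD coverage percentages.
import numpy as np

def sd_coverage(deviations_mm):
    sd = np.std(deviations_mm)
    within_1sd = 100.0 * np.mean(np.abs(deviations_mm) <= sd)
    within_2sd = 100.0 * np.mean(np.abs(deviations_mm) <= 2 * sd)
    return sd, within_1sd, within_2sd

# With SD = 0.864 mm, the text above reports 75.232% within +/-1 SD
# and 93.924% within +/-2 SD (1.728 mm).
```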
(a) Percentage deviation distribution of point cloud data of cut area of the femur sawbone; (b) Standard deviation of point cloud data of cut area of the femur sawbone.
The detailed deviation analysis quantifies the deviation of the cut area from the outline of the femoral stem given by the navigation software. It is found that only a small portion of the deviation exceeds 2 mm (7.477 ± 2.857% of deviations between 2 and 3 mm, 1.050 ± 0.317% between 3 and 4 mm, and 0.156 ± 0.237% above 4 mm). This means that, for the large majority of points, the cut area deviates by no more than about 2 mm from the outline of the femoral stem calculated by the navigation software.
As seen in Fig. 11, the most significant proportion of these larger discrepancies comes from the distal end and the proximal part of the cut area. The main reason for the errors at the distal end is that the femoral stem used is a tapered stem whose diameter falls below 6 mm towards the tapered tip, whereas the mill bit has a diameter of 6 mm; since the bit cannot mill a cavity smaller than its own diameter, the smallest achievable cut diameter is 6 mm, and the distal tip of the cut area therefore shows a greater error. These errors are similar to the 'imperfect drilling characteristics' [7] reported in robotic-assisted skull base surgery. However, robot kinematic error and mill-bit deflection due to the surgeon's applied force, as shown in a study involving a flexible tool [15], could also be factors contributing to the errors mentioned above. Apart from that, these errors can be further reduced by replacing the potentiometer with a better rotary encoding system, such as fibre Bragg grating sensors or a high-definition optical rotary encoder system [19]. Dimensional error due to deflection can be reduced by improving the stiffness of the flexible drill sheath and by using an active compensation method, such as estimating the mill-bit deflection from the force applied by the surgeon. The large discrepancies at the proximal part of the cut area are caused by chipping of the sawbone due to the difference in hardness between the resin of the sawbone's outer layer and the inner resin compound: the outer resin is harder and has a minimally hollowed structure, while the inner resin is softer with a more hollowed structure. The root mean square (RMS) value obtained shows an indirect correlation with the magnitude of the deviations and signifies the accuracy of the system in milling the cut area; the high-deviation (red and blue) regions at the distal end of the cut area contributed to the larger RMS value, hence reducing the accuracy of the system. Finally, a possible reason for the reduced accuracy of the system is the setting of its DRBs. In the sawbone experiment, one tracking unit is fixed to the femur as a DRB. As the SO in a real clinical setting could be moving, such a non-fixed SO would affect the positioning accuracy. Thus, an additional fixed tracking unit is introduced as a global reference for the system to track both the DRB attached to the patient's femur and the surgical drill.
The femoral stem implant is a solid body that fits into a cavity. It will seat where the cavity first comes into contact with it, and that region accounts for the majority of the contact area. Although the accuracy of the navigation system is 1.728 mm, the difference is small, since the defect lies on the inner side of the target (−1 SD), as shown in Fig. 11(b). This error results in a cavity about 1 mm smaller than the implant dimensions, reducing the chance of burring the cortex too much; an uncemented implant will therefore achieve good press-fit stability [18]. If the femoral stem implant is cemented, an accuracy of 1.728 mm is acceptable, since there is nearly 2 mm of space for filling with cement and press-fitting the implant into the desired position.
In this paper, a novel flexible drill system for orthopaedic surgery has been presented and demonstrated. In THA, CAOS is currently practised only for acetabular cup positioning, while femoral stem positioning still uses the hand-rasping method instead of femoral milling. Although some devices have used flexible drills, they are not robotic systems and can only be tracked using X-ray images. Our new system has demonstrated its ability to perform femoral milling using a unique flexible and steerable drill coupled with a novel tracking and navigation system. As the flexible drill tip is not trackable by an optical tracking system inside the bone, this paper has presented a novel hybrid tracking and multimodality navigation system for guiding the flexible drill tip.
Experiments on sawbones have shown that the new system not only provides a new capability but also reaches a satisfactory level of accuracy in femoral milling. The applications of this robotic system are not limited to femoral milling in MIS THA; it can also be used for other skeleton-related procedures such as tunnel drilling in ACL reconstruction, milling in revision arthroplasty, and drilling in head and neck surgery. Further research is needed in order to put this concept into practice in a clinical setting.
Supplementary Material 1 (Wmv 109539 kb)
Alambeigi, F., Y. Wang, S. Sefati, C. Gao, R. J. Murthy, I. Iordachita, R. H. Taylor, H. Khanuja, and M. Armand. A curved-drilling approach in core decompression of the femoral head osteonecrosis using a continuum manipulator. IEEE Robot. Autom. Lett. 2:1480–1487, 2017.
Arand, M., E. Hartwig, L. Kinzl, and F. Gebhard. Spinal navigation in cervical fractures—a preliminary clinical study on Judet-osteosynthesis of the axis. Comput. Aided Surg. 6:170–175, 2001.
Banks, S. Haptic robotics enable a system approach to design of a minimally invasive modular knee arthroplasty. Am. J. Orthop. 38:23–27, 2009.
Bargar, W., L. Bauer, and M. Borner. Primary and revision total hip replacement using the robodoc system. Clin. Orthop. Relat. Res. 354:82–91, 1998.
Beasley, R. Medical robots: current systems and research directions. J. Robot. 2012. https://doi.org/10.1155/2012/401613.
Brown, R. A computerized tomography-computer graphics approach to stereotaxic localization. J. Neurosurg. 50:715–720, 1979.
Bumm, K., J. Wurm, J. Rachinger, T. Dannenmann, C. Bohr, R. Fahlbusch, H. Iro, and C. Nimsky. An automated robotic approach with redundant navigation for minimal invasive extended transsphenoidal skull base surgery. Minim. Invasive Neurosurg. 48:159–164, 2005.
Dario, P., M. Carrozza, M. Marcacci, S. D'attanasio, M. Bernardo, O. Tonet, and G. Megali. A novel mechatronic tool for computer-assisted arthroscopy. IEEE Trans. Inf. Technol. Biomed. 4:15–29, 2000.
Digioia, A., B. Jaramaz, M. Blackwell, D. Simon, F. Morgan, J. Moody, C. Nikou, B. Colgan, C. Aston, R. Labarca, E. Kischell, and T. Kanade. Image guided navigation system to measure intraoperatively acetabular implant alignment. Clin. Orthop. Relat. Res. 355:8–22, 1998.
Hafez, M., M. Seel, B. Jaramaz, and A. DiGioia, III. Navigation in minimally invasive total knee arthroplasty and total hip arthroplasty. Oper. Tech. Orthop. 16:207–210, 2006.
Jakope, M., S. J. Harris, F. Baena, P. Gomes, J. Cobb, and B. Davies. The first clinical application of a "hands-on" robotic knee surgery system. Comput. Aided Surg. 6:329–339, 2001.
Jaramaz, B., and C. Nikou. Precision freehand sculpting for unicondylar knee replacement: design and experimental validation. Biomed. Tech. 57:293–299, 2012.
Langlotz, F., R. Bachler, U. Berlemann, L. Nolte, and R. Ganz. Computer assistance for pelvic osteotomies. Clin. Orthop. Relat. Res. 354:92–102, 1998.
Lavallee, S., P. Sautot, J. Troccaz, P. Cinquin, and P. Merloz. Computer-assisted spine surgery: a technique for accurate transpedicular screw fixation using CT data and a 3-D optical localizer. Comput. Aided Surg. 1:65–73, 1995.
Li, M., M. Ishii, and R. Taylor. Spatial motion constraints using virtual fixtures generated by anatomy. IEEE Trans. Robot. 23:4–19, 2007.
Lonner, J. Indications for unicompartmental knee arthroplasty and rationale for robotic arm-assisted technology. Am. J. Orthop. 38:3–6, 2009.
Martelli, M., M. Marcacci, L. Nofrinr, F. Palombara, A. Malvisi, F. Iacono, P. Vendruscolo, and M. Pierantoni. Computer- and robot-assisted total knee replacement: analysis of a new surgical procedure. Ann. Biomed. Eng. 28:1146–1153, 2000.
Mazoochian, F., C. Pellengahr, A. Huber, J. Kircher, H. Refior, and V. Jansson. Low accuracy of stem implantation in THR using the CASPAR-system: anteversion measurements in 10 hips. Acta Orthop. Scand. 75(3):261–264, 2004.
Mishra, V., N. Singh, U. Ttwari, and P. Kapur. Fiber grating sensors in medicine: current and emerging applications. Sens. Actuators A Phys. 167:279–290, 2011.
Nishihara, S., N. Sugano, T. Nishii, H. Miki, N. Nakamura, and H. Yoshikawa. Comparison between hand rasping and robotic milling for stem implantation in cementless total hip arthroplasty. J. Arthroplasty 21:957–966, 2006.
Nolte, L., and T. Beutler. Basic principles of CAOS. Injury 35:6–16, 2004.
Okazawat, S., R. Ebrahimi, J. Chuang, S. E. Salcudean, and R. Rohling. Hand-held steerable needle device. IEEE/ASME Trans. Mechatron. 10:285–296, 2005.
Plinkert, P., and H. Lowenheim. Trends and perspectives in minimally invasive surgery in otorhinolaryngology-head and neck surgery. Laryngoscope 107:1483–1489, 2009.
Roche, M., P. O'loughlin, D. Kendoff, V. Musahl, and A. Pearle. Robotic arm-assisted unicompartmental knee arthroplasty: preoperative planning and surgical technique. Am. J. Orthop. 38:10–15, 2009.
Schulz, A., K. Seide, C. Queitsch, A. Haugwitz, J. Meiners, B. Kienast, M. Tarabolsi, M. Kammal, and C. Jurgens. Results of total hip replacement using the robodoc surgical assistant system: clinical outcome and evaluation of complications for 97 procedures. Int. J. Med. Robot. 3:301–306, 2007.
Siebert, W., S. Mai, R. Kober, and F. Heecht. Technique and first clinical results of robot-assisted total knee replacement. Knee 9:173–180, 2002.
Sikorski, J., and S. Chauhan. Aspects of current management: computer-assisted orthopaedic surgery: do we need CAOS? J. Bone Joint Surg. 85:319–323, 2003.
Smith, J., P. Riches, and P. Rowe. Accuracy of a freehand sculpting tool for unicondylar knee replacement. Int. J. Med. Robot. 10:162–169, 2014.
Sugano, N. Computer-assisted orthopedic surgery. J. Orthop. Sci. 8:442–448, 2003.
Watzinger, F., W. Birkfellner, F. Wanschitz, W. Millesi, C. Schopper, K. Sinko, K. Huber, H. Bergmann, and R. Ewers. Positioning of dental implants using computer-aided navigation and an optical tracking system: case report and presentation of a new method. J. Craniomaxillofac. Surg. 27:77–81, 1999.
Zhang, J., W. Wei, J. Ding, J. Roland, S. Manolidis, and N. Simaan. Inroads toward robot-assisted cochlear implant surgery using steerable electrode arrays. Otol. Neurotol. 31:1199–1206, 2010.
© The Author(s) 2017
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
1. Department of Biomedical Engineering, University of Strathclyde, Glasgow, Scotland, UK
2. Golden Jubilee National Hospital, Clydebank, Scotland, UK
Ahmad Fuad, A.N.B., Elangovan, H., Deep, K. et al. Ann Biomed Eng (2018) 46: 464. https://doi.org/10.1007/s10439-017-1959-5
Accepted 10 November 2017
Publisher Name Springer US
Published in cooperation with
Biomedical Engineering Society (BMES)
One-Round ID-Based Threshold Signature Scheme from Bilinear Pairings
Authors: Gao, Wei | Wang, Guilin | Wang, Xueli | Yang, Zhenguang
Abstract: In this paper, we propose a new ID-based threshold signature scheme from bilinear pairings, which is provably secure in the random oracle model under the bilinear Diffie–Hellman assumption. Our scheme adopts the approach that the private key associated with an identity, rather than the master key of the PKG, is shared. Compared to the state-of-the-art work by Baek and Zheng, our scheme has the following advantages. (1) The round-complexity of the threshold signing protocol is optimal: during the signing procedure, each party broadcasts only one message. (2) The communication channel is optimal: during the threshold signing procedure, the broadcast channel among signers is enough, and no private channel between any two signing parties is needed. (3) Our scheme is much more efficient than the Baek and Zheng scheme in terms of computation, since we try our best to avoid using bilinear pairings. Indeed, the private key of an identity is indirectly distributed by sharing a number xID ∈ $\mathbb{Z}^{*}_{q}$, which is much more efficient than directly sharing an element of the bilinear group, and the major computationally expensive operation, a distributed key generation protocol based on the bilinear map, is avoided. (4) Finally, proactive security can easily be added to our scheme.
Keywords: identity-based signature, threshold signature, bilinear pairing
Embedded Patterns, Indirect Couplings with Randomness, and Memory Capacity in Neural Networks
Authors: Garliauskas, Algis
Abstract: In the present paper, the neural networks theory based on presumptions of the Ising model is considered. Indirect couplings, the Dirac distributions and the corrected Hebb rule are introduced and analyzed. The embedded patterns memorized in a neural network and the indirect couplings are considered as random. Apart from the complex theory based on Dirac distributions the simplified stationary mean field equations and their solutions taking into account an ergodicity of the average overlap and the indirect order parameter are presented. The modeling results are demonstrated to corroborate theoretical statements and applied aspects.
Keywords: neural network, free energy density, Dirac distribution, memory capacity, ergodicity
An Investigation of the Perceptual Value of Voice Frames
Authors: Kajackas, Algimantas | Anskaitis, Aurimas
Abstract: It is well known that voice segments and the coincident data packets are not equally valued and significant for the decoding and comprehension of the speech signal. Some lost segments may only slightly worsen audible quality, while others cause strong distortion of the speech signal. Despite this, the differing importance of different voice segments is not fully exploited in the current generation of digital voice transmission systems. There is a fundamental problem with discriminating the different importance and value of voice frames. In this paper the concept of the "value of a voice frame" is introduced, a metric and means for the evaluation and measurement of voice frame value are proposed, and results of measurements of voice frame values are presented.
Keywords: value of voice frame, PESQ measure
Multiple Criteria Comparative Evaluation of E-Learning Systems and Components
Authors: Kurilovas, Eugenijus | Dagienė, Valentina
Abstract: The main scientific problems investigated in this paper deal with the problem of multiple criteria evaluation of the quality of the main components of e-learning systems, i.e., learning objects (LOs) and virtual learning environments (VLEs). The aim of the paper is to analyse the existing LO and VLE quality evaluation methods, and to create more comprehensive methods based on learning individualisation approach. LOs and VLEs quality evaluation criteria are further investigated as the optimisation parameters and several optimisation methods are explored to be applied. Application of the experts' additive utility function using evaluation criteria ratings and their weights is explored in more detail. These new elements make the given work distinct from all the other earlier works in the area.
Keywords: multiple criteria evaluation of quality, learning objects, virtual learning environments, score-rating, weights, optimisation
Combinatorial Systems Evolution: Example of Standard for Multimedia Information
Authors: Levin, Mark Sh. | Kruchkov, Oleg | Hadar, Ofer | Kaminsky, Evgeny
Abstract: The article addresses the issues of combinatorial evolution of standards in transmission of multimedia information including the following: (a) brief descriptions of basic combinatorial models as multicriteria ranking, knapsack-like problems, clustering, combinatorial synthesis, multistage design, (b) a description of standard series (MPEG) for video information processing and a structural (combinatorial) description of system changes for the standards, (c) a set of system change operations (including multi-attribute description of the operations and binary relations over the operations), (d) combinatorial models for the system changes, and (e) a multistage combinatorial scheme (heuristic) for the analysis of the system changes. Expert experience is used. Numerical examples illustrate the suggested problems, models, and procedures.
Keywords: system evolution, multimedia information, standard, technological trajectories, combinatorial optimization, heuristics, decision making, expert judgment
Digital Model of Blood Circulation Analysis System
Authors: Mačiulis, Audris | Paunksnis, Alvydas | Barzdžiukas, Valerijus | Kriaučiūnienė, Loresa | Buteikienė, Dovilė | Puzienė, Viktorija
Abstract: Digital signal processing is one of the most powerful technologies, driven by achievements in science and electronics engineering. This technology has significantly influenced communications, medical technology, radiolocation and other fields. Digital signal processors are usually used for the effective solution of a class of digital signal processing problems, and today they are used practically in all fields in which real-time information processing is needed. The creation of diagnostic medical systems is one of the promising fields for digital signal processors. The aim of this work was to create a digital mathematical model of a blood circulation analysis system, using digital signal processing instead of the analogue nodes of the device. In the first stage, the working algorithm of the blood circulation analysis system and a mathematical model of the system were created in the Matlab–Simulink environment. In the second stage, the mathematical model was tested experimentally. A mathematically imitated Doppler signal was sent to tissue and reflected; the signal was processed digitally, the blood flow direction was marked and the blood speed was evaluated. Experiments were also carried out with real signals recorded while investigating patients in an eye clinic. The results obtained confirmed the adequacy of the created mathematical model with respect to the real analogue blood circulation analysis system (Lizi et al., 2003).
Keywords: ultrasound, digital signal processing, ophthalmology, Doppler, eye vascular system, blood flow, digital spectral analysis
On ASPECTJ and Composition Filters: A Mapping of Concepts
Authors: Meslati, Djamel
Abstract: ASPECTJ and composition filters are well-known influential approaches among a wide range of aspect-oriented programming languages that have appeared in the last decade. Although the two approaches are relatively mature and many research works have been devoted to their enhancement and use in practical applications, so far there has been no attempt at comparing the two approaches in depth. This article is a step towards this comparison; it proposes a mapping between ASPECTJ and composition filters that puts the two approaches to the test by confronting and relating their concepts. Our work shows that the mapping is neither straightforward nor one-to-one, despite the fact that the two approaches belong to the same category and both provide extensions of the same Java language.
Keywords: aspect-oriented programming, ASPECTJ, composition filters, mapping of concepts, separation of concerns, weaving
An Anonymous Mobile Payment System Based on Bilinear Pairings
Authors: Popescu, Constantin
Abstract: Many electronic cash systems have been proposed with the proliferation of the Internet and the activation of electronic commerce. E-cash enables the exchange of digital coins with value assured by the bank's signature and with concealed user identity. In an electronic cash system, a user can withdraw coins from the bank and then spend each coin anonymously and unlinkably. In this paper, we design an efficient anonymous mobile payment system based on bilinear pairings, in which the anonymity of coins is revocable by a trustee in case of dispute. The message transfer from the customer to the merchant occurs only once during the payment protocol. Also, the amount of communication between customer and merchant is about 800 bits. Therefore, our mobile payment system can be used in wireless networks with limited bandwidth. The security of the new system relies on the computational Diffie–Hellman problem in the random oracle model.
Keywords: cryptography, electronic cash system, bilinear pairings
Adaptively Secure Threshold Signature Scheme in the Standard Model
Authors: Wang, Zecheng | Qian, Haifeng | Li, Zhibin
Abstract: We propose a distributed key generation protocol for pairing-based cryptosystems which is adaptively secure in the erasure-free and secure channel model, and at the same time completely avoids the use of interactive zero-knowledge proofs. Utilizing it as the threshold key generation protocol, we present a secure (t,n) threshold signature scheme based on Waters' signature scheme. We prove that our scheme is unforgeable and robust against any adaptive adversary who can choose players for corruption at any time during the run of the protocols and make adaptive chosen-message attacks. The security proof is in the standard model (without random oracles). In addition, our scheme achieves optimal resilience, that is, the adversary can corrupt any t < n/2 players.
Keywords: threshold signature, distributed key generation, computational Diffie–Hellman problem, adaptively secure, provable security
We thank the National Science Foundation, the Simons Foundation, and the Office of the Vice-President for Research, for helping fund graduate student travel (listed below in reverse chronological order).
Did you say pandemic?
Kübra Benli: On the number of small prime power residues.
Venue: Number Theory Seminar, University of Waterloo, November 2019.
Venue: Algebra and Number Theory Seminar, Emory University, October 2019.
Kübra Benli: Changes of digits of primes.
Venue: PANTS XXXII, University of North Carolina at Charlotte, September 2019.
Ziqing Xiang: Isomorphism theorem between q-Schur algebras of type B and type A.
Venue: Spring Southeastern Sectional Meeting, Auburn University, March 2019.
Noah Lebowitz-Lockard: Irreducible quadratic polynomials and Euler's function.
Venue: Special Session in Analytic Number Theory at the Joint Math Meeting, January 2019
Lori Watson: Hasse principle violations of quadratic twists of hyperelliptic curves.
Venue: Special Session on Arithmetic Statistics at the Joint Math Meeting, January 2019
Kubra Benli: Small prime power residues modulo p.
Venue: Palmetto Number Theory Series, University of South Carolina, December 2018.
Lori Watson: Hasse Principle Violations in Families of Hyperelliptic Curves.
Venue: Wesleyan University, October 2018.
Ziqing Xiang: Diophantine equations related to tight designs.
Venue: Design Theory from the Viewpoint of Algebraic Combinatorics, Three Gorges Mathematical Research Center, October 2018.
Ziqing Xiang: New lower bound on the sizes of designs.
Ziqing Xiang: Existence of rational designs.
Ziqing Xiang: Explicit constructions of designs.
Lori Watson: Hasse Principle Violations of Quadratic Twists of Hyperelliptic Curves
Venue: Connecticut Summer School in Number Theory (CTNT), University of Connecticut, June 2018.
Andrew Maurer: On the Finite Generation of Relative Cohomology for Classical Lie Superalgebras.
Venue: AMS Sectional at Northeastern University, April 2018
Ziqing Xiang: An explicit construction of spherical designs
Venue: International Workshop on Bannai-Ito Theory, Zhejiang University, November 2017
Andrew Maurer: On the Finite Generation of Relative Cohomology for Lie Superalgebras
Venue: AMS Sectional in Orlando, October 2017
Luca Schaffler: The KSBA compactification of the moduli space of $D_{1,6}$-polarized Enriques surfaces
Venue: Workshop on Algebraic Varieties, Hodge Theory and Motives, Fields Institute, March 2017
Abstract: In this talk we describe the moduli compactification by stable pairs (also known as KSBA compactification) of a 4-dimensional family of Enriques surfaces, which arise as the $\mathbb{Z}_2^2$-covers of the blow up of $\mathbb{P}^2$ at three general points branched along a configuration of three pairs of lines. The chosen divisor is an appropriate multiple of the ramification locus. Using the theory of stable toric pairs we are able to study the degenerations parametrized by the boundary and its stratification. We relate this compactification to the Baily-Borel compactification of the same family of Enriques surfaces. Part of the boundary of this stable pairs compactification has a toroidal behavior, another part is isomorphic to the Baily-Borel compactification, and what remains is a mixture of these two.
Venue: Focused Research Group on Hodge Theory, Moduli and Representation Theory: Workshop VIII, Washington University in St. Louis, January 2017
Venue: AMS Contributed Paper Session in Algebraic Geometry, Joint Mathematics Meetings, Atlanta, January 2017
Hans Parshall: Spherical configurations over finite fields
Venue: Joint Mathematics Meetings, Atlanta, January 2017
Abstract: In their 1973 paper, Erdos, Graham, Montgomery, Rothschild, Spencer, and Straus proved that every Euclidean Ramsey set is contained in some sphere, and Graham conjectures that every finite spherical set is indeed Ramsey. This conjecture remains open (and contested) even in the case of a generic four point subset of a circle. We provide evidence for Graham's conjecture by proving something stronger in the finite field setting: for any a in (0,1) every subset A of F_q^{10} with |A| > aq^{10} contains an isometric copy of every four point spherical set spanning two dimensions, provided q is taken sufficiently large with respect to a. For d > 2k + 5, comparable results are obtained in F_q^d for arbitrary (k + 2)-point spherical configurations spanning k dimensions.
Natalie LF Hobson: Identities between first Chern class of vector bundles of conformal blocks
Venue: AMS Contributed Paper Session in Algebraic Geometry, JMM January 2017
Abstract: Given a simple Lie algebra $\mathfrak{g}$, a positive integer $\ell$, and an $n$-tuple $\vec{\lambda}$ of dominant integral weights for $\mathfrak{g}$ at level $\ell$, one can define a vector bundle on $M_{g,n}$ known as a vector bundle of conformal blocks. These bundles are nef in genus $g = 0$ and so this family provides potentially an infinite number of elements in the nef cone of $M_{0,n}$ to analyze. Results relating these divisors with different data are thus significant in understanding these objects. In this talk, we use correspondences of these bundles with products in quantum cohomology in order to classify when a bundle with $\mathfrak{sl}_2$ or $\mathfrak{sp}_{2\ell}$ is rank one. We show this is also a necessary and sufficient condition for when these divisors are equivalent.
Natalie LF Hobson: Vector Bundles of Conformal Blocks- Rank one and finite generation
Venue: AWM Poster Session, JMM January 2017
Abstract: The moduli space of curves, $M_{0,n}$, parametrizes stable n-pointed rational curves. To understand this projective variety, we study vector bundles on it. Vector bundles of conformal blocks are an infinite family of such bundles. Since these bundles are all globally generated, they are especially interesting to analyze, as their first Chern classes, the conformal blocks divisors, are all nef. It is an open question as to whether the nef cone, Nef($M_{0,n}$), is finitely generated for n > 7. How does the infinite family of conformal blocks divisors live in Nef($M_{0,n}$)? Is the subcone generated by conformal blocks divisors polyhedral? In this report, I give several of my results on these questions for specific cases of interest.
Hans Parshall: Spherical configurations in dense sets
Venue: The Ohio State University, November 2016
Abstract: We will discuss an arithmetic combinatorics perspective on how to locate geometric configurations. By controlling a counting operator with a uniformity norm, one can argue that uniform sets contain many configurations. In joint work with Neil Lyall and Akos Magyar, we further prove an inverse theorem and establish, for example, that all large subsets of vector spaces over finite fields contain isometric copies of all spherical quadrilaterals.
Hans Parshall: Triangles and quadrilaterals over finite fields
Venue: Missouri State University, November 2016
Abstract: We will discuss an arithmetic combinatorics approach to locating geometric patterns over finite fields. By defining a "counting operator" and a "uniformity norm", we will argue that "uniform" dense sets contain geometric configurations. This approach recently led to an improvement on the 2008 result of Hart and Iosevich on triangles (3-point configurations) and new results on quadrilaterals (4-point configurations), joint with Neil Lyall and Akos Magyar.
Venue: University of West Georgia, October 2016
In the 1970s, it was shown by Erdos, Graham, Montgomery, Rothschild, Spencer, and Straus that every Euclidean Ramsey set is spherical, and the converse remains an open conjecture. We provide evidence for this conjecture in the finite field setting. Following the setup of Hart and Iosevich, we show that every d-simplex appears isometrically in every sufficiently large subset of F_q^d, improving the necessary relationship between d and d. We will further discuss comparable results for spherical configurations over finite fields whose Euclidean analogues are not known to be Ramsey.
Natalie LF Hobson: Vector Bundles of Conformal Blocks-- Rank One and Finite Generation
Venues: University of Utah (September 2016), UPenn (September 2016), University of Illinois at Chicago (September 2016)
Given a simple Lie algebra $\mathfrak{g}$, a positive integer $\ell$ and an $n$-tuple of dominant integral weights for $\mathfrak{g}$ at level $\ell$, one can define a vector bundle on the moduli space of curves known as a vector bundle of conformal blocks. These bundles are nef in the case that the genus is zero and so this family provides potentially an infinite number of elements in Nef($\overline{M}_{0,n}$) to analyze.
It is natural to ask how this infinite family of conformal blocks divisors lives in Nef($\overline{M}_{0,n}$). Is the subcone generated by conformal blocks divisors polyhedral? In this talk, we give several results on this question for specific cases of interest. To show our results, we use a correspondence of the ranks of these bundles with computations in the quantum cohomology of the Grassmannian.
Natalie LF Hobson: Quantum Kostka and the rank one problem for $\mathfrak{sl}_{2m}$
Venues: Rutgers University (October 2016), University of Illinois at Urbana-Champaign (November 2016), Ohio State University (November 2016), University of British Columbia (November 2016)
Abstract: In this talk we will define and explore an infinite family of vector bundles, known as vector bundles of conformal blocks, on the moduli space $M_{0,n}$ of marked curves. These bundles arise from data associated to a simple Lie algebra. We will show a correspondence (in certain cases) of the rank of these bundles with coefficients in the cohomology of the Grassmannian. This correspondence allows us to use a formula for computing "quantum Kostka" numbers and explicitly characterize families of bundles of rank one by enumerating Young tableaux. We will show these results and illuminate the methods involved.
Natalie LF Hobson: Vector bundles of conformal blocks with $\mathfrak{sp}_{2\ell}$ at level one
Venue: 2016 AMS Spring Central Sectional Meeting, Fargo, North Dakota, April 2016
William Hardesty: On Support Varieties and the Humphreys Conjecture in type A
Venue: AMS special session on Lie Theory, Representation Theory and Geometry, Athens, Georgia, March 2016
Patrick K. McFaddin: Chow groups with coefficients and generalized Severi-Brauer varieties
Venue: Emory University, February 2, 2016
Abstract: The theory of algebraic cycles on homogenous varieties has seen many useful applications to the study of central simple algebras, quadratic forms, and Galois cohomology. Significant results include the Merkurjev-Suslin Theorem and Suslin's Conjecture, recently proved by Merkurjev. Despite these successes, a general description of Chow groups and Chow groups with coefficients remains elusive, and computations of these groups are done in various cases. In this talk, I will give some background on K-cohomology groups of Severi-Brauer varieties and discuss some recent work on computing these groups for algebras of index 4.
Natalie LF Hobson: Quantum kostka and the rank one problem for $\mathfrak{sl}_{2m}$
Venue: 2016 Joint AMS and MAA Mathematics Meetings, Seattle, WA, January 2016
Venue: AMS special session on Categorical and Geometric Methods in Representation Theory, Seattle, Washington, January 2016
Kenneth Jacobs: Lyapunov Exponents in non-Archimedean Dynamics
Venue: Joint AMS-MAA Meeting in Seattle, WA, January 6, 2016
Abstract: The Lyapunov exponent of a rational map f measures the rate of growth of a point in a generic orbit. It is related to the orbits of the critical points of f, and when f is defined over the complex numbers, a sharp lower bound is log(d)/2, where d is the degree of the map. Much less is known about Lyapunov exponents for maps defined over non-Archimedean fields. In this talk, we will give an explicit lower bound similar to the one over the complex numbers which is sharp for maps of good reduction. We will also give a formula relating Lyapunov exponents to Silverman's critical height.
Kenneth Jacobs: Lower Bounds for non-Archimedean Lyapunov Exponents
Venue: RTG Workshop in Arithmetic Dynamics (at the University of Michigan, Ann Arbor), December 5, 2015
Abstract: Lyapunov exponents measure the rate of expansion of a dynamical system. In classical complex dynamics, the Lyapunov exponent of a rational map is known to be bounded below by (log d)/2, where d is the degree of the map, and this bound is known to be sharp. In this talk, we will present a lower bound for rational maps defined over non-Archimedean valued fields which is sharp for maps of potential good reduction and for maps whose Berkovich Julia set satisfies a certain finiteness condition.
Venue: 8th Southeastern Lie Theory Workshop on Algebraic and Combinatorial Representation Theory, Raleigh, North Carolina, October 2015
Lee Troupe: Orders of reductions of elliptic curves with many and few prime factors
Venue: Illinois Number Theory Conference 2015, August 13-14, 2015
Ziqing Xiang: Spherical Designs Over a Number Field
Venue: 2015 Workshop on Combinatorics and Applications, at Shanghai Jiao Tong University, April 21-27, 2015
Ziqing Xiang: The Lit-Only σ-Game
William Hardesty: Support varieties of line bundle cohomology groups for SL(3)
Venue: Southwest Group Theory Day 2015, Tucson, Arizona, March 2015
Kenneth Jacobs: An Equidistribution Result in non-Archimedean Dynamics
Venue: Algebra Seminar, Georgia Institute of Technology, January 26th, 2015
Abstract: Let K be a complete, algebraically closed, non-Archimedean field, and let f be a rational function defined over K with degree at least 2. Recently, Robert Rumely introduced two objects that carry information about the arithmetic and the dynamics of f. The first is a function ordRes_f, which describes the behavior of the resultant of f under coordinate changes on the projective line. The second is a discrete probability measure \nu_f supported on the Berkovich half space that carries arithmetic information about f and its action on the Berkovich line. In this talk, we will show that the functions ordRes_f converge locally uniformly to the Arakelov-Green's function attached to f, and that the family of measures \nu_{f^n} attached to the iterates of f converge to the equilibrium measure of f.
Venue: Joint AMS-MAA Meeting in San Antonio, Texas
Abstract: Let K be an algebraically closed field that is complete with respect to a non-Archimedean absolute value. Let \phi\in K(z) have degree d\geq 2. Recently, Rumely introduced a measure \nu_{\phi} on the Berkovich line over K that carries information about the reduction of \phi. In particular, the measure \nu_{\phi} charges a single point if and only if $\phi$ has good reduction at that point. Otherwise, \nu_{\phi} charges finitely many points, which can be thought of as having "spread out" the point of good reduction. In this talk, we will show that the family of measures \{\nu_{\phi^n}\} attached to the iterates of \phi equidistribute to the invariant measure \mu_\phi, a canonical object arising in the study of discrete dynamical systems.
Allan Lacy: On the index of genus one curves over infinite, finitely generated fields.
Venue: Joint AMS-MAA Meeting in San Antonio, Texas, January 12, 2015
Abstract: We show that every infinite, finitely generated field admits genus one curves with index equal to any prescribed positive integer. The proof is by induction on the transcendence degree. This generalizes – and uses as the base case of an inductive argument – an older result on the number field case. There is a separate base case in every positive characteristic p, and these use work on the conjecture of Birch and Swinnerton-Dyer over function fields.
Adrian Brunyate: A Compact Moduli Space of Elliptic K3 Surfaces.
Abstract: We will discuss recent results detailing a geometric (KSBA-type) compactification of the moduli of elliptic K3 surfaces, including how to explicitly compute limits and how the compactification relates to toroidal compactifications of the period domain.
Natalie LF Hobson (and Sayonita Ghosh Hajra): Studying students' preferences and performance in a cooperative mathematics classroom
Abstract: In this study, we discuss our experience with cooperative learning in a mathematics content course. Twenty undergraduate students from a southern public university participated in this study. The instructional method used in the classroom was cooperative. We rely on previous research and literature to guide the implementation of cooperative learning in the class. The goal of our study is to investigate the relationship between students' preferences and performance in a cooperative learning setting. We collected data through assessments, surveys, and observations. Results show no significant difference in the comparison of students' preferences and performance. Based on this study, we provide suggestions in teaching mathematics content courses for prospective teachers in a cooperative learning setting.
Lee Troupe: Bounded gaps between primes in \mathbb{F}_q[t] with a given primitive root
Venue: 23rd Meeting of the Palmetto Number Theory Series, University of South Carolina at Columbia, December 6-7, 2014
Abstract: A famous conjecture of Artin states that there are infinitely many prime numbers for which a fixed integer g is a primitive root, provided g \neq -1 and g is not a perfect square. Thanks to work of Hooley, we know that this conjecture is true, conditional on the truth of the Generalized Riemann Hypothesis. Using a combination of Hooley's analysis and the techniques of Maynard-Tao used to prove the existence of bounded gaps between primes, Pollack has shown that (conditional on GRH) there are bounded gaps between primes with a prescribed primitive root. In this talk, we discuss the analogue of Pollack's work in the function field case; namely, that given a monic polynomial g(t) which is not an \ellth power for any \ell dividing q-1, there are bounded gaps between monic irreducible polynomials P(t) in \mathbb{F}_q[t] for which g(t) is a primitive root (which is to say that g(t) generates the group of units modulo P(t)). In particular, we obtain bounded gaps between primitive polynomials, corresponding to the choice g(t) = t.
Lee Troupe: The number of prime factors of s(n).
Venue: Fall Southeastern Sectional Meeting of the AMS, University of North Carolina at Greensboro, November 8-9, 2014
Abstract: Let ω(n) denote the number of distinct prime divisors of a natural number n. In 1917, Hardy and Ramanujan famously proved that the normal order of ω(n) is log log n; in other words, a typical natural number n has about log log n distinct prime factors. Erdős and Kac later generalized Hardy and Ramanujan's result, showing (roughly speaking) that ω(n) is normally distributed and thereby giving rise to the field of probabilistic number theory. In this talk, we'll discuss the normal order of ω(s(n)), where s(n) is the usual sum-of-proper-divisors function. This new result supports a conjecture of Erdős, Granville, Pomerance, and Spiro; namely, that if a set of natural numbers has asymptotic density zero, then so does its preimage under s.
Kenneth Jacobs: A New Type of Equidistribution Result in non-Archimedean Dynamics
Venue: Northwestern University Dynamics Seminar, November 4th, 2014
Abstract: Let K be an algebraically closed field that is complete with respect to a non-Archimedean absolute value. We study the dynamics of rational functions with coefficients in K. In this non-Archimedean setting, there is an associated rational map, called the reduction map, which is defined over the residue field of K and carries information about the dynamics. Recently, Rumely introduced a measure nu on the Berkovich line over K that carries information about the reduction of the conjugates of the map. In this talk, we will show that the sequence of measures {nu_n}, associated to the iterates of the map, equidistribute to a natural invariant measure on the Berkovich line. As time permits, we will also discuss recent work of a VIGRE group in which the crucial measures have been shown to give information about the location of the map as a point in moduli space.
Ziqing Xiang: Tight Block Designs
Venue: Workshop on Sphere Packings, Lattices, and Designs, Erwin Schrodinger International Institute, Vienna, Austria, October 27-31, 2014
Lee Troupe: The Hardy-Ramanujan theorem and related results.
Venue: Clemson University, October 22, 2014
Venue: Clemson University Number Theory Seminar, October 8, 2014
Theresa Brons: Parabolic Subgroups and the Line-Bundle Cohomology over the Flag Variety
Venue: Central Fall Sectional Meeting of the AMS, Eau Claire, Wisconsin, September 19-21, 2014
Abstract: H.H. Andersen determined the socle of H^1(λ), which is potentially non-zero only when there exists a unique simple root α such that ⟨λ, α^∨⟩ < 0. In this work he did so by first determining the socle in the case when G is of type A_1, where H^1(λ) is a Weyl module and λ an anti-dominant weight, and later extended this to the case when P(α) is a minimal parabolic subgroup. In this talk, this approach will be generalized, leading to some new vanishing results and some interesting avenues for further study.
long-range order v.s. symmetry-breaking
Long/short range order v.s. long/short range interactions v.s. long-short range entanglement
Semiclassical QED and long-range interaction
How to realize long-range interaction of colds atom in an optical lattice?
Peierls Argument for Absence of Long Range Order
Casimir forces due to scalar field using Path integrals
Is resonating valence bond (RVB) states long-range entangled?
What's the definition of a short range potential and why is it defined this way?
Nonabelian gauge theories and range of the corresponding force
Significance of $U(1)$ extensions of SM
Extensions of DHR superselection theory to long range forces
For Haag-Kastler nets $M(O)$ of von-Neumann algebras $M$ indexed by open bounded subsets $O$ of the Minkowski space in AQFT (algebraic quantum field theory) the DHR (Doplicher-Haag-Roberts) superselection theory treats representations that are "localizable" in the following sense.
The $C^*-$algebra
$$ \mathcal{A} := clo_{\| \cdot \|} \bigl( \bigcup_{\mathcal{O}}\mathcal{M}(\mathcal{O}) \bigr) $$
is called the quasi-local algebra of the given net.
For a vacuum representation $\pi_0$, a representation $\pi$ of the local algebra $\mathcal{A}$ is called (DHR) admissible if $\pi | \mathcal{A}(\mathcal{K}^{\perp})$ is unitarily equivalent to $\pi_0 | \mathcal{A}(\mathcal{K}^{\perp})$ for all double cones $\mathcal{K}$.
Here, $\mathcal{K}^{\perp}$ denotes the causal complement of a subset of the Minkowski space.
The DHR condition says that all expectation values (of all observables) should approach the vacuum expectation values, uniformly, when the region of measurement is moved away from the origin.
The DHR condition therefore excludes long range forces like electromagnetism from consideration, because, by Stokes' theorem, the electric charge in a finite region can be measured by the flux of the field strength through a sphere of arbitrary large radius.
In his recent talk
Sergio Doplicher: "Superselection structure in Local Quantum Theories with (neutral) massless particle"
at the conference Modern Trends in AQFT, it would seem that Sergio Doplicher announced an extension of superselection theory to long range forces like electromagnetism, which has yet to be published.
I am interested in any references to or explanations of this work, or similar extensions of superselection theory in AQFT to long range forces. (And of course also in all corrections to the characterization of DHR superselection theory I wrote here.)
And also in a heads up when Doplicher and coworkers publish their result.
asked Sep 27, 2011 in Theoretical Physics by Tim van Beek (745 points) [ no revision ]
retagged Mar 24, 2014 by dimension10
Now I wish I had paid more attention while I was sitting in the audience. :-) Unfortunately, I'm not familiar enough with the original DHR analysis to have retained more than just the broad outlines of the arguments anyway.
With that disclaimer, I do (imperfectly) recall some apparently important points. Doplicher drew attention to the parallels between their new analysis and the analysis of Buchholz and Fredenhagen (CMP 84 1, doi), which relied only on spacelike wedges for a notion of localization, rather than the double diamonds of DHR. Starting from wedges, localization properties can be refined to spacelike cones and, under fortuitous circumstances, to arbitrarily small bounded regions. On the other hand, the new analysis makes use of localization in future pointed light cones. The role of spacelike cones is now played by hyperbolic cones, which are thickenings of cones defined on 3-hyperboloids asymptotic to the given light cone. I'm afraid I cannot be more specific, but this notion seems to have come up independently in hyperbolic 3-geometry.
As to the results, I recall that they are very similar to the results of the previous DHR or BF analyses. In particular, no exotic statistics appear and only the standard (para)bose and (para)fermi cases are possible. I can't recall any result that is different from the previous analyses (though that could be just my memory).
answered Sep 27, 2011 by Igor Khavkine (420 points) [ no revision ]
Well, that's a good start :-) But I'll leave the question open for now.
commented Sep 28, 2011 by Tim van Beek (745 points) [ no revision ]
Slides by Buchholz on this project are available here: http://www.univie.ac.at/qft-lhc/?page_id=10
answered Oct 4, 2011 by Eric (170 points) [ no revision ]
Thanks for the tip! I am curious about Bucholz's conlcusion "Origin of infrared difficulties can be traced back to unreasonable idealization of observations covering all of Minkowski space". I remember asking about this idealization in all QFT calculations in an introductory class, but had no idea that this would reappier in such a context :-)
commented Oct 7, 2011 by Tim van Beek (745 points) [ no revision ]
I think the key physical insights are that 1) for an observer the relevant part of Minkowski spacetime where he can perform measurements (observables) is its causal future (future lightcone $V_+$) and 2) photons from the past lightcone $V_-$ cannot enter $V_+$ which provides, in a sense, a geometric infrared cutoff. But of course, if you want to compare measurements for different observers, you need to consider all of Minkowski space.
commented Oct 10, 2011 by Eric (170 points) [ no revision ]
Welcome to ShortScience.org!
ShortScience.org is a platform for post-publication discussion aiming to improve accessibility and reproducibility of research ideas.
The website has 1435 public summaries, mostly in machine learning, written by the community and organized by paper, conference, and year.
Reading summaries of papers is useful to obtain the perspective and insight of another reader, why they liked or disliked it, and their attempt to demystify complicated sections.
Also, writing summaries is a good exercise to understand the content of a paper because you are forced to challenge your assumptions when explaining it.
Finally, you can keep up to date with the flood of research by reading the latest summaries on our Twitter and Facebook pages.
Popular (Today)
Learning to Predict Without Looking Ahead: World Models Without Forward Prediction
Freeman, C. Daniel and Metz, Luke and Ha, David
arXiv e-Print archive - 2019 via Local Bibsonomy
Keywords: dblp
[link] Summary by CodyWild 2 months ago
Reinforcement Learning is often broadly separated into two categories of approaches: model-free and model-based. In the former category, networks simply take observations and input and produce predicted best-actions (or predicted values of available actions) as output. In order to perform well, the model obviously needs to gain an understanding of how its actions influence the world, but it doesn't explicitly make predictions about what the state of the world will be after an action is taken. In model-based approaches, the agent explicitly builds a dynamics model, or a model in which it takes in (past state, action) and predicts next state. In theory, learning such a model can lead to both interpretability (because you can "see" what the model thinks the world is like) and robustness to different reward functions (because you're learning about the world in a way not explicitly tied up with the reward).
This paper proposes an interesting melding of these two paradigms, where an agent learns a model of the world as part of an end-to-end policy learning. This works through something the authors call "observational dropout": the internal model predicts the next state of the world given the prior one and the action, and then with some probability, the state of the world that both the policy and the next iteration of the dynamics model sees is replaced with the model's prediction. This incentivizes the network to learn an effective dynamics model, because the farther the predictions of the model are from the true state of the world, the worse the performance of the learned policy will be on the iterations where the only observation it can see is the predicted one. So, this architecture is model-free in the sense that the gradient used to train the system is based on applying policy gradients to the reward, but model-based in the sense that it does have an internal world representation.
https://i.imgur.com/H0TNfTh.png
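A rough sketch of the mechanism as I understand it (this is a paraphrase, not the authors' code; the Gym-style `env.step` interface and all names are assumed purely for illustration):

```python
# Observational dropout: with probability 1 - p_real, the policy and the world
# model see the model's own prediction instead of the true observation.
import numpy as np

def rollout(env, policy, world_model, p_real=0.1, horizon=1000):
    obs = env.reset()
    seen = obs                                   # what the policy/model actually see
    total_reward = 0.0
    for _ in range(horizon):
        action = policy(seen)
        next_obs, reward, done, _ = env.step(action)
        pred_obs = world_model(seen, action)     # model's prediction of the next state
        # only occasionally reveal the real observation
        seen = next_obs if np.random.rand() < p_real else pred_obs
        total_reward += reward
        if done:
            break
    return total_reward  # maximised end-to-end, e.g. with policy gradients / evolution strategies
```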
The authors find that, at a simple task, Swing Up Cartpole, very low probabilities of seeing the true world (and thus very high probabilities of the policy only seeing the dynamics model output) lead to world models good enough that a policy trained only on trajectories sampled from that model can perform relatively well. This suggests that at higher probabilities of the true world, there was less value in the dynamics model being accurate, and consequently less training signal for it. (Of course, policies that often could only see the predicted world performed worse during their original training iteration compared to policies that could see the real world more frequently).
On a more complex task of CarRacing, the authors looked at how well a policy trained using the representations of the world model as input could perform, to examine whether it was learning useful things about the world.
https://i.imgur.com/v9etll0.png
They found an interesting trade-off, where at high probabilities (like before) the dynamics model had little incentive to be good, but at low probabilities it didn't have enough contact with the real dynamics of the world to learn a sensible policy.
SSD: Single Shot MultiBox Detector
Liu, Wei and Anguelov, Dragomir and Erhan, Dumitru and Szegedy, Christian and Reed, Scott E. and Fu, Cheng-Yang and Berg, Alexander C.
European Conference on Computer Vision - 2016 via Local Bibsonomy
[link] Summary by Qure.ai 2 years ago
SSD aims to solve the major problem with most of the current state-of-the-art object detectors, namely Faster RCNN and the like. All these object detection algorithms share the same methodology:
- Train 2 different nets - a Region Proposal Net (RPN) and an advanced classifier to detect the class of an object and its bounding box separately.
- During inference, run the test image at different scales to detect objects at multiple scales to account for invariance.
This makes the nets extremely slow. Faster RCNN could operate at **7 FPS with 73.2% mAP** while SSD could achieve **59 FPS with 74.3% mAP** on the VOC 2007 dataset.
#### Methodology
SSD uses a single net to predict both the object class and the bounding box. However, it doesn't do that directly: it uses a mechanism for choosing ROIs and trains end-to-end to predict the class and the boundary shift for each ROI.
##### ROI selection
Borrowing from Faster R-CNN, SSD uses the concept of anchor boxes for generating ROIs from the feature maps of the last layer of the shared conv backbone. For each pixel in a feature map, k default boxes with different aspect ratios are placed around it. So for a feature map of m x n resolution, that's *mnk* ROIs for a single feature layer. SSD then uses multiple feature layers (with differing resolutions) for generating such ROIs, primarily to capture the size invariance of objects. But because earlier layers in a deep conv net tend to capture low-level features, it only uses feature maps from a certain depth onward.
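As a rough illustration, here is a minimal sketch of generating default boxes per feature-map cell; the scale, stride and aspect-ratio choices are illustrative assumptions, not the paper's exact configuration:

```python
import itertools
import numpy as np

def default_boxes(fmap_h, fmap_w, scale, aspect_ratios=(1.0, 2.0, 0.5)):
    """Generate (cx, cy, w, h) default boxes, normalized to [0, 1],
    for one feature map of resolution fmap_h x fmap_w."""
    boxes = []
    for i, j in itertools.product(range(fmap_h), range(fmap_w)):
        cy = (i + 0.5) / fmap_h   # box centres sit at each cell centre
        cx = (j + 0.5) / fmap_w
        for ar in aspect_ratios:
            boxes.append((cx, cy, scale * np.sqrt(ar), scale / np.sqrt(ar)))
    return np.array(boxes)        # shape: (fmap_h * fmap_w * k, 4)
```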
##### ROI labelling
Any ROI that matches a ground-truth box of some class, after applying the appropriate transforms, with a Jaccard overlap greater than 0.5 is labelled positive. Given that the feature maps are at different resolutions and the default boxes have different aspect ratios, this matching is not trivial. SSD uses simple scaling and aspect ratios to bring the ground-truth box to the appropriate dimensions for computing the Jaccard overlap with the default boxes at each pixel of the given resolution.
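For concreteness, the Jaccard overlap (IoU) used in this matching can be sketched as follows (a minimal implementation, assuming corner-format boxes; not the reference code):

```python
def jaccard_overlap(box_a, box_b):
    """IoU of two boxes given as (xmin, ymin, xmax, ymax)."""
    ix_min, iy_min = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix_max, iy_max = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A default box is labelled positive for a class if, for example,
# jaccard_overlap(default_box, ground_truth_box) > 0.5
```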
##### ROI classification
SSD uses a single 3x3 convolution to predict, for each ROI, the 4 offsets (centre-x offset, centre-y offset, height offset, width offset) from the ground-truth box, along with class confidence scores for each class. So if there are c classes (including background), there are (c+4) filters for each convolution kernel that looks at an ROI.
So, in summary, we have convolution kernels that look at ROIs (which are default boxes around each pixel in a feature map layer) and generate (c+4) scores for each ROI. Multiple feature map layers with different resolutions are used for generating such ROIs. Some ROIs are positive and some negative, depending on the Jaccard overlap after the ground-truth box has been scaled appropriately, taking the resolution difference between the input image and the feature map into consideration.
Here's how it looks :

##### Training
For each ROI a combined loss is calculated as a combination of localisation error and classification error. The details are best explained in the figure.

##### Inference
For the ROI predictions, a small confidence threshold is first used to filter out irrelevant predictions; Non-Maximum Suppression (NMS) with a Jaccard overlap of 0.45 per class is then applied to the remaining candidate ROIs, and the top 200 detections per image are kept.
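A minimal sketch of the greedy per-class NMS step, reusing the `jaccard_overlap` helper from the ROI-selection sketch above (the thresholds follow the numbers quoted in this summary):

```python
def non_max_suppression(boxes, scores, iou_threshold=0.45, top_k=200):
    """Greedy NMS: keep the highest-scoring box, drop the remaining boxes that
    overlap it by more than iou_threshold, and repeat until top_k boxes are kept."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order and len(keep) < top_k:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order
                 if jaccard_overlap(boxes[best], boxes[i]) <= iou_threshold]
    return keep
```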
For further understanding of the intuitions regarding the paper and the results obtained please consider giving the full paper a read.
The open sourced code is available at this [Github repo](https://github.com/weiliu89/caffe/tree/ssd)
Spatial Transformer Networks
Jaderberg, Max and Simonyan, Karen and Zisserman, Andrew and Kavukcuoglu, Koray
Neural Information Processing Systems Conference - 2015 via Local Bibsonomy
[link] Summary by NIPS Conference Reviews 3 years ago
This paper presents a novel layer that can be used in convolutional neural networks. A spatial transformer layer computes re-sampling points of the signal based on another neural network. The suggested transformations include scaling, cropping, rotations and non-rigid deformations, whose parameters are trained end-to-end with the rest of the model. The resulting re-sampling grid is then used to create a new representation of the underlying signal through bilinear or nearest-neighbour interpolation. This has interesting implications: the network can learn to co-locate objects in a set of images that all contain the same object, the transformation parameters localize the attention area explicitly, and fine data resolution is restricted to areas important for the task. Furthermore, the model improves over the previous state of the art on a number of tasks.
The layer has one mini neural network that regresses the parameters of a parametric transformation (e.g., affine); then there is a module that applies the transformation to a regular grid, and a third that more or less "reads off" the values at the transformed positions and maps them to a regular grid, hence un-warping the image or previous layer. Gradients for back-propagation are derived for a few cases. The results are mostly of the classic deep learning variety, including MNIST and SVHN, but there is also the fine-grained birds dataset. The networks with spatial transformers seem to lead to improved results in all cases.
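A minimal PyTorch-style sketch of the three parts (localisation net, grid generator, sampler), using the standard `affine_grid`/`grid_sample` utilities; the localisation net shown here and the `in_features` argument are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformer(nn.Module):
    """Localisation net regresses 2x3 affine parameters; affine_grid builds the
    sampling grid; grid_sample re-samples the input with bilinear interpolation."""
    def __init__(self, in_features):
        super().__init__()
        self.loc_net = nn.Sequential(nn.Linear(in_features, 32), nn.ReLU(), nn.Linear(32, 6))
        # Start from the identity transform so the layer initially does nothing.
        self.loc_net[-1].weight.data.zero_()
        self.loc_net[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):                       # x: (B, C, H, W), with C*H*W == in_features
        theta = self.loc_net(x.flatten(1)).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)
```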
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro and Sameer Singh and Carlos Guestrin
Keywords: cs.LG, cs.AI, stat.ML
First published: 2016/02/16 (3 years ago)
Abstract: Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.
[link] Summary by Martin Thoma 3 years ago
This paper describes how to find local interpretable model-agnostic explanations (LIME) why a black-box model $m_B$ came to a classification decision for one sample $x$. The key idea is to evaluate many more samples around $x$ (local) and fit an interpretable model $m_I$ to it. The way of sampling and the kind of interpretable model depends on the problem domain.
For computer vision / image classification, the image $x$ is divided into superpixels. Individual superpixels (or random subsets of them) are blacked out, the new image $x'$ is evaluated as $p' = m_B(x')$, and this is done multiple times; the interpretable model $m_I$ is then fit to these perturbed samples.
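A minimal sketch of that perturbation-and-fit loop for images, assuming `segments` is an integer superpixel map (e.g., from an off-the-shelf segmentation such as SLIC) and `black_box` returns the probability of the class being explained; LIME's distance-weighting of samples is omitted here for brevity:

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_image(x, black_box, segments, n_samples=1000):
    """Fit a local linear surrogate: which superpixels push the black-box
    score up or down when they are kept vs. blacked out?"""
    n_segments = segments.max() + 1
    masks = np.random.randint(0, 2, size=(n_samples, n_segments))   # 1 = keep superpixel
    ys = []
    for mask in masks:
        x_perturbed = x.copy()
        for s in np.where(mask == 0)[0]:
            x_perturbed[segments == s] = 0                          # black out dropped superpixels
        ys.append(black_box(x_perturbed))                           # p' = m_B(x')
    surrogate = Ridge(alpha=1.0).fit(masks, np.array(ys))
    return surrogate.coef_   # one weight per superpixel = local explanation
```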
The paper is also explained in [this YouTube video](https://www.youtube.com/watch?v=KP7-JtFMLo4) by Marco Tulio Ribeiro.
A very similar idea is already in the [Zeiler & Fergus paper](http://www.shortscience.org/paper?bibtexKey=journals/corr/ZeilerF13#martinthoma).
## Follow-up Paper
* June 2016: [Model-Agnostic Interpretability of Machine Learning](https://arxiv.org/abs/1606.05386)
* November 2016:
* [Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance](https://arxiv.org/abs/1611.05817)
* [An unexpected unity among methods for interpreting model predictions](https://arxiv.org/abs/1611.07478)
Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet
Brendel, Wieland and Bethge, Matthias
[link] Summary by David Stutz 6 months ago
Brendel and Bethge show empirically that state-of-the-art deep neural networks on ImageNet rely to a large extent on local features, without any notion of interaction between them. To this end, they propose a bag-of-local-features model by applying a ResNet-like architecture on small patches of ImageNet images. The predictions of these local features are then averaged and a linear classifier is trained on top. Due to the locality, this model allows to inspect which areas in an image contribute to the model's decision, as shown in Figure 1. Furthermore, these local features are sufficient for good performance on ImageNet. Finally, they show, on scrambled ImageNet images, that regular deep neural networks also rely heavily on local features, without any notion of spatial interaction between them.
https://i.imgur.com/8NO1w0d.png
Figure 1: Illustration of the heat maps obtained using BagNets, the bag-of-local-features model proposed in the paper. Here, different sizes for the local patches are used.
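A minimal sketch of the bag-of-local-features idea described above (illustrative only: `patch_encoder` stands for any small CNN exposing an `out_dim` attribute, and the patch size and stride are assumptions rather than the paper's exact settings):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BagOfLocalFeatures(nn.Module):
    """Classify each small patch independently, then average the per-patch logits;
    spatial interactions beyond the patch size are impossible by construction."""
    def __init__(self, patch_encoder, n_classes, patch_size=33, stride=8):
        super().__init__()
        self.patch_encoder = patch_encoder          # small CNN: patch -> feature vector
        self.classifier = nn.Linear(patch_encoder.out_dim, n_classes)
        self.patch_size, self.stride = patch_size, stride

    def forward(self, x):                           # x: (B, C, H, W)
        patches = F.unfold(x, self.patch_size, stride=self.stride)   # (B, C*ps*ps, N)
        B, _, N = patches.shape
        patches = patches.transpose(1, 2).reshape(B * N, x.size(1), self.patch_size, self.patch_size)
        logits = self.classifier(self.patch_encoder(patches))        # per-patch class evidence
        return logits.view(B, N, -1).mean(dim=1)                     # average over all patches
```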
Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
Ease-of-Teaching and Language Structure from Emergent Communication
Li, Fushan and Bowling, Michael
An interesting category of machine learning papers - to which this paper belongs - are papers which use learning systems as a way to explore incentive structures whose equilibrium properties are difficult to reason about intuitively. In this paper, the authors are trying to better understand how different dynamics of a cooperative communication game between agents, where the speaking agent is trying to describe an object such that the listening agent picks the one the speaker is being shown, influence the communication protocol (or, to slightly anthropomorphize, the language) that the agents end up using.
In particular, the authors experiment with what happens when the listening agent is frequently replaced during training with an untrained listener who has no prior experience with the speaker. The idea of this experiment is that if the speaker is in a scenario where listeners need to frequently "re-learn" the mapping between communication symbols and objects, this will provide an incentive for that mapping to be easier to quickly learn.
https://i.imgur.com/8csqWsY.png
The metric of ease of learning that the paper focuses on is "topographic similarity", which is a measure of how compositional the communication protocol is. The objects they're working with have two properties, and the agents use a pair of two discrete symbols (two letters) to communicate about them. A perfectly compositional language would use one of the symbols to represent each of the properties. To measure this property mathematically, the authors calculate the (cosine) similarity between the two objects' property vectors and the (edit) distance between the two objects' descriptions under the emergent language, and calculate the correlation between these quantities. In this experimental setup, if a language is perfectly compositional, the correlation will be perfect, because every time a property is the same, the same symbol will be used, so two objects that share that property will always share that symbol in their linguistic representation.
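A minimal sketch of how such a topographic similarity could be computed under the setup described above (the exact correlation measure and sign convention vary across papers; this version correlates cosine similarity of property vectors with edit distance of messages, as in the summary):

```python
import numpy as np
from itertools import combinations
from scipy.stats import pearsonr

def edit_distance(a, b):
    """Plain Levenshtein distance between two symbol sequences."""
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i, j] = min(d[i-1, j] + 1, d[i, j-1] + 1, d[i-1, j-1] + (a[i-1] != b[j-1]))
    return d[-1, -1]

def topographic_similarity(property_vectors, messages):
    """Correlation between object similarity and message distance over all object pairs.
    A highly compositional language gives a strong (negative) correlation here, since
    similar objects then receive messages that are few edits apart."""
    sims, dists = [], []
    for i, j in combinations(range(len(messages)), 2):
        u, v = property_vectors[i], property_vectors[j]
        sims.append(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
        dists.append(edit_distance(messages[i], messages[j]))
    return pearsonr(sims, dists)[0]
```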
https://i.imgur.com/t5VxEoX.png
The premise and the experimental setup of this paper are interesting, but I found the experimental results difficult to gain intuition and confidence from. The authors do show that, in a regime where listeners are reset, topographic similarity rises from a beginning-of-training value of .54 to an end-of-training value of .59, whereas in the baseline, no-reset regime, the value drops to .51. So there definitely is some amount of support for their claim that listener resets lead to higher compositionality. But given that their central quantity is just a correlation between similarities, it's hard to gain intuition for whether the difference is meaningful. It doesn't naively seem particularly dramatic, and it's hard to tell otherwise without more references for how topographic similarity would change under a wider range of different training scenarios.
Way Off-Policy Batch Deep Reinforcement Learning of Implicit Human Preferences in Dialog
Jaques, Natasha and Ghandeharioun, Asma and Shen, Judy Hanwen and Ferguson, Craig and Lapedriza, Àgata and Jones, Noah and Gu, Shixiang and Picard, Rosalind W.
Given the tasks that RL is typically used to perform, it can be easy to equate the problem of reinforcement learning with "learning dynamically, online, as you take actions in an environment". And while this does represent most RL problems in the literature, it is possible to learn a reinforcement learning system in an off-policy way (read: trained off of data that the policy itself didn't collect), and there can be compelling reasons to prefer this approach. In this paper, which seeks to train a chatbot to learn from implicit human feedback in text interactions, the authors note prior bad experiences with Microsoft's Tay bot, and highlight the value of being able to test and validate a learned model offline, rather than have it continue to learn in a deployment setting. This problem, of learning a RL model off of pre-collected data, is known as batch RL. In this setting, the batch is collected by simply using a pretrained language model to generate interactions with a human, and then extracting reward from these interactions to train a Q learning system once the data has been collected.
If naively applied, Q learning (a good approach for off-policy problems, since it directly estimates the value of states and actions rather than of a policy) can lead to some undesirable results in a batch setting. An interesting one, that hadn't occurred to me, was the fact that Q learning translates its (state, action) reward model into a policy by taking the action associated with the highest reward. This is a generally sensible thing to do if you've been able to gather data on all or most of a state space, but it can also bias the model toward taking actions that it has less data for, because high-variance estimates will tend to make up a disproportionate amount of the maximum values of any estimated distribution. One approach to this is to learn two separate Q functions, take the minimum over them, and then take the max of that across actions (in this case: words in a sentence being generated). The idea here is that low-data, high-variance parts of state space might have one estimate be high, but might have the other be low, precisely because of the high variance. However, it's costly to train and run two separate models. Instead, the authors here propose the simpler solution of training a single model with dropout, and using multiple "draws" from that model to simulate a distribution over Q value estimates. This will have a similar effect of penalizing actions whose estimate varies across different dropout masks (which can be hand-wavily thought of as different models).
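A minimal sketch of that dropout-as-ensemble trick, assuming `q_net` is a PyTorch module containing dropout layers that maps a state to per-action Q values; the draw count and the min-over-draws aggregation follow the description above, not the authors' exact code:

```python
import torch

def conservative_q_values(q_net, state, n_draws=16):
    """Approximate a Q ensemble with dropout: keep dropout active at inference,
    draw several estimates, and take the per-action minimum to penalize actions
    whose value estimate varies a lot across dropout masks."""
    q_net.train()                     # keep dropout masks active during the draws
    with torch.no_grad():
        draws = torch.stack([q_net(state) for _ in range(n_draws)])   # (n_draws, n_actions)
    return draws.min(dim=0).values    # conservative value per action
```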
The authors also add a term to their RL training that penalizes divergence from the initial language model that they used to collect the data, and which also serves as the initialization point for the parameters of the model. This is done via KL-divergence control: the model is penalized for outputting a distribution over words that is different, in distributional-metric terms, from what the language model would have output. This makes it costlier for the model to diverge from the pretrained model, and should lead to divergence only happening in cases of convincingly high reward.
Out of these two approaches, it seems like the former is more convincing to me as a general-purpose method to use in batch RL settings. The latter is definitely something I would have expected to work well (and, indeed, KL-controlled models performed much better in empirical tests in the paper!), but more simply because language modeling is hard, and I would expect it to be good to constrain a model to be close to realistic outputs, since the sentiment-based reward signal won't reward realism directly. This seems more like something generally useful for avoiding catastrophic forgetting when switching from an old task to a new one (language modeling to sentiment modeling), rather than a particularly batch-RL-centric innovation.
https://i.imgur.com/EmInxOJ.png
An interesting empirical observation of this paper is that models without language-model control end up drifting away from realism, and repeatedly exploit part of the reward function that, in addition to sentiment, gave points for asking questions. By contrast, the KL-controlled models appear to have avoided falling into this local minimum, and instead generated realistic language that was polite and empathetic. (Obviously this is still a simplified approximation of what makes a good chat bot, but it's at least a higher degree of complexity in its response to reward).
Overall, I quite enjoyed this paper, both for its thoughtfulness and its clever application of engineering to use RL for a problem well outside of its more typical domain.
Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation
Girshick, Ross B. and Donahue, Jeff and Darrell, Trevor and Malik, Jitendra
Conference and Computer Vision and Pattern Recognition - 2014 via Local Bibsonomy
[link] Summary by nandini 2 years ago
# Object detection system overview.
https://i.imgur.com/vd2YUy3.png
1. takes an input image,
2. extracts around 2000 bottom-up region proposals,
3. computes features for each proposal using a large convolutional neural network (CNN), and then
4. classifies each region using class-specific linear SVMs.
* R-CNN achieves a mean average precision (mAP) of 53.7% on PASCAL VOC 2010.
* On the 200-class ILSVRC2013 detection dataset, R-CNN's mAP is 31.4%, a large improvement over OverFeat , which had the previous best result at 24.3%.
## Two challenges faced in object detection
1. localization problem
2. labeling the data
1 localization problem :
* One approach frames localization as a regression problem. They report a mAP of 30.5% on VOC 2007, compared to the 58.5% achieved by this method.
* An alternative is to build a sliding-window detector. The authors considered adopting a sliding-window approach, but units high up in their network, with 5 convolutional layers, have very large receptive fields (195 x 195 pixels) and strides (32 x 32 pixels) in the input image, which makes precise localization within the sliding-window paradigm difficult.
2 labeling the data:
* The conventional solution to this problem is to use unsupervised pre-training, followed by supervised fine-tuning.
* supervised pre-training on a large auxiliary dataset (ILSVRC), followed by domain specific fine-tuning on a small dataset (PASCAL),
* fine-tuning for detection improves mAP performance by 8 percentage points.
* Stochastic gradient descent via back-propagation was used as an effective method for training convolutional neural networks (CNNs).
## Object detection with R-CNN
This system consists of three modules
* The first generates category-independent region proposals. These proposals define the set of candidate detections available to our detector.
* The second module is a large convolutional neural network that extracts a fixed-length feature vector from each region.
* The third module is a set of class specific linear SVMs.
1 Region proposals
* Prior work includes detectors that find mitotic cells by applying a CNN to regularly-spaced square crops, a special case of region proposals.
* use selective search method in fast mode (Capture All Scales, Diversification, Fast to Compute).
* the time spent computing region proposals and features (13s/image on a GPU or 53s/image on a CPU)
2 Feature extraction.
* extract a 4096-dimensional feature vector from each region proposal using the Caffe implementation of the CNN
* Features are computed by forward propagating a mean-subtracted 227x227 RGB image through five convolutional layers and two fully connected layers.
* warp all pixels in a tight bounding box around it to the required size
* The feature matrix is typically 2000x4096
3 Test time detection
* At test time, run selective search on the test image to extract around 2000 region proposals (we use selective search's "fast mode" in all experiments).
* warp each proposal and forward propagate it through the CNN in order to compute features. Then, for each class, we score each extracted feature vector using the SVM trained for that class.
* Given all scored regions in an image, we apply a greedy non-maximum suppression (for each class independently) that rejects a region if it has an intersection-over-union (IoU) overlap with a higher-scoring selected region larger than a learned threshold.
## Training
1 Supervised pre-training:
* pre-trained the CNN on a large auxiliary dataset (ILSVRC2012 classification) using image-level annotations only (bounding box labels are not available for this data)
2 Domain-specific fine-tuning.
* stochastic gradient descent (SGD) training of the CNN parameters is continued using only warped region proposals, with a learning rate of 0.001.
3 Object category classifiers.
* an intersection-over-union (IoU) overlap threshold of 0.3 is used to decide which regions are labelled as negatives for a class.
* Once features are extracted and training labels are applied, we optimize one linear SVM per class.
* adopt the standard hard negative mining method to fit large training data in memory.
### Results on PASCAL VOC 2010-12
1 VOC 2010
* compared against four strong baselines including SegDPM, DPM, UVA, Regionlets.
* Achieve a large improvement in mAP, from 35.1% to 53.7% mAP, while also being much faster
https://i.imgur.com/0dGX9b7.png
2 ILSVRC2013 detection.
* ran R-CNN on the 200-class ILSVRC2013 detection dataset
* R-CNN achieves a mAP of 31.4%
https://i.imgur.com/GFbULx3.png
#### Performance layer-by-layer, without fine-tuning
1 pool5 layer
* which is the max pooled output of the network's fifth and final convolutional layer.
* The pool5 feature map is 6 x 6 x 256 = 9216-dimensional
* each pool5 unit has a receptive field of 195x195 pixels in the original 227x227 pixel input
2 Layer fc6
* fully connected to pool5
* it multiplies a 4096x9216 weight matrix by the pool5 feature map (reshaped as a 9216-dimensional vector) and then adds a vector of biases
3 Layer fc7
* It is implemented by multiplying the features computed by fc6 by a 4096 x 4096 weight matrix, and similarly adding a vector of biases and applying half-wave rectification.
#### Performance layer-by-layer, with fine-tuning
* CNN's parameters fine-tuned on PASCAL.
* fine-tuning increases mAP by 8.0 % points to 54.2%
### Network architectures
* A 16-layer deep network, consisting of 13 layers of 3 x 3 convolution kernels, with five max pooling layers interspersed, and topped with three fully-connected layers. We refer to this network as "O-Net" for OxfordNet and the baseline as "T-Net" for TorontoNet.
* RCNN with O-Net substantially outperforms R-CNN with TNet, increasing mAP from 58.5% to 66.0%
* The drawback is compute time: inference with O-Net is considerably slower than with T-Net.
1 The ILSVRC2013 detection dataset
* dataset is split into three sets: train (395,918), val (20,121), and test (40,152)
#### CNN features for segmentation.
* full R-CNN: The first strategy (full) ignores the region's shape and computes CNN features directly on the warped window. Two regions might have very similar bounding boxes while having very little overlap.
* fg R-CNN: the second strategy (fg) computes CNN features only on a region's foreground mask. We replace the background with the mean input so that background regions are zero after mean subtraction.
* full+fg R-CNN: The third strategy (full+fg) simply concatenates the full and fg features
https://i.imgur.com/n1bhmKo.png
Optimizing Classifier Performance via an Approximation to the Wilcoxon-Mann-Whitney Statistic
Yan, Lian and Dodier, Robert H. and Mozer, Michael and Wolniewicz, Richard H.
International Conference on Machine Learning - 2003 via Local Bibsonomy
[link] Summary by Prateek Gupta 3 months ago
In binary classification task on an imbalanced dataset, we often report *area under the curve* (AUC) of *receiver operating characteristic* (ROC) as the classifier's ability to distinguish two classes.
If there are $k$ errors, accuracy will be the same irrespective of how those $k$ errors are made i.e. misclassification of positive samples or misclassification of negative samples.
AUC-ROC is a metric that treats these misclassifications asymmetrically, making it an appropriate statistic for classification tasks on imbalanced datasets.
However, until this paper, AUC-ROC was hard to quantify and differentiate to gradient-descent over.
This paper approximated AUC-ROC by a Wilcoxon-Mann-Whitney statistic which counts the "number of wins" in all the pairwise comparisons -
$U = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n} I(x_i, x_j)}{mn},$
where $m$ is the total number of positive samples, $n$ is the number of negative samples, and $I(x_i, x_j)$ is $1$ if $x_i$ is ranked higher than $x_j$.
Figure 1 in the paper shows the variance of this statistic with an increasing imbalance in the dataset, justifying the close correspondence with AUC-ROC.
Further, to make this metric smooth and differentiable, the step function of pairwise comparison is replaced by sigmoid or hinge functions.
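A minimal numpy sketch of the statistic and one possible smooth surrogate; the paper's actual sigmoid/hinge surrogates and hyperparameters differ, so the sigmoid version here is only for illustration:

```python
import numpy as np

def wmw_auc(pos_scores, neg_scores):
    """Exact Wilcoxon-Mann-Whitney statistic: fraction of (positive, negative)
    pairs in which the positive sample is ranked higher."""
    wins = sum(p > n for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

def soft_wmw_auc(pos_scores, neg_scores, beta=5.0):
    """Smooth surrogate: replace the step function I(p > n) with a sigmoid of the
    score difference, so the objective becomes differentiable for gradient descent."""
    diffs = np.subtract.outer(np.asarray(pos_scores), np.asarray(neg_scores))
    return float(np.mean(1.0 / (1.0 + np.exp(-beta * diffs))))
```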
Further extensions are made to apply this to multi-class classification tasks and focus on top-K predictions i.e. optimize lower-left part of AUC.
Adversarial Examples Are Not Bugs, They Are Features
Ilyas, Andrew and Santurkar, Shibani and Tsipras, Dimitris and Engstrom, Logan and Tran, Brandon and Madry, Aleksander
- 2019 via Local Bibsonomy
Keywords: adversarial
Ilyas et al. present a follow-up work to their paper on the trade-off between accuracy and robustness. Specifically, given a feature $f(x)$ computed from input $x$, the feature is considered predictive if
$\mathbb{E}_{(x,y) \sim \mathcal{D}}[y f(x)] \geq \rho$;
similarly, a predictive feature is robust if
$\mathbb{E}_{(x,y) \sim \mathcal{D}}\left[\inf_{\delta \in \Delta(x)} yf(x + \delta)\right] \geq \gamma$.
This means, a feature is considered robust if the worst-case correlation with the label exceeds some threshold $\gamma$; here the worst-case is considered within a pre-defined set of allowed perturbations $\Delta(x)$ relative to the input $x$. Obviously, there also exist predictive features, which are however not robust according to the above definition. In the paper, Ilyas et al. present two simple algorithms for obtaining adapted datasets which contain only robust or only non-robust features. The main idea of these algorithms is that an adversarially trained model only utilizes robust features, while a standard model utilizes both robust and non-robust features. Based on these datasets, they show that non-robust, predictive features are sufficient to obtain high accuracy; similarly training a normal model on a robust dataset also leads to reasonable accuracy but also increases robustness. Experiments were done on Cifar10. These observations are supported by a theoretical toy dataset consisting of two overlapping Gaussians; I refer to the paper for details.
Nutrition & Metabolism
Resting metabolic rate of obese patients under very low calorie ketogenic diet
Diego Gomez-Arbelaez1,
Ana B. Crujeiras ORCID: orcid.org/0000-0003-4392-03011,5,
Ana I. Castro1,5,
Miguel A. Martinez-Olmos1,5,
Ana Canton1,5,
Lucia Ordoñez-Mayan1,
Ignacio Sajoux2,
Cristobal Galban3,
Diego Bellido4 &
Felipe F. Casanueva1,5
Nutrition & Metabolism volume 15, Article number: 18 (2018) Cite this article
The resting metabolic rate (RMR) decrease, observed after an obesity reduction therapy is a determinant of a short-time weight regain. Thus, the objective of this study was to evaluate changes in RMR, and the associated hormonal alterations in obese patients with a very low-calorie ketogenic (VLCK)-diet induced severe body weight (BW) loss.
From 20 obese patients who lost 20.2 kg of BW after a 4-months VLCK-diet, blood samples and body composition analysis, determined by DXA and MF-Bioimpedance, and RMR by indirect calorimetry, were obtained on four subsequent visits: visit C-1, basal, initial fat mass (FM) and free fat mass (FFM); visit C-2, − 7.2 kg in FM, − 4.3 kg in FFM, maximal ketosis; visit C-3, − 14.4 kg FM, − 4.5 kg FFM, low ketosis; visit C-4, − 16.5 kg FM, − 3.8 kg FFM, no ketosis. Each subject acted as his own control.
Despite the large BW reduction, measured RMR varied from basal visit C-1 to visit C-2, − 1.0%; visit C-3, − 2.4% and visit C-4, − 8.0%, without statistical significance. No metabolic adaptation was observed. The absent reduction in RMR was not due to increased sympathetic tone, as thyroid hormones, catecholamines, and leptin were reduced at any visit from baseline. Under regression analysis FFM, adjusted by levels of ketonic bodies, was the only predictor of the RMR changes (R2 = 0.36; p < 0.001).
The rapid and sustained weight and FM loss induced by VLCK-diet in obese subjects did not induce the expected reduction in RMR, probably due to the preservation of lean mass.
Trial registration
This is a follow up study on a published clinical trial.
It is widely accepted that during periods of energy deficit or restriction (e.g., weight-loss diets), the human body tends to diminish energy expenditure by increasing the efficiency in its use and by decreasing the resting metabolic rate (RMR) [18]. This phenomenon of metabolic adaptation to weight reduction is called adaptive thermogenesis, defined as a decrease in RMR out of proportion to the decrease in body mass [5, 12]. Various groups have observed this phenomenon during obesity treatments independently of the strategy employed, including diet, exercise, diet plus exercise, pharmacologic treatments, and surgical interventions, and suggest that metabolic adaptation predisposes weight-reduced obese patients to weight regain.
Published research has shown that a very low calorie ketogenic (VLCK) diet was able to induce a significant weight loss and maintain its efficacy over 2 years [10, 11]. Because VLCK-diets target body fat mass (FM) with little reduction in fat free mass (FFM) [6], the working hypothesis was that a VLCK-diet may induce a minor or null reduction in the RMR, thus preventing body weight regain.
The main target of this study was to observe the changes in RMR induced by a VLCK-diet in obese subjects, as well as the hormonal and metabolic alterations associated with that change.
This is a follow up study on a published clinical trial [6]. It was an open, uncontrolled, nutritional intervention clinical trial conducted for 4 months, and performed in a single center.
The patients attending the Obesity Unit at the Complejo Hospitalario Universitario of Santiago de Compostela, Spain to receive treatment for obesity were consecutively invited to participate in this study.
The inclusion criteria were: age 18 to 65 years, body mass index (BMI) ≥30 kg/m2, stable body weight in the previous 3 months, desire to lose weight, and a history of failed dietary efforts. The main exclusion criteria were: diabetes mellitus, obesity induced by other endocrine disorders or by drugs, and participation in any active weight loss program in the previous 3 months. In addition, those patients with previous bariatric surgery, known or suspected abuse of narcotics or alcohol, severe depression or any other psychiatric disease, severe hepatic insufficiency, any type of renal insufficiency or gout episodes, nephrolithiasis, neoplasia, previous events of cardiovascular or cerebrovascular disease, uncontrolled hypertension, orthostatic hypotension, and hydroelectrolytic or electrocardiographic alterations, were excluded. Females who were pregnant, breast-feeding, or intending to become pregnant, and those with child-bearing potential and not using adequate contraceptive methods, were also excluded. Apart from obesity and metabolic syndrome, participants were generally healthy individuals.
The study protocol was in accordance with the Declaration of Helsinki and was approved by the Ethics Committee for Clinical Research of Galicia, Santiago de Compostela, Spain (registry 2010/119). Participants gave informed consent before any intervention related to the study. Participants received no monetary incentive.
Nutritional intervention
All the patients followed a VLCK diet according to a commercial weight loss program (PNK method®), which includes lifestyle and behavioral modification support. The intervention included an evaluation by the specialist physician conducting the study, an assessment by an expert dietician, and exercise recommendations. This method is based on high-biological-value protein preparations obtained from cow milk, soya, avian eggs, green peas and cereals. Each protein preparation contained 15 g protein, 4 g carbohydrates, 3 g fat, and 50 mg docosahexaenoic acid, and provided 90–100 kcal.
The weight loss program has five steps (Additional file 1: Figure S1) and adheres to the most recent 2015 guidelines of the European Food Safety Authority (EFSA) on total carbohydrate intake [3]. The first three steps consist of a VLCK diet (600–800 kcal/day), low in carbohydrates (< 50 g daily from vegetables) and lipids (only 10 g of olive oil per day). The amount of high-biological-value protein ranged between 0.8 and 1.2 g per kg of ideal body weight, to ensure patients were meeting their minimal body requirements and to prevent the loss of lean mass. In step 1, the patients ate high-biological-value protein preparations five times a day, and vegetables with low glycemic indexes. In step 2, one of the protein servings was substituted by a natural protein (e.g., meat or fish) either at lunch or at dinner. In step 3, a second serving of low-fat natural protein was substituted for the second serving of biological protein preparation. Throughout these ketogenic phases, supplements of vitamins and minerals, such as K, Na, Mg, Ca, and omega-3 fatty acids, were provided in accordance with international recommendations [22]. These three steps were maintained until the patient lost the target amount of weight, ideally 80%. Hence, the ketogenic steps were variable in time depending on the individual and the weight loss target.
In steps 4 and 5, the ketogenic phases were ended by the physician in charge of the patient based on the amount of weight lost, and the patient started a low-calorie diet (800–1500 kcal/day). At this point, the patients underwent a progressive incorporation of different food groups and participated in a program of alimentary re-education to guarantee the long-term maintenance of the weight loss. The maintenance diet consisted of an eating plan balanced in carbohydrates, protein, and fat. Depending on the individual, the calories consumed ranged between 1500 and 2000 kcal/day, and the target was to maintain the weight lost and promote healthy lifestyles.
During this study, the patients followed the different steps of the method until they reach the target weight or up to a maximum of 4 months of follow-up, although patients remained under medical supervision for the following months.
Schedule of visits
Throughout the study, the patients completed a maximum of 10 visits with the research team (every 15 ± 2 days), of which four were for a complete (C) physical, anthropometric and biochemical assessment, and the remaining visits were to control adherence and evaluation of potential side effects. The four complete visits were made according to the evolution of each patient through the steps of ketosis as follows: visit C-1 (baseline), normal level of ketone bodies; visit C-2, maximum ketosis (approximately 1–2 months of treatment); visit C-3, reduction of ketosis because of partial reintroduction of normal nutrition (2–3 months); visit C-4 at 4 months, no ketosis (Additional file 1: Figure S1 and Fig. 1a). The total ketosis state lasted for 60–90 days only. In all the visits, patients received dietary instructions, individual supportive counsel, and encouragement to exercise on a regular basis using a formal exercise program. Additionally, a program of telephone reinforcement calls was instituted, and a phone number was provided to all participants to address any concern.
Ketone bodies (a) and body composition (b) during the study. The broken line represents the level at which the existence of ketosis is defined. aP < 0.05 compared with Visit C-1; bP < 0.05 compared with Visit C-2; cP < 0.05 compared with Visit C-3 (repeated measures ANOVA with Tukey's adjustment for multiple comparisons). β-OHB: β-hydroxy-butyrate
Anthropometric assessment
All anthropometric measurements were undertaken after an overnight fast (8 to 10 h), under resting conditions, in duplicate, and performed by well-trained health workers. Participants' body weights were measured to the nearest 0.1 kg on the same calibrated electronic device (Seca 220 scale, Medical Resources, EPI Inc., OH, USA), in underwear and without shoes. BMI was calculated by dividing body weight in kilograms by the square of height in meters (BMI = weight (kg)/height² (m²)).
Resting metabolic rate
The RMR was measured by indirect calorimetry using a portable desktop metabolic system (FitMate PRO, Cosmed, Rome, Italy) under overnight fasting conditions. Participants were instructed to arrive at the hospital by car, to minimize vigorous physical activity during the 24 h prior to the measurement, and to avoid drinking caffeinated beverages for at least 12 h before testing. All participants rested supine for at least 20 min. During this resting time, the body composition (bone mineral density, lean body mass and fat mass) was determined, and participants then rested in a sitting position in a quiet and darkened room for a further 15 min before the test.
Test-retest validation was performed and, after resting, oxygen consumption was measured continuously for 15 min under thermo-neutral conditions; the final 10 min of data were used to calculate RMR. The FitMate uses a turbine flow meter for measuring ventilation and a galvanic fuel cell oxygen sensor for determining the fraction of oxygen in expired gases. Moreover, it has sensors for the measurement of temperature, humidity, and barometric pressure for use in internal calculations. The FitMate uses standard metabolic formulas to estimate oxygen consumption, and RMR is calculated using a predetermined respiratory quotient (RQ) of 0.85. During the measurement period, participants remained sitting, breathed normally, and were instructed to remain awake, and to avoid talking, fidgeting and hyperventilating. The reliability of measuring RMR with Cosmed's FitMate metabolic system has been determined in several previous studies [9, 15, 21], and by in-house controls (Additional file 2: Figure S2).
For the purposes of this study, the measured RMR refers to the crude values provided by the method, and the expected RMR was defined as the variation in energy expenditure that could be explained by the observed changes in fat-free mass (FFM), because FFM is the main determinant of RMR [13]. Firstly, we determined the basal energy equivalence per kilogram of FFM in our study population. Then, this quotient was multiplied by the amount of change in FFM between the baseline and each subsequent complete visit. Finally, this product was added to the basal measured RMR, and in this way the expected RMR for each complete visit was obtained.
This process is summarized by the following equation:
$$\mathrm{RMR}_{\mathrm{expected}} = \mathrm{RMR}_{\mathrm{measured,\,Baseline}} + \left[\left(\mathrm{RMR}_{\mathrm{measured,\,Baseline}} / \mathrm{FFM}_{\mathrm{Baseline}}\right) \times \Delta \mathrm{FFM}_{\mathrm{Visit - Baseline}}\right].$$
On the other hand, metabolic adaptation has been described as the change in RMR not explained by changes in FFM [8, 19], and is calculated as the difference between RMR measured at each complete visit and the expected RMR for that visit i.e., Metabolic adaptation = RMR measured – RMR expected.
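To make the two definitions concrete, here is a minimal sketch of the calculation with purely illustrative numbers (not patient data from this study):

```python
def expected_rmr(rmr_measured_baseline, ffm_baseline, ffm_visit):
    """Expected RMR at a visit: baseline RMR plus the change explained by the
    change in fat-free mass, using the baseline energy equivalence per kg of FFM."""
    kcal_per_kg_ffm = rmr_measured_baseline / ffm_baseline
    delta_ffm = ffm_visit - ffm_baseline          # negative when FFM is lost
    return rmr_measured_baseline + kcal_per_kg_ffm * delta_ffm

def metabolic_adaptation(rmr_measured_visit, rmr_expected_visit):
    """Negative values indicate RMR dropped more than the FFM change explains."""
    return rmr_measured_visit - rmr_expected_visit

# Illustrative numbers only: baseline RMR 1800 kcal/d and FFM 52 kg;
# at a later visit FFM is 48 kg and measured RMR is 1700 kcal/d.
rmr_exp = expected_rmr(1800, 52, 48)              # about 1661.5 kcal/d
adaptation = metabolic_adaptation(1700, rmr_exp)  # about +38.5 kcal/d, i.e., no adaptation
```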
Total body composition
Body composition was first measured by dual-energy X-ray absorptiometry (DXA; GE Healthcare Lunar, Madison, USA). Daily quality control scans were acquired during the study period. No hardware or software changes were made during the course of the trial. Subjects were scanned using standard imaging and positioning protocols, while wearing only light clothing. For this study, the values of bone mineral density, lean body mass and FM directly measured by the GE Lunar Body Composition Software option were used. Some derivative values, such as bone mineral content, regional lean mass, FFM, and fat mass percentage (FM%), were also calculated.
Multifrequency bioelectrical impedance
Multifrequency bioelectrical impedance (MF-BIA) was also used for determining body composition. FM, FM%, FFM, total body water, intra- and extracellular water, and skeletal muscle mass were calculated with the InBody 720 (InBody 720, Biospace Inc., Tokyo, Japan). This technology is non-invasive and uses eight contact electrodes, which are positioned on the palm and thumb of each hand, on the front part of the feet and on the heels.
Multifrequency bioelectrical impedance uses the body's electrical properties and the opposition to the flow of an electric current by different body tissues. The analyzer measures resistance at specific frequencies (1, 5, 50, 250, 500 and 1000 kHz) and reactance at specific frequencies (5, 50, and 250 kHz). The participants were examined lightly dressed, and the examination took less than 2 min and required only a standing position. The validity of this technology has been documented in previous studies [6].
Determination of levels of ketone bodies
Ketosis was determined by measuring ketone bodies, specifically β-hydroxy-butyrate (β-OHB), in capillary blood by using a portable meter (GlucoMen LX Sensor, A. Menarini Diagnostics, Neuss, Germany). As with anthropometric assessments, all the determinations of capillary ketonemia were made after an overnight fast of 8 to 10 h. These measurements were performed daily by each patient during the entire VLCK diet, and the corresponding values were reviewed on the machine memory by the research team in order to control adherence. Additionally, β-OHB levels were determined at each visit by the physician in charge of the patient. The measurements reported as "low value" (< 0.2 mmol/l) by the meter were assumed as to be zero for the purposes of statistical analyses.
Biochemical parameters
During the study all the patients were strictly monitored with a wide range of biochemical analyses. However, for the purposes of this work only certain values are reported. Serum tests for total proteins, albumin, prealbumin, retinol-binding protein, red cell and white cell counts, uric acid, urea, creatinine and urine urea were performed using an automated chemistry analyzer (Dimension EXL with LM Integrated Chemistry System, Siemens Medical Solutions Inc., USA). Thyroid-stimulating hormone (TSH), free thyroxine (FT4), and free triiodothyronine (FT3) were measured by chemiluminescence using ADVIA Centaur (Bayer Diagnostics, Tarrytown, NY, USA). All the biochemical parameters were measured at the 4 complete visits.
The overnight fasting plasma levels of leptin were measured using commercially available ELISA kits (Millipore, MA, USA). The fasting plasma levels of fractionated catecholamines (dopamine, adrenaline and noradrenaline) were tested by high pressure liquid chromatography (HPLC; Reference Laboratory, Barcelona, Spain).
The data are presented as means (standard deviation). Each subject acted as his own control (baseline visit). The sample size of the current trial was calculated taking the weight loss after treatment (main variable) into account. It was calculated for an effect size ≥15 kg, an α = 0.05, and a power (1-β) of 90%. Thus, the sample size was established at a minimum of 19 volunteers who finished the nutritional treatment. The sample size provided sufficient power to test for effects on a number of other metabolic variables of interest.
All statistical analyses were carried out using Stata statistical software, release 12.0 (Stata Corporation, College Station, TX, USA). A p < 0.05 was considered statistically significant. Changes in the different variables of interest from the baseline and throughout the study visits were analyzed following a repeated measures design. A repeated measures analysis of variance (ANOVA) test was used to evaluate differences between different measurement times, followed by post hoc analysis with Tukey's adjustment for multiple comparisons. In addition, multivariate linear regression models were fitted to assess the potential predictive factors of RMR at each complete visit. The regression models included fat-free mass, FT3, catecholamines (i.e. noradrenaline, adrenaline and dopamine), leptin and β-OHB as plausible determinants of RMR.
Twenty obese patients, 12 females, aged from 18 to 58 years (47.2 ± 10.2 yr), completed the study. Participants at baseline had a BMI of 35.5 ± 4.4 kg/m2 and a body weight (BW) of 95.9 ± 16.3 kg, 45.6 ± 5.4% of which was fat. Other baseline characteristics and their corresponding changes during the study are presented in Tables 1 and 2, and have also been previously reported [6].
Table 1 Changes in anthropometry, energy expenditure and ketone bodies during the study
Table 2 Biochemical measurements during the study
Although the patients underwent a total of 10 visits, the RMR and body composition analyses were synchronized with the ketone levels in four visits (Fig. 1a). Visit C-1 was the baseline visit, before starting the diet and with no ketosis (0.0 ± 0.1 mmol/L) and initial weight. Visit C-2 was at the time of maximum level of ketosis (1.0 ± 0.6 mmol/L) with 11.7 kg of BW loss. At visit C-3 (after 89.7 ± 19.1 days of VLCK), patients started the return to a normal diet and showed a reduction in ketone levels (0.7 ± 0.5 mmol/L) with 19.3 kg of BW loss. Finally, at visit C-4 the patients were out of ketosis (0.2 ± 0.1 mmol/L) with a total of 20.8 kg of weight lost (Table 1 and Fig. 1).
Most of the initial BW loss was in the form of fat mass (FM), with a minor reduction in fat free mass (FFM). The reductions in FM and FFM, respectively, from baseline were: visit C-2, 7.2 kg and 4.3 kg; visit C-3, 14.4 kg and 4.5 kg; visit C-4, 16.5 kg and 3.8 kg (Table 1, Fig. 1b).
The measured RMR was not significantly different from the baseline at any time during the study, although a downward trend in these values was observed (Fig. 2a). Compared to the baseline, at visits C-2, C-3 and C-4 the measured RMR varied − 1.0 ± 18.8%, − 2.4 ± 25.6% and − 8.3 ± 15.0%, respectively.
Resting metabolic rate (RMR) changes during the study. RMR-expected refers to the change in energy expenditure explained by changes in fat-free mass (FFM) or muscle mass. Statistical analysis was performed by repeated measures ANOVA with Tukey's adjustment for multiple comparisons.
To investigate how much of the mild and non-significant decrease in RMR could be accounted for by the FFM change, we used the baseline RMR data to generate an equation for calculating the expected RMR in accordance with variations in FFM (Table 1). The difference between the measured and expected RMR defined the degree of metabolic adaptation. At visit C-2 (maximum ketosis), the measured RMR was 92.8 ± 339.5 kcal/d higher than the expected RMR. At visit C-3, the measured RMR was 49.1 ± 470.5 kcal/d greater than the expected RMR. Finally, at visit C-4, the measured RMR was 61.0 ± 298.9 kcal/d lower than the expected RMR. None of the differences between the measured and expected RMR was statistically significant (Fig. 2 and Table 1), indicating that the phenomenon of metabolic adaptation was not present.
The observation that the VLCK-diet preserved the RMR in accordance with variations in FFM, thus avoiding metabolic adaptation, was reinforced by the maintenance of the RMR/FFM quotient during the study (Table 1 and Fig. 2). When muscle mass evaluated by MF-BIA was employed in the analysis instead of DXA, the results on the expected and observed RMR were similar (Table 1 and Fig. 2).
The concern that the preservation of the RMR might be a consequence of stressing factors induced by the VLCK-diet and the rapid weight loss was addressed by a strict analysis of protein metabolism. Although there were some differences in protein status, renal function and nitrogen balance-related parameters, none of them was considered clinically relevant (Table 2). It is noteworthy that despite the considerable weight loss induced by the VLCK-diet, there was a positive nitrogen balance throughout the entire study. At visit C-2, the positive nitrogen balance was 1.0 ± 2.4, while at visits C-3 and C-4 it was 2.1 ± 3.8 and 1.2 ± 3.6, respectively. It was not possible to calculate the nitrogen balance at baseline since the protein intake was not assessed at that visit.
Besides FFM, which is considered the major contributing factor, several variables have been described as positive determinants of the RMR, including thyroid hormones, catecholamines, leptin and ketone bodies. In this study, the level of influence of these factors on the measured RMR was determined. As Fig. 3 shows, TSH and free T4 did not change significantly, while free T3 had a significant, although expected, decrease at visit C-2 and thereafter. Adrenaline and dopamine did not change significantly during the study, but noradrenaline showed a progressive decrease in its plasma levels that reached statistical significance at visit C-4. Similarly, leptin values were severely reduced at visits C-2, C-3 and C-4, in accordance with the FM reduction.
Thyroid hormones (a), catecholamines (b) and leptin (c) levels during the study. a Changes in thyroid hormones; b changes in catecholamines; c changes in leptin. FT3: free triiodothyronine; FT4: free thyroxine. aP < 0.05 compared with Visit C-1; bP < 0.05 compared with Visit C-2; cP < 0.05 compared with Visit C-3 (repeated measures ANOVA with Tukey's adjustment for multiple comparisons)
Linear regression models reveal that, when adjusted by β-OHB, FFM was the best predictor of RMR (coefficient β = 20–31; p < 0.05, Table 3), explaining more than 40% of the variability of RMR (see Additional file 3: Table S1).
Table 3 Independent effects of fat-free mass and β-hydroxy-butyrate on resting metabolic rate at each visit
To the best of our knowledge this study is the first to assess the effect of a VLCK-diet on the RMR of obese patients. The main findings of this work were: 1) the rapid and sustained weight reduction induced by the VLCK-diet did not induce the expected drop in RMR; 2) this observation was not due to a counteracting increase in sympathetic tone through catecholamines, leptin or thyroid hormones; 3) the most plausible cause of the null reduction in RMR is the preservation of lean mass (muscle mass) observed with this type of diet.
The greatest challenge in obesity treatment is to avoid weight recovery some time after the initial reduction. In fact, after one or a few years most obese patients recover, or even exceed, the weight previously reduced by dietetic, pharmacological or behavioral treatments [8], bariatric surgery being the only likely exception [7]. Since obesity reduction is accompanied by a slowing of energy expenditure in sedentary individuals, mostly RMR, this fact has been blamed for the negative outcome of diet-based treatments [12]. RMR is recognized as the major component of total energy expenditure, being responsible for about 75% of daily total energy expenditure in Western societies [1, 16]. Therefore any RMR reduction after treatment translates into a large impact on energy balance, making subjects more prone to weight regain over time [17]. This phenomenon has been called metabolic adaptation or adaptive thermogenesis, indicating that RMR is reduced after weight loss, and furthermore that this reduction is usually larger than expected or out of proportion with the decrease in fat or fat-free mass [2]. Therefore, preservation of the initial RMR after weight loss could play a critical role in facilitating further weight loss and preventing weight regain in the long term [4].
We have observed that the obesity reduction achieved by a VLCK-diet (PNK method®) was maintained 1 and 2 years after its completion [10, 11]. Although that follow-up was not long enough, the finding may be of particular importance for long-term effects. The present work shows that in a group of obese patients treated with a VLCK-diet, the RMR was relatively preserved, remaining within the expected limits for the variations in FFM, and the metabolic adaptation phenomenon was avoided. Because FFM includes total body water, bone minerals and protein [14], the results were corroborated by analyzing the FFM without bone minerals and total body water (muscle mass).
As the mechanisms supporting the metabolic adaptation phenomenon are not known, unraveling the reasons behind the present findings is challenging in itself. Changes in any circulating hormone that participates in thermogenesis could explain the absence of a reduction in RMR, for example a concomitant increase in sympathetic system activity, either direct or indirect. An increase in thyroid hormones generated by the VLCK-diet was discarded because free T3 experienced the well-described reduction after losing weight [20, 24], without alterations in free T4 or TSH. As thermogenesis in humans is largely a function of sympathetic nervous system activity, and that activity decreases in response to weight loss, the results reported here could have been the net result of a maintenance or relative increase in plasma catecholamine levels. However, adrenaline and dopamine remained unchanged throughout the study, while noradrenaline decreased considerably, ruling out their contribution to any increase in the activity of the autonomic nervous system. Leptin experiences a rapid decline in circulation in situations of weight reduction; although the reduction is observed in energy restriction states, it occurs before any change in body weight [8]. On the other hand, leptin has been positively associated with sympathetic nervous system activity in humans, and weight loss-associated changes in RMR and fat oxidation were previously related to changes in leptin levels [25]. If leptin is sensitive to the energy flux and activates the autonomic nervous system, the absence of metabolic adaptation observed here could be due to a leptin increase, or maintenance of its basal levels. However, in this work, leptin levels decreased in accordance with the weight reduction.
Thus, an increase in thyroid hormones, catecholamines, or leptin levels was discarded as an explanation for the observed minor or absent reduction in RMR. This was also endorsed by the multiple regression analysis undertaken (Table 3). In this analysis only FFM (DXA) or muscle mass (MF-BIA) appeared as a plausible explanation for the maintenance of RMR. In fact, a clear preservation of FFM was reported in obese subjects on a VLCK-diet, in whom a 20 kg reduction after 4 months of treatment was accompanied by less than 1 kg of muscle mass lost [6]. The assumption of muscle mass preservation is also supported by the data on kidney function (Table 2), which show not only that renal activity was not altered, as reported in other studies [23], but that the nitrogen balance was even positive.
The strength of this study is its longitudinal design, which allows the evaluation of the time-course of changes in RMR during a VLCK diet, by comparing each subject to himself, as his own control. The small number of subjects and the short duration of this study might be limitations, since one cannot make claims regarding the RMR status long after the completion of the VLCK diet. However, no significant variations in body weight had been observed after 4 months in previous studies [10, 11]. In addition, although participants were instructed to exercise on a regular basis using a formal exercise program, we could not verify adherence to this instruction, which precludes determining whether changes in physical activity patterns affected study outcomes. In the current work a portable device that allows easier measurement of RMR at a lower cost was employed. This approach may lead to errors when compared with the gold standard, Deltatrac, but it is an easy-to-use metabolic system for determining RMR and VO2 in clinical practice, with better accuracy than predictive equations [9]. The Deltatrac device is expensive and requires careful calibration. The FitMate has been previously validated as a suitable alternative to traditional indirect calorimetry both by in-house analysis (Additional file 2: Figure S2) and by previous studies. Despite not measuring CO2 production, it is very convenient in the clinical setting, with a minimal error of analysis.
In summary, this study shows that treating obese patients with a VLCK diet favors the maintenance of RMR within the range expected from the FFM changes and avoids the metabolic adaptation phenomenon. This finding might explain the long-term positive effects of VLCK diets on weight loss. Although the mechanisms behind this effect remain unclear, classical determinants of energy expenditure such as thyroid hormones, catecholamines, and leptin were ruled out. The relatively good preservation of FFM (muscle mass) observed with this dietary approach could account for the absence of metabolic adaptation.
BMI: Body mass index
DXA: Dual-energy X-ray absorptiometry
EFSA: European Food Safety Authority
FFM: Fat free mass
FM: Fat mass
MF-BIA: Multifrequency bioelectrical impedance analysis
RMR: Resting metabolic rate
RQ: Respiratory quotient
VLCK: Very low calorie ketogenic diet
Black AE, Coward WA, Cole TJ, Prentice AM. Human energy expenditure in affluent societies: an analysis of 574 doubly-labelled water measurements. Eur J Clin Nutr. 1996;50:72–92.
Doucet E, St-Pierre S, Almeras N, Despres JP, Bouchard C, Tremblay A. Evidence for the existence of adaptive thermogenesis during weight loss. Br J Nutr. 2001;85:715–23.
EFSA Panel on Dietetic Products, Nutrition And Allergies (NDA). Scientific opinion on the essential composition of total diet replacements for weight control. EFSA J. 2015;13:3957.
Fothergill E, Guo J, Howard L, Kerns JC, Knuth ND, Brychta R, Chen KY, Skarulis MC, Walter M, Walter PJ, Hall KD. Persistent metabolic adaptation 6 years after "the biggest loser" competition. Obesity (Silver Spring). 2016;24:1612–9.
Galgani JE, Santos JL. Insights about weight loss-induced metabolic adaptation. Obesity (Silver Spring). 2016;24:277–8.
Gomez-Arbelaez D, Bellido D, Castro AI, Ordonez-Mayan L, Carreira J, Galban C, Martinez-Olmos MA, Crujeiras AB, Sajoux I, Casanueva FF. Body composition changes after very low-calorie-ketogenic diet in obesity evaluated by three standardized methods. J Clin Endocrinol Metab. 2017;102:488–98.
Inge TH, Courcoulas AP, Jenkins TM, Michalsky MP, Helmrath MA, Brandt ML, Harmon CM, Zeller MH, Chen MK, Xanthakos SA, Horlick M, Buncher CR, Teen LC. Weight loss and health status 3 years after bariatric surgery in adolescents. N Engl J Med. 2016;374:113–23.
Knuth ND, Johannsen DL, Tamboli RA, Marks-Shulman PA, Huizenga R, Chen KY, Abumrad NN, Ravussin E, Hall KD. Metabolic adaptation following massive weight loss is related to the degree of energy imbalance and changes in circulating leptin. Obesity (Silver Spring). 2014;22:2563–9.
Lupinsky L, Singer P, Theilla M, Grinev M, Hirsh R, Lev S, Kagan I, Attal-Singer J. Comparison between two metabolic monitors in the measurement of resting energy expenditure and oxygen consumption in diabetic and non-diabetic ambulatory and hospitalized patients. Nutrition. 2015;31:176–9.
Moreno B, Bellido D, Sajoux I, Goday A, Saavedra D, Crujeiras AB, Casanueva FF. Comparison of a very low-calorie-ketogenic diet with a standard low-calorie diet in the treatment of obesity. Endocrine. 2014;47:793–805.
Moreno B, Crujeiras AB, Bellido D, Sajoux I, Casanueva FF. Obesity treatment by very low-calorie-ketogenic diet at two years: reduction in visceral fat and on the burden of disease. Endocrine. 2016;54:681–90.
Muller MJ, Bosy-Westphal A. Adaptive thermogenesis with weight loss in humans. Obesity (Silver Spring). 2013;21:218–28.
Muller MJ, Bosy-Westphal A, Kutzner D, Heller M. Metabolically active components of fat-free mass and resting energy expenditure in humans: recent lessons from imaging technologies. Obes Rev. 2002;3:113–22.
Muller MJ, Braun W, Pourhassan M, Geisler C, Bosy-Westphal A. Application of standards and models in body composition analysis. Proc Nutr Soc. 2016;75:181–7.
Nieman DC, Austin MD, Benezra L, Pearce S, McInnis T, Unick J, Gross SJ. Validation of Cosmed's FitMate in measuring oxygen consumption and estimating resting metabolic rate. Res Sports Med. 2006;14:89–96.
Ravussin E, Lillioja S, Anderson TE, Christin L, Bogardus C. Determinants of 24-hour energy expenditure in man. Methods and results using a respiratory chamber. J Clin Invest. 1986;78:1568–78.
Ravussin E, Lillioja S, Knowler WC, Christin L, Freymond D, Abbott WG, Boyce V, Howard BV, Bogardus C. Reduced rate of energy expenditure as a risk factor for body-weight gain. N Engl J Med. 1988;318:467–72.
Rosenbaum M, Hirsch J, Gallagher DA, Leibel RL. Long-term persistence of adaptive thermogenesis in subjects who have maintained a reduced body weight. Am J Clin Nutr. 2008;88:906–12.
Rosenbaum M, Leibel RL. Adaptive thermogenesis in humans. Int J Obes. 2010;34(Suppl 1):S47–55.
Sjostrom L, Narbro K, Sjostrom CD, Karason K, Larsson B, Wedel H, Lystig T, Sullivan M, Bouchard C, Carlsson B, Bengtsson C, Dahlgren S, Gummesson A, Jacobson P, Karlsson J, Lindroos AK, Lonroth H, Naslund I, Olbers T, Stenlof K, Torgerson J, Agren G, Carlsson LM, Swedish Obese Subjects S. Effects of bariatric surgery on mortality in Swedish obese subjects. N Engl J Med. 2007;357:741–52.
Stewart CL, Goody CM, Branson R. Comparison of two systems of measuring energy expenditure. JPEN J Parenter Enteral Nutr. 2005;29:212–7.
SCOOP-VLCD T. Reports on tasks for scientific cooperation. Collection of data on products intended for use in very-low-calorie diets. Report. Brussels: European Commission; 2002.
Tagliabue A, Bertoli S, Trentani C, Borrelli P, Veggiotti P. Effects of the ketogenic diet on nutritional status, resting energy expenditure, and substrate oxidation in patients with medically refractory epilepsy: a 6-month prospective observational study. Clin Nutr. 2012;31:246–9.
Van Gaal LF, Maggioni AP. Overweight, obesity, and outcomes: fat mass and beyond. Lancet. 2014;383:935–6.
Westerterp-Plantenga MS, Nieuwenhuizen A, Tome D, Soenen S, Westerterp KR. Dietary protein, weight loss, and weight maintenance. Annu Rev Nutr. 2009;29:21–41.
We would like to thank A. Menarini Diagnostics Spain for providing free of charge the portable ketone meters for all the patients. We acknowledge the PronoKal Group ® for providing the diet for all the patients free of charge and for support of the study. The funding source had no involvement in the study design, recruitment of patients, study interventions, data collection, or interpretation of the results. The Pronokal personnel (IS) was involved in the study design and revised the final version of the manuscript, without intervention in the analysis of data, statistical evaluation and final interpretation of the results of this study.
This work was supported by grants from the Fondo de Investigacion Sanitaria, PE13/00024 and PI14/01012 research projects and CIBERobn (CB06/003), from the Instituto de Salud Carlos III (ISCIII), Fondo Europeo de Desarrollo Regional (FEDER) Spanish, and the Xunta de Galicia, Spain (GRC2014/034). DGA is grateful to the Colombian Department of Science, Technology and Innovation – COLCIENCIAS as a recipient of their pre-doctoral scholarship to support his work.
The datasets used during the current study are available from the corresponding author on reasonable request.
DG-A, ABC and FFC designed and performed the experiments, analyzed the data and wrote the manuscript. AIC, MAM-O, AC, LO-M and IS were responsible for the conduct and monitoring of the nutritional intervention. CG and DB participated in the study design and coordination and helped to draft the manuscript. FFC supervised the research and reviewed the manuscript throughout the study. All authors read and approved the final manuscript.
Division of Endocrinology, Department of Medicine, Molecular and Cellular Endocrinology Area, Complejo Hospitalario Universitario de Santiago (CHUS), Instituto de Investigación Sanitaria de Santiago (IDIS), Travesia da Choupana street s/n, 15706, Santiago de Compostela, La Coruña, Spain
Diego Gomez-Arbelaez, Ana B. Crujeiras, Ana I. Castro, Miguel A. Martinez-Olmos, Ana Canton, Lucia Ordoñez-Mayan & Felipe F. Casanueva
Medical Department Pronokal, Pronokal Group, Barcelona, Spain
Ignacio Sajoux
Intensive Care Division, Complejo Hospitalario Universitario de Santiago (CHUS), Santiago de Compostela, Spain
Cristobal Galban
Division of Endocrinology, Complejo Hospitalario Universitario de Ferrol and Coruña University, Ferrol, Spain
Diego Bellido
CIBER de Fisiopatologia de la Obesidad y Nutricion (CIBERobn), Instituto Salud Carlos III, Santiago de Compostela, Spain
Ana B. Crujeiras, Ana I. Castro, Miguel A. Martinez-Olmos, Ana Canton & Felipe F. Casanueva
Diego Gomez-Arbelaez
Ana B. Crujeiras
Ana I. Castro
Miguel A. Martinez-Olmos
Ana Canton
Lucia Ordoñez-Mayan
Felipe F. Casanueva
Correspondence to Felipe F. Casanueva.
DB, ABC and FFC received advisory board fees and/or research grants from Pronokal Protein Supplies Spain. IS is Medical Director of Pronokal Spain SL.
Additional file 1:
Figure S1. Nutritional intervention program and schedule of visits. Visit C-4 was performed at the end of the study according to each case, once the patient achieved the target weight or maximum at 4 months of follow-up. (PDF 390 kb)
Figure S2. Bland Altman plots of Resting Metabolic Rate (RMR) for Cosmed's Fitmate device compared to the Deltatrac. (PDF 11 kb)
Table S1. Independent effects of fat-free mass, free triiodothyronine, catecholamines, leptin and β-hydroxy-butyrate on resting metabolic rate at each visit. (DOCX 32 kb)
Gomez-Arbelaez, D., Crujeiras, A.B., Castro, A.I. et al. Resting metabolic rate of obese patients under very low calorie ketogenic diet. Nutr Metab (Lond) 15, 18 (2018). https://doi.org/10.1186/s12986-018-0249-z
Very low-energy diet
Pronokal method
Protein diet
Metabolic adaptation
Energy expenditure
DXA
Multifrequency BIA | CommonCrawl |
September 2015, 7(3): 255-280. doi: 10.3934/jgm.2015.7.255
Hypersymplectic structures on Courant algebroids
Paulo Antunes 1, and Joana M. Nunes da Costa 1,
CMUC, Department of Mathematics, University of Coimbra, 3001-501 Coimbra, Portugal
Received January 2015 Revised June 2015 Published July 2015
We introduce the notion of hypersymplectic structure on a Courant algebroid and we prove the existence of a one-to-one correspondence between hypersymplectic and hyperkähler structures. This correspondence provides a simple way to define a hyperkähler structure on a Courant algebroid. We show that hypersymplectic structures on Courant algebroids encompass hypersymplectic structures with torsion on Lie algebroids. In the latter, the torsion existing at the Lie algebroid level is incorporated in the Courant structure. Cases of hypersymplectic structures on Courant algebroids which are doubles of Lie, quasi-Lie and proto-Lie bialgebroids are investigated.
Keywords: Lie algebroid, Courant algebroid, hypersymplectic, hyperkähler.
Mathematics Subject Classification: Primary: 53D17; Secondary: 53D18, 53C2.
Citation: Paulo Antunes, Joana M. Nunes da Costa. Hypersymplectic structures on Courant algebroids. Journal of Geometric Mechanics, 2015, 7 (3) : 255-280. doi: 10.3934/jgm.2015.7.255
Paulo Antunes Joana M. Nunes da Costa | CommonCrawl |
If the world were blown to pieces, what would remain?
My question involves the earth being blown up using theoretical dark energy. In this scenario, the crust and mantle are destroyed by precisely placed weapons so that all that remains is the core.
Would the core freeze to become a dwarf planet comprised of solid iron after it's completely exposed? Or would the sudden lack of pressure cause the matter to become gaseous? I also need to know about the moon. Would it continue to orbit the dwarf planet (if that is what remained) or would it begin its own orbit of the sun?
planets moons apocalypse earth explosions
Rocky Norton
$\begingroup$ "If the world were blown to pieces", the the core would also be blown to pieces, since the core is part of the world. $\endgroup$ – RonJohn Mar 29 '18 at 16:26
$\begingroup$ Welcome to WorldBuilding.SE! As RonJohn pointed out, the title of your question doesn't quite match the question itself: you ask "what would remain" and then tell us that only the core would remain. It's an interesting question, though. Please take the tour and visit the help center to learn more about the site, and I hope you enjoy your stay! $\endgroup$ – F1Krazy Mar 29 '18 at 16:29
$\begingroup$ As the accountant said to business owner, "What do you want to happen?" $\endgroup$ – nzaman Mar 29 '18 at 16:29
$\begingroup$ Please edit to add how this related to worldbuilding or the question will likely be put on hold. $\endgroup$ – Unassuming Guy Mar 29 '18 at 16:42
$\begingroup$ If the world were blown to pieces, what would remain? "Just pieces" seems like the obvious answer. $\endgroup$ – Renan Mar 29 '18 at 16:52
The answer depends strongly on the nature of the process which removed the rest of the planet. As it turns out, events which are capable of blowing a planet to tiny bits (in thine mercy) are pretty darn violent. They will have some effect on the core, and that effect may dramatically change the result. But for some simple cases, we'll find that you do indeed get a dwarf planet.
Let's, for a moment, assume that the outer parts of the Earth simply vanish, leaving the core. This consists of an outer core, which is liquid, and an inner core which is solid. Let's let those start to decompress. The information that the rest of the Earth vanished will propagate at the speed of sound. Now the core is quite hot, ranging from 4,000K to 10,000K, depending on where you are. This is quite a bit higher than the boiling point of iron at atmospheric pressure (much less at vacuum), which is around 3200K. Thus we should expect large amounts of the iron in the core to flash boil. This will rapidly impart a net outward velocity to all particles involved. To get a sense of just how violent this could be, I've replaced our planet with an Earth-substitute (a beaker of superheated water), and the cause of boiling replaced with the addition of sugar (which nucleates the boiling event), and made this video. (okay, maybe I lied. Professionals made that video).
Note: the core will need something to nucleate around as well. However, this is a statistical process. It's very difficult to have a lack of nucleation sites when your beaker is the size of the Earth's core.
The result, in the low gravity of the remaining core, will be that the boiled iron/nickel material quickly flies outwards and starts to behave like a rarefied gas (very few collisions because the particles are far apart). Now we need to talk escape velocity. Escape velocity is calculated by $V_e=\sqrt{\frac{2GM}{r}}$, where $G$ is the universal gravitational constant ($G=6.67\cdot10^{-11} \frac{m^3}{kg\cdot s^2}$), $M$ is the mass of the gravitational source, and $r$ is the radius from the center of that object at which we're calculating. The mass of the core is roughly 30% of the mass of Earth ($M=1.79\cdot10^{24}\,\mathrm{kg}$), and its radius is roughly 3,400 km ($r=3.4\cdot 10^6\,\mathrm{m}$). Put these together and you get $V_e=\sqrt{7.0\cdot10^{7}\frac{m^2}{s^2}}$ or $V_e=8.4\frac{km}{s}$, which is a good portion of the escape velocity of the whole Earth before the explosion ($V_e=11.186\frac{km}{s}$).
So how fast can the nickel and iron fly? Well, that's a more difficult question; it's a question of how the energy is distributed. The specific heat of iron is $450\frac{J}{kg\cdot K}$. Our iron can drop by roughly 4000 K on average before solidifying, so we have about 1.8 MJ/kg to work with. Now, it takes a lot of energy to reach escape velocity here: 35 MJ/kg, to be specific. That means even in the most extreme of circumstances we could only fling 5% of our total mass out into space, and realistically it won't be that high. This means we can assume that most of the mass does not achieve escape velocity.
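As a quick sanity check on the arithmetic above, the short script below reproduces the escape velocity and the upper bound on the boiled-off mass fraction, using the same rough figures quoted in this answer:

```python
# Back-of-the-envelope check of the core's escape velocity and boil-off budget.
from math import sqrt

G = 6.67e-11       # gravitational constant, m^3 / (kg s^2)
M_core = 1.79e24   # rough core mass, kg (~30% of Earth's mass)
r_core = 3.4e6     # core radius, m
c_iron = 450.0     # specific heat of iron, J / (kg K)
dT = 4000.0        # rough average temperature drop before solidifying, K

v_esc = sqrt(2 * G * M_core / r_core)   # ~8.4 km/s
e_escape = 0.5 * v_esc**2               # energy per kg needed to escape, ~35 MJ/kg
e_thermal = c_iron * dT                 # thermal budget per kg, ~1.8 MJ/kg

print(f"escape velocity ~ {v_esc/1e3:.1f} km/s")
print(f"escape energy   ~ {e_escape/1e6:.0f} MJ/kg")
print(f"thermal budget  ~ {e_thermal/1e6:.1f} MJ/kg")
print(f"max boiled-off fraction ~ {e_thermal/e_escape:.1%}")   # ~5%
```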
So, assuming your event simply makes the crust and mantle disappear, you would have a violent flash boil, sending gas and material off into space, but then gravity would take hold once again, and all of the material would coalesce roughly back together. During this time, it is emitting radiation, cooling off into the emptiness of space. Eventually it will cool enough to not boil, and reach a liquid or solid phase. That liquid will then not have enough heating to retain its temperature without the huge insulative coat of the mantel, so it will solidify. The vast majority of the core material will remain.
Cort Ammon
The energy needed to blow up a planet is similar to the energy needed to vaporize it. Suppose you release some energy in the center of the earth, say with an antimatter bomb. A large amount of rock is vaporized, much of it turned to plasma. The pressure produces huge splits along the earth's crust, which blast out superhot rock. The earth is split into fragments that reach kilometers in height. But nothing is moving at escape velocity. The crust and mantle fragments crash back into each other. The atmosphere is replaced with a thick layer of vaporized rock, which cools and rains down on the surface.
Suppose you released even more energy. Most of the planet is vaporized, but some fragments of the surface ride the shockwave away from the planet. Much of the planet is launched at escape velocity, mostly in the form of vapor. Some stays in orbit and some remains in a ball. The remaining mass cools into a small, metal rich planet with a ring system.
Donald Hobson
Would the sudden lack of pressure cause the matter to become gaseous?
No; or rather, not because of the pressure drop. Most of it will turn into a gas because the core's temperature is about 10,800°F while the boiling point of iron is about 5,200°F, so it will boil off from its intrinsic heat.
Even once it cools to below 5,200°F it will keep boiling off, since 5,200°F is the boiling point at standard pressure, and in a vacuum the boiling point is lower.
RonJohn
Ron John's answer covers what happens to the core.
As to what happens to the Moon, it depends. If the Earth is blown to pieces, but the pieces still remain bound by gravity, the Moon would still be there. It might be hit by debris, which could change its momentum and thus alter its orbit, but it would still be there.
If the non-core pieces are removed from the Earth-Moon system, though... According to this source, the core makes up only ~32% of the mass of the Earth, which means the pull between the remains of the Earth and the Moon would not be as strong. They would each orbit the sun directly, though with very close orbits. They would have encounters at intervals ranging from years to millennia depending on how they disconnect, and depending on how the encounters go, they might each be pushed to other orbits, or they might impact upon each other and form a new planet.
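To make the claim that the core and the Moon "would each orbit the sun directly" concrete, one can check whether the Moon remains gravitationally bound to the lightened core. The sketch below uses round textbook values for the Moon's present orbital speed and distance and assumes the mass loss is instantaneous; a positive specific orbital energy means the Moon is no longer bound to the core.

```python
# Is the Moon still bound to Earth's core once ~68% of Earth's mass is gone?
from math import sqrt

G = 6.674e-11             # m^3 / (kg s^2)
M_earth = 5.97e24         # kg
M_core = 0.32 * M_earth   # ~32% of Earth's mass remains
r_moon = 3.84e8           # Earth-Moon distance, m
v_moon = 1.02e3           # Moon's current orbital speed, m/s

specific_energy = 0.5 * v_moon**2 - G * M_core / r_moon
v_esc = sqrt(2 * G * M_core / r_moon)

print(f"escape speed from the core at lunar distance ~ {v_esc:.0f} m/s")
print(f"Moon's specific orbital energy: {specific_energy:.2e} J/kg "
      f"({'unbound' if specific_energy > 0 else 'still bound'})")
```

With these round numbers the energy comes out positive (the core's escape speed at lunar distance drops to roughly 0.8 km/s, below the Moon's ~1 km/s orbital speed), which is consistent with the answer's picture of two bodies on close, independent heliocentric orbits.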
linear or nonlinear differential equation calculator
When you study differential equations, one of the major classifications is linear versus nonlinear. Linear differential equations are those in which the dependent variable and its derivatives appear only to the first degree and are never multiplied together; every coefficient depends only on the independent variable $x$. The $x$'s can appear however they want, since they are independent; what matters is how the dependent variable enters. The idea has its roots in linear algebra ($y = mx + b$): "linear" just means the dependent variable appears only with a power of one. For example, $\sin(t)\,x' + \cos(t)\,x = e^t$ is linear in $x$, whereas a term like $(\frac{dy}{dx})^4$ makes an equation nonlinear, and so does a term like $yy'$, because there the coefficient of $y'$ is not a function of $x$ alone.
Another way of classifying differential equations is homogeneous versus non-homogeneous. A first-order linear ODE $y^{\prime} + p(x)y = q(x)$ is non-homogeneous if $q(x)$ is not identically zero; likewise $x'' + 2x' + x = \sin(t)$ is non-homogeneous because of the term involving only time, which in a driven pendulum would be the motor that is driving the pendulum. The standard method for solving a first-order linear equation is to multiply both sides by its integrating factor. The theory of linear equations is very well developed because linear equations are simple enough to be solvable, whereas it is extremely difficult to derive an exact solution to a nonlinear difference or differential equation; sometimes, though, an equation of one type can be transformed into an equivalent equation of another.
Similar rules apply to problems with more variables. In an ODE, each variable has a distinct differential equation using ordinary derivatives (in the simple pendulum, for example, there are two such variables), while a partial differential equation (PDE) has an infinite set of variables corresponding to all the positions on a line, a surface, or a region of space; in a string simulation, for instance, there is a continuous set of variables along the string corresponding to the displacement of the string at each position. A PDE is linear if it is linear in the dependent variable and its derivatives.
To use the second-order differential equation solver calculator: Step 1: enter the ordinary differential equation in the input field. Step 2: click the "Calculate" button. Step 3: the classification of the ODE is displayed in a new window.
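As a small illustration of the integrating-factor idea, the following sketch (assuming SymPy is available) solves the linear example quoted above, $\sin(t)\,x' + \cos(t)\,x = e^t$; for this equation the integrating factor is $\sin(t)$, so the left-hand side is already the derivative of $\sin(t)\,x$:

```python
# Solving the linear example sin(t) x' + cos(t) x = exp(t) with SymPy.
# The integrating factor is sin(t), so the left side is d/dt [sin(t) * x(t)].
import sympy as sp

t = sp.symbols("t")
x = sp.Function("x")

eq = sp.Eq(sp.sin(t) * x(t).diff(t) + sp.cos(t) * x(t), sp.exp(t))
solution = sp.dsolve(eq, x(t))
print(solution)   # equivalent to x(t) = (C1 + exp(t)) / sin(t)
```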
linear or nonlinear differential equation calculator 2020 | CommonCrawl |
Fei Liu 1, , Fang Wang 2,, and Weisheng Wu 3,
College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao 266590, China
School of Mathematical Sciences, Capital Normal University, Beijing 100048, China
Department of Applied Mathematics, College of Science, China Agricultural University, Beijing 100083, China
* Corresponding author: Fang Wang
Received February 2019 Published December 2019
Fund Project: The first author is partially supported by NSFC under Grant Nos. 11301305 and 11571207. The second author is partially supported by NSFC under Grant No. 11571387 and by the State Scholarship Fund from the China Scholarship Council (CSC). The third author is partially supported by NSFC under Grant Nos. 11701559 and 11571387
In this article, we consider the geodesic flow on a compact rank $ 1 $ Riemannian manifold $ M $ without focal points, whose universal cover is denoted by $ X $. On the ideal boundary $ X(\infty) $ of $ X $, we show the existence and uniqueness of the Busemann density, which is realized via the Patterson-Sullivan measure. Based on the Patterson-Sullivan measure, we show that the geodesic flow on $ M $ has a unique invariant measure of maximal entropy. We also obtain the asymptotic growth rate of the volume of geodesic spheres in $ X $ and the growth rate of the number of closed geodesics on $ M $. These results generalize the work of Margulis and Knieper in the case of negative and nonpositive curvature respectively.
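For orientation, the negative-curvature prototypes of these growth statements, due to Margulis, can be written as follows, with $ h $ the topological entropy of the geodesic flow; this is only a reminder of the classical case being generalized, not a statement of the precise theorems proved in the paper (in the rank $ 1 $ settings the geodesic-counting statement is typically obtained up to multiplicative constants).

```latex
% Margulis-type asymptotics in the classical negative-curvature setting:
% growth of geodesic spheres and counting of closed geodesics.
\begin{align*}
  \operatorname{vol} S(x,r) &\sim c(x)\, e^{h r} \qquad (r \to \infty),\\
  \#\{\text{closed geodesics of length} \le t\} &\sim \frac{e^{h t}}{h t} \qquad (t \to \infty).
\end{align*}
```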
Keywords: Geodesic flows, no focal points, Patterson-Sullivan measure, measure of maximal entropy.
Mathematics Subject Classification: Primary: 37D40; Secondary: 37B40.
Citation: Fei Liu, Fang Wang, Weisheng Wu. On the Patterson-Sullivan measure for geodesic flows on rank 1 manifolds without focal points. Discrete & Continuous Dynamical Systems - A, 2020, 40 (3) : 1517-1554. doi: 10.3934/dcds.2020085
W. Ballmann, Axial isometries of manifolds of nonpositive curvature, Math. Ann., 259 (1982), 131-144. doi: 10.1007/BF01456836.
W. Ballmann, Nonpositively curved manifolds of higher rank, Ann. of Math. (2), 122 (1985), 597–609. doi: 10.2307/1971331.
W. Ballmann, Lectures on Spaces of Nonpositive Curvature, DMV Seminar, 25, Birkhäuser Verlag, Basel, 1995. doi: 10.1007/978-3-0348-9240-7.
W. Ballmann, M. Brin and P. Eberlein, Structure of manifolds of nonpositive curvature. I, Ann. of Math. (2), 122 (1985), 171–203. doi: 10.2307/1971373.
W. Ballmann, M. Brin and R. Spatzier, Structure of manifolds of nonpositive curvature. II, Ann. of Math. (2), 122 (1985), 205–235. doi: 10.2307/1971303.
R. Bowen, Periodic orbits for hyperbolic flows, Amer. J. Math., 94 (1972), 1-30. doi: 10.2307/2373590.
R. Bowen, Entropy-expansive maps, Trans. Amer. Math. Soc., 164 (1972), 323-331. doi: 10.1090/S0002-9947-1972-0285689-X.
R. Bowen, Maximizing entropy for a hyperbolic flow, Math. Systems Theory, 7 (1974), 300-303. doi: 10.1007/BF01795948.
K. Burns, V. Climenhaga, T. Fisher and D. J. Thompson, Unique equilibrium states for geodesic flows in nonpositive curvature, Geom. Funct. Anal., 28 (2018), 1209-1259. doi: 10.1007/s00039-018-0465-8.
K. Burns and A. Katok, Manifolds with non-positive curvature, Ergodic Theory Dynam. Systems, 5 (1985), 307-317. doi: 10.1017/S0143385700002935.
K. Burns and R. Spatzier, Manifolds of nonpositive curvature and their buildings, Inst. Hautes Études Sci. Publ. Math., 65 (1987), 35–59. doi: 10.1007/BF02698934.
C. B. Croke and V. Schroeder, The fundamental group of compact manifolds without conjugate points, Comment. Math. Helv., 61 (1986), 161-175.
F. Dal'bo, M. Peigné and A. Sambusetti, On the horoboundary and the geometry of rays of negatively curved manifolds, Pacific J. Math., 259 (2012), 55-100. doi: 10.2140/pjm.2012.259.55.
P. Eberlein, Geometry of Nonpositively Curved Manifolds, Chicago Lectures in Mathematics, University of Chicago Press, Chicago, IL, 1996.
P. Eberlein and B. O'Neill, Visibility manifolds, Pacific J. Math., 46 (1973), 45-109. doi: 10.2140/pjm.1973.46.45.
A. Freire and R. Mañé, On the entropy of the geodesic flow in manifolds without conjugate points, Invent. Math., 69 (1982), 375-392. doi: 10.1007/BF01389360.
K. Gelfert and R. Ruggiero, Geodesic flows modelled by expansive flows, Proc. Edinb. Math. Soc. (2), 62 (2019), 61–95. doi: 10.1017/S0013091518000160.
R. Gulliver, On the variety of manifolds without conjugate points, Trans. Amer. Math. Soc., 210 (1975), 185-201. doi: 10.1090/S0002-9947-1975-0383294-0.
A. Katok, Entropy and closed geodesics, Ergodic Theory Dynam. Systems, 2 (1982), 339-365. doi: 10.1017/S0143385700001656.
A. Katok and B. Hasselblatt, Introduction to the Modern Theory of Dynamical Systems, Encyclopedia of Mathematics and its Applications, 54, Cambridge University Press, Cambridge, 1995. doi: 10.1017/CBO9780511809187.
G. Knieper, Das Wachstum der Äquivalenzklassen geschlossener Geodätischer in kompakten Mannigfaltigkeiten, Arch. Math. (Basel), 40 (1983), 559-568. doi: 10.1007/BF01192824.
G. Knieper, On the asymptotic geometry of nonpositively curved manifolds, Geom. Funct. Anal., 7 (1997), 755-782. doi: 10.1007/s000390050025.
G. Knieper, The uniqueness of the measure of maximal entropy for geodesic flows on rank 1 manifolds, Ann. of Math. (2), 148 (1998), 291–314. doi: 10.2307/120995.
G. Knieper, Closed geodesics and the uniqueness of the maximal measure for rank 1 geodesic flows, in Smooth Ergodic Theory and Its Applications, Proc. Sympos. Pure Math., 69, Amer. Math. Soc., Providence, RI, 2001, 573–590.
G. Knieper, Hyperbolic dynamics and Riemannian geometry, in Handbook of Dynamical Systems, 1A, North-Holland, Amsterdam, 2002, 453–545. doi: 10.1016/S1874-575X(02)80008-X.
F. Liu and F. Wang, Entropy-expansiveness of geodesic flows on closed manifolds without conjugate points, Acta Math. Sin. (Engl. Ser.), 32 (2016), 507-520. doi: 10.1007/s10114-016-5200-5.
F. Liu and X. Zhu, The transitivity of geodesic flows on rank 1 manifolds without focal points, Differential Geom. Appl., 60 (2018), 49-53. doi: 10.1016/j.difgeo.2018.05.007.
A. Manning, Topological entropy for geodesic flows, Ann. of Math. (2), 110 (1979), 567–573. doi: 10.2307/1971239.
G. A. Margulis, Certain applications of ergodic theory to the investigation of manifolds of negative curvature, Funkcional. Anal. i Priložen, 3 (1969), 89–90.
G. A. Margulis, Certain measures that are associated with $\gamma$-flows on compact manifolds, Funkcional. Anal. i Priložen, 4 (1970), 62–76.
G. A. Margulis, On Some Aspects of the Theory of Anosov Systems, Springer Monographs in Mathematics, Springer-Verlag, Berlin, 2004. doi: 10.1007/978-3-662-09070-1.
J. O'Sullivan, Riemannian manifolds without focal points, J. Differential Geometry, 11 (1976), 321-333. doi: 10.4310/jdg/1214433590.
G. P. Paternain, Geodesic Flows, Progress in Mathematics, 180, Birkhäuser Boston, Inc., Boston, MA, 1999. doi: 10.1007/978-1-4612-1600-1.
S. J. Patterson, The limit set of a Fuchsian group, Acta Math., 136 (1976), 241-273. doi: 10.1007/BF02392046.
R. Ruggiero, Expansive geodesic flows in manifolds with no conjugate points, Ergodic Theory Dynam. Systems, 17 (1997), 211-225. doi: 10.1017/S0143385797060963.
R. Ruggiero, Dynamics and Global Geometry of Manifolds Without Conjugate Points, Ensaios Matemáticos, 12, Sociedade Brasileira de Matemática, Rio de Janeiro, 2007.
R. Ruggiero and V. Rosas Meneses, On the Pesin set of expansive geodesic flows in manifolds with no conjugate points, Bull. Braz. Math. Soc. (N. S.), 34 (2003), 263-274. doi: 10.1007/s00574-003-0012-5.
D. Sullivan, The density at infinity of a discrete group of hyperbolic motions, Inst. Hautes Études Sci. Publ. Math., 50 (1979), 171–202. doi: 10.1007/BF02684773.
J. Watkins, The higher rank rigidity theorem for manifolds with no focal points, Geom. Dedicata, 164 (2013), 319-349. doi: 10.1007/s10711-012-9776-3.
W. Wu, On the ergodicity of geodesic flows on surfaces of nonpositive curvature, Ann. Fac. Sci. Toulouse Math. (6), 24 (2015), 625–639. doi: 10.5802/afst.1457.
W. Wu, Higher rank rigidity for Berwald spaces, preprint, Ergodic Theory Dynam. Systems. doi: 10.1017/etds.2018.130.
W. Wu, F. Liu and F. Wang, On the ergodicity of geodesic flows on surfaces without focal points, preprint, arXiv:1812.04409.
Fei Liu Fang Wang Weisheng Wu | CommonCrawl |