Nano Express
Ion Beam Nanostructuring of HgCdTe Ternary Compound
Aleksey B. Smirnov1, 2,
Rada K. Savkina1, 2,
Ruslana S. Udovytska1, 2,
Oleksandr I. Gudymenko1, 2,
Vasyl P. Kladko1, 2 and
Andrii A. Korchovyi1, 2
Nanoscale Research Letters 2017, 12:320
A systematic study of mercury cadmium telluride thin films subjected to ion beam bombardment was carried out. The evolution of the surface morphology of (111) Hg1−xCdxTe (x ~ 0.223) epilayers under 100 keV B+ and Ag+ ion irradiation was studied by AFM and SEM. X-ray photoelectron spectroscopy and X-ray diffraction were used to investigate the chemical composition and structural properties of the surface and subsurface region. It was found that nanoscale arrays of holes and mounds on the Hg0.777Cd0.223Te (111) surface, as well as a polycrystalline Hg1−xCdxTe cubic phase of altered composition (x ~ 0.20), were fabricated by 100 keV ion beam irradiation of the basic material. Charge transport investigation by the non-stationary impedance spectroscopy method has shown that boron-implanted structures are characterized by a capacitive-type impedance, whereas for silver-implanted structures an inductive-type impedance (or "negative capacitance") is observed. A hybrid system, which integrates the nanostructured ternary compound (HgCdTe) with metal-oxide (Ag2O) inclusions, was fabricated by Ag+ ion bombardment. The sensitivity of such a metal-oxide-semiconductor hybrid structure to sub-THz radiation was detected, with NEP ~ 4.5 × 10^−8 W/Hz^1/2 at ν ≈ 140 GHz and 296 K without amplification.
HgCdTe
IR and sub-THz detector
Ion implantation
61.72.uj
71.20.Nr
72.20.Pa
Ion implantation is one of the best examples of a successful path from fundamental research to high-level technology. The advantages of this technique in delivering a precise dose of impurity as well as in producing uniform and shallow junctions are indisputable [1]. The method has found application in silicon-based device manufacturing, in forming buried dielectric and metal layers, and in III–V technology as well. At the same time, the bombardment of a semiconductor with energetic ions inevitably produces a transformation of the defect structure that can improve or undesirably affect device performance. For example, post-growth processing with cold, high-fluence Fe implantation was the key to producing InGaAsP-based THz devices with good emitter characteristics [2]. An ion implantation method for large-scale synthesis of high-quality graphene films [3, 4] and for InSbN layer formation by nitrogen incorporation into an InSb wafer [5] was demonstrated. Another example concerns low-energy (25 keV) implantation of thin (Ga,Mn)As layers with a very low fluence of either O or Ne ions, which completely suppressed ferromagnetism and could be applied as a method for tailoring nanostructures in the layers [6].
One of the most important opportunities provided by implantation, in our opinion, is the wide spectrum of topological features induced on a semiconductor surface by ion bombardment [7–10]. It was shown that a normal-incidence ion beam can result in the formation of nanoscale objects on the surface of both elemental (Si, Ge [7, 8]) and compound semiconductors (GaSb [11, 12]). Well-ordered hexagonal arrays of InP nanodots [13] and well-aligned ripple structures on the surface of a single crystal of 3C-SiC [14] were created by oblique-incidence ion bombardment. Low-energy ion processing (from hundreds of eV to tens of keV) creates peculiar surface morphologies, such as nano-ripples and nanodots, ranging from random to regular structures, whose electronic and optical properties differ from those of bulk materials and might find technological application in nanophotonics and nanoscale magnetism. For example, ion implantation is used to locally modify the solid surface to create periodic plasmonic microstructures with metal nanoparticles [15]. Thus, interest in developing techniques for fabricating nanostructured semiconductor surfaces with varied textures and properties is increasing. However, we did not find any papers on the nanostructuring of ternary compounds induced by implantation.
As is known, the HgCdTe (MCT) ternary compound is one of the basic semiconductors for photon detectors in the NWIR to LWIR spectral range [16]; it can absorb IR radiation over a broad range of wavelengths owing to a bandgap that varies from 0 to 1.6 eV with composition. The ability of MCT-based structures to detect sub-terahertz radiation is also discussed [17, 18]. A commonly used method for the fabrication of IR devices based on the MCT ternary compound is ion implantation. An implant, entering the epitaxial layer, initiates an active restructuring of the defect structure of MCT, which changes the epilayer carrier type. As a result, n-on-p (boron-implanted) [19] and p-on-n (arsenic-implanted) [20] photodiodes are fabricated. At the same time, it is well known that ion implantation induces mechanical stress in MCT layers, which is a matter of paramount importance for solid-state devices and has been exploited to improve their electrical and optical properties. It was shown that implantation-induced stress is an important factor influencing the depth of p-n junctions in MCT-based structures [21].
This work is aimed at studying the nanostructuring of the surface of the ternary chalcogenide semiconductor compound Hg1−xCdxTe (x = 0.223) produced by 100 keV B+ and Ag+ ion bombardment. We report studies of the evolution of the surface morphology, the chemical composition and structural properties of the surface and subsurface region, as well as the charge transport of MCT epilayers subjected to ion implantation. We also consider the possibility of using this well-known IR material, whose properties were changed under high-energy influence, as a detector of sub-THz radiation and, in this way, of broadening its operating range. The role of the strain appearing upon implantation of the ternary compound is discussed.
Here, we have carried out a systematic study of mercury cadmium telluride thin films Hg1−xCdxTe (x ~ 0.223) grown on [111]-oriented semi-insulating Cd1−yZnyTe (y = 0.04) substrates from a Te-rich solution at 450 °C by liquid-phase epitaxy. The samples were irradiated with B+ and Ag+ ions on the side of the MCT epilayer (d = 17 μm) using a "Vezuviy" implanter. The implantation energy and dose were 100 keV and Q = 3 × 10^13 cm^−2, respectively. Post-implantation thermal treatments were carried out under an Ar atmosphere at 75 °C for 5 h [22]. The temperature conditions and the heat-treatment technique allowed us to avoid oxidation of the distorted layer (i.e., to preserve the surface charge) and to activate ionic migration in the layer. All processed surfaces were examined after ion bombardment using an atomic force microscope (Digital Instruments NanoScope IIIa operating in tapping mode) and a scanning electron microscope (MIRA3 TESCAN).
The structural characterization of the MCT samples was performed by X-ray diffraction (XRD) using a PANalytical X'Pert PRO triple-axis X-ray diffractometer. X-rays were generated from a copper linear fine-focus X-ray tube. The Cu Kα1 line with a wavelength of 0.15418 nm was selected using a four-bounce (440) Ge monochromator. The experimental schemes allowed two cross sections of the reciprocal lattice sites to be obtained: normal (ω-scanning) and parallel (ω/2θ-scanning) to the diffraction vector. X-ray photoelectron spectroscopy (XPS) investigation was carried out using a KRATOS X-ray photoelectron spectrometer equipped with a monochromatic Al Kα source.
Charge transport was investigated by the Hall effect method and impedance spectroscopy. The concentration and mobility of carriers in the MCT layers were determined from the Hall coefficient R_H and conductivity σ measurements made by the van der Pauw method in magnetic fields B from 0.01 up to 0.7 T at T = 80 K. The high substrate resistivity excluded any influence of the substrate on the results of the electrical measurements. Samples 1 × 1 cm in size were cut from the wafers for the measurements.
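As a reference for the measurement scheme described here, the sketch below (Python) shows how resistivity and single-carrier Hall parameters are conventionally extracted from van der Pauw and Hall-voltage data; the resistances, voltages, and currents in it are illustrative placeholders, not the measured values of this work.

    import numpy as np
    from scipy.optimize import brentq

    def van_der_pauw_resistivity(R_A, R_B, d):
        """Resistivity (ohm*m) from the two van der Pauw resistances R_A, R_B (ohm)
        and layer thickness d (m), by solving exp(-pi*R_A/R_s) + exp(-pi*R_B/R_s) = 1."""
        func = lambda R_s: np.exp(-np.pi * R_A / R_s) + np.exp(-np.pi * R_B / R_s) - 1.0
        R_sheet = brentq(func, 1e-6 * (R_A + R_B), 1e6 * (R_A + R_B))
        return R_sheet * d

    def hall_parameters(V_H, I, B, d, rho):
        """Single-carrier Hall coefficient, concentration, and mobility."""
        e = 1.602e-19
        R_H = V_H * d / (I * B)     # m^3/C
        n = 1.0 / (abs(R_H) * e)    # carrier concentration, m^-3
        mu = abs(R_H) / rho         # mobility, m^2 V^-1 s^-1
        return R_H, n, mu

    # Illustrative numbers only (not the measured data of this work)
    rho = van_der_pauw_resistivity(R_A=120.0, R_B=135.0, d=17e-6)
    print(hall_parameters(V_H=2.5e-4, I=1e-5, B=0.5, d=17e-6, rho=rho))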
For impedance spectroscopy characterization, a mesa structure was prepared by chemical etching of the sample in the standard Br–HBr etchant. Indium electrodes were deposited on the faces of the sample. Impedance characteristics of the samples were studied using a Z-3000X precision impedance meter within the frequency range 1 Hz to 3 MHz with a sinusoidal signal amplitude of 120 mV.
To obtain data on the migration of impurity ions and intrinsic defects in the disordered area of the MCT heteroepitaxial layer, we performed a model experiment using the TRIM 2008 program package.
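TRIM itself is a standalone Monte Carlo code, so only its summary output is used in the text; a common back-of-the-envelope companion is to approximate the implanted-ion depth profile by a Gaussian defined by a projected range Rp and straggle ΔRp. The sketch below does this for the fluence used here (3 × 10^13 cm^−2); the Rp and ΔRp values are placeholders, not TRIM results for B+ or Ag+ in HgCdTe.

    import numpy as np

    def gaussian_profile(depth_nm, Rp_nm, dRp_nm, fluence_cm2):
        """Approximate implanted-ion concentration (cm^-3) vs depth (nm),
        assuming a Gaussian range distribution with projected range Rp and straggle dRp."""
        depth_cm = depth_nm * 1e-7
        Rp, dRp = Rp_nm * 1e-7, dRp_nm * 1e-7
        return (fluence_cm2 / (np.sqrt(2 * np.pi) * dRp)
                * np.exp(-0.5 * ((depth_cm - Rp) / dRp) ** 2))

    # Placeholder range parameters (illustrative only)
    z = np.linspace(0, 400, 401)   # depth, nm
    profile = gaussian_profile(z, Rp_nm=80, dRp_nm=40, fluence_cm2=3e13)
    print(f"peak concentration ~ {profile.max():.2e} cm^-3 at {z[profile.argmax()]:.0f} nm")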
Topological and Structural Characterization
It was found that the ion bombardment of the investigated samples resulted in an essential change of the physical and structural properties of the MCT surface. AFM and SEM measurements revealed that nanoscale arrays of holes and mounds are generated on the (111) MCT surface as a result of normal-incidence ion bombardment. Histograms presenting the superposition of the distribution functions of the lateral dimensions in the X–Y plane were also constructed. The most probable size of the nano-objects was determined as the position of the major maximum in the distribution histogram.
The initial surface (see Fig. 1) is densely and regularly packed with round-shaped grains with a preferred diameter of 25 nm. This means that the studied epitaxial film is characterized by a considerable nonequilibrium resource. As a rule, this state is concentrated in mechanical stresses of a local character (grains–pores), which is confirmed by the presence of a network of quasipores 3.5–10 nm in depth and 50–160 nm in diameter. The root-mean-square roughness (RMS) parameter for 1 × 1 μm^2 fragments of the initial surface was in the range 2.45–3.34 nm.
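The RMS roughness quoted above follows directly from the AFM height map. A minimal sketch of the computation, applied to a synthetic height array since the raw AFM data are not reproduced here, is:

    import numpy as np

    def rms_roughness(height_map_nm):
        """Root-mean-square roughness of an AFM height map (heights in nm)."""
        z = np.asarray(height_map_nm, dtype=float)
        return np.sqrt(np.mean((z - z.mean()) ** 2))

    # Synthetic 1 x 1 um^2 fragment sampled on a 256 x 256 grid (illustrative only)
    rng = np.random.default_rng(0)
    z = rng.normal(loc=0.0, scale=3.0, size=(256, 256))   # heights in nm
    print(f"RMS roughness ~ {rms_roughness(z):.2f} nm")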
AFM images of the (111) MCT surface before and after ion bombardment with B+ and Ag+ ions. a Typical virgin surface. b B+ (θ = 0°, 100 keV, 3 × 10^13 cm^−2). c Ag+ (θ = 0°, 100 keV, 3 × 10^13 cm^−2). Inset: Fourier transforms of the AFM images
The results of AFM-based topometry show that nanoscale arrays of holes and mounds are generated on the (111) MCT surface as a result of normal-incidence ion bombardment. The electron microscopy results confirmed these features in the surface morphology of the treated samples. Figure 1b shows an AFM reconstruction of the periodic height modulations ("nanohole" pattern) induced on the MCT (111) surface by 100 keV B+ ion processing. After low-temperature annealing, the MCT surface became denser; the microhardness study pointed to an increase of its value by 12%. The ordered grid of quasipores is no longer observed. At the same time, some grains become consolidated. Silver ion bombardment (see Fig. 1c) gives rise to a uniform array of nano-islands 5 to 25 nm in height and with a base diameter of 13 to 35 nm. The corresponding 2D fast Fourier transforms (FFT) are depicted in the insets of Fig. 1. They reveal no signature of ordering of the nanostructures over the surface for any of the regimes.
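The conclusion about the absence of ordering is drawn from the 2D Fourier transforms: an ordered array would produce discrete spots or a ring at the characteristic spatial frequency, while a structureless halo indicates random placement of the nano-objects. A sketch of such a check, again on a synthetic height map, is:

    import numpy as np

    def fft_power_spectrum(height_map):
        """Centered 2D power spectrum of an AFM height map."""
        z = np.asarray(height_map, dtype=float)
        spectrum = np.fft.fftshift(np.fft.fft2(z - z.mean()))
        return np.abs(spectrum) ** 2

    # A random surface gives a featureless spectrum; spots or a ring would indicate ordering
    rng = np.random.default_rng(1)
    z = rng.normal(size=(256, 256))
    power = fft_power_spectrum(z)
    print(power.shape, power.max())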
Structural characterization of the MCT surface before and after implantation was performed by XRD and XPS measurements. X-ray rocking curves (RC) for the MCT-based structures were obtained from symmetrical ω/2θ scanning. As seen in Fig. 2a, b (curve 1), the observed intensity distribution along the q_z axis indicates the existence in the initial material of some structural heterogeneity caused by vacancies (q_z < 0) and interstitials (q_z > 0). The micro-defect system in the initial material is apparently compensated, which is confirmed by the symmetric form of the initial RC. The RCs for the boron-implanted samples also have a symmetric form, whereas the RCs for the silver-implanted samples are asymmetric and are characterized by a substantial shoulder on the high-angle side. XRD results in the coherent-scattering region point to compression of the boron-implanted and tension of the silver-implanted MCT layers [23].
XRD and XPS characterization of typical samples investigated. a X-ray rocking curves for MCT-based structure: 1 initial, 2 boron implanted, and 3 annealed. b X-ray rocking curves for MCT-based structure: 1 initial, 2 silver implanted, and 3 annealed. c XPS survey spectrum of MCT-based structure after silver ion bombardment and annealing. d GI XRD spectrum of MCT-based structure after silver ion bombardment and annealing. Inset shows the X-ray diffraction spectrum of the sample in the Bragg configuration
XPS measurements were performed to investigate the chemical state of the MCT-based structures after ion bombardment. The survey XPS spectrum indicates that the investigated samples are composed of Hg, Cd, and Te. An O 1s peak at 531.0 eV, as well as In and Sn, was found on the MCT surface after silver bombardment (see Fig. 2c). At the same time, no peaks related to Ag were reliably detected. Since the XPS method obtains information only from within a few atomic layers of the surface and subsurface, the properties of the subsurface layer of the implanted MCT samples were additionally studied by X-ray diffraction in the grazing-incidence (GI) scheme.
The GI diffractograms were collected by irradiating the samples at an incidence angle (θ_inc) of 1°. The penetration depth was estimated by the expression 2θ_inc/μ, where μ is the linear coefficient of X-ray attenuation, which is ~1.5 × 10^3 cm^−1 for CdTe (and also CdZnTe) at the X-ray energy used [24]. Thus, under the current experimental conditions, the GI XRD method obtains information from ~200 to 300 nm of the subsurface region. The corresponding XRD spectrum of the MCT-based structure is presented in Fig. 2d. It confirms the formation of a new phase in the subsurface region of MCT after silver implantation. This is a polycrystalline MCT phase (ICDD PDF 00-051-1122). Besides, reflections attributable to cubic Ag2O (2θ = 32.8°, 38.2° according to ICDD PDF 00-041-1104) appear. It should be noted that the simulation of the implantation process performed in [25] using the TRIM 2008 program package allowed us to state that the introduced impurity is mainly localized in the subsurface region (~100 nm) of MCT. The ellipsometry data also indicate the formation, in the silver-implanted CdHgTe/CdZnTe samples, of a distorted ~100-nm-thick layer with anomalous values of the extinction and refraction coefficients [25]. Finally, data obtained by Transport of Ions in Matter (TRIM) simulation and ellipsometry for the MCT samples after B+ bombardment point to the formation of an implantation-induced distorted layer ~400 nm in thickness.
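The figures quoted in this paragraph can be cross-checked directly: the GI information depth from 2θ_inc/μ, and the lattice spacings of the reflections attributed to Ag2O from Bragg's law with the Cu Kα1 wavelength given above. A minimal sketch of both checks:

    import numpy as np

    # GI penetration depth ~ 2*theta_inc/mu (theta in radians)
    theta_inc = np.deg2rad(1.0)    # grazing incidence angle
    mu = 1.5e3                     # linear attenuation coefficient, cm^-1
    depth_nm = 2 * theta_inc / mu * 1e7
    print(f"estimated information depth ~ {depth_nm:.0f} nm")   # roughly 230 nm

    # d-spacings of the reflections attributed to cubic Ag2O (Bragg's law, n = 1)
    wavelength_nm = 0.15418        # Cu K-alpha1
    for two_theta in (32.8, 38.2):
        d = wavelength_nm / (2 * np.sin(np.deg2rad(two_theta / 2)))
        print(f"2theta = {two_theta} deg  ->  d ~ {d:.3f} nm")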
Thus, from the results of the AFM, XRD, and XPS measurements it was found that nanoscale arrays of holes and mounds on the MCT (111) surface, as well as a polycrystalline MCT cubic phase of altered composition (x ~ 0.20) and a new metal-oxide phase (Ag2O), were fabricated using 100 keV ion beam irradiation of the basic material. Next, we examined the electrical properties of the investigated samples.
Hall Measurements
The magnetic-field dependences of the Hall coefficient and the conductivity were measured. A change of the measured parameters after implantation was observed for all samples. The Hall effect data were processed in terms of a model that includes several kinds of carriers, using the following expression [26]:
$$ e R_H(B) = \frac{\sum_i a_i \mu_i c_i(B)}{\left(\sum_i c_i(B)\right)^2 + B^2 \left(\sum_i a_i \mu_i c_i(B)\right)^2} $$
where e is the elementary charge, c_i = n_i μ_i/(1 + μ_i^2 B^2), n_i is the concentration of the i-th type of carrier, μ_i is the mobility of the i-th type of carrier, a_i is the sign of the carrier (−1 for electrons, +1 for holes), and B is the magnetic flux density. In addition, the zero-magnetic-field electrical conductivity is given by σ(0) = e Σ_i c_i(0) = e Σ_i n_i μ_i. From the analysis carried out, the electron and hole concentrations and mobilities were obtained before and after implantation (see Table 1).
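A direct implementation of this multi-carrier expression is what is fitted to the measured R_H(B) and σ(B) dependences. The sketch below evaluates R_H(B) and σ(0) for an illustrative two-carrier system; the parameter values are placeholders, not the fitted values listed in Table 1.

    import numpy as np

    E_CHARGE = 1.602e-19  # C

    def c_i(n, mu, B):
        """Partial conductivity term c_i = n_i*mu_i / (1 + mu_i^2 * B^2)."""
        return n * mu / (1.0 + (mu * B) ** 2)

    def hall_coefficient(B, carriers):
        """R_H(B) for carriers given as tuples (sign a_i, concentration n_i, mobility mu_i)."""
        num = sum(a * mu * c_i(n, mu, B) for a, n, mu in carriers)
        den = sum(c_i(n, mu, B) for a, n, mu in carriers) ** 2 + B ** 2 * num ** 2
        return num / (E_CHARGE * den)

    def sigma_0(carriers):
        """Zero-field conductivity sigma(0) = e * sum_i n_i * mu_i."""
        return E_CHARGE * sum(n * mu for _, n, mu in carriers)

    # Illustrative mixed electron/hole system (placeholder parameters):
    carriers = [(-1, 1e21, 8.0),    # electrons: n in m^-3, mu in m^2 V^-1 s^-1
                (+1, 3e22, 0.01)]   # heavy holes
    B = np.linspace(0.01, 0.7, 50)  # magnetic field range used in the experiment, T
    print(hall_coefficient(B, carriers)[:3], sigma_0(carriers))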
Some parameters of typical investigated MCT epilayers, T = 80 K

Samples (columns): initial p-MCT epilayer (x = 0.223, d = 17 μm); p-MCT epilayer after Ag+ implantation; initial n-MCT epilayer; n-MCT epilayer after B+ implantation; n-MCT epilayer after Ag+ implantation (x = 0.2)

Concentration, m^−3: p = 3 × 10^22; n = 10^15; n = 3 × 10^18; p_l = 10^17; n_l = 5 × 10^18; n = 1.5 × 10^22

Mobility, m^2 V^−1 s^−1: μ_p = 0.01; μ_n = 8; μ_p = 0.012; μ_n = 0.8; μ_pl = 2.1; μ_n = 0.322; μ_nl = 8
It was revealed that the initial samples had n- and p-type conductivity at 80 K. The initial dependences R_H(B) and σ(B) for the p-MCT epilayer were satisfactorily described by combined electron and hole conductivity. The calculated values of the charge carrier parameters are presented in Table 1. After implantation, a tendency toward a decrease of the majority carrier (hole) concentration and an increase of the electron concentration is observed. The mobility of the majority carriers (holes) remains practically unchanged, while the electron mobility decreases by an order of magnitude. Moreover, it was necessary to take into account the light-hole contribution. The character of the dependences R_H(B) and σ(B) for the n-MCT epilayer can be explained by the presence of electrons with high and low mobilities. After the implantation, the concentration of electrons with low mobility increased from 4 × 10^21 to 6 × 10^22 m^−3 in the boron-implanted specimens and to 1.5 × 10^22 m^−3 in the specimens implanted with silver ions. No contribution of high-mobility electrons was revealed for any of the n-type specimens after ion bombardment.
As mentioned in the previous paragraph, both the TRIM simulation and the ellipsometry results indicate the formation of an implantation-induced distorted layer with properties different from those of the basic material. This is also evidenced by the X-ray data. However, the Hall effect data did not show multilayer formation, as was obtained in our work devoted to the effect of high-frequency sonication on charge carrier transport in MCT [27]. At the same time, the Hall effect data simulation shows a composition reduction after Ag+ ion bombardment (see Table 1), which agrees with the XRD results.
Impedance spectroscopy is a very sensitive method for the detection of non-stationary charge transport governed by charge-carrier relaxation in disordered semiconductor structures. Using the impedance technique, data equivalent to the real and imaginary parts of complex electrical quantities are measured as a function of the frequency of the applied electric field. The values and interpretation of impedance spectra are processed by analogy with equivalent circuits involving simple components such as resistors, capacitors, and inductors [28].
The impedance measurements were performed in the G-Cp (parallel conductance and capacitance) configuration using Au plates as blocking electrodes. Figure 3a, b shows the complex impedance plane plots of the MCT samples implanted with B+ and Ag+ ions. Data for the boron-implanted sample are given for comparison. The arrow shows the direction of increasing frequency. Symbols are the experimental data, and solid lines are the results of the fitting obtained using the EIS Spectrum Analyser (http://www.abc.chemistry.bsu.by/vi/analyser/). Equivalent circuits obtained according to the Maxwell approach are shown in the insets of Fig. 3.
Impedance spectra (Nyquist plots) for MCT samples implanted with B+ (a) and Ag+ (b) ions. Inset shows the equivalent circuit model
The impedance plane plot for the case of B+ implantation has the shape of a line (Fig. 3a). In the equivalent circuit, R1 is the contact resistance. CPE is the constant phase element with impedance Z_CPE = A^−1(iω)^−n (ω is the angular frequency), which is used to accommodate the nonideal behavior of the capacitance that may originate from the presence of more than one relaxation process with similar relaxation times [29]. The parameter n quantifies the nonideal behavior, having a value of zero for purely resistive behavior and unity for capacitive behavior. In our case, CPE1 is a Warburg impedance element with n = 0.5, and CPE2 is a capacitance with n = 1. The series R2-CPE2 circuit can correspond to the charge transport in the space charge region.
It should be pointed out that the electrical circuit given in Fig. 3a resembles the circuit presented in [30], which describes the behavior of an ideally polarized semiconductor containing a considerable concentration of inter-band defects. The difference is that instead of the capacitance we have a Warburg element, which manifests itself as a line in the low-frequency region and corresponds to a mass transfer effect.
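The impedance of such CPE-based circuits is easy to evaluate numerically. The sketch below defines a generic CPE (n = 0.5 gives the Warburg element, n = 1 an ideal capacitor) and combines it into one plausible reading of the circuit of Fig. 3a, namely R1 in series with a parallel combination of the Warburg element CPE1 and the series R2-CPE2 branch; the topology and parameter values are assumptions for illustration, not the fitted circuit.

    import numpy as np

    def z_cpe(omega, A, n):
        """Constant phase element: Z = A^-1 * (i*omega)^-n.
        n = 0.5 corresponds to a Warburg element, n = 1 to an ideal capacitor."""
        return 1.0 / (A * (1j * omega) ** n)

    def z_parallel(*branches):
        """Impedance of branches connected in parallel."""
        return 1.0 / sum(1.0 / z for z in branches)

    # Assumed reading of the circuit of Fig. 3a (placeholder parameter values):
    # R1 in series with [ CPE1 (Warburg) parallel to (R2 + CPE2) ]
    omega = 2 * np.pi * np.logspace(0, 6.5, 300)           # 1 Hz ... ~3 MHz
    R1, R2 = 50.0, 200.0                                    # ohm
    Z = R1 + z_parallel(z_cpe(omega, A=1e-4, n=0.5),        # CPE1, Warburg-like
                        R2 + z_cpe(omega, A=1e-7, n=1.0))   # series R2-CPE2 branch
    # Nyquist data: Re(Z) vs -Im(Z); the Warburg element gives a 45-degree low-frequency branch
    print(Z.real[:3], (-Z.imag)[:3])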
For the typical sample implanted with Ag+ ions, the impedance hodograph is shown in Fig. 3b. In the impedance locus, one can observe two clearly distinguishable parts, namely, a ray followed by a small inductive loop. In the equivalent circuit, R1 is the contact resistance and R2-CPE1 is a parallel combination of a resistor and a capacitor. It characterizes the conductivity and the charge of the disordered layer. The important peculiarity of this equivalent circuit is the presence of the reactive element L, i.e., an inductive-type impedance.
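The inductive loop itself can be reproduced by the same kind of calculation once an inductive element is added. In the sketch below the circuit is read as R1 in series with a parallel R2-CPE1-L combination; the topology and the component values are again assumptions for illustration only, and the loop shows up wherever Im(Z) becomes positive (i.e., the Nyquist curve dips below the real axis).

    import numpy as np

    def z_cpe(omega, A, n):
        """Constant phase element: Z = A^-1 * (i*omega)^-n."""
        return 1.0 / (A * (1j * omega) ** n)

    def z_parallel(*branches):
        """Impedance of branches connected in parallel."""
        return 1.0 / sum(1.0 / z for z in branches)

    # Assumed topology and placeholder values: R1 + (R2 || CPE1 || L)
    omega = 2 * np.pi * np.logspace(0, 6.5, 400)
    R1, R2, L = 30.0, 500.0, 2e-3                       # ohm, ohm, henry
    Z = R1 + z_parallel(R2 * np.ones_like(omega),
                        z_cpe(omega, A=5e-8, n=0.95),
                        1j * omega * L)
    # Count the points where the model behaves inductively
    print("inductive points:", np.count_nonzero(Z.imag > 0))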
Since the components of the last equivalent circuit are electrically in parallel, it is convenient to consider the admittance Y instead of the impedance Z. The frequency dependences of the real and imaginary parts of the admittance of a sample implanted with Ag+ ions are shown in Fig. 4a, b. From the analysis of the real part of the admittance Y′ (Fig. 4a), no frequency dispersion is observed within the range 1 Hz…1 MHz. The imaginary part of the admittance Y′′ has a very low value but increases by almost three orders of magnitude over the total measurement range (Fig. 4b). In the low-frequency range (3 × 10^2…3 × 10^3) Hz, the silver-implanted samples show specific features that are indicative of resonance levels in the structured material (first circle in Fig. 4a). The sharp drop observed within the high-frequency range (1…3) MHz (second circle in Fig. 4a) can be caused by the geometrical and electrical relations of the studied samples.
Bode plots for real (a) and imaginary (b) part of admittance of MCT sample implanted with Ag+ ions
Thus, we have demonstrated that the implantation resulted in semiconductor surface modification up to nanoscale pattern formation. Furthermore, the examination of the electrical properties of the MCT epilayers has confirmed the significant effect of the ion radiation treatment on both charge and mass transfer phenomena in the investigated material.
Many experimental as well as theoretical efforts have been devoted to understanding the mechanism of nanoscale structure formation on surfaces subjected to energetic ion bombardment. The widely accepted Bradley–Harper theory [31] explains pattern formation by the curvature dependence of the sputtering yield. An alternative approach is based on the theory of stress relaxation (Cuerno [32] and Norris [33]). In particular, Cuerno and co-authors show that nonuniform generation of stress across the ion-induced damaged amorphous layer is a key factor behind a range of experimental observations. We assume that the deformation fields appearing upon implantation of the studied heterostructure lead to topological instability of the irradiated surface and are a determining factor of the observed surface transformation as well as of the change of the carrier transport.
The deformation sign depends on the ratio of the ionic radii r+ of the matrix atoms and of the introduced impurity [21]. Implantation with ions of small radius (such as B+, r_B ~ 0.97 Å) stimulates compression of the damaged layer, whereas implantation with ions of radius comparable to that of Hg (in our case, Ag+ ions, r_Ag ~ 1.44 Å, r_Hg ~ 1.55 Å) gives rise to tensile stress in the damaged layer, as confirmed by the X-ray diffraction data obtained in this work and in our previous work [23, 34]. The calculated mechanical stresses are σ_Ag = 2.2 × 10^5 Pa (strain ε ~ 10^−6) and σ_B = 1.4 × 10^3 Pa (strain ε ~ 10^−8) [25].
The structural transformation of the region subjected to implantation is thought to occur due to the formation of a state with excess energy in a thin layer of the material. However, the accumulated energy is not sufficient for the formation of extended defects. We also did not observe anisotropy of the surface morphology, while XRD studies in the coherent-scattering region point to implantation-induced deformation and post-annealing relaxation of the MCT layers [23]. We emphasize that the implantation conditions used (energy and fluence) are softer than those which stimulated the formation of dislocation loops in the MCT system [35]. An analysis of the initial sample surface indicates the substructural growth nonequilibrium of the samples under study [34]. The flow of charged particles (ions) additionally distorts the target lattice; in this case, its specific surface and its degree of disordering increase up to the formation of a distorted layer with optical and electrical characteristics different from those observed for the matrix. The subsequent relaxation of the nonequilibrium state of the semiconducting material can proceed via the formation of point defects in the crystal structure and the formation of a new surface, up to the excitation of solid-phase chemical reactions [36].
An additional factor affecting the surface morphology is ion migration after implantation. Weak chemical bonds in the material under study lead to a high concentration of electrically active intrinsic defects in accordance with the defect reaction Hg_i + V_Hg = Hg_Hg [21]. The migration of Hg_i was found to be the dominant process in MCT. The strain induced by the ion irradiation can shift this reaction to the left or to the right, depending on the deformation sign, and in this way affect the parameters of the charge carriers. Defect migration in the implanted MCT ternary compound is discussed in detail in [25]. Besides, applying strain to MCT removes the degeneracy at the Γ8 point by lowering the crystal symmetry. This can result in a decrease of the hole effective mass in the split subbands. In our experiment, the Hall effect data point to the rise of the light-mass hole contribution in the implanted samples. Thus, the mechanical strain induced in the ternary compound under high-energy influence is responsible for the evolution of the MCT surface morphology and determines the peculiarities of the mass and charge transport in this material.
Finally, the charge transport investigation by the non-stationary impedance spectroscopy method has shown that the boron-implanted MCT structures are characterized by a capacitive-type impedance, whereas for the silver-implanted MCT structures an inductive-type impedance (or "negative capacitance") is observed. It is known that inductive-type impedance (or "negative capacitance") is observed in various semiconductor structures such as chalcogenide films, semi-insulating polycrystalline silicon, multilayer heterostructures, and metal-semiconductor interfaces; in homogeneous samples with an inertial–relaxational type of electrical conductivity; in bipolar transistors, insulated-gate transistors, and Schottky diodes; and also in p+-n junction diodes fabricated on the basis of crystalline and amorphous semiconductor materials (see references in [37]). It is believed that disordered systems are characterized by an inductive-type impedance caused by the processes of capture and retention of charge carriers at trapping centers for some time [37, 38]. In our case, a disordered layer with oxide inclusions (Ag2O) is induced by Ag+ ion bombardment, and the trapping centers can be located at the oxide-semiconductor interface. The typical lifetime of carriers on these centers can be estimated as τ ~ 1/2f ~ (0.1–1) ms [37], where f was determined from the low-frequency peculiarity in the Bode plot of the real part of the admittance of the MCT sample implanted with Ag+ ions (see Fig. 4a).
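The lifetime estimate τ ~ 1/2f quoted above follows directly from the frequency window of the low-frequency feature; the corresponding arithmetic is simply:

    # Carrier retention time on trapping centers, tau ~ 1/(2f),
    # for the low-frequency feature observed between ~3e2 and ~3e3 Hz
    for f_hz in (3e2, 3e3):
        tau_ms = 1.0 / (2.0 * f_hz) * 1e3
        print(f"f = {f_hz:.0e} Hz  ->  tau ~ {tau_ms:.2f} ms")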
It should also be noted that the observed effect of ion beam nanostructuring, as well as the formation of oxide inclusions (Ag2O) in the semiconductor matrix, can be useful from the viewpoint of developing a new class of electro-optical devices based on MCT that possess the necessary combination of optical, electro-physical, and photoelectric properties. Because of the challenges of miniaturizing modern detectors and communication devices, there is a problem in transferring power to and from small antennas, which cannot break through the gain-bandwidth limit (Foster's theorem) [39]. At the same time, circuits containing negative elements ("non-Foster" networks) are not constrained by the gain-bandwidth theory and can achieve wide matching bandwidths with "difficult" loads arising from electrically short antennas.
We have assumed that it is possible to achieve operating range broadening in the obtained MCT-based structure with inductive-type impedance. Indeed, the sensitivity of the hybrid structure, which integrates the nanostructured ternary compound (HgCdTe) with metal-oxide (Ag2O) inclusions, to sub-THz radiation was detected at 296 K. A millimeter-wave source with ~140 GHz frequency was used for testing the responsivity of the MCT heterostructure obtained after oblique-incidence (45°) Ag+ ion beam bombardment. The value of the measured signal was about 7–15 μV at an output power of ~7 mW. These measurements were performed using a lock-in detection scheme with modulation at 190 Hz. The signal was detected without amplification. The NEP at ν ≈ 140 GHz and 296 K reaches 4.5 × 10^−8 W/Hz^1/2.
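For reference, the NEP of such a detector is usually estimated as the ratio of the noise voltage spectral density to the voltage responsivity. The sketch below uses placeholder values for the noise density and for the power actually coupled to the detector (neither is specified above), so it only illustrates the procedure, not the quoted figure.

    def nep_w_per_sqrt_hz(signal_v, power_w, noise_v_per_sqrt_hz):
        """NEP = noise spectral density / responsivity, responsivity = V_signal / P_incident."""
        responsivity_v_per_w = signal_v / power_w
        return noise_v_per_sqrt_hz / responsivity_v_per_w

    # Placeholder values (illustrative only): 10 uV signal, an assumed 10 uW of power
    # actually reaching the detector, and an assumed noise density of 60 nV/Hz^0.5
    print(f"NEP ~ {nep_w_per_sqrt_hz(10e-6, 10e-6, 60e-9):.1e} W/Hz^0.5")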
Presented in this work are results concerning the topological features of a semiconductor surface developed by ion implantation. Modification of the Hg1−xCdxTe-based structure was performed by normal-incidence ion bombardment with boron and silver ions (100 keV), followed by low-temperature treatment. It was found that nanoscale arrays of holes and mounds on the Hg1−xCdxTe (111) surface, as well as a polycrystalline Hg1−xCdxTe cubic phase of altered composition (x ~ 0.20) and a new metal-oxide phase (Ag2O), were fabricated. The mechanical strain induced in the MCT ternary compound under high-energy influence is responsible for the evolution of the surface morphology and determines the peculiarities of the mass and charge transport in this material. The charge transport investigation by the non-stationary impedance spectroscopy method has shown that the boron-implanted structures are characterized by a capacitive-type impedance, whereas for the silver-implanted structures an inductive-type impedance (or "negative capacitance") is observed. A hybrid system integrating the nanostructured ternary compound (HgCdTe) with metal-oxide (Ag2O) inclusions was fabricated by Ag+ ion bombardment. The sensitivity of such a metal-oxide-semiconductor hybrid structure to sub-THz radiation was detected, with NEP ~ 4.5 × 10^−8 W/Hz^1/2 at ν ≈ 140 GHz and 296 K without amplification.
CPE: Constant phase element
FFT: Fast Fourier transformation
GI: Grazing incidence
IR: Infrared
LWIR: Long-wavelength infrared region
MCT: Mercury cadmium telluride
NWIR: Near-wavelength infrared region
RC: Rocking curve
RMS: Root mean square
TRIM: Transport of Ions in Matter
XPS: X-ray photoelectron spectroscopy
XRD: X-ray diffraction
This work was supported by the Volkswagen-Stiftung of Germany (contract no. 7208130).
The idea of the study was conceived by ABS. RKS carried out the Hall Effect investigation. RSU carried out the impedance spectroscopy experiments. OIG and VPK carried out the XRD experiments. AAK carried out the AFM investigation. ABS and RKS interpreted the experiments and wrote this manuscript. All authors read and approved the final manuscript.
V. Lashkaryov Institute of Semiconductor Physics, National Academy of Sciences of Ukraine, 41 Prospect Nauky, Kyiv, 03028, Ukraine
1. Rimini E (2013) Ion implantation: basics to device fabrication, vol 293. Springer Science & Business Media, New York
2. Fekecs A, Bernier M, Morris D, Chicoine M, Schiettekatte F, Charette P, Arès R (2011) Fabrication of high resistivity cold-implanted InGaAsP photoconductors for efficient pulsed terahertz devices. Opt Mater Express 1(7):1165–1177
3. Garaj S, Hubbard W, Golovchenko JA (2010) Graphene synthesis by ion implantation. Appl Phys Lett 97(18):183103
4. Zhang R, Wang ZS, Zhang ZD, Dai ZG, Wang LL, Li H, Zhou L, Shang YX, He J, Fu DJ, Liu JR (2013) Direct graphene synthesis on SiO2/Si substrate by ion implantation. Appl Phys Lett 102(19):193102
5. Wang Y, Zhang DH, Chen XZ, Jin YJ, Li JH, Liu CJ, Wee AT, Zhang S, Ramam A (2012) Bonding and diffusion of nitrogen in the InSbN alloys fabricated by two-step ion implantation. Appl Phys Lett 101(2):021905
6. Yastrubchak O, Domagala JZ, Sadowski J, Kulik M, Zuk J, Toth AL, Szymczak R, Wosinski T (2010) Ion-implantation control of ferromagnetism in (Ga,Mn)As epitaxial layers. J Electron Mater 39(6):794–798
7. Ziberi B, Cornejo M, Frost F, Rauschenbach B (2009) Highly ordered nanopatterns on Ge and Si surfaces by ion beam sputtering. J Phys Condens Matter 21:224003
8. Wei Q, Zhou X, Joshi B, Chen Y, Li KD, Wei Q, Sun K, Wang L (2009) Self-assembly of ordered semiconductor nanoholes by ion beam sputtering. Adv Mater 21:2865–2869
9. Garg SK, Cuerno R, Kanjilal D, Som T (2016) Anomalous behavior in temporal evolution of ripple wavelength under medium energy Ar+-ion bombardment on Si: a case of initial wavelength selection. J Appl Phys 119:225301
10. Motta FC, Shipman PD, Bradley RM (2012) Highly ordered nano-scale surface ripples produced by ion bombardment of binary compounds. J Phys D Appl Phys 45(12):122001
11. Facsko S, Dekorsy T, Koerdt C, Trappe C, Kurz H, Vogt A, Hartnagel HL (1999) Formation of ordered nanoscale semiconductor dots by ion sputtering. Science 285:1551–1553
12. Plantevin O, Gago R, Vázquez L, Biermanns A, Metzger TH (2007) In situ X-ray scattering study of self-organized nanodot pattern formation on GaSb (001) by ion beam sputtering. Appl Phys Lett 91(11):113105
13. Frost F, Schindler A, Bigl F (2000) Roughness evolution of ion sputtered rotating InP surfaces: pattern formation and scaling laws. Phys Rev Lett 85:4116
14. Jiaming Z, Qiangmin W, Ewing RC, Jie L, Weilin J, Weber WJ (2008) Self-assembly of well-aligned 3C-SiC ripples by focused ion beam. Appl Phys Lett 92(19):3107
15. Stepanov AL, Galyautdinov MF, Evlyukhin AB, Nuzhdin VI, Valeev VF, Osin YN, Evlyukhin EA, Kiyan R, Kavetskyy TS, Chichkov BN (2013) Synthesis of periodic plasmonic microstructures with copper nanoparticles in silica glass by low-energy ion implantation. Appl Phys A 111(1):261–264
16. Rogalski A (2011) Infrared detectors, 2nd edn. CRC Press, Taylor & Francis Group, Boca Raton, London
17. Kryshtab T, Savkina RK, Smirnov AB, Kladkevich MD, Samoylov VB (2016) Multi-band radiation detector based on HgCdTe heterostructure. Phys Status Solidi C 13(7-9):639–642
18. Dobrovolsky V, Sizov F, Kamenev Y, Smirnov A (2008) Ambient temperature or moderately cooled semiconductor hot electron bolometer for mm and sub-mm regions. Opto-Electron Rev 16(2):172–178
19. Holander-Gleixner S, Williams BL, Robinson HG, Helms CRJ (1997) Modeling of junction formation and drive-in in ion implanted HgCdTe. J Electron Mater 26(6):629–634
20. Mollard L, Destefanis G, Baier NN, Rothman J, Ballet P, Zanatta JP, Pautet C (2009) Planar p-on-n HgCdTe FPAs by arsenic ion implantation. J Electron Mater 38(8):1805–1813
21. Ebe H, Tanaka M, Miyamoto Y (1999) Dependency of pn junction depth on ion species implanted in HgCdTe. J Electron Mater 28(6):854–857
22. Nemirovsky Y, Bahir G (1989) Passivation of mercury cadmium telluride surfaces. J Vac Sci Technol A 7(2):450–459
23. Smirnov AB, Savkina RK, Gudymenko AI, Kladko VP, Sizov FF, Frigeri C (2014) Effect of stress on defect transformation in B+ and Ag+ implanted HgCdTe/CdZnTe structures. Acta Phys Pol A 125(4):1003–1005
24. Rogalski A (2012) Progress in focal plane array technologies. Prog Quantum Electron 36(2):342–473
25. Smirnov AB, Litvin OS, Morozhenko VO, Savkina RK, Smoliy MI, Udovytska RS, Sizov FF (2013) Role of mechanical stresses at ion implantation of CdHgTe solid solutions. Ukr J Phys 58(9):872–880
26. Beer AC (1963) Galvanomagnetic effects in semiconductors. Academic Press, New York
27. Savkina RK, Sizov FF, Smirnov AB (2006) Elastic waves induced by pulsed laser radiation in a semiconductor: effect of the long-range action. Semicond Sci Technol 21:15221
28. Bonanos N, Steele BCH, Butler EP (2005) Applications of impedance spectroscopy. In: Barsoukov E, Macdonald JR (eds) Impedance spectroscopy: theory, experiment, and applications, 2nd edn. John Wiley & Sons, Hoboken, NJ, USA
29. Bisquert J, Fabregat-Santiago F (2010) Impedance spectroscopy: a general introduction and application to dye-sensitized solar cells. CRC Press, Lausanne, Boca Raton
30. Orazem ME (1990) The impedance response of semiconductors: an electrochemical engineering perspective. Chem Eng Educ 24(1):48–55
31. Bradley RM, Harper JME (1988) Theory of ripple topography induced by ion bombardment. J Vac Sci Technol A 6(4):2390–2395
32. Moreno-Barrado A, Castro M, Gago R, Vázquez L, Muñoz-García J, Redondo-Cubero A, Cuerno R (2015) Nonuniversality due to inhomogeneous stress in semiconductor surface nanopatterning by low-energy ion-beam irradiation. Phys Rev B 91(15):155303
33. Norris SA (2012) Stress-induced patterns in ion-irradiated silicon: model based on anisotropic plastic flow. Phys Rev B 86(23):235405
34. Sizov FF, Savkina RK, Smirnov AB, Udovytska RS, Kladko VP, Gudymenko AI, Lytvyn OS (2014) Structuring effect of heteroepitaxial CdHgTe/CdZnTe systems under irradiation with silver ions. Phys Solid State 56(11):2160–2165
35. Williams BL, Robinson HG, Helms CR (1997) X-ray rocking curve analysis of ion implanted mercury cadmium telluride. J Electron Mater 26(6):600–605
36. Meyer K (1968) Physikalisch-chemische Kristallographie. Grundstoffindustrie, Leipzig (Metallurgiya, Moscow, 1972) [in German and in Russian]
37. Poklonski NA, Shpakovski SV, Gorbachuk NI, Lastovskii SB (2006) Negative capacitance (impedance of the inductive type) of silicon p+-n junctions irradiated with fast electrons. Semiconductors 40(7):803–807
38. Vanmaekelbergh D, de Jongh PE (2000) Electron transport in disordered semiconductors studied by a small harmonic modulation of the steady state. Phys Rev B 61(7):4699–4704
39. Zhang F, Sun BH, Li X, Wang W, Xue JY (2010) Design and investigation of broadband monopole antenna loaded with non-Foster circuit. Prog Electromagn Res C 17:245–255
Identity element for multiplication of rational numbers
The identity element for the multiplication of rational numbers is 1, and the identity element for their addition is 0: for any rational number a, a × 1 = 1 × a = a and a + 0 = 0 + a = a. For this reason 1 is called the multiplicative identity and 0 the additive identity. For a rational number x/y, the additive inverse is −x/y (for example, the additive inverse of 1/3 is −1/3), and the multiplicative inverse is y/x, provided x ≠ 0; every positive real number likewise has a positive multiplicative inverse. Examples: 1/2 + 0 = 1/2 (additive identity) and 1/2 × 1 = 1/2 (multiplicative identity). Addition and multiplication are binary operations on the set of all integers and on the set of rational numbers; under addition and subtraction the identity element is 0, while under multiplication and division it is 1. More generally, a field (F, +, ×) has 0 as its additive identity and 1 as its multiplicative identity (with 1 ≠ 0 always assumed), and the additive inverse of b ∈ F is denoted by −b. The non-zero rational numbers form an Abelian group under multiplication with identity element 1. This group is not cyclic: if $\Bbb Q^\times$ were cyclic it would be infinite cyclic, hence isomorphic to $\Bbb Z$, but −1 has order two in $\Bbb Q^\times$, whereas in $\Bbb Z$ every element except 0 has infinite order. Note also that not every binary operation admits an identity: for the operation a∗b = b every element is a left identity, and the even integers under multiplication have no identity element at all, since 1 is not an even number.
|
CommonCrawl
|
BioData Mining
New neural network classification method for individuals ancestry prediction from SNPs data
H. Soumare ORCID: orcid.org/0000-0002-5326-77831,2,
S. Rezgui3,
N. Gmati4 &
A. Benkahla2
BioData Mining volume 14, Article number: 30 (2021)
Artificial Neural Network (ANN) algorithms have been widely used to analyse genomic data. Single Nucleotide Polymorphisms (SNPs) represent the most common genetic variations in the human genome; it has been shown that they are involved in many genetic diseases and can be used to predict their development. Developing ANNs to handle this type of data can be considered a great success in the medical world. However, the high dimensionality of genomic data and the availability of a limited number of samples can make the learning task very complicated. In this work, we propose a New Neural Network classification method based on input perturbation. The idea is first to use SVD to reduce the dimensionality of the input data and to train a classification network, whose prediction errors are then reduced by perturbing the SVD projection matrix. The proposed method has been evaluated on data from individuals with different ancestral origins, and the experimental results have shown its effectiveness. Achieving up to 96.23% classification accuracy, this approach surpasses previous Deep learning approaches evaluated on the same dataset.
The human genome contains three billion base pairs, with only 0.1% difference between individuals [1]. The most common type of genetic variation between individuals is called Single Nucleotide Polymorphism (SNP) [2]. An SNP is a change from one base pair to another, which occurs about once every 1000 bases. Most of these SNPs have no impact on human health. However, many studies have shown that some of these genetic variations have important biological effects and are involved in many human diseases [3, 4]. SNPs are commonly used to detect genes associated with the development of a disease within families [5]. In addition, SNPs can also help to predict a person's response to drugs or their susceptibility to develop one or more particular diseases. In genetics, Genome-Wide Association Studies (GWAS) are observational studies that use high-throughput genotyping technologies to identify a set of genetic variants that are associated with a given trait or disease [6], by comparing variants in a group of cases with variants in a group of controls. However, this approach is only optimal for populations from the same ancestry group, as it is challenging to dissociate the variations associated with a disease from those that characterize the genetics of human populations. In this context, numerous machine learning algorithms have been used to classify individuals according to genetic differences that affect their population. Support Vector Machine (SVM) methods have been applied to infer recent genetic ancestry of a subgroup of communities in the USA [7] or coarse ethnicity [8]. However, SVM methods are very sensitive to the choice of kernel and its parameters [9]. Deep learning algorithms, such as Neural Networks, have been widely used to analyse genomic data as well as gene expression data to classify certain diseases [10–20]. However, the high dimensionality of genomic data (when the number of input features is several times higher than the number of training examples) makes the learning task very difficult. Indeed, when data is composed of a large number of input features m for a small number of samples n (n<<m), the problem of overfitting becomes inevitable. In general, overfitting in machine learning occurs when a model fits the training data well but does not fit unseen data. The model learns details and noise in the training data, which negatively impact the performance of the model on new data. One way to avoid the problem of overfitting is to reduce the complexity of the problem by removing features that do not contribute to the model or that decrease its accuracy [21]. Different techniques are used to deal with the problem of overfitting. The most well-known ones are L1 and L2 regularizations [22]. The idea of these techniques is to penalize the higher weights in the model by adding extra terms to the loss function. Another commonly used regularization technique, called "Dropout", introduced by Hinton et al. [23], consists of dropping neurons at random (in hidden layers) in each learning round. However, with such a difference between the number of features and the number of samples, the problem of overfitting is aggravated. To overcome it, dimensionality reduction techniques need to be combined with unsupervised learning methods or other data preprocessing techniques.
There are many ways to transform high-dimensional data into low-dimensional data; Singular Value Decomposition (SVD), Principal Component Analysis (PCA) and Autoencoders (AE) are the most common dimensionality reduction techniques. SVD and PCA are the most popular linear dimensionality reduction techniques. Both attempt to find k orthogonal dimensions in an n-dimensional space, with k<n. They are related to each other, but PCA uses the covariance matrix of the input data, while SVD is performed on the input matrix itself. The Autoencoder is a Neural Network that tries to reconstruct the input data from its compressed form. Indeed, the Autoencoder is used as a method of non-linear dimensionality reduction: it works by mapping n-dimensional input data into k-dimensional data (with k<n).
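As a rough illustration of this non-linear reduction (not taken from the paper; the layer sizes, optimizer and epoch count are arbitrary placeholders), a minimal Keras Autoencoder could look as follows — Keras/TensorFlow being the libraries the authors use later for their classifier.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

m, k = 1000, 50                              # input dimension and code size (arbitrary here)

inputs = keras.Input(shape=(m,))
code = layers.Dense(k, activation="relu", name="code")(inputs)   # encoder: m -> k
outputs = layers.Dense(m, activation="linear")(code)             # decoder: k -> m
autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")                # minimise reconstruction error

X = np.random.rand(256, m)                   # placeholder data standing in for the SNP matrix
autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)

encoder = keras.Model(inputs, code)          # the encoder alone gives the k-dimensional data
X_reduced = encoder.predict(X)               # shape (256, k)
```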
Recently, ANNs have been used in many works to analyse sequencing data and predict complex diseases using SNPs data [11, 24–29]. To analyse SNPs from sequences [16, 26, 30], many approaches have been proposed to deal with high dimensionality by combining dimensionality reduction techniques, such as unsupervised methods, followed by supervised Neural Networks for classification [11, 13, 31–33]. For instance, Zhou et al. [11] used a three-step Neural Network to characterise the determinants of Alzheimer's disease. Liu et al. [34] combined a Deep Neural Network with an incremental way of selecting SNPs and multiple Dropout regularization techniques. Kilicarslan et al. [32] used a hybrid model consisting of Relief and a stacked Autoencoder as dimensionality reduction techniques, followed by Support Vector Machines (SVM) and Convolutional Neural Networks (CNNs), for diagnosis and classification of cancer samples. Khan et al. [35] used PCA and a Neural Network to identify relevant genes and classify cancer samples. Fakoor et al. [14] combined PCA with a Sparse Autoencoder to improve cancer diagnosis and classification. Romero et al. [33] proposed to reduce the hyperparameters of the classification network by the use of auxiliary networks. Pirmoradi et al. [13] used a Deep Auto-Encoder approach to classify complex diseases from SNPs data. Based on our literature review, Romero et al. are the first to use Deep learning algorithms on SNP data for the genetic ancestry prediction task. They constructed a classification network with an optional reconstruction path and proposed two auxiliary Neural Networks to predict the parameters of the first layer of the classification network and of its reconstruction path, respectively. They proposed several types of embedding techniques to reduce the number of free parameters in the auxiliary networks, such as Random projection (RP), Per class histogram, SNPtoVec, and an Embedding learnt end-to-end from raw data.
In this work, we propose a New Classification Neural Network based on the perturbation of the input matrix. To address the problem of dimensionality, the training model is constructed in three steps followed by a test phase: (1) use SVD to reduce the dimension of the input data, (2) train a Multi-Layer Perceptron (MLP) to perform the classification task, (3) perturb the SVD projection matrix so as to minimize the training loss function. In the test phase, the test set is multiplied by the perturbed projection matrix to evaluate the performance of the classifier.
The main contribution of this paper is how the projection matrix is perturbed after the model is trained. This perturbation is inspired by the Targeted Attacks Method, whose aim is to change the inputs so that the network classifies them into any desired class [36–40]. These inputs are called Adversarial Examples. Previous works on targeted attacks have been applied to image analysis, such as image segmentation [41], face detection [42] or image classification [43]. There are many ways of producing adversarial examples [44–46]; the most commonly used is the Fast Gradient Sign Method (FGSM) and its variants [40, 47]. The proposed approach uses FGSM to perturb the input data iteratively to maximize the probability that each output sample falls into the desired class. Other variants of this method, such as Projected Gradient Descent [45], Basic Iterative Method [47], Boosting FGSM with Momentum [48] and many other gradient-based methods, could be used [49–51]. For instance, Projected Gradient Descent is considered one of the most effective algorithms for generating adversarial samples. However, this method is too time-consuming to be used for training. FGSM is a very simple and fast method of generating adversarial examples [40]. The objective is to obtain a good representation of the input features in the SVD projection space, obtained after calculating the perturbed inputs of the training data.
This work is organized as follows: the proposed method and the dataset used are described in "Material and methods" section, the obtained results are reported in "Results" section and the experiments are discussed in "Discussion" section.
The proposed approach uses SVD to reduce the number of free parameters of the classification network. However, other dimensionality reduction techniques could be used. For instance, the Per class histogram method [33] is a very simple dimensionality reduction technique. The idea of this technique is to represent each feature (SNP) of the input data by the proportion of each ethnic group having genotype 0, 1 or 2, respectively. This produces a projection matrix of size m×78, where m is the number of features. Once the input dimension is reduced, a classification network is trained to find the optimal weight matrix. A perturbed projection matrix is then computed by simply solving a linear system, as described in the "Description of the model" section.
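A minimal numpy sketch of how such an m×78 embedding could be assembled (26 population groups × 3 genotype values; the function name and toy data are ours, not the authors'):

```python
import numpy as np

def per_class_histogram(X, y, n_classes=26, n_genotypes=3):
    """For every SNP (column of X), store the proportion of samples of each class
    having genotype 0, 1 or 2, giving an (m x n_classes*n_genotypes) projection matrix."""
    n_samples, m = X.shape
    emb = np.zeros((m, n_classes * n_genotypes))
    for c in range(n_classes):
        Xc = X[y == c]                                   # samples belonging to class c
        if len(Xc) == 0:
            continue
        for g in range(n_genotypes):
            emb[:, c * n_genotypes + g] = (Xc == g).mean(axis=0)
    return emb

# toy usage: 1000 samples, 500 SNPs, random genotypes and ancestry labels
X = np.random.randint(0, 3, size=(1000, 500))
y = np.random.randint(0, 26, size=1000)
U_hist = per_class_histogram(X, y)                        # shape (500, 78)
X_reduced = X @ U_hist                                    # used like the SVD projection X' = X U^k
```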
The 1000 Genomes Project, set up in 2008 [52], is an international research consortium which aims to produce a detailed catalog of human genetic variations with frequencies larger than 1%, from approximately one thousand volunteers from different ethnic groups. It is the first project to sequence the genome of a large number of people from different populations, regions and countries. The data made available to the international community comprise the SNP profiles of the volunteers (see Fig. 1a); each profile is a vector whose coordinates are the values taken at fixed positions in the genome sequence (homozygous reference, heterozygous or homozygous alternate).
a Illustration of SNPs, b Three possible values taken by SNPs
At each locus (fixed position in the genome sequence), an SNP is represented by its genotype, which takes three possible values for a diploid organism: AA for homozygous reference, AB for heterozygous and BB for homozygous alternate (see Fig. 1b). The homozygous reference corresponds to a locus where the two base pairs inherited from the parents are identical to the one in the reference genome, the heterozygous corresponds to a locus where the two base pairs found are different, and the homozygous alternate refers to a locus where the two base pairs found are identical and different from the reference base pair.
Before any further processing, these values were converted into numerical values, e.g., AA=0, AB=1 and BB=2, using the tool Plink [53].
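The conversion itself was done with Plink; purely to illustrate the resulting 0/1/2 encoding (this is not the Plink pipeline), a hypothetical pandas mapping would be:

```python
import pandas as pd

# Hypothetical genotype table: rows are individuals, columns are SNP positions.
genotypes = pd.DataFrame({
    "rs0001": ["AA", "AB", "BB", "AA"],
    "rs0002": ["AB", "AA", "AB", "BB"],
})

encoding = {"AA": 0, "AB": 1, "BB": 2}      # homozygous ref, heterozygous, homozygous alt
X = genotypes.replace(encoding).to_numpy()
print(X)                                     # [[0 1] [1 0] [2 1] [0 2]]
```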
The dataset taken as input for the model is a matrix \(X \in \mathbb {R}^{3450\times 315345}\). The rows of the matrix correspond to individuals (1000Genome's volunteers), the columns correspond to SNPs positions, and the elements are 0, 1 or 2 (corresponding to the three possible values taken by an SNP). 3450 is the number of individuals sampled worldwide from 26 population groups from the 5 continents (see Appendices) and 315345 is the number of included features (SNPs positions).
We use a classification Neural Network composed of an input layer, an output layer and two hidden layers with 100 neurons each. This neural network is constructed using the Keras and TensorFlow open-source libraries. Given the input matrix X, the output of the model is a vector \(Y \in \mathbb {R}^{c}\) whose components correspond to the population groups (26 classes in the example used). A ReLU activation function is used in the two hidden layers, followed by a softmax layer to perform ancestry prediction.
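A minimal Keras sketch of this classifier (the optimizer, number of epochs and batch size are not stated in the text and are placeholders; the squared-error loss matches the criterion C_W defined later in the "Description of the model" section, although cross-entropy would be the more conventional choice):

```python
from tensorflow import keras
from tensorflow.keras import layers

k, n_classes = 50, 26                      # reduced input dimension and population groups

model = keras.Sequential([
    keras.Input(shape=(k,)),
    layers.Dense(100, activation="relu"),          # first hidden layer, 100 neurons
    layers.Dense(100, activation="relu"),          # second hidden layer, 100 neurons
    layers.Dense(n_classes, activation="softmax"), # ancestry prediction over 26 classes
])
model.compile(optimizer="adam",
              loss="mean_squared_error",           # ||phi_W(X') - Y||^2 with one-hot labels Y
              metrics=["accuracy"])
# model.fit(X_train_reduced, Y_train_onehot, validation_split=0.1, epochs=50, batch_size=32)
```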
Singular value decomposition
Before applying SVD, the input data set is divided into two sets: the training set and the test set. SVD takes as input the transpose of the training set matrix, denoted by \(X^{T}\in \mathbb {R}^{m\times n}\) (m>n) with rank(X)=r, and decomposes it into a product of three matrices [54]: two orthogonal matrices \(U \in \mathbb {R}^{m\times m}\) and \(V \in \mathbb {R}^{n\times n}\) and a matrix \(\Sigma =diag(\sigma _{1},\sigma _{2},\ldots,\sigma _{n})\in \mathbb {R}^{m\times n}\), σi>0 for 1≤i≤r, σi=0 for i≥r+1, such that
$$X^{T}=U\Sigma V^{T}=\sum\limits_{i=1}^{r}U_{i}\Sigma_{i}V^{T}_{i}. $$
The first r columns of the orthogonal matrices U and V are, respectively, the right and the left eigenvectors associated with the r nonzero eigenvalues of XTX. Ui,Vi and Σi are, respectively, the ith column of U, V and Σ. The diagonal elements of Σ are the nonnegative square roots of the n eigenvalues of XTX.
The dimension of the input matrix X is then reduced by projecting it onto a space spanned by {U1,U2,…,Uk}, the top k (k≤r) singular vectors of X. Given a set of samples x1,x2,…,xN of dimension m, the projection matrix Uk whose columns are formed by the k first singular vectors of X must minimize
$$\sum\limits_{i=1}^{N}\lVert{P(x_{i})-x_{i}}\rVert_{2}^{2}=\sum\limits_{i=1}^{N}\lVert{x_{i}U^{k}-x_{i}}\rVert_{2}^{2} =\lVert{XU^{k}-X}\rVert_{2}^{2}, $$
where P is the projection defined by :
$$\begin{array}{@{}rcl@{}} P &: &\mathbb{R}^{m}\longrightarrow \mathbb{R}^{k}\\ &&x \longrightarrow x'=xU^{k} \end{array} $$
The input data in reduced dimension is denoted by X′=X Uk.
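A numpy sketch of this projection (illustrative only; for the full 3450×315345 matrix a truncated or randomized SVD, e.g. sklearn.decomposition.TruncatedSVD, would be needed rather than the dense SVD shown here):

```python
import numpy as np

def svd_projection(X_train, X_test, k):
    """Project both sets onto the top-k left singular vectors of X_train^T, i.e. X' = X U^k."""
    U, s, Vt = np.linalg.svd(X_train.T, full_matrices=False)   # X^T = U Sigma V^T
    U_k = U[:, :k]                                             # m x k projection matrix
    return X_train @ U_k, X_test @ U_k, U_k

# toy usage with random data standing in for the SNP matrices
X_train = np.random.rand(200, 1000)
X_test = np.random.rand(50, 1000)
X_train_red, X_test_red, U_k = svd_projection(X_train, X_test, k=50)
print(X_train_red.shape, X_test_red.shape)                     # (200, 50) (50, 50)
```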
Description of the model
Let us consider a Multi-Layer Perceptron (MLP) with L hidden layers, in which the n input training samples X={x1,x2,…,xn} are labeled, i.e., for each input xi, the corresponding output of the model is known and denoted yi or Y(xi). Y is a vector that contains all the labels. An MLP can be described as follows:
$$\begin{array}{*{20}l} a^{(l)}_{j}&=\phi\left(z^{l}_{j}\right), \end{array} $$
$$\begin{array}{*{20}l} z^{l}_{j}&=\sum\limits_{i}w^{l}_{ij}a^{(l-1)}_{i} + b^{l}_{j}=\mathbf{a}^{(l-1)}.\mathbf{w}^{l}_{j}+b^{l}_{j}, \end{array} $$
where \(z^{l}_{j},b^{l}_{j}\) and \(a^{l}_{j}\) \(\left (a^{0}_{j}=x_{j},\,\, \text {for an input}\,\, \mathbf {x}=(x_{1}\,x_{2}\,\ldots x_{d})^{T} \right)\) are the jth hidden unit, bias term and activation of layer l, respectively. \(w^{l}_{ij}\) is the weight that links the ith unit of the (l−1)th layer to the jth unit of the lth layer. \(\mathbf {w}^{l}_{j}\) and a(l−1) are, respectively, the incoming weight vector of the jth neuron of layer l and the output vector of the (l−1)th layer, and ϕ is any activation function. Learning the model consists in finding all the parameters wj and bj so that the output aL of the model approximates the true output vector y(x) for all training inputs x. For simplification, we consider that there are no bias terms \(b^{l}_{j}\), or rather we consider them as additional components of \(\mathbf {w}^{l}_{j}\), and we denote by Wl the matrix whose columns are the vectors \(\mathbf {w}^{l}_{j}\) (Fig. 2).
Classification network(MLP)
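To make the layer recursion above concrete, a tiny numpy forward pass is sketched below (random weights, bias terms omitted as the text allows, ReLU hidden layers and a softmax output as in the classifier used here):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def forward(a0, weights):
    """Apply z^l = a^(l-1) W^l and a^l = phi(z^l) layer by layer, ending with a softmax."""
    a = a0
    for W in weights[:-1]:
        a = relu(a @ W)                          # hidden layers
    logits = a @ weights[-1]                     # output layer before the softmax
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
weights = [rng.normal(size=(50, 100)),           # input (k=50) -> hidden layer 1
           rng.normal(size=(100, 100)),          # hidden layer 1 -> hidden layer 2
           rng.normal(size=(100, 26))]           # hidden layer 2 -> 26 classes
probs = forward(rng.normal(size=(4, 50)), weights)   # 4 samples, one probability row each
```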
Due to the high dimension of the input data, the proposed approach consists in first projecting the original data onto a lower-dimensional space using SVD. Once the dimension of the input data is reduced, a multilayer perceptron (MLP) classification network is constructed in three steps:
Step 1 Learning the weight matrix W: First, a classification network (see Fig. 3) is trained to find W∗, the optimal weight matrix, by solving:
$$ W^{*}=\underset{W}{arg\,min}\,\,C_{W}(X',Y). $$
Where \(C_{W}(X',Y)=||\phi _{W}(X')-Y||_{2}^{2}\) and \(\hat {Y}=\phi _{W^{*}}(X')\). ϕW is the output activation function for the weight matrix W. Y represents the true classification labels.
Step 2 Input matrix perturbation X′: Once the classification network is sufficiently trained, its weight matrix W∗ is fixed and the training input matrix X′ is perturbed to find X′∗, the solution of the following problem:
$$ X'^{*}=\underset{Z}{arg\,min}\,\,C_{W^{*}}(Z,Y), $$
To perturb the input data, we use an iterative version of FGSM (see Appendices: Fast gradient sign method) that adds a non-random noise whose direction is opposite to the gradient of the loss function.
Step 3 Projection matrix perturbation Uk: After finding the optimal perturbation X′∗, we look for a perturbed projection matrix Uk∗ by solving the following linear system:
$$ {U^{k}}^{*}=\underset{V}{arg\,min}\,\,||XV-X'^{*}||_{2}^{2}. $$
where X is the original training matrix and V is any matrix of the same size as Uk. After the three construction steps, the output of the MLP is \(\hat {\hat {Y}}=\phi _{W}^{*}(X'^{*})\). Once Uk∗ is calculated, we project the original test set onto it to evaluate the performance of the classification network.
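A hedged sketch of Steps 2 and 3 (TensorFlow for the gradient step, numpy for the least-squares solve; the step size eps, the number of iterations and all function names are our own choices, not values given in the paper):

```python
import numpy as np
import tensorflow as tf

def perturb_inputs(model, X_red, Y_onehot, eps=0.01, n_iter=10):
    """Step 2: iterative FGSM-style update of the reduced inputs with the weights W* frozen."""
    Z = tf.Variable(X_red, dtype=tf.float32)
    Y = tf.constant(Y_onehot, dtype=tf.float32)
    for _ in range(n_iter):
        with tf.GradientTape() as tape:
            loss = tf.reduce_sum(tf.square(model(Z) - Y))   # C_{W*}(Z, Y)
        grad = tape.gradient(loss, Z)
        Z.assign_sub(eps * tf.sign(grad))                   # move against the gradient sign
    return Z.numpy()

def perturb_projection(X, X_red_star):
    """Step 3: least-squares solution of min_V ||X V - X'*||^2, column by column."""
    U_k_star, *_ = np.linalg.lstsq(X, X_red_star, rcond=None)
    return U_k_star

# usage outline: X is the raw training matrix, X_red = X @ U_k from the SVD step, and
# model is the trained classifier with its weights kept fixed.
# X_red_star = perturb_inputs(model, X_red, Y_onehot)
# U_k_star   = perturb_projection(X, X_red_star)
# test-time evaluation then uses X_test @ U_k_star as input to the classifier.
```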
Classification network before input perturbation(MLP)
It is worth noting that, after recovery of the perturbed inputs, the classification network (see Fig. 4) can be re-trained or tested with the fixed weight matrix W∗ (from Step 2). From Step 1, and after having solved the system (4), the input matrix X can be perturbed by solving:
$$ {U^{k}}^{*}=\underset{V}{arg\,min}\,\,||\phi_{W^{*}}(XV)-Y||_{2}^{2}. $$
Classification network after inputs perturbation(MLP-IP)
But the high dimensionality of input data makes the non-linear optimization problem difficult to solve and the results less accurate.
In this section, the results obtained using the proposed method are reported and its performance is compared to that of the one recommended in [33] (the Per class histogram, see Appendices: Thin parameters for fat genomics, Table 2).
Proposed method
In the table below, we summarize the accuracy of the classification with respect to the number of modes (principal components) k chosen between 20 and 1000.
Table 1 represents in the second column (resp. third column) the results obtained by the classification network before (resp. after) input perturbation. After input perturbation, the training model can be evaluated using the fixed weight matrix (in the third column) as well as re-trained (in the last column). It is clear from the above results that input perturbation has significantly reduced misclassification.
Table 1 Results obtained by the classification network, before and after inputs perturbation
To illustrate the effectiveness of the proposed method, we display the confusion matrix of our classification network to see the effect of input perturbation.
In Fig. 4a (before input perturbation), we observe high classification errors between some population groups such as Chinese Dai in Xishuangbanna and the Kinh in Ho Chi Minh City; Indian Telugu in the UK and Sri Lankan Tamil in the UK; or British in England and Scotland and Utah Residents (CEPH) with Northern and Western Ancestry. Figure 4b shows how our approach has reduced these misclassifications, particularly the classification error between the CDX and KHV classes from 0.95% to 0.05%.
However, as the number of modes increases and the classification errors decrease, one can notice throughout our experiments a weak classification error between the British ethnic groups in England and Scotland and the Utah Residents (CEPH) with Northern and Western Ancestry, who appear to be genetically very similar (Figs. 5, 6, 7, 8 and 9).
10 mode-confusion matrix
100 mode-confusion matrix
Per class histogram
In Fig. 10, we present the confusion matrices obtained by the Per class histogram embedding method and by the Per class histogram embedding with input perturbation. Perturbing the Per class embedding input reduced misclassification errors and allowed the classifier to reach 94.49% accuracy.
Per class histogram confusion matrix
The application of Deep learning to high-dimensional genomic data, such as SNPs, is challenging. In order to deal with the problems of high dimensionality, many efforts have been made. In [11], the authors proposed to learn the feature representations using a Neural Network followed by another classification network. Unsupervised clustering or a Deep Autoencoder can be jointly trained with a classification network [13, 32, 33, 55]. However, these methods are generally applied to datasets with relatively few features, where the computational cost increases linearly with the number of features, and they require more training samples to converge. When an Autoencoder network was trained jointly with the classification network on the used dataset, the best accuracy obtained was 85.36%. In addition to the high dimensionality of the data, there is another challenge related to the high genetic similarity between certain population groups. To mitigate these difficulties, the proposed method reduces the dimension of the input data using the SVD algorithm. However, the SVD algorithm extracts linear combinations of features from the input data and fails to take into account the genetic similarity between some population groups, as shown in Figs. 4a-10a. To improve these results, the SVD projection matrix is modified to minimize the training loss function of the classification network using the FGSM algorithm. The FGSM algorithm allowed us to find the best representation of the input features in the SVD projection space. This new representation makes the classification network more robust to small variations in the input and takes into account the genetic similarity between different populations, as shown in the last two columns of Table 1 and Figs. 4b-10b. We are not limited to the SVD algorithm: when the Per class histogram is used to reduce the dimension of the input data, the proposed perturbation also significantly reduced classification errors.
The proposed method achieved its best results when the number of input features was reduced from about 300K to 50, which means that the number of free parameters of the classification network was reduced by a factor of 6000. This method outperforms previous work (see Appendices: Thin parameters for fat genomics) in terms of accuracy and of the number of free parameters required by the model. For future work, we expect to improve this method by using different targeted attack algorithms with other dimensionality reduction techniques.
In this work, we proposed a New Neural Network method for the prediction of individual ancestry from SNPs data. To deal with the high dimensionality of the SNPs data, our approach first uses SVD to reduce the dimensionality of the inputs, then trains a classification network, and finally reduces prediction errors by perturbing the input data set.
The obtained results showed how input perturbation reduced classification errors despite the genetic similarities between some ethnic groups. With such accuracy in the task of predicting genetic ancestry, this method will make it possible to deal with more complex problems in the healthcare field. We therefore intend to apply our method to gene expression profiles as well as SNPs data, in order first to predict and then to prevent the development of patients' genetic diseases.
Fast gradient sign method
FGSM [40] uses the gradient of the loss function to determine in which direction the input data features should be changed to minimize the loss function:
$$x'=x-\epsilon sign(\nabla_{x} C_{W}(x,y)), $$
ε is a tunable parameter. The Iterative Fast Gradient Sign Method (IFGSM) consists in adding the perturbation iteratively [47]. In our context, given any input training sample zi (a row of the training input matrix X) and its corresponding one-hot label yc, we perturb it in the direction of the input space which yields the highest decrease of the loss function \(\phantom {\dot {i}\!}C_{W^{*}}\), using the Targeted Iterative Fast Gradient Sign Method (IFGSM) given by the formula:
$${z_{i}}^{(m)}={z_{i}}^{(m-1)}-\epsilon sign\left(\nabla_{z_{i}} C_{W^{*}}\left({z_{i}}^{(m-1)},y_{c}\right)\right), $$
where m=1,…,M, zi(0)=zi, M is the number of iterations and zi∗=zi(M) is the perturbed version of zi. After perturbation, the rows of the matrix X′∗ are composed of the zi∗ for i=1,…,n, where n is the number of training samples.
1000 Genomes Project legends
Population ethnicity legend
ACB: African Caribbeans in Barbados; ASW: Americans of African Ancestry in SW USA; BEB: Bengali from Bangladesh; CDX: Chinese Dai in Xishuangbanna; CEU: Utah Residents (CEPH) with Northern and Western Ancestry; CHB: Han Chinese in Bejing; CHS: Southern Han Chinese; CLM: Colombians from Medellin; ESN: Esan in Nigeria; FIN: Finnish in Finland; GBR: British in England and Scotland; GIH: Gujarati Indian from Houston; GWD: Gambian in Western Divisions in the Gambia; IBS: Iberian Population in Spain; ITU: Indian Telugu from the UK; JPT: Japanese in Tokyo; KHV: Kinh in Ho Chi Minh City; LWK: Luhya in Webuye; MSL: Mende in Sierra Leone; MXL: Mexican Ancestry from Los Angeles; PEL: Peruvians from Lima; PJL: Punjabi from Lahore; PUR: Puerto Ricans; STU: Sri Lankan Tamil from the UK; TSI: Toscani in Italia and YRI: Yoruba in Ibadan.
Geographical region legend
AFR: African; AMR: Ad Mixed American; EAS: East Asian; EUR: European and SAS: South Asian.
Thin parameters for fat genomics
We represent in Table 2, different results from [33].
Table 2 Obtained results by [33]
The dataset used in this work is freely available (http://ftp.1000genomes.ebi.ac.uk:21/vol1/ftp/release/20130502/supporting/hd_genotype_chip/) and the open-source libraries used can be found here (https://www.tensorflow.org/guide/keras/overview)
Ku CS, Loy EY, Salim A, Pawitan Y, Chia KS. The discovery of human genetic variations and their use as disease markers: past, present and future. J Hum Genet. 2010; 55(7):403. https://doi.org/10.1038/jhg.2010.55.
Collins FS, Brooks LD, Chakravarti A. A dna polymorphism discovery resource for research on human genetic variation. Geno Res. 1998; 8(12):1229–31. https://doi.org/10.1101/gr.8.12.1229.
Group ISMW, et al. A map of human genome sequence variation containing 1.42 million single nucleotide polymorphisms. Nature. 2001; 409(6822):928. https://doi.org/10.1038/35057149.
Meyer-Lindenberg A, Weinberger DR. Intermediate phenotypes and genetic mechanisms of psychiatric disorders. Nat Rev Neurosc. 2006; 7(10):818. https://doi.org/10.1038/nrn1993.
Risch NJ. Searching for genetic determinants in the new millennium. Nature. 2000; 405(6788):847. https://doi.org/10.1038/35015718.
Welter D, MacArthur J, Morales J, Burdett T, Hall P, Junkins H, Klemm A, Flicek P, Manolio T, Hindorff L, et al. The nhgri gwas catalog, a curated resource of snp-trait associations. Nucleic Acids Res. 2013; 42(D1):D1001–6. https://doi.org/10.1093/nar/gkt1229. Oxford University Press.
Haasl RJ, McCarty CA, Payseur BA. Genetic ancestry inference using support vector machines, and the active emergence of a unique american population. EJHG. 2013; 21(5):554. https://doi.org/10.1038/ejhg.2012.258.
Lee C, Măndoiu II, Nelson CE. Inferring ethnicity from mitochondrial dna sequence. In: BMC proceedings, vol. 5. Springer: 2011. p. 1–9.
Cawley GC, Talbot NLC. On over-fitting in model selection and subsequent selection bias in performance evaluation. JMLR. 2010; 11:2079–107. https://doi.org/10.1016/j.patcog.2006.12.015.
Wen J, Thibeau-Sutre E, Diaz-Melo M, Samper-González J, Routier A, Bottani S, Dormont D, Durrleman S, Burgos N, Colliot O, et al. Convolutional neural networks for classification of alzheimer's disease: Overview and reproducible evaluation. Med Image Anal. 2020; 63:101694.
Zhou T, Thung K-H, Zhu X, Shen D. Effective feature learning and fusion of multimodality data using stage-wise deep neural network for dementia diagnosis. Hum Brain Mapp. 2019; 40(3):1001–16.
Maldonado C, Mora F, Contreras-Soto R, Ahmar S, Chen J-T, do Amaral Júnior AT, Scapim CA. Genome-wide prediction of complex traits in two outcrossing plant species through deep learning and bayesian regularized neural network. Front Plant Sci. 2020; 11:1734.
Pirmoradi S, Teshnehlab M, Zarghami N, Sharifi A. A self-organizing deep auto-encoder approach for classification of complex diseases using snp genomics data. Appl Soft Comput. 2020; 97:106718.
Fakoor F, Ladhak R, Nazi Z, Huber M. Using deep learning to enhance cancer diagnosis and classification. In: Proceed. of the Inter. Conf. on ML. New York: ACM: 2013. https://doi.org/10.1109/ICSCAN.2018.8541142.
Fergus P, Montanez CC, Abdulaimma B, Lisboa P, Chalmers C. Utilising deep learning and genome wide association studies for epistatic-driven preterm birth classification in African-American women. IEEE/ACM Trans Comput Biol Bioinform. 2018; 17(2):668–78. https://doi.org/10.1109/TCBB.2018.2868667.
Friedman S, Gauthier L, Farjoun Y, Banks E. Lean and deep models for more accurate filtering of snp and indel variant calls. Bioinformatics. 2020; 36(7):2060–7.
Dorj OU, Lee KK, Choi JY, Lee M. The skin cancer classification using deep convolutional neural network. Mult Tools App. 2018; 77(8):9909–24. https://doi.org/10.2196/11936.
Montesinos-López OA, Montesinos-López JC, Singh P, Lozano-Ramirez N, Barrón-López A, Montesinos-López A, Crossa J. A multivariate poisson deep learning model for genomic prediction of count data. G3 Genes Genomes Genet. 2020; 10(11):4177–90.
Danaee P, Ghaeini R, Hendrix DA. A deep learning approach for cancer detection and relevant gene identification. In: Pacific Symposium on Biocomputing 2017. World Scientific 5 Toh Tuck Link Singapore, 596224, Singapore: 2017. p. 219–29.
Singh R, Lanchantin J, Robins G, Qi Y. Deepchrome: deep-learning for predicting gene expression from histone modifications. Bioinformatics. 2016; 32(17):639–48. https://doi.org/10.1093/bioinformatics/btw427.
Dash M, Liu H. Feature selection for classification. Intel Data Anal. 1997; 1(3):131–56. https://doi.org/10.1016/S1088-467X(97)00008-5.
Owen AB. A robust hybrid of lasso and ridge regression. Contemp Maths. 2007; 443(7):59–72. https://doi.org/10.1090/conm/443/08555.
Hinton GE, Srivastava N, Krizhevsky A, Sutskever I, Salakhutdinov RR. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580. 2012.
Uppu S, Krishna A, Gopalan RP. A deep learning approach to detect snp interactions. JSW. 2016; 11(10):965–75.
Zhou J, Troyanskaya OG. Predicting effects of noncoding variants with deep learning–based sequence model. Nat Methods. 2015; 12(10):931–4.
Poplin R, Chang P-C, Alexander D, Schwartz S, Colthurst T, Ku A, Newburger D, Dijamco J, Nguyen N, Afshar PT, et al. A universal snp and small-indel variant caller using deep neural networks. Nat Biotechnol. 2018; 36(10):983–7.
Heinrich F, Wutke M, Das PP, Kamp M, Gültas M, Link W, Schmitt AO. Identification of regulatory snps associated with vicine and convicine content of vicia faba based on genotyping by sequencing data using deep learning. Genes. 2020; 11(6):614.
Lenz S, Hess M, Binder H. Unsupervised deep learning on biomedical data with BoltzmannMachines.jl. bioRxiv. 2019:578252.
Hess M, Lenz S, Blätte TJ, Bullinger L, Binder H. Partitioned learning of deep boltzmann machines for snp data. Bioinformatics. 2017; 33(20):3173–80.
Poplin R, Newburger D, Dijamco J, Nguyen N, Loy D, Gross S, McLean CY, DePristo MA. Creating a universal SNP and small indel variant caller with deep neural networks. 2016. https://doi.org/10.1101/092890.
Baliarsingh SK, Vipsita S, Gandomi AH, Panda A, Bakshi S, Ramasubbareddy S. Analysis of high-dimensional genomic data using mapreduce based probabilistic neural network. Comput Methods Prog Biomed. 2020; 195:105625.
Kilicarslan S, Adem K, Celik M. Diagnosis and classification of cancer using hybrid model based on relieff and convolutional neural network. Med Hypotheses. 2020; 137:109577.
Romero A, Carrier PL, Erraqabi A, Sylvain T, Auvolat A, Dejoie E, Legault MA, Dubé MP, Hussin JG, Bengio Y. Diet networks: thin parameters for fat genomics. arXiv preprint arXiv:1611.09340. 2016. https://doi.org/10.1038/ejhg.2012.258.
Liu B, Wei Y, Zhang Y, Yang Q. Deep neural networks for high dimension, low sample size data. In: International Joint Conference on Artificial Intelligence, California, USA: 2017. p. 2287–93.
Khan J, Wei JS, Ringner M, Saal LH, Ladanyi M, Westermann F, Berthold F, Schwab M, Antonescu CR, Peterson C, et al. Classification and diagnostic prediction of cancers using gene expression profiling and artificial neural networks. Nat Med. 2001; 7(6):673. https://doi.org/10.1038/89044.
Metzen JH, Genewein T, Fischer V, Bischoff B. On detecting adversarial perturbations. arXiv preprint arXiv:1702.04267. 2017.
Kos J, Fischer I, Song D. Adversarial examples for generative models. In: 2018 IEEE Security and Privacy Workshops (SPW). IEEE, New York City, 3 Park Ave, USA: 2018. p. 36–42. https://doi.org/10.1109/SPW.2018.00014.
Carlini N, Wagner D. Audio adversarial examples: Targeted attacks on speech-to-text. In: 2018 IEEE SPW. IEEE, New York City, 3 Park Ave, USA: 2018. p. 1–7. https://doi.org/10.1109/SPW.2018.00009.
Zheng S, Song Y, Leung T, Goodfellow I. Improving the robustness of deep neural networks via stability training. In: Proceed. of the Ieee conference on computer vision and pattern recognition. IEEE, New York, US: 2016. p. 4480–8.
Goodfellow IJ, Shlens J, Szegedy C. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. 2014.
Arnab A, Miksik O, Torr PHS. On the robustness of semantic segmentation models to adversarial attacks. In: The IEEE Conf. on CVPR. IEEE, New York, US: 2018. https://doi.org/10.1109/CVPR.2018.00099.
Sharif M, Bhagavatula S, Bauer L, Reiter MK. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In: Proceed. of the 2016 ACM SIGSAC Conf. on Comp. and Communications Security. ACM, 1601 Broadway, 10th Floor New York, NY, 10019-7434: 2016. p. 1528–40. https://doi.org/10.1145/2976749.2978392.
Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I, Fergus R. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199. 2013.
Carlini N, Wagner D. Towards evaluating the robustness of neural networks. In: 2017 IEEE SP. IEEE, New York City, 3 Park Ave, USA: 2017. p. 39–57. https://doi.org/10.1109/SP.2017.49.
Madry A, Makelov A, Schmidt L, Tsipras D, Vladu A. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083. 2017.
Xie C, Wu Y, Maaten Lvd, Yuille AL, He K. Feature denoising for improving adversarial robustness. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE, New York, US: 2019. p. 501–9.
Kurakin A, Goodfellow I, Bengio S. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236. 2016.
Dong Y, Liao F, Pang T, Su H, Zhu J, Hu X, Li J. Boosting adversarial attacks with momentum. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE, New York, US: 2018. p. 9185–93.
Tramèr F, Kurakin A, Papernot N, Goodfellow I, Boneh D, McDaniel P. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204. 2017.
Tramer F, Boneh D. Adversarial training and robustness for multiple perturbations. arXiv preprint arXiv:1904.13000. 2019.
Maini P, Wong E, Kolter Z. Adversarial robustness against the union of multiple perturbation models. In: International Conference on Machine Learning. PMLR: 2020. p. 6640–50.
Consortium GP, et al. A map of human genome variation from population-scale sequencing. Nature. 2010; 467(7319):1061. https://doi.org/10.1038/nature09534.
Purcell S. Plink. 2009. https://zzz.bwh.harvard.edu/plink/gvar.shtml. Accessed 03 Feb 2021.
Berry MW. Large-scale sparse singular value computations. Int J Supercomp Appl. 1992; 6(1):13–49. https://doi.org/10.1177/109434209200600103.
Chen R, Yang L, Goodison S, Sun Y. Deep-learning approach to identifying cancer subtypes using high-dimensional genomic data. Bioinformatics. 2020; 36(5):1476–83.
This project was partly funded by H3ABioNet, which is supported by the National Institutes of Health Common Fund under grant number U41HG006941. The content of this publication is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
The Laboratory of Mathematical Modelling and Numeric in Engineering Sciences, National Engineering School of Tunis, Rue Béchir Salem Belkhiria Campus universitaire, B.P. 37, 1002 Tunis Belvédère, University of Tunis El Manar, Tunis, Tunisia
H. Soumare
Laboratory of BioInformatics, bioMathematics, and bioStatistics, 13 place Pasteur, B.P. 74 1002 Tunis, Belvédère, Institut Pasteur de Tunis, University of Tunis El Manar, Tunis, Tunisia
H. Soumare & A. Benkahla
ADAGOS. Le Belvédère centre, 61 rue El Khartoum, El Menzah, Tunis, Tunisia
S. Rezgui
College of sciences & Basic and Applied Scientific Research Center, Imam Abdulrahman Bin Faisal University, P.O. Box 1982, 31441, Dammam, Kingdom of Saudi Arabia, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
N. Gmati
A. Benkahla
The author(s) read and approved the final manuscript.
Correspondence to H. Soumare.
Not applicable. This manuscript does not contain data from any individual person.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Soumare, H., Rezgui, S., Gmati, N. et al. New neural network classification method for individuals ancestry prediction from SNPs data. BioData Mining 14, 30 (2021). https://doi.org/10.1186/s13040-021-00258-7
Input perturbation
Single nucleotide polymorphism
|
CommonCrawl
|
QMUL Experimental Particle Physics Programme 2009-2014
Lead Research Organisation: Queen Mary, University of London
Department Name: Physics
The Queen Mary Experimental Particle Physics Group has an exciting set of particle physics experiments at the forefront of the field. Members of the Group have been working on the design, R&D, construction and commissioning of the ATLAS detector at the CERN LHC, which is just starting to see real data in the form of cosmic-ray events and a few beam-splash events. They are being joined by colleagues from the H1 and BaBar experiments, whose analyses are coming to an end after many years of extremely productive results, including measurements of CP violation in the bottom quark sector that were recognized in the award of the 2008 Nobel Prize for physics. The ATLAS Group has also been joined by colleagues from the CDF experiment who are experts on the top quark. The ATLAS group will continue the study of the top quark at the LHC, and the expertise gained will allow us to probe for new physics, such as the discovery of the Higgs particle or of Supersymmetry. We will also continue our study of proton structure at the highest possible energies. The Queen Mary Group is also starting to get involved in upgrades to the ATLAS detector for the higher-luminosity Super-LHC, first by participating in the ATLAS Tracker Upgrade programme and later in possible Trigger upgrades. At the other end of the mass scale, other colleagues from BaBar are currently building the T2K long-baseline neutrino experiment in Japan, which will continue the investigation of the recently discovered neutrino oscillations. In addition, the Group will look to exploit new opportunities, such as Super B Factories or Linear Colliders, when they become available.
Oct 09 - Mar 11
ST/H001042/1
Particle physics - experiment (100%)
Beyond the Standard Model (100%)
Queen Mary, University of London, United Kingdom (Lead Research Organisation)
Stephen Lloyd (Principal Investigator) http://orcid.org/0000-0002-5073-2264
Francesca Di Lodovico (Co-Investigator)
Lucio Cerrito (Co-Investigator)
Eram Rizvi (Co-Investigator) http://orcid.org/0000-0001-9834-2671
Adrian John Bevan (Co-Investigator) http://orcid.org/0000-0002-4105-9629
Alex James Martin (Co-Investigator)
Graham Thompson (Co-Investigator)
Aad G (2013) Dynamics of isolated-photon plus jet production in pp collisions at [Formula: see text] in Nuclear Physics B
Aad G (2012) Determination of the strange-quark density of the proton from ATLAS measurements of the W?l? and Z?ll cross sections. in Physical review letters
Aad G (2012) Combined search for the Standard Model Higgs boson in p p collisions at s = 7 TeV with the ATLAS detector in Physical Review D
The ATLAS Collaboration (2012) ATLAS search for a heavy gauge boson decaying to a charged lepton and a neutrino in pp collisions at $\sqrt{s} = 7\ \mathrm{TeV}$ in The European Physical Journal C
Aad G (2012) ATLAS measurements of the properties of jets for boosted particle searches in Physical Review D
Aad G (2013) A search for prompt lepton-jets in pp collisions at [Formula: see text] in Physics Letters B
Aad G (2013) A search for high-mass resonances decaying to [Formula: see text] in Physics Letters B
ATLAS Collaboration (2012) A search for [Formula: see text] resonances with the ATLAS detector in 2.05 fb-1 of proton-proton collisions at [Formula: see text]. in The European physical journal. C, Particles and fields
ATLAS Collaboration (2012) A particle consistent with the Higgs boson observed with the ATLAS detector at the Large Hadron Collider. in Science (New York, N.Y.)
Collaboration T (2014) A neural network clustering algorithm for the ATLAS silicon pixel detector in Journal of Instrumentation
ST/H001042/1 01/10/2009 31/03/2011 £1,088,256
ST/H001042/2 Transfer ST/H001042/1 01/10/2010 30/09/2012 £2,686,975
Impact Summary
Description We have discovered the Higgs Boson, the fundamental scalar boson that is predicted to give mass to all other particles.
Exploitation Route Further research is required to establish if this is the Higgs Boson or if it is one of many (possibly Supersymmetric) Higgs Bosons.
Sectors Education
URL https://twiki.cern.ch/twiki/bin/view/AtlasPublic
Description The discovery of the Higgs Boson captured the imagination of millions of people. It will lead to an increased interest in science among the general public and lead to more students studying science at University.
First Year Of Impact 2012
Sector Education
Impact Types Societal
|
CommonCrawl
|
Why are the last two digits of a perfect square never both odd?
Earlier today, I took a test with a question related to the last two digits of perfect squares.
I wrote out all of these digit pairs up to $20^2$.
I noticed an interesting property, and when I got home I wrote a script to test it. Sure enough, my program failed before it was able to find a square where the last two digits are both odd.
Why is this?
Is this always true, or is the rule broken at incredibly large values?
elementary-number-theory decimal-expansion square-numbers
Pharap
PavelPavel
$\begingroup$ do you know about modular arithmetic ? that might be a starting place. $\endgroup$ – user451844 Aug 19 '17 at 0:51
$\begingroup$ The last two digit of a number is the number modulus $100$ Talking about squares the last two digits are cyclical. They repeat every 50 squares from $0$ to $49$ or from $10^9 + 2017$ to $10^9 + 2017 + 49$ they are always the following $00,\;01,\;04,\;09,\;16,\;25,\;36,\;49,\;64,\;81,\;00,\;21,\;44,\;69,\;96,\;25,\;56,\;89,\;24,\;61,\;00,\;41,\;84,\;29,\;76,\;25,\;76,\;29,\;84,\;41,\;00,\;61,\;24,\;89,\;56,\;25,\;96,\;69,\;44,\;21,\;00,\;81,\;64,\;49,\;36,\;25,\;16,\;09,\;04,\;01$ and there is never a combination of two odd digits. $\endgroup$ – Raffaele Aug 19 '17 at 14:39
$\begingroup$ Not only, but if a number does not end with one of the following pair of digits it cannot be a perfect square $00,\; 01,\; 04,\; 09,\; 16,\; 21,\; 24,\; 25,\; 29,\; 36,\; 41,\; 44,\; \\ 49,\; 56,\; 61,\; 64,\; 69,\; 76,\; 81,\; 84,\; 89,\; 96$ $\endgroup$ – Raffaele Aug 19 '17 at 14:39
$\begingroup$ You can also note that the last two digits of the squares from 0 to 25 are the same as those from 50 down to 25, so the pattern is both cyclical and symmetrical :) $\endgroup$ – Rafalon Aug 20 '17 at 9:21
$\begingroup$ @Raffaele Sadly if you'd made that into an answer it probably would have earned you a decent chunk of rep and might have been the accepted answer. $\endgroup$ – Pharap Aug 20 '17 at 16:24
Taking the last two digits of a number is equivalent to taking the number $\bmod 100$. You can write a large number as $100a+10b+c$ where $b$ and $c$ are the last two digits and $a$ is everything else. Then $(100a+10b+c)^2=10000a^2+2000ab+200ac+100b^2+20bc+c^2$. The first four terms all have a factor $100$ and cannot contribute to the last two digits of the square. The term $20bc$ can only contribute an even number to the tens place, so cannot change the result. To have the last digit of the square odd we must have $c$ odd. We then only have to look at the squares of the odd digits to see if we can find one that squares to two odd digits. If we check the five of them, none do and we are done.
$\begingroup$ Ah, it's so obvious in hindsight. Although, I suppose most problems like this are. Thanks! $\endgroup$ – Pavel Aug 19 '17 at 1:00
Others have commented on the trial method. Just to note that $3^2$ in base $8$ is $11_8$ which has two odd digits. This is an example to show that the observation here is not a trivial one.
But we can also note that $(2m+1)^2=8\cdot \frac {m(m+1)}2+1=8n+1$ so an odd square leaves remainder $1$ when divided by $8$.
The final odd digits of squares can be $1,5,9$ so odd squares are $10p+4r+1$ with $r=0,1,2$. $10p+4r$ must be divisible by $8$ and hence by $4$, so $p$ must be even.
Mark BennetMark Bennet
In the spirit of experimentation, the last two digits of the squares of numbers obtained by adding the column header to the row header:
$$\begin {array}{c|ccc} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9\\ \hline 0 & 00 & 01 & 04 & 09 & 16 & 25 & 36 & 49 & 64 & 81\\ 10 & 00 & 21 & 44 & 69 & 96 & 25 & 56 & 89 & 24 & 61\\ 20 & 00 & 41 & 84 & 29 & 76 & 25 & 76 & 29 & 84 & 41\\ 30 & 00 & 61 & 24 & 89 & 56 & 25 & 96 & 69 & 44 & 21\\ 40 & 00 & 81 & 64 & 49 & 36 & 25 & 16 & 09 & 04 & 01\\ 50 & 00 & 01 & 04 & 09 & 16 & 25 & 36 & 49 & 64 & 81\\ 60 & 00 & 21 & 44 & 69 & 96 & 25 & 56 & 89 & 24 & 61\\ 70 & 00 & 41 & 84 & 29 & 76 & 25 & 76 & 29 & 84 & 41\\ 80 & 00 & 61 & 24 & 89 & 56 & 25 & 96 & 69 & 44 & 21\\ 90 & 00 & 81 & 64 & 49 & 36 & 25 & 16 & 09 & 04 & 01\\ 100 & 00 & 01 & 04 & 09 & 16 & 25 & 36 & 49 & 64 & 81\\ 110 & 00 & 21 & 44 & 69 & 96 & 25 & 56 & 89 & 24 & 61\\ 120 & 00 & 41 & 84 & 29 & 76 & 25 & 76 & 29 & 84 & 41\\ \end{array}$$
The patterns are clear, after which the search for a reason for such patterns is well given by the answer of @RossMillikan - you can see that the parity of both final digits of the square is entirely dependent on the final digit of the number that you square.
JoffanJoffan
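In the same experimental spirit, a few lines of Python (purely illustrative, not part of any answer) confirm both the table and the claim: since the last two digits of $n^2$ depend only on $n \bmod 100$, checking $n = 0,\dots,99$ covers every integer.

```python
# Possible last-two-digit endings of a perfect square.
endings = sorted({n * n % 100 for n in range(100)})
both_odd = [e for e in endings if (e // 10) % 2 == 1 and e % 2 == 1]

print(endings)    # the 22 possible endings: 0, 1, 4, 9, 16, 21, 24, 25, ...
print(both_odd)   # [] -- no square ends in two odd digits
```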
As a hint, consider what determines the last two digits of a multiplication. Do you remember doing multiplication by hand? If you square a ten-digit number, do all the digits matter when considering just the last two digits of the answer? You will realize that you can put a bound on the number of squares you need to check before you can prove the assertion you are making for all n.
FranzFranz
$\begingroup$ In fact more than enough values of n have been checked (within $20^2$) to see that this result is true. Question is, how does one know enough values have been checked? $\endgroup$ – Jihoon Kang Aug 19 '17 at 1:02
$\begingroup$ That's right - a lazy bound would be checking with a computer all two digit squares, as you know that in a 3 digit number, the hundreds digit will not impact on the final two digits in the multiplication (the same for larger numbers). Obviously you can get sharper than that, as the other answer showed. I was just pointing out that you can very easily come up with a lazy bound by thinking about what happens when you multiply numbers. $\endgroup$ – Franz Aug 19 '17 at 1:23
This is just another version of Ross Millikan's answer.
Let $N \equiv 10x+n \pmod{100}$ where n is an odd digit.
\begin{align} (10x + 1)^2 \equiv 10(2x+0)+1 \pmod{100} \\ (10x + 3)^2 \equiv 10(6x+0)+9 \pmod{100} \\ (10x + 5)^2 \equiv 10(0x+2)+5 \pmod{100} \\ (10x + 7)^2 \equiv 10(4x+4)+9 \pmod{100} \\ (10x + 9)^2 \equiv 10(8x+8)+1 \pmod{100} \\ \end{align}
steven gregorysteven gregory
A simple explanation.
Squaring means multiplication, and multiplication means repeated addition.
Now if you add even no.s for odd no. of times or odd no.s for even no. of times you will always get an even no.
Hence, the squares of all even numbers are even, meaning the last digit is always even.
If you add odd no.s for odd no. of times you will always get an odd no.
Coming to the squares of odd numbers whose results are >= 2 digits. Starting from 5^2 = 25, break it as 5+5+5+5+5: we have a group with an even number of 5's and one extra 5. According to my point no. 2 the even group will always give you an even number, i.e. 20, meaning the last digit is always even. Adding another 5 to 20 makes it 25, and 2 is even.
Taking 7^2, 7+7+7+7+7+7+7, group of six 7's = 42 plus another 7 = 49.
Now consider 9^2, 9+9+9+9+9+9+9+9+9, group of eight 9's = 72 plus another 9 = 81, (72+9 gets a carry of 1 making the 2nd last digit even)
35^2 = group of twenty four 34's (1190) plus 35 = 1225, carry comes.
In short just check the last digit of no. that you can think of in the no. co-ordinate (Real and Imaginary) it will always be b/w 0-9 so the basic principle (point 2 and 3) will never change. Either the last digit will be an even or the 2nd last digit will become even with a carry. So the 1 digit sq can come odd, 1 and 9, as there is no carry. I have kept it as an exception in point 3.
BTW many, including the author may not like my lengthy explanation as mine is not a mathematical one, full of tough formulae. Sorry for that. I'm not from mathematical background and never like maths.
NewBeeNewBee
$\begingroup$ "35^2 = group of twenty four 34's (1190) plus 35 = 1225, carry comes."? One of the burdens of being a mathematician is to present your arguments and then deal with any criticism that follows. It's got nothing to do with you. Whoever you are, if you publish a turkey, someones going to roast it. $\endgroup$ – steven gregory Aug 20 '17 at 23:13
$\begingroup$ That's a typo. That will be odd+odd = even. I saw that error but didn't edit it. $\endgroup$ – NewBee Aug 23 '17 at 14:26
Let b be the last digit of the square $a^2$ of an odd number a; then b can be 1, 9 or 5. For b=1 or 9, $a^2-b$ is divisible by 4, so $(a^2-b)/10$ is even. For b=5, $a^2$ always ends in 25.
kukukuku
|
CommonCrawl
|
7: Correlation and Simple Linear Regression
{ "7.01:_Correlation" : "property get [Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass228_0.<PageSubPageProperty>b__1]()", "7.02:_Simple_Linear_Regression" : "property get [Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass228_0.<PageSubPageProperty>b__1]()", "7.03:_Population_Model" : "property get [Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass228_0.<PageSubPageProperty>b__1]()", "7.04:_Software_Solution" : "property get [Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass228_0.<PageSubPageProperty>b__1]()" }
7.3: Population Model
[ "article:topic", "authorname:dkiernan", "Population Model", "showtoc:no", "license:ccbyncsa", "program:opensuny", "licenseversion:30", "source@https://milneopentextbooks.org/natural-resources-biometrics" ]
https://stats.libretexts.org/@app/auth/3/login?returnto=https%3A%2F%2Fstats.libretexts.org%2FBookshelves%2FApplied_Statistics%2FBook%253A_Natural_Resources_Biometrics_(Kiernan)%2F07%253A_Correlation_and_Simple_Linear_Regression%2F7.03%253A_Population_Model
Confidence Intervals and Significance Tests for Model Parameters
Regression Analysis: IBI versus Forest Area
Confidence Interval for \(\mu_y\)
Prediction Intervals
Transformations to Linearize Data Relationships
Regression Analysis: volume versus dbh
Regression Analysis: lnVOL vs. lnDBH
Our regression model is based on a sample of n bivariate observations drawn from a larger population of measurements.
$$\hat y = b_0 + b_1x$$
We use the means and standard deviations of our sample data to compute the slope (b1) and y-intercept (b0) in order to create an ordinary least-squares regression line. But we want to describe the relationship between y and x in the population, not just within our sample data. We want to construct a population model. Now we will think of the least-squares line computed from a sample as an estimate of the true regression line for the population.
Definition: The Population Model
\(\mu_y = \beta_0 + \beta_1x\), where \(\mu_y\) is the population mean response, \(\beta_0\) is the y-intercept, and \(\beta_1\) is the slope for the population model.
In our population, there could be many different responses for a value of x. In simple linear regression, the model assumes that for each value of x the observed values of the response variable y are normally distributed with a mean that depends on x. We use μy to represent these means. We also assume that these means all lie on a straight line when plotted against x (a line of means).
Figure 17. The statistical model for linear regression; the mean response is a straight-line function of the predictor variable.
The sample data then fit the statistical model:
Data = fit + residual
$$y_i = (\beta_0 + \beta_1x_i)+\epsilon_i$$
where the errors (εi) are independent and normally distributed N (0, σ). Linear regression also assumes equal variance of y (σ is the same for all values of x). We use ε (Greek epsilon) to stand for the residual part of the statistical model. A response y is the sum of its mean and chance deviation ε from the mean. The deviations ε represent the "noise" in the data. In other words, the noise is the variation in y due to other causes that prevent the observed (x, y) from forming a perfectly straight line.
The sample data used for regression are the observed values of y and x. The response y to a given x is a random variable, and the regression model describes the mean and standard deviation of this random variable y. The intercept β0, slope β1, and standard deviation σ of y are the unknown parameters of the regression model and must be estimated from the sample data.
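To make the population model concrete, the following minimal Python sketch (an illustration, not part of the original text) simulates data from \(y_i = \beta_0 + \beta_1 x_i + \epsilon_i\) with normal errors and recovers the least-squares estimates; the parameter values are arbitrary choices for the simulation:

import numpy as np

rng = np.random.default_rng(42)
beta0, beta1, sigma, n = 30.0, 0.6, 15.0, 50      # illustrative population parameters

x = rng.uniform(0, 100, size=n)                    # predictor values
y = beta0 + beta1 * x + rng.normal(0, sigma, n)    # data = fit + residual

# Ordinary least-squares estimates of the intercept and slope
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
print(f"b0 = {b0:.2f}, b1 = {b1:.3f}")             # close to beta0 and beta1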
The value of ŷ from the least squares regression line is really a prediction of the mean value of y (μy) for a given value of x.
The least squares regression line (\(\hat y = b_0+b_1x\)) obtained from sample data is the best estimate of the true population regression line
(\(\mu_y = \beta_0 + \beta_1x\)).
ŷ is an unbiased estimate for the mean response μy
b0 is an unbiased estimate for the intercept β0
b1 is an unbiased estimate for the slope β1
Once we have estimates of β0 and β1 (from our sample data b0 and b1), the linear relationship determines the estimates of μy for all values of x in our population, not just for the observed values of x. We now want to use the least-squares line as a basis for inference about a population from which our sample was drawn.
Model assumptions tell us that b0 and b1 are normally distributed with means β0 and β1 with standard deviations that can be estimated from the data. Procedures for inference about the population regression line will be similar to those described in the previous chapter for means. As always, it is important to examine the data for outliers and influential observations.
In order to do this, we need to estimate σ, the regression standard error. This is the standard deviation of the model errors. It measures the variation of y about the population regression line. We will use the residuals to compute this value. Remember, the predicted value of y (ŷ) for a specific x is the point on the regression line. It is the unbiased estimate of the mean response (μy) for that x. The residual is:
residual = observed – predicted
$$e_i = y_i - \hat{y_i} = y_i - (b_0+b_1x_i)$$
The residual \(e_i\) corresponds to the model deviation \(\epsilon_i\), where \(\sum e_i = 0\), so the residuals have a mean of 0. The regression standard error s is an unbiased estimate of σ.
$$s=\sqrt {\dfrac {\sum \text{residual}^2}{n-2}} = \sqrt {\dfrac {\sum (y_i-\hat {y_i})^2}{n-2}}$$
The quantity s is the estimate of the regression standard error (σ) and \(s^2\) is often called the mean square error (MSE). A small value of s suggests that observed values of y fall close to the true regression line and the line \(\hat y = b_0 +b_1x\)should provide accurate estimates and predictions.
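As a small illustration of this formula, here is a Python sketch (the observed and fitted values below are hypothetical, not from the text):

import numpy as np

def regression_standard_error(y_obs, y_hat):
    """s = sqrt(sum of squared residuals / (n - 2)); s**2 is the MSE."""
    residuals = np.asarray(y_obs) - np.asarray(y_hat)
    return np.sqrt(np.sum(residuals ** 2) / (len(residuals) - 2))

# Hypothetical observed and fitted values, for illustration only
y_obs = [47.0, 61.0, 39.0, 59.0, 72.0]
y_hat = [45.4, 58.8, 42.1, 60.3, 70.2]
print(round(regression_standard_error(y_obs, y_hat), 3))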
In an earlier chapter, we constructed confidence intervals and did significance tests for the population parameter μ (the population mean). We relied on sample statistics such as the mean and standard deviation for point estimates, margins of error, and test statistics. Inference for the population parameters β0 (y-intercept) and β1 (slope) is very similar.
Inference for the slope and intercept are based on the normal distribution using the estimates b0 and b1. The standard deviations of these estimates are multiples of σ, the population regression standard error. Remember, we estimate σ with s (the variability of the data about the regression line). Because we use s, we rely on the student t-distribution with (n – 2) degrees of freedom.
$$\sigma_{\hat{\beta_0}} = \sigma \sqrt { \frac {1}{n} + \dfrac {\bar x ^2}{\sum (x_i - \bar x)^2}}$$
The standard error for the estimate of \(\beta_0\)
We can construct confidence intervals for the regression slope and intercept in much the same way as we did when estimating the population mean.
A confidence interval for \(\beta_0\): \(b_0 \pm t_{\alpha/2} SE_{b_0}\), and a confidence interval for \(\beta_1\): \(b_1 \pm t_{\alpha/2} SE_{b_1}\),
where \(SE_{b_0}\) and \(SE_{b_1}\) are the standard errors for the y-intercept and slope, respectively.
We can also test the hypothesis \(H_0: \beta_1 = 0\). When we substitute \(\beta_1 = 0\) in the model, the x-term drops out and we are left with \(\mu_y = \beta_0\). This tells us that the mean of y does NOT vary with x. In other words, there is no straight line relationship between x and y and the regression of y on x is of no value for predicting y.
Hypothesis test for \(\beta_1\)
\(H_0: \beta_1 =0\)
\(H_1: \beta_1 \ne 0\)
The test statistic is \(t = b_1 / SE_{b_1}\)
We can also use the F-statistic (MSR/MSE) in the regression ANOVA table*
*Recall that \(t^2 = F\)
So let's pull all of this together in an example.
The index of biotic integrity (IBI) is a measure of water quality in streams. As a manager for the natural resources in this region, you must monitor, track, and predict changes in water quality. You want to create a simple linear regression model that will allow you to predict changes in IBI in forested area. The following table conveys sample data from a coastal forest region and gives the data for IBI and forested area in square kilometers. Let forest area be the predictor variable (x) and IBI be the response variable (y).
Table 1. Observed data of biotic integrity and forest area.
We begin by computing descriptive statistics and a scatterplot of IBI against Forest Area.
x̄ = 47.42; sx = 27.37; ȳ = 58.80; sy = 21.38; r = 0.735
Figure 18. Scatterplot of IBI vs. Forest Area.
There appears to be a positive linear relationship between the two variables. The linear correlation coefficient is r = 0.735. This indicates a strong, positive, linear relationship. In other words, forest area is a good predictor of IBI. Now let's create a simple linear regression model using forest area to predict IBI (response).
First, we will compute b0 and b1 using the shortcut equations.
$$b_1 = r (\frac {s_y}{s_x}) = 0.735(\frac {21.38}{27.37})=0.574$$
$$b_0 =\bar y -b_1 \bar x =58.80-0.574 \times 47.42=31.581$$
The regression equation is
$$\hat y =31.58 + 0.574x$$.
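The same shortcut arithmetic can be checked in a few lines of Python, using the summary statistics reported above (a sketch for verification only):

# Summary statistics from the IBI example
x_bar, s_x = 47.42, 27.37
y_bar, s_y = 58.80, 21.38
r = 0.735

b1 = r * (s_y / s_x)           # slope
b0 = y_bar - b1 * x_bar        # y-intercept
print(f"b1 = {b1:.3f}, b0 = {b0:.2f}")   # b1 ≈ 0.574, b0 ≈ 31.6 (small rounding differences)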
Now let's use Minitab to compute the regression model. The output appears below.
The regression equation is IBI = 31.6 + 0.574 Forest Area
Predictor      Coef     SE Coef     T       P
Constant       31.6     4.177
Forest Area    0.574    0.07648     7.50    0.000

S = 14.6505   R-Sq = 54.0%   R-Sq(adj) = 53.0%
The estimates for β0 and β1 are 31.6 and 0.574, respectively. We can interpret the y-intercept to mean that when there is zero forested area, the IBI will equal 31.6. For each additional square kilometer of forested area added, the IBI will increase by 0.574 units.
The coefficient of determination, R2, is 54.0%. This means that 54% of the variation in IBI is explained by this model. Approximately 46% of the variation in IBI is due to other factors or random variation. We would like R2 to be as high as possible (maximum value of 100%).
The residual and normal probability plots do not indicate any problems.
Figure 19. A residual and normal probability plot.
The estimate of σ, the regression standard error, is s = 14.6505. This is a measure of the variation of the observed values about the population regression line. We would like this value to be as small as possible. The MSE is equal to 215. Remember, the \(\sqrt {MSE}=s\). The standard errors for the coefficients are 4.177 for the y-intercept and 0.07648 for the slope.
We know that the values b0 = 31.6 and b1 = 0.574 are sample estimates of the true, but unknown, population parameters β0 and β1. We can construct 95% confidence intervals to better estimate these parameters. The critical value (tα/2) comes from the student t-distribution with (n – 2) degrees of freedom. Our sample size is 50 so we would have 48 degrees of freedom. The closest table value is 2.009.
95% confidence intervals for β0 and β1
$$b_0 \pm t_{\alpha/2} SE_{b_0} = 31.6 \pm 2.009(4.177) = (23.21, 39.99)$$
$$b_1 \pm t_{\alpha/2} SE_{b_1} = 0.574 \pm 2.009(0.07648) = (0.4204, 0.7277)$$
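These intervals can be reproduced directly from the reported estimates and standard errors; a short Python sketch:

t_crit = 2.009                      # t critical value for 48 degrees of freedom
b0, se_b0 = 31.6, 4.177
b1, se_b1 = 0.574, 0.07648

ci_b0 = (b0 - t_crit * se_b0, b0 + t_crit * se_b0)
ci_b1 = (b1 - t_crit * se_b1, b1 + t_crit * se_b1)
print(ci_b0)   # approximately (23.21, 39.99)
print(ci_b1)   # approximately (0.4204, 0.7277)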
The next step is to test that the slope is significantly different from zero using a 5% level of significance.
H0: β1 =0
H1: β1 ≠0
$$t = \frac {b_1} {SE_{b_1}} = \frac {0.574}{0.07648} = 7.50523$$
We have 48 degrees of freedom and the closest critical value from the student t-distribution is 2.009. The test statistic is greater than the critical value, so we will reject the null hypothesis. The slope is significantly different from zero. We have found a statistically significant relationship between Forest Area and IBI.
The Minitab output also reports the test statistic and p-value for this test.
The t test statistic is 7.50 with an associated p-value of 0.000. The p-value is less than the level of significance (5%) so we will reject the null hypothesis. The slope is significantly different from zero. We have found a statistically significant relationship between Forest Area and IBI. The same result can be found from the F-test statistic of 56.32 (7.505\(^2\) = 56.32). The p-value is the same (0.000), as is the conclusion.
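A quick numerical check of the test statistic and its relationship to the F-statistic (a sketch; scipy is assumed to be available for the p-value):

from scipy import stats

b1, se_b1, df = 0.574, 0.07648, 48
t_stat = b1 / se_b1
p_value = 2 * stats.t.sf(abs(t_stat), df)       # two-sided p-value
print(f"t = {t_stat:.2f}, t^2 = {t_stat ** 2:.2f}, p = {p_value:.4f}")
# t ≈ 7.51, t^2 ≈ 56.3 (the F-statistic up to rounding), p well below 0.05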
Now that we have created a regression model built on a significant relationship between the predictor variable and the response variable, we are ready to use the model for
estimating the average value of y for a given value of x
predicting a particular value of y for a given value of x
Let's examine the first option. The sample data of n pairs drawn from a population were used to compute the regression coefficients b0 and b1 for our model, and give us the average value of y for a specific value of x through our population model \(\mu_y = \beta_0 + \beta_1x\). For every specific value of x, there is an average y (μy), which falls on the straight-line equation (a line of means). Remember that there can be many different observed values of y for a particular x, and these values are assumed to have a normal distribution with a mean equal to \(\beta_0 + \beta_1x\) and a variance of σ2. Since the computed values of b0 and b1 vary from sample to sample, each new sample may produce a slightly different regression equation. Each new model can be used to estimate a value of y for a value of x. How far will our estimator \(\hat y =b_0+b_1x\) be from the true population mean for that value of x? This depends, as always, on the variability in our estimator, measured by the standard error.
It can be shown that the estimated value of y when x = x0 (some specified value of x) is an unbiased estimator of the population mean, and that \(\hat{\mu}_y\) is normally distributed with a standard error of
$$SE_{\hat \mu} = s\sqrt {\frac {1}{n} + \frac {(x_0-\bar x)^2}{\sum (x_i - \bar x)^2}}$$
We can construct a confidence interval to better estimate this parameter (μy) following the same procedure illustrated previously in this chapter.
$$\hat {\mu_y} \pm t_{\alpha/2}SE_{\hat \mu}$$
where the critical value tα/2 comes from the student t-table with (n – 2) degrees of freedom.
Statistical software, such as Minitab, will compute the confidence intervals for you. Using the data from the previous example, we will use Minitab to compute the 95% confidence interval for the mean response for an average forested area of 32 km².
Predicted Values for New Observations
New Obs:  95% CI = (45.1562, 54.7429)
If you sampled many areas that averaged 32 km² of forested area, your estimate of the average IBI would be from 45.1562 to 54.7429.
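This interval can also be reproduced by hand from the summary statistics, since \(\sum (x_i - \bar x)^2 = (n-1)s_x^2\); a Python sketch using the values reported earlier in the example:

import math

n, x_bar, s_x = 50, 47.42, 27.37
b0, b1, s, t_crit = 31.58, 0.574, 14.6505, 2.009
x0 = 32

sxx = (n - 1) * s_x ** 2                               # sum of squared deviations of x
fit = b0 + b1 * x0                                     # estimated mean response at x0
se_fit = s * math.sqrt(1 / n + (x0 - x_bar) ** 2 / sxx)
print((fit - t_crit * se_fit, fit + t_crit * se_fit))  # ≈ (45.2, 54.7), matching Minitab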
You can repeat this process many times for several different values of x and plot the confidence intervals for the mean response.
95% CI: (37.13, 48.88)
Figure 20. 95% confidence intervals for the mean response.
Notice how the width of the 95% confidence interval varies for the different values of x. Since the confidence interval width is narrower for the central values of x, it follows that μy is estimated more precisely for values of x in this area. As you move towards the extreme limits of the data, the width of the intervals increases, indicating that it would be unwise to extrapolate beyond the limits of the data used to create this model.
What if you want to predict a particular value of y when \(x = x_0\)? Or, perhaps you want to predict the next measurement for a given value of x? This problem differs from constructing a confidence interval for \(\mu_y\). Instead of constructing a confidence interval to estimate a population parameter, we need to construct a prediction interval. Choosing to predict a particular value of y incurs some additional error in the prediction because of the deviation of y from the line of means. Examine the figure below. You can see that the error in prediction has two components:
The error in using the fitted line to estimate the line of means
The error caused by the deviation of y from the line of means, measured by \(\sigma^2\)
Figure 21. Illustrating the two components in the error of prediction.
The variance of the difference between y and \(\hat y\) is the sum of these two variances and forms the basis for the standard error of \((y-\hat y)\) used for prediction. The resulting form of a prediction interval is as follows:
$$\hat y \pm t_{\alpha/2}s\sqrt {1+\frac {1}{n} + \frac {(x_0 - \bar x)^2}{\sum (x_i - \bar x)^2}}$$
where x0 is the given value for the predictor variable, n is the number of observations, and \(t_{\alpha/2}\) is the critical value with (n – 2) degrees of freedom.
Software, such as Minitab, can compute the prediction intervals. Using the data from the previous example, we will use Minitab to compute the 95% prediction interval for the IBI of a specific forested area of 32 km².
New Obs:  95% PI = (20.1053, 79.7939)
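The prediction interval can be reproduced the same way by adding the extra 1 under the square root (a sketch, continuing with the same summary statistics):

import math

n, x_bar, s_x = 50, 47.42, 27.37
b0, b1, s, t_crit = 31.58, 0.574, 14.6505, 2.009
x0 = 32

sxx = (n - 1) * s_x ** 2
fit = b0 + b1 * x0
se_pred = s * math.sqrt(1 + 1 / n + (x0 - x_bar) ** 2 / sxx)   # extra 1 for a single new y
print((fit - t_crit * se_pred, fit + t_crit * se_pred))        # ≈ (20.1, 79.8), matching Minitab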
You can repeat this process many times for several different values of x and plot the prediction intervals for individual responses.
95% PI: (47.33, 107.67)
Notice that the prediction interval bands are wider than the corresponding confidence interval bands, reflecting the fact that we are predicting the value of a random variable rather than estimating a population parameter. We would expect predictions for an individual value to be more variable than estimates of an average value.
Figure 22. A comparison of confidence and prediction intervals.
In many situations, the relationship between x and y is non-linear. In order to simplify the underlying model, we can transform or convert either x or y or both to result in a more linear relationship. There are many common transformations such as logarithmic and reciprocal. Including higher order terms on x may also help to linearize the relationship between x and y. Shown below are some common shapes of scatterplots and possible choices for transformations. However, the choice of transformation is frequently more a matter of trial and error than set rules.
Figure 23. Examples of possible transformations for x and y variables.
A forester needs to create a simple linear regression model to predict tree volume using diameter-at-breast height (dbh) for sugar maple trees. He collects dbh and volume for 236 sugar maple trees and plots volume versus dbh. Given below is the scatterplot, correlation coefficient, and regression output from Minitab.
Figure 24. Scatterplot of volume versus dbh.
Pearson's linear correlation coefficient is 0.894, which indicates a strong, positive, linear relationship. However, the scatterplot shows a distinct nonlinear relationship.
The regression equation is volume = – 51.1 + 7.15 dbh
The R2 is 79.9% indicating a fairly strong model and the slope is significantly different from zero. However, both the residual plot and the residual normal probability plot indicate serious problems with this model. A transformation may help to create a more linear relationship between volume and dbh.
Figure 25. Residual and normal probability plots.
Volume was transformed to the natural log of volume and plotted against dbh (see scatterplot below). Unfortunately, this did little to improve the linearity of this relationship. The forester then took the natural log transformation of dbh. The scatterplot of the natural log of volume versus the natural log of dbh indicated a more linear relationship between these two variables. The linear correlation coefficient is 0.954.
Figure 26. Scatterplots of natural log of volume versus dbh and natural log of volume versus natural log of dbh.
The regression analysis output from Minitab is given below.
The regression equation is lnVOL = – 2.86 + 2.44 lnDBH
S = 0.327327
The model using the transformed values of volume and dbh has a more linear relationship and a more positive correlation coefficient. The slope is significantly different from zero and the R2 has increased from 79.9% to 91.1%. The residual plot shows a more random pattern and the normal probability plot shows some improvement.
There are many possible transformation combinations to linearize data. Each situation is unique and the user may need to try several alternatives before selecting the best transformation for x or y or both.
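As an illustration of the forester's log-log approach, here is a minimal Python sketch; the dbh and volume values below are a small hypothetical sample, not the 236 sugar maple trees:

import numpy as np

# Hypothetical dbh and volume measurements, for illustration only
dbh = np.array([6.0, 8.0, 10.0, 12.0, 16.0, 20.0, 24.0])
vol = np.array([4.5, 9.0, 16.0, 26.0, 55.0, 95.0, 150.0])

ln_dbh, ln_vol = np.log(dbh), np.log(vol)
b1, b0 = np.polyfit(ln_dbh, ln_vol, 1)        # slope and intercept on the log-log scale
r = np.corrcoef(ln_dbh, ln_vol)[0, 1]
print(f"lnVOL = {b0:.2f} + {b1:.2f} lnDBH, r = {r:.3f}")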
This page titled 7.3: Population Model is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by Diane Kiernan (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
Population Model
|
CommonCrawl
|
Genes & Nutrition
Targeting myomiRs by tocotrienol-rich fraction to promote myoblast differentiation
Azraul Mumtazah Razak1,
Shy Cian Khor1,
Faizul Jaafar1,
Norwahidah Abdul Karim1 &
Suzana Makpol1 (ORCID: orcid.org/0000-0002-5239-6196)
Genes & Nutrition volume 13, Article number: 31 (2018) Cite this article
Several muscle-specific microRNAs (myomiRs) are differentially expressed during cellular senescence. However, the role of dietary compounds on myomiRs remains elusive. This study aimed to elucidate the modulatory role of tocotrienol-rich fraction (TRF) on myomiRs and myogenic genes during differentiation of human myoblasts. Young and senescent human skeletal muscle myoblasts (HSMM) were treated with 50 μg/mL TRF for 24 h before and after inducing differentiation.
The fusion index and myotube surface area were higher (p < 0.05) on days 3 and 5 than that on day 1 of differentiation. Ageing reduced the differentiation rate, as observed by a decrease in both fusion index and myotube surface area in senescent cells (p < 0.05). Treatment with TRF significantly increased differentiation at days 1, 3 and 5 of young and senescent myoblasts. In senescent myoblasts, TRF increased the expression of miR-206 and miR-486 and decreased PTEN and PAX7 expression. However, the expression of IGF1R was upregulated during early differentiation and decreased at late differentiation when treated with TRF. In young myoblasts, TRF promoted differentiation by modulating the expression of miR-206, which resulted in the reduction of PAX7 expression and upregulation of IGF1R.
TRF can potentially promote myoblast differentiation by modulating the expression of myomiRs, which regulate the expression of myogenic genes.
Satellite cells, which are located between the basal lamina and sarcolemma, act as vital components of the skeletal muscle tissue as they possess regeneration capacity. These satellite cells are mitotically quiescent and arrested at the G0 phase. These cells express a limited number of genes and proteins [1]. In response to stress, such as muscle injury or physiological change, satellite cells are activated and undergo myogenesis, which involves a series of processes [2]. These cells migrate to the damaged site and withdraw from the G0 phase to re-enter the cell cycle progression. The cells then undergo proliferation, differentiation and subsequently fuse with the adjacent muscle fibre to form a new muscle fibre [3]. In this proliferating state, the satellite cells are known as myoblast cells. Ageing gradually reduces the regenerative capacity of skeletal muscles, resulting in a decrease in muscle mass and strength [4]. This contributes to age- or injury-induced muscle weakness leading to frailty in the elderly, which is a major health problem.
The myogenic program is controlled by various transcription factor families such as paired box gene family consisting of PAX3 and PAX7 and myogenic regulator family including MYOD1, MYOG, Myf5 and Myf6 [1, 3]. The PAX7 transcription factor is required for muscle satellite cell biogenesis and specification of the myogenic precursor lineage [5]. Functioning upstream of the MYOD family, PAX7 is expressed in proliferating myoblasts, but is rapidly downregulated during differentiation. In mice, loss of PAX7 expression resulted in the differentiation of satellite cells into fibroblasts instead of myoblasts [6]. Most of the activated satellite cells proliferate, downregulate PAX7 and promote MYOD to progress into differentiation. Various growth factors and hormones such as insulin-like growth factor (IGF) [7], myostatin and follistatin [8], leukaemia inhibitory factors [9], hepatocyte growth factors and neuronal nitric oxide synthase are involved in muscle hypertrophy [10]. All of these modulators activate several pathways that modulate the expression of myogenic transcription factors.
MicroRNAs (miRNAs) have gained tremendous attention and provide a new avenue for understanding the regulatory mechanism of skeletal muscle development. miRNAs are evolutionary conserved small RNAs that have been identified as post-transcriptional regulators to suppress the expression of target genes. The suppression of gene expression is mediated by the binding of miRNA to the 3′ untranslated region (UTR) of the target mRNA [11]. miRNAs have been found to be involved in the regulation of various pathways that contribute to the modulation of several diseases, as a single miRNA can target several mRNAs. miRNAs are expressed in specific tissues, and those miRNAs that are specifically expressed in striated muscles are known as myomiRs [12]. Several myomiRs have been identified, including miR-1, miR-133a, miR-133b, miR-206, miR-486 and miR-499 [11]. Each myomiR has its own specific or overlapping target mRNA, which functions in promoting myoblast proliferation and differentiation and is differentially expressed during myogenesis. During the differentiation of myoblasts, the expression of miR-133b, miR-206 and miR-486 is elevated, resulting in the downregulation of PAX7 mRNA that promotes myogenic differentiation [5]. Thus, miRNAs play an irreplaceable role in the regulation of skeletal muscle differentiation.
Elucidation of the involvement of myomiRs during differentiation of human satellite cells provides current information on possible interactions between transcription factors, myomiRs and their target mRNAs, especially when modulated by dietary compounds. We proposed vitamin E as a promising agent to modulate the expression of myomiRs. Vitamin E consists of α-, β-, γ- and δ-tocopherol and α-, β-, γ- and δ-tocotrienol, all of which are potent lipid-soluble antioxidants. Vitamin E supplements have been found to prevent muscle damage [13]. However, the molecular mechanism of vitamin E in modulating muscle health remains elusive. Besides the tocopherol isomers, a less-studied mixture of tocotrienols known as tocotrienol-rich fraction (TRF) shows a better effect than the single tocopherol isomer [14]. TRF is commonly extracted from palm oil and consists of α-tocopherol and α-, β-, γ- and δ-tocotrienol. It has been reported to protect against oxidative damage and suppress reactive oxygen species (ROS) production [15]. A previous study showed that TRF prevents the replicative senescence of myoblast cells and promotes myogenic differentiation, with higher activity than the tocopherol isomer [16]. Interestingly, another study found that TRF prevents replicative senescence of fibroblasts by inhibiting the expression of miR-34a and increasing the expression of CDK4 [17]. As TRF is known to modulate miRNA expression, this study aimed to elucidate its modulatory role on myomiRs and myogenic genes during the differentiation of human myoblasts.
Cell culture and serial passaging
The Clonetics® Skeletal Muscle Myoblast Cell System containing normal human skeletal muscle myoblasts (HSMM; catalogue no. CC-2580, lot 0000257384), sourced from the quadriceps muscle of a 17-year-old female cadaver, was purchased from Lonza, USA. Cells were maintained in the growth medium, Skeletal Muscle Growth Media-2 (SkGM™-2 Medium), which consisted of SkBM™-2 Basal medium and the SkGM™-2 SingleQuots™ Kit [catalogue no. CC-3244, containing human epidermal growth factor (hEGF), dexamethasone, l-glutamine, foetal bovine serum (FBS) and gentamicin/amphotericin B (GA)]. Cell populations were trypsinised when they reached 70 to 80% confluency. For passaging, the culture medium was warmed to 37 °C and the cells were seeded at 5000–7500 cells/cm2 and were incubated at 37 °C in a humid atmosphere containing 5% carbon dioxide (CO2). At each passage, the number of divisions was calculated as log(N/n)/log 2, where N is the number of cells at the time of passage and n is the number of cells initially plated. The cells were divided into 2 groups, young cells with population doubling 14 (MPD 14) and senescent cells with population doubling 21 (MPD 21) [18].
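The population-doubling arithmetic above is straightforward; a short Python sketch with hypothetical cell counts illustrates it:

import math

# Hypothetical counts: N cells harvested at passage, n cells initially plated
N, n = 1.6e6, 2.0e5
doublings = math.log(N / n) / math.log(2)   # number of divisions = log(N/n)/log 2
print(round(doublings, 2))                  # 3.0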
Induction of differentiation
Differentiation medium was prepared by adding 2% horse serum to DMEM-F12 medium. To induce differentiation, both the groups of cells were plated at 20,000 cells/cm2 in 24-well polystyrene cell culture plates (Thermo Fisher™ Nunc™, Waltham, USA) and incubated overnight in a growth medium in a cell culture incubator (37 °C, 5% CO2). The following morning, the growth medium was replaced with differentiation medium, and for the TRF-treated groups, both young and senescent cells were treated with 50 μg/mL of TRF [18]. The cultures were then incubated for 5 days.
Determination of myogenic purity
The myogenic purity of the cultures was monitored by determining the expression of desmin, a cytoskeletal protein that is expressed only in myogenic cells and not in fibroblasts. The number of desmin-positive cells, represented as a percentage of the total number of nuclei, was determined as the myogenic purity of the cell culture, and at least 500 cells were counted. Immunocytochemistry was performed using an antibody specific for desmin, at a dilution of 1:50 (clone D33; DAKO, Denmark). The cells were washed with × 1 phosphate-buffered saline (PBS) and fixed with 100% ethanol for 10 min. The fixation agent was removed by washing three times with × 1 PBS for 5 min. Non-specific binding sites were blocked with 1% FBS diluted in PBS for 30 min. The cells were then incubated with primary antibody against desmin. Specific antibody binding was detected using Alexa Fluor 488 (Invitrogen, USA) directly coupled to the secondary antibody at a dilution of 1:500. The nuclei were fluorescently detected by Hoechst staining (Sigma, USA) at a dilution of 0.0001% w/v. All images were digitalised using ImageJ software.
Quantification of the surface area of myotubes and myonuclei
Positive fluorescent areas of five randomly chosen fields from three individual experiments were evaluated. For each treatment, the mean area of the untreated group was used to calculate the percent increase or decrease in the area of myotubes.
Quantification of fusion index
To calculate the fusion index, the number of nuclei incorporated into the myotubes (> 2 nuclei) was counted, and the ratio of this number to the total number of nuclei was determined.
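As a small illustration, the fusion index is just this ratio; a Python sketch with hypothetical counts:

def fusion_index(nuclei_in_myotubes, total_nuclei):
    """Fraction of all counted nuclei that lie inside myotubes (> 2 nuclei)."""
    return nuclei_in_myotubes / total_nuclei

print(fusion_index(nuclei_in_myotubes=140, total_nuclei=400))   # 0.35 (hypothetical counts)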
TRF preparation and treatment
Gold Tri E 70 (Sime Darby Bioganic Sdn. Bhd., Malaysia) was used in this study. This Gold Tri E 70 consists of 25% α-tocopherol and 75% tocotrienol. Further, HPLC analysis of Gold Tri E 70 revealed that it consisted of 173.6 mg/g α-tocopherol, 193.4 mg/g α-tocotrienol, 26.2 mg/g β-tocotrienol, 227.7 mg/g γ-tocotrienol and 98.2 mg/g δ-tocotrienol. TRF stock solution was freshly prepared in the dark by dissolving 1 g of Gold Tri E 70 (Sime Darby Bioganic Sdn. Bhd., Malaysia) in 1 mL of 100% ethanol (1:1) and stored at − 20 °C for not more than 1 month. TRF was activated by incubating 45 μL of TRF stock solution (1 g/1 mL) with 60 μL of FBS, overnight at 37 °C. To prepare TRF at a concentration of 50 μg/mL, 90 μL of DMEM with 10% FBS and 105 μL of 100% ethanol were added to the activated TRF, after which 600 μL of the mixture containing FBS and 100% ethanol (1:1) was added. The TRF solution (50 μg/mL) was prepared using the culture medium. Myoblasts were treated with 50 μg/mL TRF for 24 h, and untreated myoblasts were incubated with SKGM-2 medium (Lonza, USA) for proliferation analysis and with DMEM-F12 medium (Lonza, USA) for differentiation analysis. A series of dosage titrations performed in a previous study showed that 50 μg/mL of TRF treatment for 24 h produced the highest percentage of viable young and senescent myoblasts [16]. Furthermore, the myoblast cells used in the present study are similar to the ones in our previous study [16]. The media for both untreated and TRF-treated cells were changed simultaneously, and both groups of cells were harvested on the same day.
Primer design
Forward primers for microRNA were designed according to the miRNA sequences listed in the miRBase database (http://www.mirbase.org). For miR-486, the miR-486-5p form was selected for forward primer synthesis. Table 1 shows the forward primer sequences for validated miRNAs. Primers for human GAPDH, PAX7, IGF1R and PTEN were designed from sequences listed in the NIH GenBank database using Primer 3 software and blasted against sequences in the GenBank database to confirm specificity. The efficiency and specificity of each primer set were confirmed by melting profile evaluation. The primer sequences for quantitative gene expression analysis are shown in Table 2.
Table 1 Primer sequences of validated miRNAs
Table 2 Primer sequences for quantitative gene expression analysis
Total RNA was extracted from different treatment groups of myoblasts using TRI Reagent (Molecular Research Center, Cincinnati, USA) according to the manufacturer's instructions. Polyacryl carrier (Molecular Research Center, Cincinnati, USA) was added to each extracted sample to precipitate the total RNA. The extracted RNA pellet was washed with 75% ethanol and dried prior to dissolving it in RNase-free distilled water. Aliquots of total RNA were stored at − 80 °C immediately after extraction. The yield and purity of the extracted total RNA were determined using a NanoDrop spectrophotometer (Thermo Scientific, USA).
Real-time qRT-PCR
For quantitative analysis of miRNAs, reverse transcription (RT) was first performed with 10 ng of total RNA using Taqman microRNA Reverse Transcription kit (Applied Biosystems, USA) according to the manufacturer's instructions. PCR reactions were then performed to quantitate the expression levels of myomiRs (miR-206, miR-133b and miR-486) using Taqman Universal PCR Master Mix No AmpErase UNG (Applied Biosystems, USA), according to manufacturer's instructions, and Taqman microRNA assay kit (Applied Biosystems, USA) was used for the detection of myomiRs of interest. PCR amplification was performed in iQ5 Multicolor Real-Time PCR iCycler (Bio-Rad, USA) at 95 °C for 10 min, followed by 40 cycles of 95 °C for 15 s and 60 °C for 60 s. PCR reactions were performed in triplicates. The expression level of all myomiRs was normalised to the expression of RNU6B. The relative expression value (REV) of miRNAs was calculated using the equation for 2−ΔCt method of relative quantification [16, 19]:
$$ \mathrm{REV}={2}^{\mathrm{Ct}\ \mathrm{value}\ \mathrm{of}\ \mathrm{RNU}6\mathrm{B}-\mathrm{Ct}\ \mathrm{value}\ \mathrm{of}\ \mathrm{miRNA}} $$
Expression of PAX7, IGF1R and PTEN genes was analysed using KAPA SYBR Fast One Step qRT-PCR kit (KAPA Biosystems, USA) and iQ5 Multicolor Real-Time PCR iCycler (Bio-Rad, USA). Each qRT-PCR mixture contained 11.7 μL nuclease-free water, 10 μL KAPA SYBR Fast master mix, 0.3 μL RT enzyme, 1 μL of 100 μM forward primer, 1 μL of 100 μM reverse primer and 1 μL total RNA (50–100 ng). Reactions were performed in iQ5 Multicolor Real-Time PCR iCycler (Bio-Rad, USA) at 42 °C for 5 min and 95 °C for 4 min, followed by 40 cycles of 95 °C for 3 s and 60 °C for 20 s. qRT-PCR reactions were performed in triplicates. GAPDH was used as a normalisation reference gene [19, 20]. The relative expression value (REV) of the genes of interest was calculated using equation for the 2−ΔCt method of relative quantification [16, 19]:
$$ \mathrm{REV}={2}^{\mathrm{Ct}\ \mathrm{value}\ \mathrm{of}\ \mathrm{GAPDH}-\mathrm{Ct}\ \mathrm{value}\ \mathrm{of}\ \mathrm{the}\ \mathrm{gene}\ \mathrm{of}\ \mathrm{interest}} $$
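As an illustration of the 2^(−ΔCt) calculation used in both equations above, a short Python sketch (the Ct values are hypothetical, not measurements from this study):

def relative_expression(ct_reference, ct_target):
    """REV = 2 ** (Ct of the reference gene - Ct of the gene or miRNA of interest)."""
    return 2 ** (ct_reference - ct_target)

# Hypothetical Ct values, for illustration only
print(round(relative_expression(ct_reference=22.1, ct_target=26.4), 3))   # ≈ 0.051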
Determination of cell cycle profile
Untreated control and TRF-treated myoblasts were sub-cultured in 10 cm2 tissue culture dish. After 24 h of incubation, cells were harvested and prepared for cell cycle analysis using CycleTEST PLUS DNA Reagent Kit (Becton Dickinson, USA) according to the manufacturer's instructions. The cell cycle status was analysed by FACS Calibur flow cytometer (Becton Dickinson, USA) using propidium iodide (PI) as a specific fluorescent dye probe. The PI fluorescence intensity of 15,000 cells was measured for each sample.
Data are presented as the mean ± SD. Experiments were performed at least three times, and data were analysed by Student's t test and one-way analysis of variance (ANOVA). Significance was accepted at p < 0.05.
Effects of TRF on the morphology and myogenic purity of skeletal muscle myoblasts
Young myoblasts (PD 14) exhibited normal spindle shape with round nuclei (Fig. 1a, b, c), while senescent myoblasts were larger and flatter and consisted of prominent intermediate filaments (Fig. 1d, e). Senescent myoblasts exhibited different morphological features when treated with TRF. Most of the cells were spindle-shaped (Fig. 1f), which resembled the TRF-treated young myoblasts (Fig. 1c). The myogenicity of the myoblasts was more than 90% in both treatment groups (Table 3). A comparison between different treatment groups showed that the myogenicity was similar in all treatment groups.
Morphology of young and senescent myoblasts for control and TRF-treated cells. Observation was carried out under phase contrast (a, d) and fluorescence microscopy (b, c, e, f) (× 40 magnification). The myoblast cells were stained with an antibody against desmin (green), and the nuclei were stained with Hoechst (blue). Control senescent myoblasts appeared larger and flatter with the presence of more prominent intermediate filaments (d, e) compared to control young myoblasts (a, b). Some of the TRF-treated senescent myoblasts (f) remained spindle-shaped, resembling the young control, while some exhibited a flatter and larger morphology. No morphological changes were observed for TRF-treated young myoblasts (c)
Table 3 Myogenic purity of myoblasts in culture
Effects of TRF on differentiation analysis of skeletal muscle myoblasts
The fusion index (Fig. 2) and myotube surface area (Fig. 3) were greater on days 3 and 5 than that on day 1 of differentiation. Ageing causes a significant reduction in the differentiation rate of senescent myoblasts on days 3 and 5 as compared to that of young myoblasts (control) (p < 0.05), as observed by decreased fusion index and surface area of myotubes. Treatment with TRF significantly increased the differentiation rate with an increase in the fusion index and myotube surface area on days 1, 3 and 5 in both young and senescent myoblasts (p < 0.05).
Effect of TRF on myoblasts differentiation. Fusion index was measured as an index of differentiation. aDenotes p < 0.05 compared to young control, bp < 0.05 compared to senescent control, cp < 0.05 compared to young treated, dp < 0.05 compared to day 1 of the same treatment and ep < 0.05 compared to day 3 of the same treatment. Data are presented as mean ± SD, n = 3
Effect of TRF on myoblasts differentiation as measured by myotube surface area. aDenotes p < 0.05 compared to young control, bp < 0.05 compared to senescent control, cp < 0.05 compared to young treated, dp < 0.05 compared to day 1 of the same treatment and ep < 0.05 compared to day 3 of the same treatment. Data are presented as mean ± SD, n = 3
TRF treatment modulates myomiR expression
Changes in the expression of miRNAs were observed in all groups of myoblasts. miR-133b expression was reduced significantly in senescent myoblasts during proliferation phase (Fig. 4a). However, there was a significant increase in the expression of miR-133b in young myoblasts when treated with TRF (p < 0.05). During the differentiation phase, miR-133b expression was decreased in senescent myoblasts (p < 0.05). No significant change was observed in miR-133b expression when young and senescent myoblasts were treated with TRF during differentiation as compared to control group (Fig. 4b).
Effect of TRF on the expression of micro RNAs during proliferation and differentiation of young and senescent myoblasts. Expression of miR-133b (a, b), miR-206 (c, d) and miR-486 (e, f) in young control myoblasts, TRF-treated young myoblasts, senescent control myoblasts and TRF-treated senescent myoblasts. aDenotes p < 0.05 compared to young control, bp < 0.05 compared to senescent control and cp < 0.05 compared to young TRF-treated myoblasts. Data are presented as relative expression value (REV) normalised to RNU6B expression (mean ± SD, n = 3)
miR-206 expression was reduced significantly in senescent myoblasts during the proliferation phase (Fig. 4c). Upon TRF treatment, the expression of miR-206 increased significantly in young myoblasts (p < 0.05). During the differentiation phase, miR-206 expression in senescent myoblasts decreased significantly (p < 0.05). However, when treated with TRF, the expression of miR-206 increased significantly in young myoblasts from day 1 until day 3 of the differentiation phase and increased only on day 1 of differentiation phase in senescent myoblasts (Fig. 4d).
miR-486 expression was reduced significantly in senescent myoblasts during proliferation and differentiation phases (Fig. 4e, f). However, upon TRF treatment, there was a significant increase in miR-486 expression in senescent myoblasts during the proliferation phase and on days 1 and 5 during the differentiation phase (p < 0.05).
TRF treatment modulates the expression of target genes and upstream regulators of myomiRs
Senescent myoblasts showed significantly decreased PAX7 expression during the proliferation phase (p < 0.05) (Fig. 5a). Upon TRF treatment, PAX7 expression was significantly increased in young myoblasts during proliferation. During the differentiation phase, there was a significant reduction in PAX7 expression in senescent myoblasts (p < 0.05). Treatment with TRF during the differentiation phase caused a significant decrease in PAX7 expression in young myoblasts on days 3 and 5 and on days 1 and 5 in senescent myoblasts (Fig. 5b).
Effect of TRF on the downstream genes expression. Expression of PAX7 (a, b), PTEN (c, d) and IGF1R (e, f) in young control myoblasts, TRF-treated young myoblasts, senescent control myoblasts and TRF-treated senescent myoblasts. aDenotes p < 0.05 compared to young control, bp < 0.05 compared to senescent control and cp < 0.05 compared to young treated myoblasts. Data are presented as relative expression value (REV) normalised to GAPDH expression (mean ± SD, n = 3)
PTEN expression increased significantly in senescent myoblasts during proliferation and differentiation phases (p < 0.05) (Fig. 5c, d). TRF treatment caused a significant reduction in PTEN expression in young and senescent myoblasts during the proliferation phase. During the differentiation phase, treatment with TRF decreased PTEN expression in senescent myoblasts on day 1 of differentiation (Fig. 5d).
Senescent myoblasts exhibited significantly decreased IGF1R expression during the proliferation phase and on day 3 of the differentiation phase (p < 0.05) (Fig. 5e, f). TRF treatment caused a significant increase in IGF1R expression in young and senescent myoblasts during the proliferation phase. During the differentiation phase, treatment with TRF caused a significant increase in IGF1R expression in senescent myoblasts on day 1, which decreased on day 5 (Fig. 5f).
Effects of TRF on cell cycle profile
Analysis of cell cycle profile at day 0 showed that myoblast population in the G0/G1 phase was significantly higher and in the S phase, the population of senescent cells was significantly lower than those in the young cells (p < 0.05) (Fig. 6). Treatment with TRF caused a significant reduction in the percentage of senescent myoblasts in the G0/G1 phase and a significant increase in the percentage of young and senescent myoblasts in the S phase (p < 0.05) (Fig. 6e).
Cell cycle profile of young and senescent myoblasts at day 0 of differentiation. Flow cytometry analysis of cell cycle progression of young control myoblasts (a), TRF-treated young myoblasts (b), senescent control myoblasts (c) and TRF-treated senescent myoblasts (d). Quantitative analysis of cell cycle progression of young and senescent myoblasts (e). aDenotes p < 0.05 compared to young control. bp < 0.05 compared to senescent control and cp < 0.05 compared to young treated group. Data are expressed as mean ± SD (n = 6)
On day 1 of differentiation, the percentage of myoblasts in the G0/G1 phase was significantly higher in senescent cells than that in the young cells (p < 0.05), while the percentage of cells in the S and G2/M phases was significantly reduced in both groups of myoblasts (Fig. 7). A comparison of the cell cycle profile between days 0 and 1 of differentiation showed a significant difference in the percentage of cells in the G0/G1, S and G2/M phases in both groups (p < 0.05).
Cell cycle profile of young and senescent myoblasts at day 1 of differentiation. Flow cytometry analysis of cell cycle progression of young control myoblasts (a), TRF-treated young myoblasts (b), senescent control myoblasts (c) and TRF-treated senescent myoblasts (d). Quantitative analysis of cell cycle progression of young and senescent myoblasts (e). aDenotes p < 0.05 compared to young control, and dp < 0.05 compared to same treatment at day 0 of differentiation. Data are expressed as mean ± SD (n = 6)
The integrity of myoblast cell structure as well as the formation of myotubes is maintained by the cytoskeleton, cell membrane and the extracellular matrix (ECM) [21]. Young myoblasts are morphologically spindle-shaped and elongated structures. In contrast, senescent myoblasts manifest morphological changes with a flattened structure and larger cytoplasm. Upon induction of differentiation, myoblasts fuse together to form a multinuclear myotube. Young myoblasts form large-branched myotubes, whereas senescent myoblasts form smaller myotubes [16]. As cells senesce, the level of reactive oxygen species (ROS) increases proportionately. In addition, the level of antioxidants is inversely proportional to the level of ROS throughout the senescence process. Accumulation of ROS in the cells induces oxidative stress, which causes oxidative damage to macromolecules such as DNA, RNA, protein and lipid [22]. Consequently, several pathways and cellular metabolism are altered, which leads to changes in the cytoskeleton, cell membrane and the ECM [23], resulting in phenotypic changes in senescent myoblasts, as observed in the present study. Therefore, introducing antioxidants to the altered system is predicted to reduce the oxidative stress, thus delaying the senescence of myoblasts.
Vitamin E, particularly TRF, plays a pivotal role in scavenging peroxyl radicals and prevents the peroxidation of macromolecules, thus improving the oxidative status of cells [24]. Vitamin E consists of two isomers, tocopherol and tocotrienol. Tocotrienol has been reported to possess better antioxidant activity and effectively reduces the oxidative stress in lipophilic environment [25]. In this study, TRF treatment ameliorated the morphological structure of senescent myoblasts and showed similar features to young myoblasts. A previous study showed that TRF treatment of senescent fibroblast cells reversed the morphological structure to form young fibroblasts [26]. Similarly, in a previous study by Khor et al., it was reported that senescent myoblasts treated with TRF appeared to have similar morphological features as young myoblast cells [16]. This observation could be due to the modulation of protein expression, which is involved in maintaining cell structure. Matrix metalloproteinase (MMP), responsible for the degradation of procollagen, is highly expressed in senescent cells. This protein alters cell structure maintenance in senescent cells [27]. However, TRF increases the expression of procollagen in senescent fibroblast cells [28], hence improving the morphological structure of senescent cells as observed in this study.
Homeostasis between proliferation and differentiation of myoblast cells during myogenesis is tightly regulated to prevent uncontrolled proliferation [29]. In the present study, the percentage of senescent myoblast cell population was higher in the G0/G1 phase and lower in the S phase than young cells, during proliferation. A similar result was also observed in differentiated senescent myoblasts. Interestingly, TRF treatment of both proliferating young and senescent myoblast cells enhanced the cell cycle progression as the cell population in the G0/G1 and S phases reduced and increased, respectively. However, during the induction of differentiation, TRF promotes cell cycle withdrawal in young myoblast cells. Like other somatic cells, the proliferation capacity of myoblast cells is limited by replicative senescence due to progressive loss of telomere length [30]. In order to prevent tumour progression, cell cycle checkpoints act as barriers to prevent the replication of damaged DNA whereby cells are arrested at the G0/G1 phase [29].
All cell cycle checkpoints are regulated by several cyclin-dependent kinases (CDKs) and cyclin proteins. Depending on the stimuli or cell environment such as DNA damage response (telomere shortening), CDK inhibitors such as p16 or p21 are expressed to inhibit the formation of CDK/cyclin complex, thus arresting cell cycle progression at the G0/G1 phase [31]. Previous studies have shown that TRF treatment of senescent fibroblast cells increased the expression of telomerase and enhanced the elongation of telomere [26]. Furthermore, γ-tocotrienol treatment of senescent cells downregulates p16, cyclin D1 and hypophosphorylated-Rb, all of which are involved in cell cycle arrest [32]. Thus, TRF is postulated to modulate telomerase expression, increase the expression of proteins involved in cell cycle to prevent cell cycle arrest and promote proliferation of myoblasts. Treatment with TRF increased the percentage of myoblast cells in the G0/G1 phase on day 1 of differentiation induction indicating the promotion of cell cycle arrest and inhibition of cell proliferation for differentiation to occur. This could be due to TRF, which is dependent on cell environment or stimuli. Previous studies have shown that γ- and δ-tocotrienol stimulates the differentiation of osteoblasts, which in turn enhance bone formation [33]. Furthermore, a previous study has shown that combined activity of TRF is much better in promoting myoblast differentiation than single α-tocotrienol treatment [16]. However, complete understanding of the mechanism of TRF in promoting the proliferation and differentiation of myoblasts remains elusive.
At the molecular level, regulation of proliferation and differentiation of myoblast cells during myogenesis is associated with several genes and myomiRs (miR) [11]. During cell renewal, quiescent satellite cells upregulate the expression of PAX7 and downregulate the expression of its myogenic regulatory factor (MRF) gene targets, MYOD1 and MYOG. Expression of PAX7 promotes re-entry of quiescent satellite cells into cell cycle progression and enhances the proliferation of myoblasts [6]. In the present study, the expression of PAX7 gene was increased in differentiated myoblasts, and this increased expression remained constant after several days of differentiation induction. However, TRF treatment increased the expression of PAX7 gene during proliferation and downregulated its expression during differentiation. Increased expression of PAX7 followed by suppression of myogenesis inhibitors Id2 and Id3 has been reported to upregulate the expression of MYOD1 and MYOG [34]. MYOD1 is directly involved with the activation of p21, cyclin D3 and Rb expression, which are critical for irreversible cell cycle withdrawal of myoblast cells from the G0/G1 phase during differentiation and terminal differentiation phases [35].
PAX7 expression is regulated by miR-206 and miR-486. As MYOD1 expression is increased, this transcription factor that has its binding site in the promoter regions of miR-206 and miR-486, facilitates the expression of these two myomiRs [5]. Interestingly, TRF treatment upregulated the expression of miR-206 during proliferation, and this expression was further upregulated during differentiation. Another myomiR, miR-486, was upregulated when treated with TRF in proliferated and differentiated myoblasts. In contrast, TRF treatment did not upregulate the expression of miR-486 during differentiation. Previous studies have shown that the suppression of the PAX7 gene by miR-206 and miR-486 enhanced the commitment of myoblast cells to differentiate [5]. However, overexpression of PAX7 promotes uncontrolled proliferation [36]. Therefore, in the present study, TRF might play a role in maintaining the proliferation and differentiation of myoblasts by modulating the expression of PAX7 gene, miR-206 and miR-486, without disturbing the homeostasis of myogenesis.
Various modulators that regulate the activity of satellite cells and utilise various signalling pathways, including the IGF1R/P13K pathway, control myogenesis. This pathway mediates the functions of IGF as both IGF-1 and IGF-2 bind to IGF1R. IGF1R was downregulated during the late differentiation stage due to the presence of miR-133 response element (MRE) located in the 3′UTR [9, 37]. This would explain the direct effects of miR-133 towards IGF1R as a negative regulator of PI3K/Akt. miR-133 downregulates Akt phosphorylation via inhibition of IGF1R protein, which is responsible for glucose metabolism, cell proliferation and apoptosis [37, 38]. A reduction in Akt phosphorylation was observed during the differentiation of C2C12 myoblasts. miR-133 is important to regulate and balance the activity of IGF in muscle cells. IGF1R was found to be deregulated in rhabdomyosarcoma (RMS) where its expression increased consistently and, hence, is suggested as an initial factor responsible for oncogenic transformation of muscle cells [39]. Prolonged and consistent IGF1R expression resulted in increased proliferation and prevention of the differentiation phase [37].
Akt activation also activates mTOR and inhibits GSK3B, a negative regulator of protein synthesis and muscle growth. PTEN is a PI3K phosphatase that deactivates Akt, inhibiting muscle cell growth and muscle cell survival [40]. Decreased PTEN expression stimulates the PI3K/Akt pathway for the promotion and expression of myogenic transcription factors such as MYOD1, MYOG and Myf5 during myoblast proliferation and differentiation. A previous study reported reduced expression of miR-486 in Duchenne muscular dystrophy [41] and ageing [42]. miR-486 acts as a mediator for MYOD1 and regulates the PI3K/Akt pathway. miR-486 is transcribed from an intron of the Ank1 gene consisting of 39a exon code for muscle-specific Ank1 protein, which connects the sarcomere to the sarcoplasmic reticulum [43]. The expression of Ank1 gene transcript is controlled by a promoter site that contains two conserved E-boxes, which interact with MYOD [43]. An increase in the expression of miR-486 by TRF shows that TRF possesses the ability to delay the ageing phenotype and sarcopenia during ageing.
In the present study, TRF treatment has also been shown to increase the expression of myomiRs in young and senescent myoblasts in both proliferation and differentiation phases. Thus, TRF might play a role in the biogenesis of myomiRs directly or indirectly, which involves several processes [11]. Initially, the myomiR is transcribed in the nucleus as a primary transcript or pri-myomiR with a stem-loop structure. Here, the transcription process is modulated by various transcription factors. The pri-myomiR is subsequently processed to form pre-myomiR by the removal of both the end strands. Later, in the cytoplasm, the pre-myomiR loop is cleaved out and unwound by a helicase to form a single-stranded mature myomiR. These modification processes involve various proteins and may be one of the direct or indirect targets of TRF. At the transcriptional level, miR-206 is regulated by transcription factor FOXO3a [11]. A previous study showed that TRF treatment increases the expression of FOXO3a gene [44]. Another finding also reported that γ- and δ-tocotrienol increased the expression of FOXO3a gene [45]. Therefore, TRF is predicted to modulate the biogenesis of myomiR via regulation of its transcription factor.
As there was a decrease in the expression of myomiRs in senescent myoblasts, we proposed another mechanism for TRF-mediated regulation of myomiRs, which may be attributed to its radical-scavenging effect. The RNase III enzyme, Dicer, is responsible for cleaving the loop out of pre-miRNA (a major steps in the biogenesis process) to produce double-stranded mature miRNAs [11]. This enzyme is inhibited by various stress factors including ROS, which is accumulated during ageing [46]. Another finding showed that Dicer expression decreased with increased level of oxidative stress and DNA damage. As TRF effectively reduced the levels of ROS, especially in senescent cells, TRF is suggested to modulate myomiRs by reducing the oxidative stress, which in turn enhances the activity and expression of Dicer. Therefore, TRF may be involved in the biogenesis of myomiRs via modulation of Dicer expression. Hence, to verify the specificity of TRF response on Dicer expression, a further study is required by using other antioxidants or by inhibiting the function of miRNA by anti-miR oligonucleotides.
TRF naturally exists as a mixture of various forms of vitamin E; all tocotrienol forms are present and highly concentrated in TRF. However, each cell type has its own preference for particular forms of vitamin E. A previous study showed that the concentrations of α- and δ-tocotrienol were the highest in myoblasts [18], while γ- and δ-tocotrienol were the most abundant forms in fibroblasts [47]. As previously described, TRF treatment showed a better effect than treatment with a single isomer. Hence, the preferential and selective uptake of particular vitamin E forms by the cell may account for the synergistic effect between the vitamin E forms, which depends on the cellular environment. Figure 8 summarises the modulatory effect of TRF on the expression of myomiRs and the myogenic regulatory factors. Our results revealed that TRF is a potential muscle differentiation agent that modulates the expression of myomiRs and their target genes involved in myoblast differentiation during myogenesis.
Modulatory effects of TRF on the expression of myomiRs and myogenic regulatory factors
The findings of the present study demonstrated that tocotrienol-rich fraction with antioxidant and non-antioxidant properties altered the expression of myomiRs, specifically miR-133b, miR-206 and miR-486, thereby modifying the expression of their target genes that are involved in myogenesis to promote muscle differentiation in young and senescent myoblasts.
Yin H, Price F, Rudnicki MA. Satellite cells and the muscle stem cell niche. Physiol Rev. 2013;93(1):23–67.
Sakiyama K, Abe S, Tamatsu Y, Ide Y. Effects of stretching stress on the muscle contraction proteins of skeletal muscle myoblasts. Biomed Res. 2005;26(2):61–8.
Dumont NA, Wang YX, Rudnicki MA. Intrinsic and extrinsic mechanisms regulating satellite cell function. Development. 2015;142(9):1572–81.
Sousa-Victor P, Munoz-Canoves P. Regenerative decline of stem cells in sarcopenia. Mol Asp Med. 2016;50:109–17.
Dey BK, Gagan J, Dutta A. miR-206 and-486 induce myoblast differentiation by downregulating Pax7. Mol Cell Biol. 2011;31(1):203–14.
Zammit PS, Relaix F, Nagata Y, Ruiz AP, Collins CA, Partridge TA, et al. Pax7 and myogenic progression in skeletal muscle satellite cells. J Cell Sci. 2006;119(9):1824–32.
Yamaguchi A, Sakuma K, Fujikawa T, Morita I. Expression of specific IGFBPs is associated with those of the proliferating and differentiating markers in regenerating rat plantaris muscle. J Physiol Sci. 2013;63(1):71–7.
Bowser M, Herberg S, Arounleut P, Shi X, Fulzele S, Hill WD, et al. Effects of the activin A–myostatin–follistatin system on aging bone and muscle progenitor cells. Exp Gerontol. 2013;48(2):290–7.
Spangenburg EE, Booth FW. Multiple signaling pathways mediate LIF-induced skeletal muscle satellite cell proliferation. Am J Physiol Cell Physiol. 2002;283(1):C204–C11.
Wozniak AC, Anderson JE. Nitric oxide-dependence of satellite stem cell activation and quiescence on normal skeletal muscle fibers. Dev Dyn. 2007;236(1):240–50.
Horak M, Novak J, Bienertova-Vasku J. Muscle-specific microRNAs in skeletal muscle development. Dev Biol. 2016;410(1):1–13.
McCarthy JJ. The MyomiR network in skeletal muscle plasticity. Exerc Sport Sci Rev. 2011;39(3):150.
Santos SA, Silva ET, Caris AV, Lira FS, Tufik S, Dos Santos RV. Vitamin E supplementation inhibits muscle damage and inflammation after moderate exercise in hypoxia. J Hum Nutr Diet. 2016;29(4):516–22.
Ali SF, Woodman OL. Tocotrienol rich palm oil extract is more effective than pure tocotrienols at improving endothelium-dependent relaxation in the presence of oxidative stress. Oxidative Med Cell Longev. 2015;2015:10.
Budin SB, Han KJ, Jayusman PA, Taib IS, Ghazali AR, Mohamed J. Antioxidant activity of tocotrienol rich fraction prevents fenitrothion-induced renal damage in rats. J Toxicol Pathol. 2013;26(2):111–8.
Khor SC, Razak AM, Wan Ngah WZ, Mohd Yusof YA, Abdul Karim N, Makpol S. The tocotrienol-rich fraction is superior to tocopherol in promoting myogenic differentiation in the prevention of replicative senescence of myoblasts. PLoS One. 2016;11(2):e0149265.
Gwee Sian Khee S, Mohd Yusof YA, Makpol S. Expression of senescence-associated microRNAs and target genes in cellular aging and modulation by tocotrienol-rich fraction. Oxidative Med Cell Longev. 2014;2014:12.
Khor SC, Wan Ngah WZ, Mohd Yusof YA, Abdul Karim N, et al. Tocotrienol-rich fraction ameliorates antioxidant defense mechanisms and improves replicative senescence-associated oxidative stress in human myoblasts. Oxidative Med Cell Longev. 2017;2017:17.
Szczesny B, Olah G, Walker DK, Volpi E, Rasmussen BB, Szabo C, et al. Deficiency in repair of the mitochondrial genome sensitizes proliferating myoblasts to oxidative damage. PLoS One. 2013;8(9):e75201.
Mocchegiani E, Costarelli L, Giacconi R, Malavolta M, Basso A, Piacenza F, et al. Vitamin E–gene interactions in aging and inflammatory age-related diseases: implications for treatment. A systematic review. Ageing Res Rev. 2014;14:81–101.
Adams JC, Watt FM. Regulation of development and differentiation by the extracellular matrix. Development. 1993;117(4):1183–98.
Davalli P, Mitic T, Caporali A, Lauriola A, D'Arca D. ROS, cell senescence, and novel molecular mechanisms in aging and age-related diseases. Oxidative Med Cell Longev. 2016;2016:18.
Huang X, Chen L, Liu W, Qiao Q, Wu K, Wen J, et al. Involvement of oxidative stress and cytoskeletal disruption in microcystin-induced apoptosis in CIK cells. Aquat Toxicol. 2015;165:41–50.
Niki E. Role of vitamin E as a lipid-soluble peroxyl radical scavenger: in vitro and in vivo evidence. Free Radic Biol Med. 2014;66:3–12.
Viola V, Pilolli F, Piroddi M, Pierpaoli E, Orlando F, Provinciali M, et al. Why tocotrienols work better: insights into the in vitro anti-cancer mechanism of vitamin E. Genes Nutr. 2012;7(1):29.
Makpol S, Durani LW, Chua KH, Mohd Yusof YA, Wan Ngah WZ, et al. Tocotrienol-rich fraction prevents cell cycle arrest and elongates telomere length in senescent human diploid fibroblasts. Biomed Res Int. 2011;2011(11):506171.
Hiyama A, Sakai D, Risbud MV, Tanaka M, Arai F, Abe K, et al. Enhancement of intervertebral disc cell senescence by WNT/β-catenin signaling–induced matrix metalloproteinase expression. Arthritis Rheum. 2010;62(10):3036–47.
Makpol S, Jam FA, Khor SC, Ismail Z, Mohd Yusof YA, et al. Comparative effects of biodynes, tocotrienol-rich fraction, and tocopherol in enhancing collagen synthesis and inhibiting collagen degradation in stress-induced premature senescence model of human diploid fibroblasts. Oxidative Med Cell Longev. 2013;2013:8.
Walsh K, Perlman H. Cell cycle exit upon myogenic differentiation. Curr Opin Genet Dev. 1997;7(5):597–602.
Zhu CH, Mouly V, Cooper RN, Mamchaoui K, Bigot A, Shay JW, et al. Cellular senescence in human myoblasts is overcome by human telomerase reverse transcriptase and cyclin-dependent kinase 4: consequences in aging muscle and therapeutic strategies for muscular dystrophies. Aging Cell. 2007;6(4):515–23.
Harley CB, Futcher AB, Greider CW. Telomeres shorten during ageing of human fibroblasts. Nature. 1990;345(6274):458.
Zainuddin A, Chua K-H, Tan J-K, Jaafar F, Makpol S. γ-Tocotrienol prevents cell cycle arrest in aged human fibroblast cells through p16INK4a pathway. J Physiol Biochem. 2017;73(1):59–65.
Chin K-Y, Ima-Nirwana S. Effects of annatto-derived tocotrienol supplementation on osteoporosis induced by testosterone deficiency in rats. Clin Interv Aging. 2014;9:1247.
Kumar D, Shadrach JL, Wagers AJ, Lassar AB. Id3 is a direct transcriptional target of Pax7 in quiescent satellite cells. Mol Biol Cell. 2009;20(14):3170–7.
Cenciarelli C, De Santa F, Puri PL, Mattei E, Ricci L, Bucci F, et al. Critical role played by cyclin D3 in the MyoD-mediated arrest of cell cycle during myoblast differentiation. Mol Cell Biol. 1999;19(7):5203–17.
Riuzzi F, Sorci G, Sagheddu R, Sidoni A, Alaggio R, Ninfo V, et al. RAGE signaling deficiency in rhabdomyosarcoma cells causes upregulation of PAX7 and uncontrolled proliferation. J Cell Sci. 2014;127(8):1699–711.
Huang M-B, Xu H, Xie S-J, Zhou H, Qu L-H. Insulin-like growth factor-1 receptor is regulated by microRNA-133 during skeletal myogenesis. PLoS One. 2011;6(12):e29173.
Schiaffino S, Mammucari C. Regulation of skeletal muscle growth by the IGF1-Akt/PKB pathway: insights from genetic models. Skelet Muscle. 2011;1(1):4.
Werner H, Maor S. The insulin-like growth factor-I receptor gene: a downstream target for oncogene and tumor suppressor action. Trends Endocrinol Metab. 2006;17(6):236–42.
Crackower MA, Oudit GY, Kozieradzki I, Sarao R, Sun H, Sasaki T, et al. Regulation of myocardial contractility and cell size by distinct PI3K-PTEN signaling pathways. Cell. 2002;110(6):737–49.
Alexander MS, Casar JC, Motohashi N, Vieira NM, Eisenberg I, Marshall JL, et al. MicroRNA-486–dependent modulation of DOCK3/PTEN/AKT signaling pathways improves muscular dystrophy–associated symptoms. J Clin Invest. 2014;124(6):2651–67.
Lai CY, Wu YT, Yu SL, Yu YH, Lee SY, Liu CM, et al. Modulated expression of human peripheral blood microRNAs from infancy to adulthood and its role in aging. Aging Cell. 2014;13(4):679–89.
Small EM, O'Rourke JR, Moresi V, Sutherland LB, McAnally J, Gerard RD, et al. Regulation of PI3-kinase/Akt signaling by muscle-enriched microRNA-486. Proc Natl Acad Sci. 2010;107(9):4218–23.
Durani L, Jaafar F, Tan J, Tajul Arifin K, Mohd Yusof Y, Wan Ngah W. Targeting genes in insulin-associated signalling pathway, DNA damage, cell proliferation and cell differentiation pathways by tocotrienol-rich fraction in preventing cellular senescence of human diploid fibroblasts. Clin Ter. 2015;166:e365–e73.
Shin-Kang S, Ramsauer VP, Lightner J, Chakraborty K, Stone W, Campbell S, et al. Tocotrienols inhibit AKT and ERK activation and suppress pancreatic cancer cell proliferation by suppressing the ErbB2 pathway. Free Radic Biol Med. 2011;51(6):1164–74.
Smith-Vikos T, Slack FJ. MicroRNAs and their roles in aging. J Cell Sci. 2012;125(1):7–17.
Jaafar F, Abdullah A, Makpol S. Cellular uptake and bioavailability of tocotrienol-rich fraction in SIRT1-inhibited human diploid fibroblasts. Sci Rep. 2018;8(1):10471.
This research study was financially supported by the Ministry of Higher Education under the Fundamental Research Grant Scheme FRGS/2/2014/SKK01/UKM/01/1 and Universiti Kebangsaan Malaysia Grant UKM-FF-2014-301. The authors would like to express gratitude to all researchers and staff of the Biochemistry Department, Faculty of Medicine, Universiti Kebangsaan Malaysia Medical Centre.
Department of Biochemistry, Faculty of Medicine, Level 17, Preclinical Building, Universiti Kebangsaan Malaysia Medical Centre (UKMMC), Jalan Yaakob Latif, Bandar Tun Razak, Cheras, 56000, Kuala Lumpur, Malaysia
Azraul Mumtazah Razak, Shy Cian Khor, Faizul Jaafar, Norwahidah Abdul Karim & Suzana Makpol
AMR performed the experiments, analysed the data and drafted the manuscript. SCK and FJ analysed the data and drafted the manuscript. SM and NAK designed the study, interpreted the data and revised the manuscript. All authors have read and approved the final manuscript.
Correspondence to Suzana Makpol.
Razak, A.M., Khor, S.C., Jaafar, F. et al. Targeting myomiRs by tocotrienol-rich fraction to promote myoblast differentiation. Genes Nutr 13, 31 (2018). https://doi.org/10.1186/s12263-018-0618-2
Myoblast
Tocotrienol
Published: 25 January 2019
Orthogonality is superiority in piecewise-polynomial signal segmentation and denoising
Michaela Novosadová1,
Pavel Rajmic1 (ORCID: orcid.org/0000-0002-8381-4442) &
Michal Šorel2
EURASIP Journal on Advances in Signal Processing, volume 2019, Article number: 6 (2019)
Segmentation and denoising of signals often rely on the polynomial model, which assumes that every segment is a polynomial of a certain degree and that the segments are modeled independently of each other. Segment borders (breakpoints) correspond to positions in the signal where the model changes its polynomial representation. Several signal denoising methods successfully combine the polynomial assumption with sparsity. In this work, we build on this and show that using orthogonal polynomials instead of other systems in the model is beneficial when segmenting signals corrupted by noise. The switch to orthogonal bases brings better resolution of the breakpoints, removes the need for including additional parameters and their tuning, and brings numerical stability. Last but not least, it comes for free!
Polynomials are an essential instrument in signal processing. They are indispensable in theory, as in the analysis of signals and systems [1] or in signal interpolation and approximation [2, 3], but they have been used also in specialized application areas such as blind source separation [4], channel modeling and equalization [5], to name a few. Orthonormal polynomials often play a special role [2, 6].
Segmentation of signals is one of the important applications in digital signal processing, while the most prominent sub-area is the segmentation of images. A plethora of methods exists which try to determine individual non-overlapping parts of the signal. The neighboring segments should be identified such that they contrast in their "character." For digital signal processing, such a vague word has to be mathematically expressed in terms of signal features, which then differ from segment to segment. As examples, the segments could differ in their level, statistics, frequency content, texture properties, etc. In this article, we rely on the assumption of smoothness of individual segments, which means that segments can be distinguished by their respective underlying polynomial description. The point in signal where the character changes is called a breakpoint, i.e., a breakpoint indicates the location of segment border. The features involved in the segmentation are chosen or designed a priori (i.e., model-based class), while the other class of methods aims at learning discriminative features from the training data [7, 8].
Within the first of the two classes, i.e., within approaches based on modeling, one can distinguish explicit and implicit types of models. In the "explicit" type, the signal is modeled such that it is a composition of sub-signals which often can be expressed analytically [9–16]. In the "implicit" type of models, the signal is characterized by features that are derived from the signal by using an operator [17–21]. The described differences are in an analogy to the "synthesis" and "analysis" approaches, respectively, recognized in the sparse signal processing literature [22, 23]. Although the two types of models are different in their nature, connections can be found, for example, the recent article [24] showing the relationship between splines and generalized total variation regularization or [21] discussing the relationship between "trend filtering" and spline-based smoothers.
Note that signal denoising and segmentation often rely on similar or even identical models. Indeed, when borders of segments are found, denoising can be easily done as postprocessing. Conversely, the byproducts of denoising can be used to detect segment borders. This paradigm is also true for our model, which can provide segmentation and signal denoising/approximation at the same time. As examples of other works that aim at denoising but can be used for segmentation as well, we cite [19, 20, 25, 26].
The method described in this article belongs to the "explicit" type of models. We work with noisy one-dimensional signals, and our underlying model assumes that individual segments can be well approximated by polynomials. The number of segments is supposed to be much lower than the number of signal samples—this natural assumption at the same time justifies the use of sparsity measures involved in segment identification. The model and algorithm presented for 1D in this article can be easily generalized to a higher dimension. For example, images are commonly modeled as piecewise smooth 2D-functions [27–31].
In [9, 13, 15], the authors build explicit signal segmentation/denoising models based on the standard polynomial basis $\{1, t, t^{2}, \ldots, t^{K}\}$. In our previous articles, e.g., [11, 32], we used this basis as well. This article shows that modeling with orthonormal bases instead of the standard basis (which is clearly non-orthogonal) brings significant improvement in the detection of the signal breakpoints and thus in the eventual denoising performance. It is worth noting that this improvement comes virtually for free, since the cost of generating an orthonormal basis is negligible compared to the cost of the algorithm which finds, in the iterative fashion, the numerical solution with such a basis fixed.
It is worth noting that the method closest to ours is the one from [9], which was actually the initial inspiration of our work in the discussed direction. Similar to us, the authors of [9] combine sparsity, overcompleteness, and a polynomial basis; however, they approximate the solution to the model by greedy algorithms, while we rely on convex relaxation techniques. The other, above-cited methods do not exploit overcompleteness. Out of those, an interesting study [21] is similar to our model in that it allows piecewise polynomials of arbitrary (fixed) degree; however, it can be shown that their model does not allow jumps in signals, while our model does. This makes a significant difference, as will be shown later in the article.
The article is structured as follows: Section 2 introduces the mathematical model of segmentation/denoising, and it suggests the eventual optimization problem. The numerical solution to this problem by the proximal algorithm is described in Section 3. Finally, Sections 4 and 5 provide the description of experiments and analyze the results.
Problem formulation
In continuous time, a polynomial signal of degree K can be written as a linear combination of basis polynomials:
$$ y(t) = x_{0} p_{0}(t) + x_{1} p_{1} (t) + \ldots + x_{K} p_{K}(t),\quad t\in\ \mathbb{R}, $$
where xk, k=0,…,K, are the expansion coefficients in such a basis. If the standard basis is used, i.e.,
$$ p_{0}(t)=1, p_{1}(t)=t, \ldots, p_{K}(t)=t^{K}, $$
the respective scalars xk correspond to the intercept, slope, etc.
Assume a discrete-time setting and limit the time instants to n=1,…,N. Elements of a polynomial signal are then represented as
$$ {}y[\!n] = x_{0} p_{0}[\!n] + x_{1} p_{1}[\!n] + \ldots + x_{K} p_{K}[\!n],\quad n=1,\ldots,N. $$
In this formula, the signal is constructed by a linear combination of sampled polynomials.
Assuming the polynomials pk,k=0,…,K, are fixed, every signal given by (3) is determined uniquely by the set of coefficients {xk}. In contrast to this, we introduce a time index also to these coefficients, allowing them to change in time:
$$ \begin{aligned} y[\!n]& = x_{0}[\!n] p_{0}[\!n] + x_{1}[\!n] p_{1}[\!n] + \ldots + x_{K}[\!n] p_{K}[\!n],\\ n&=1,\ldots,N. \end{aligned} $$
This may seem meaningless at this moment; however, such an excess of parameters will play a principal role shortly. It will be convenient to write this relation in a more compact form, for which we need to introduce the notation
$$ \mathbf{y} \,=\, \left[ \begin{array}{c} y[\!1]\\ \vdots\\ y[\!N] \end{array}\right],\ \mathbf{x}_{k} \,=\, \left[ \begin{array}{c} x_{k}[\!1]\\ \vdots\\ x_{k}[\!N] \end{array}\right], \mathbf{P}_{k} \,=\, \left[ \begin{array}{ccc} p_{k}[\!1] & & 0\\ & \ddots & \\ 0 & & p_{k}[\!N] \end{array}\right] $$
for k=0,…,K. After this, we can write
$$ \mathbf{y} = \mathbf{P}_{0} \mathbf{x}_{0} + \ldots + \mathbf{P}_{K} \mathbf{x}_{K} $$
or even more shortly
$$ \mathbf{y} = \mathbf{P} \mathbf{x} = \left[\mathbf{P}_{0} | \cdots | \mathbf{P}_{K}\right]\left[ \begin{array}{c} \mathbf{x}_{0}\\[-1ex] \text{---}\\[-1ex] \vdots \\[-1ex] \text{---} \\[-1ex] \mathbf{x}_{K} \end{array}\right], $$
where the length of the vector x is (K+1) times N and P is a fat matrix of size N×(K+1)N.
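To make the construction in (5)–(7) concrete, the following minimal NumPy sketch builds P by stacking the diagonal matrices P_k and checks that constant coefficient vectors reproduce an ordinary polynomial; the function and variable names are ours, not from the paper.

```python
# Minimal sketch of the overcomplete parameterization (5)-(7); names are ours.
import numpy as np

def build_P(polys):
    """Stack the diagonal matrices P_k = diag(p_k) into P = [P_0 | ... | P_K]."""
    return np.hstack([np.diag(p) for p in polys])               # shape (N, (K+1)*N)

N, K = 8, 2
t = np.arange(1, N + 1, dtype=float)
polys = [t ** k for k in range(K + 1)]                          # standard basis 1, t, t^2
P = build_P(polys)

# constant coefficient vectors x_k reproduce an ordinary degree-K polynomial
x = np.concatenate([np.full(N, c) for c in (1.0, -0.5, 0.2)])   # x_0, x_1, x_2 stacked
y = P @ x
assert np.allclose(y, 1.0 - 0.5 * t + 0.2 * t ** 2)
```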
Such a description of signal of dimension N is obviously overcomplete—there are (K+1)N parameters to characterize it. Nevertheless, assume now that y is a piecewise polynomial and that it consists of S independent segments. Each segment s∈{1,…,S} is then described by K+1 polynomials. In our notation, this can be achieved by letting vectors xk be constant within time indexes belonging to particular segments. (The polynomials in P are fixed). Figure 1 shows an illustration. The reason for not using a single number describing each segment is that the positions of the segment breakpoints are unknown and will be subject to search.
Illustration of the signal parameterization. The top plot shows four segments of a piecewise-polynomial signal (both the samples and the underlying continuous-time model); each segment is of the second order. The middle plot shows the three basis polynomials, i.e., the diagonals of the matrices Pk (in this particular case, the respective sampled vectors happen to be mutually orthonormal). The parameterization coefficients shown in the bottom plot are the vectors x0, x1, and x2. Notice that infinitely many other combinations of values in x0, x1, and x2 generate the same signal, but we show the piecewise-constant case, which is of the greatest interest for our study
Following the above argumentation, if the xk are piecewise constant, the finite difference operator ∇ applied to the vectors xk produces sparse vectors. Operator ∇ computes simple differences of each pair of adjacent elements in the vector, i.e., $\nabla : \mathbb{R}^{N}\mapsto \mathbb{R}^{N-1}$ such that $\nabla\mathbf{z}=[z_{2}-z_{1},\ldots,z_{N}-z_{N-1}]^{\top}$. In fact, not only does ∇ applied to each parameterization vector produce at most S−1 nonzeros, but the nonzero components of each ∇xk also occupy the same positions across k=0,…,K.
Together with the assumption that the observed signal is corrupted by an i.i.d. Gaussian noise, it motivates us to formulate the denoising/segmentation problem as finding
$$ \hat{\mathbf{x}}=\underset{\mathbf{x}}{\text{arg~min}}\|{\text{reshape}(\mathbf{Lx})}\|_{21}\, \text{s.t. } \|{\mathbf{y}-\mathbf{P}\mathbf{W}\mathbf{x}}\|_{2} \leq \delta. $$
In this optimization program, W is the square diagonal matrix of size (K+1)N that enables us to adjust the lengths of vectors placed in P and operator L represents the stacked differences such that
$$\begin{array}{*{20}l} \mathbf{L} & = \left[ \begin{array}{ccc} \nabla & \cdots & 0 \\ & \ddots &\\ 0 & \cdots & \nabla \end{array}\right], \quad \mathbf{L}\mathbf{x} = \left[ \begin{array}{c} \nabla\mathbf{x}_{0} \\[-1.8ex] \text{---} \\[-1ex] \vdots \\[-1.5ex] \text{---} \\[-1ex] \nabla\mathbf{x}_{K} \end{array}\right]. \end{array} $$
The operator reshape() takes the stacked vector Lx to the form of a matrix with disjoint columns:
$$ \text{reshape}(\mathbf{L}\mathbf{x}) = \left[ \begin{array}{c} \nabla\mathbf{x}_{0} | \cdots | \nabla\mathbf{x}_{K} \end{array}\right]. $$
It is necessary to organize the vectors in such a way for the purpose of the ℓ21-norm which is explained below.
The first term of (8) is the penalty. Piecewise-constant vectors xk suggest that these vectors are sparse under the difference operation ∇. As an acknowledged substitute of the true sparsity measure, the ℓ1-norm is widely used [33, 34]. Since the vectors should be jointly sparse, we utilize the ℓ21-norm [35] that acts on a matrix Z with p rows and is formally defined by
$$ \begin{aligned} \|{\mathbf{Z}\|}_{21} & = \left\|{\, \rule{0pt}{1em} \left[ \rule{0pt}{1em} \|{\mathbf{Z}_{1,:}\|}_{2}, \|{\mathbf{Z}_{2,:}\|}_{2}, \ldots, \|{\mathbf{Z}_{p,:}\|}_{2} \right] \,}\right\|_{1} \\ & = \|{\mathbf{Z}_{1,:}\|}_{2} + \ldots + \|{\mathbf{Z}_{p,:}\|}_{2}, \end{aligned} $$
i.e., the ℓ2-norm is applied to the particular rows of Z and the resulting vector is measured by the ℓ1-norm. Such a penalty promotes sparsity across the rows of the matrix, and therefore the ℓ21-norm encourages the nonzero components of the matrix to lie in a few common rows.
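A small illustrative sketch (our own code, not the authors') of the analysis operator reshape(Lx) from (9)–(10) and of the ℓ21-norm (11); it also shows that jointly piecewise-constant coefficient vectors yield a row-sparse matrix.

```python
# Sketch of reshape(Lx) from (9)-(10) and the l21-norm (11); toy example, names are ours.
import numpy as np

def reshape_Lx(x, K, N):
    """Return the (N-1) x (K+1) matrix [grad x_0 | ... | grad x_K]."""
    X = x.reshape(K + 1, N)           # rows are the parameterization vectors x_0, ..., x_K
    return np.diff(X, axis=1).T       # column k holds the finite differences of x_k

def l21_norm(Z):
    """Sum of the l2-norms of the rows of Z, cf. (11)."""
    return float(np.sum(np.linalg.norm(Z, axis=1)))

K, N = 2, 10
# piecewise-constant coefficient vectors sharing one breakpoint after sample 5
x = np.concatenate([np.repeat([a, b], [5, 5])
                    for a, b in ((1.0, 3.0), (0.2, -0.1), (0.0, 0.05))])
Z = reshape_Lx(x, K, N)
print(l21_norm(Z))                    # only one row of Z is nonzero, so the l21 value is small
```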
The second term in (8) is the data fidelity term. The Euclidean norm reflects the fact that gaussianity of the noise is assumed. The level of the error is required to fall below δ. Finally, vector $\hat {\mathbf {x}}$ contains the achieved optimizers.
When the standard polynomial basis $\{1, t, \ldots, t^{K}\}$ is used for the definition of P, the high-order components blow up so rapidly that it brings two problems:
First, the difference vectors follow the scale of the respective polynomials. In the absence of normalization, i.e., when W is the identity, this is not fair with respect to the ℓ21-norm, since no polynomial should be preferred. In this regard, the polynomials should be "normalized" such that W contains the reciprocals of the ℓ2-norms of the respective polynomials. It is worth noting that in our former work, in particular in [12], we basically used model (8), but with the difference that there was no weighting matrix and we used $\mathbf{L}=\mathop{\text{diag}}(\tau_{0}\nabla,\ldots,\tau_{K}\nabla)$ instead of $\mathbf{L}=\mathop{\text{diag}}(\nabla,\ldots,\nabla)$, cf. (9). Finding suitable values of τk has been a demanding trial-and-error process. From this perspective, the simple substitution Wx→x brings us in fact to the model from [12], and we see that τk should correspond to the norms of the respective polynomials. However, it still holds true that manual adjustment of these parameters can increase the success rate of the breakpoint detection, as they depend, unfortunately, on the signal itself (recall that one part of a signal can correspond to locally high parameterization values while another part does not). This is, however, out of the scope of this article.
Second, there is a numerical issue, meaning that the algorithms (see below) used to find the solution $\hat{\mathbf{x}}$ failed due to the excessively wide range of the processed values. However, for short signals (like N≤500), this problem was solved by taking the time instants not as integers, but as linearly spaced values from 1/N to 1, as the authors of [9] did.
This article shows that the simple idea of shifting to orthonormal polynomials solves the two problems with no extra requirements. At the same time, orthonormal polynomials result in better detection of the breakpoints.
One may also think of an alternative, unconstrained formulation of the problem:
$$ \hat{\mathbf{x}} = \underset{\mathbf{x}}{\text{arg~min}}\ \left\|\text{reshape}(\mathbf{L}\mathbf{x})\right\|_{21} + \frac{\lambda}{2} \left\|\mathbf{y}-\mathbf{P}\mathbf{W}\mathbf{x}\right\|_{2}. $$
This formulation is equivalent to (8) in the sense that for a given δ, there exists λ such that the optima are identical. However, the constrained form is preferable since changing the weight matrix W does not induce any change in δ, in contrast to a possible shift in λ in (12).
We utilize the so-called proximal splitting methodology for solving optimization problem (8). Proximal algorithms (PA) are algorithms suitable for finding the minimum of a sum of convex functions. Proximal algorithms perform iterations involving simple computational tasks such as evaluations of gradients and/or proximal operators related to the individual functions.
It is proven that under mild conditions, PA provide convergence. The speed of convergence is influenced by properties of the functions involved and by the parameters used in the algorithms.
Condat algorithm solving (8)
The generic Condat algorithm (CA) [36, 37] represents one possibility for solving problems of type
$$ \text{minimize}\ h_{1}(\mathbf{L}_{1}\mathbf{x}) + h_{2}(\mathbf{L}_{2}\mathbf{x}), $$
over x, where functions h1 and h2 are convex and L1 and L2 are linear operators. In our paper [12], we have compared two variants of CA; in the current work, we utilize the variant that is easier to implement—it does not require a nested iterative projector.
To connect (13) with (8), we assign $\phantom {\dot {i}\!}h_{1} = \|{\cdot \|}_{21}, \mathbf {L}_{1} = \text {reshape}(\mathbf {L}\,\cdot), h_{2} = \iota _{\{\mathbf {z}:\, \|{\mathbf {y}-\mathbf {z}\|}_{2} \leq \delta \}}$ and L2=PW, while ιC denotes the indicator function of a convex set C.
The algorithm solving (8) is described in Algorithm 1. Therein, two operators are involved: Operator $\mathop{\text{soft}}^{\text{row}}_{\tau}(\mathbf{Z})$ takes a matrix Z and performs row-wise group soft thresholding with threshold τ on it, i.e., it maps each element of Z such that
$$ z_{ij} \mapsto \frac{z_{ij}}{\|{\mathbf{Z}_{i,:}\|}_{2}} \max(\|{\mathbf{Z}_{i,:}\|}_{2}-\tau,0). $$
Projector $\mathop{\text{proj}}_{B_{2}(\mathbf{y},\delta)}(\mathbf{z})$ finds the closest point to z in the ℓ2-ball $B_{2}(\mathbf{y},\delta)=\{\mathbf{u}:\ \|\mathbf{y}-\mathbf{u}\|_{2} \leq \delta\}$,
$$ \mathbf{z} \mapsto \mathbf{y} + \frac{\delta\,(\mathbf{z}-\mathbf{y})}{\max(\|\mathbf{z}-\mathbf{y}\|_{2},\delta)}. $$
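The two operators can be written compactly in NumPy; the following is an illustrative sketch with our own function names, with the projection taken onto the ball centered at y as in (15).

```python
# Sketches of the row-wise group soft thresholding (14) and the projection onto B_2(y, delta).
import numpy as np

def soft_row(Z, tau):
    """Row-wise group soft thresholding, cf. (14)."""
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    scale = np.maximum(norms - tau, 0.0) / np.maximum(norms, np.finfo(float).tiny)
    return Z * scale

def proj_ball(z, y, delta):
    """Projection of z onto the ball {u : ||y - u||_2 <= delta}, cf. (15)."""
    r = z - y
    return y + r * (delta / max(np.linalg.norm(r), delta))
```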
All particular operations in Algorithm 1 are quite simple, and they are obtained in $\mathcal {O}(N)$ time. It is worth emphasizing, however, that the number of iterations necessary to achieve convergence grows with the number of time samples N. A better notion of the computational cost is provided by Table 1. It shows that both the cost per iteration and the number of necessary iterations grow linearly, resulting in an overall $\mathcal {O}\!\left (N^{2}\right)$ complexity of the algorithm. The cost of postprocessing (described in Section 3.2) is negligible compared to such a quantity of operations.
Table 1 Time spent per iteration (in seconds) and the total number of iterations until convergence with respect to N, for an orthonormal polynomial basis, fixed K=2
Convergence of the algorithm is guaranteed when it holds that $\xi \sigma \left\|{\mathbf{L}_{1}^{\top}}\!\mathbf{L}_{1}+{\mathbf{L}_{2}^{\top}}\!\mathbf{L}_{2}\right\|\leq 1$. To use the inequality $\|{\mathbf{L}_{1}^{\top}}\!\mathbf{L}_{1}+{\mathbf{L}_{2}^{\top}}\!\mathbf{L}_{2}\| \leq \|\mathbf{L}_{1}\|^{2} + \|\mathbf{L}_{2}\|^{2}$, it is necessary to have upper bounds on the operator norms. The upper bound of ∥L1∥ is:
$$\begin{array}{*{20}l} \|{\mathbf{L}_{1}\|}^{2} = \|{\mathbf{L}\|}^{2} &= \max_{\|{\mathbf{x}\|}_{2}=1} \|{\mathbf{L}\mathbf{x}\|}^{2}_{2} \, = \max_{\|{\mathbf{x}\|}_{2}=1} \left\|\left[{\begin{array}{c} \nabla\mathbf{x}_{0} \\ \vdots \\ \nabla\mathbf{x}_{K} \end{array}}\right]\right\|^{2}_{2}\\ &= \max_{\|{\mathbf{x}\|}_{2}=1} \left(\sum_{k=0}^{K} \left\|{\nabla\mathbf{x}_{k}}\right\|^{2}_{2} \right) \\ &\leq \sum_{k=0}^{K} \left(\max_{\|{\mathbf{x}\|}_{2}=1} \left\|{\nabla\mathbf{x}_{k}}\right\|^{2}_{2} \right) \\ &\leq \sum_{k=0}^{K} \, \|{\nabla\|}^{2} \,\leq\, 4 (K+1) \end{array} $$
and thus $\|\mathbf{L}_{1}\| \leq 2\sqrt{K+1}$. The operator norm of PW satisfies $\|\mathbf{P}\mathbf{W}\|^{2}=\|\mathbf{P}\mathbf{W}\mathbf{W}^{\top}\mathbf{P}^{\top}\|$, and thus it suffices to find the maximum eigenvalue of $\mathbf{P}\mathbf{W}^{2}\mathbf{P}^{\top}$. Since PW has the multi-diagonal structure (cf. relation (7)), $\mathbf{P}\mathbf{W}^{2}\mathbf{P}^{\top}$ is diagonal, and in effect, it is enough to find the maximum on its diagonal. Altogether, the convergence is guaranteed when $\xi \sigma \left(\max \mathop{\text{diag}}\left(\mathbf{P}\mathbf{W}^{2}\mathbf{P}^{\top}\right) + 4(K+1) \right) \leq 1$.
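For illustration, the following is a hedged sketch of how such a primal-dual solver could look in NumPy, assuming Algorithm 1 follows the standard Condat scheme with no smooth term and unit relaxation; the step sizes ξ and σ are taken from the bound just derived, the matrix PW is assumed to be dense, and all function and variable names are ours rather than the paper's.

```python
# Hedged sketch of a Condat-type primal-dual iteration for problem (8); assumptions:
# no smooth term, relaxation parameter 1, dense PW of shape (N, (K+1)*N). Names are ours.
import numpy as np

def solve_segmentation(y, PW, K, delta, n_iter=2000):
    N = y.size
    # forward/adjoint of L1 = reshape(L .) and L2 = P*W
    L1  = lambda x: np.diff(x.reshape(K + 1, N), axis=1).T
    L1T = lambda U: np.concatenate(
        [-np.diff(np.concatenate(([0.0], U[:, k], [0.0]))) for k in range(K + 1)])
    L2  = lambda x: PW @ x
    L2T = lambda z: PW.T @ z

    # step sizes from  xi * sigma * (max diag(PW (PW)^T) + 4(K+1)) <= 1
    bound = np.max(np.sum(PW ** 2, axis=1)) + 4.0 * (K + 1)
    xi = sigma = 0.99 / np.sqrt(bound)

    x  = np.zeros((K + 1) * N)
    U1 = np.zeros((N - 1, K + 1))     # dual variable of the l21 term
    u2 = np.zeros(N)                  # dual variable of the ball constraint

    for _ in range(n_iter):
        x_new = x - xi * (L1T(U1) + L2T(u2))
        x_bar = 2.0 * x_new - x
        # prox of the conjugate of ||.||_21: row-wise projection onto unit l2-balls
        V = U1 + sigma * L1(x_bar)
        U1 = V / np.maximum(1.0, np.linalg.norm(V, axis=1, keepdims=True))
        # prox of the conjugate of the ball indicator, via the Moreau identity
        v = u2 + sigma * L2(x_bar)
        r = v / sigma - y
        proj = y + r * (delta / max(np.linalg.norm(r), delta))
        u2 = v - sigma * proj
        x = x_new
    return x
```

With PW assembled from some polynomial basis (for instance, one of the bases generated in Section 4), the estimate would be obtained as x_hat = solve_segmentation(y_noisy, PW, K, delta).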
Signal segmentation/denoising
The vector $\hat{\mathbf{x}}$, as the optimizer of problem (8), provides a means to estimate the underlying signal; this can be done simply by $\hat{\mathbf{y}}=\mathbf{P}\mathbf{W}\hat{\mathbf{x}}$. However, this way we do not obtain the segment ranges. A second disadvantage of this approach is that the jumps are typically underestimated in size, which comes from the bias inherent to the ℓ1-norm [38–40] as a part of the optimization problem.
The nonzero values in $\nabla \hat {\mathbf {x}}_{0}, \dots, \nabla \hat {\mathbf {x}}_{K}$ indicate segment borders. In practice, it is almost impossible to achieve truly piecewise-constant optimizers [38] as in the model case in Fig. 1, and vectors $\nabla \hat {\mathbf {x}}_{k}$ are crowded by small elements, besides larger values indicating possible segment borders. We apply a two-part procedure to obtain the segmented and denoised signal: the breakpoints are detected first, and then, each detected segment is denoised individually.
Recall that the ℓ21-norm cost promotes significant values in the vectors $\nabla\hat{\mathbf{x}}_{k}$ situated at the same positions. As the input for breakpoint detection, we gather the $\nabla\hat{\mathbf{x}}_{k}$s into a single vector using the weighted ℓ2-norm according to the formula
$$ \mathbf{d} = \sqrt{\left(\alpha_{0}\nabla\hat{\mathbf{x}}_{0}\right)^{2} + \dots + \left(\alpha_{K}\nabla\hat{\mathbf{x}}_{K}\right)^{2}}, $$
where $\alpha _{k} = 1/\max (\left \vert {\nabla \hat {\mathbf {x}}_{k}}\right \vert)$ are positive factors serving to normalize the range of values in the parameterization vectors differences. The computations in (17) are elementwise.
The comparisons presented in this article are concerned only with the detection of breakpoints, and thus, in our further analysis, we process only the vector d. However, in case we would like to recover the denoised signal, we would proceed as in our former works [11, 12], where first a moving median filter is applied to d and the result is subtracted from d, allowing to keep the significant values and at the same time to push the small ones toward zero. Put simply, values larger than a selected threshold then indicate the breakpoint positions. The second step is the denoising itself, which is done by least squares on each segment separately, using (any) polynomial basis of degree K.
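The post-processing just described could be sketched as follows; the moving-median window and the threshold rule are illustrative choices of ours, not values taken from the paper.

```python
# Sketch of the breakpoint-detection post-processing: fuse the coefficient differences
# via (17), subtract a moving median, and threshold. Window/threshold values are assumed.
import numpy as np
from scipy.signal import medfilt

def detect_breakpoints(x_hat, K, N, win=21, thr_factor=4.0):
    D = np.diff(x_hat.reshape(K + 1, N), axis=1)                       # rows: grad x_k
    alpha = 1.0 / np.maximum(np.max(np.abs(D), axis=1, keepdims=True), 1e-12)
    d = np.sqrt(np.sum((alpha * D) ** 2, axis=0))                      # formula (17)
    d_detrended = d - medfilt(d, kernel_size=win)                      # moving median removal
    thr = thr_factor * np.median(np.abs(d_detrended))                  # illustrative threshold
    return np.where(d_detrended > thr)[0] + 1                          # first samples of new segments
```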
Experiment—does orthogonality help in signals with jumps?
The experiment has been designed to find out whether substituting non-orthogonal bases with the orthogonal ones reflects in emphasizing the positions of breakpoints when exploring the vector d.
As test signals, five piecewise quadratic signals (K=2) of length N=300 were randomly generated. They are generated such that they contain polynomial segments similar to the 1D test signals presented in [9]. All signals consist of six segments of random lengths. There are noticeable jumps in value between neighboring segments, which is the difference to the test signals in [9]. The noiseless signals are denoted by yclean and examples are depicted in Fig. 2.
Example of two noiseless and noisy test signals used in the experiment. Signals of length N=300 consist of six segments of various lengths, with a perceptible jump between each two segments. The SNRs used for these illustrations were 25 and 15 dB
The signals have been corrupted by Gaussian i.i.d. noise, resulting in signals $\mathbf{y}_{\text{noisy}}=\mathbf{y}_{\text{clean}}+\boldsymbol{\varepsilon}$ with entries of $\boldsymbol{\varepsilon}$ drawn i.i.d. from $\mathcal{N}(0,\sigma^{2})$. With these signals, we can determine the signal-to-noise ratio (SNR), defined as
$$ \mathit{SNR}\,(\mathbf{y}_{\text{noisy}},\mathbf{y}_{\text{clean}}) = 20 \cdot \log_{10} \frac{\|{\mathbf{y}_{\text{clean}}\|}_{2}}{\|{\mathbf{y}_{\text{noisy}}-\mathbf{y}_{\text{clean}}\|}_{2}}. $$
Five SNR values were prescribed for the experiment: 15, 20, 25, 30, and 35 dB. These numbers entered into the calculation of the respective noise standard deviation σ such that
$$ \sigma = \frac{\|{\mathbf{y}_{\text{clean}}\|}_{2}}{\sqrt{N \cdot 10^{\frac{\mathit{SNR}}{10}}}}. $$
It is clear that the resulting σ is influenced by energy of the clean signal as well. For each signal and each SNR, 100 realizations of noise were generated, making a set of 2500 noisy signals in total.
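The noise generation can be reproduced with a few lines of NumPy; the sketch below follows (19) and verifies the realized SNR via (18) (the random seed is arbitrary, and the names are ours).

```python
# Sketch of the noise generation with a prescribed SNR, following (18) and (19).
import numpy as np

rng = np.random.default_rng(0)       # arbitrary seed

def add_noise(y_clean, snr_db):
    N = y_clean.size
    sigma = np.linalg.norm(y_clean) / np.sqrt(N * 10.0 ** (snr_db / 10.0))     # formula (19)
    y_noisy = y_clean + sigma * rng.standard_normal(N)
    snr = 20.0 * np.log10(np.linalg.norm(y_clean)
                          / np.linalg.norm(y_noisy - y_clean))                 # formula (18)
    return y_noisy, sigma, snr
```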
Since the test signals are piecewise quadratic, the bases subject to testing all consist of three linearly independent discrete-time polynomials. For the sake of this section, the three basis vectors can be viewed as the columns of the N×3 matrix. The connection to problem (8) is that these vectors form the diagonals of the system matrix PW. In the following, the N×3 basis matrices will be distinguished by the letter indicating the means of their generation:
Non-orthogonal bases (B)
Most of the papers that explicitly model the polynomials directly utilize the standard basis (2), which is clearly not orthogonal in either the continuous or the discrete setting. The norms of such polynomials differ significantly as well. We generated 50 B bases using the formula B = SD1AD2. Here, the elements of the standard basis, i.e., the columns of S, are first normalized using a diagonal matrix D1, then mixed using a random Gaussian matrix A, and finally dilated to different lengths using D2, which has uniformly random entries on its diagonal. This way, we acquired 50 bases which are both non-orthogonal and non-normalized.
Normalized bases (N)
Another set of 50 bases, the N bases, were obtained by simply normalizing the length of the B basis polynomials, N=BD3. We want to find out whether this simple step helps in detecting the breakpoints.
Orthogonal bases (O)
Orthogonal bases were obtained by orthogonalization of N bases. The process was as follows: A matrix N was decomposed by the SVD, i.e.,
$$ \mathbf{N} = \mathbf{U} \boldsymbol{\Sigma} {\mathbf{V}^{\top}}. $$
Matrix U consists of three orthonormal columns of length N. The new orthonormal system is simply the matrix O=U.
One could doubt whether the new basis O spans the same space as N does. Since N has full rank, Σ contains three positive values on its diagonal. Because V is also orthogonal, the answer to the above question is positive. A second question could be whether the new system is still consistent with any polynomial basis on $\mathbb {R}$. The answer is yes again, since both matrices N and U can be substituted by their continuous-time counterparts, thus generating the identical polynomial.
Random orthogonal bases (R)
The last class consists of random orthogonal polynomial bases. The R bases were generated as follows: First, the SVD was applied to the matrix N as in (20), now symbolized using subscripts, $\mathbf{N} = \mathbf{U}_{\mathbf{N}} \boldsymbol{\Sigma}_{\mathbf{N}} {\mathbf{V}_{\mathbf{N}}^{\top}}$. Next, a random matrix A of size 3×3 was generated, each element of which independently follows a Gaussian distribution. This matrix is then decomposed as ${\mathbf{A}} = \mathbf{U}_{\mathbf{A}} \boldsymbol{\Sigma}_{\mathbf{A}} {\mathbf{V}_{\mathbf{A}}^{\top}}$. The new basis R is obtained as R=UNUA. Note that since both matrices on the right-hand side are orthonormal, the columns of R form an orthonormal basis spanning the desired space. Elements of UA determine the linear combinations used in forming R.
We generated 50 such random bases, meaning that in total 200 bases (B, N, O, R) were ready for the experiment.
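For reference, the four basis families could be generated along the following lines; the random distribution for A and the dilation range in D2 are assumptions on our side, since the text does not specify them exactly, and the variable names are ours (Nb stands for the normalized basis, to avoid clashing with the signal length N).

```python
# Sketch of generating one basis of each family (B, N, O, R) for K = 2; ranges/seeds assumed.
import numpy as np

rng = np.random.default_rng(1)
N, K = 300, 2
t = np.linspace(1.0 / N, 1.0, N)
S = np.column_stack([t ** k for k in range(K + 1)])        # standard basis 1, t, t^2

D1 = np.diag(1.0 / np.linalg.norm(S, axis=0))              # normalize the columns of S
A  = rng.standard_normal((K + 1, K + 1))                   # random Gaussian mixing
D2 = np.diag(rng.uniform(0.5, 2.0, K + 1))                 # random dilation (assumed range)
B  = S @ D1 @ A @ D2                                       # non-orthogonal, non-normalized ("B")

D3 = np.diag(1.0 / np.linalg.norm(B, axis=0))
Nb = B @ D3                                                # normalized basis ("N")

U, _, _ = np.linalg.svd(Nb, full_matrices=False)
O = U                                                      # orthonormalized basis ("O")

UA, _, _ = np.linalg.svd(rng.standard_normal((K + 1, K + 1)))
R = U @ UA                                                 # random orthonormal basis ("R")
```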
A note on other polynomial bases
One could think of using predefined polynomial bases such as the Chebyshev or Legendre bases, for example. Note that such bases are defined in continuous time and are therefore orthogonal with respect to an integral scalar product [6]. Sampling such systems at equidistant time points does not lead to orthogonal bases; in fact, when preparing this article, we found out that their orthogonalization via the SVD (as done above) significantly changes the course of the basis vectors. As far as we know, there are no predefined discrete-time orthogonal polynomial systems. In combination with the observation that neither the sampled nor the orthogonalized systems perform better than the other non-ortho- or orthosystems, respectively, we did not include any such system in our experiments.
The algorithm of breakpoint detection that we utilized in the experiments has been described in Section 3.2. We used formula (17) for computing the input vector. The Condat algorithm ran for 2000 iterations, which was sufficient in all cases. Three items were varied within the experiments, configuring problem (8):
The input signal y,
parameter δ controlling the modeling error,
the basis of polynomials PW (induced from the columns of matrices B, N, O, or R).
Each signal entered into calculation with each of the bases, making 2500×200 experiments in total in signal breakpoints detection.
Setting parameter δ
For each of the 2500 noisy signals, the parameter δ was calculated. Since both the noisy and clean signals are known in our simulation, δ should be close to the actual ℓ2 error caused by the noise. We found, however, that the particular δ leading to the best breakpoint detection varies around the ℓ2 error. For the purpose of our comparison, we fixed a universal value of δ determined according to
$$ \delta = \|{\mathbf{y}_{\text{noisy}}-\mathbf{y}_{\text{clean}}\|}_{2} \cdot 1.05 $$
meaning that we allowed the model error to deviate from the ground truth by 5% at maximum. Figure 3 shows the distribution of values of δ. For different signals, δ is concentrated around a different quantity. This effect is due to the noise generation, wherein the resulting SNR (18) was set and fixed at first, while δ is linked to the noise deviation σ that depends on the signal, cf. (19).
Distribution of δ parameter across the five groups of test signals. The SNR is 25 dB in this example. The box plots show the maximum and minimum, first quartile and the third quartile forming the edges of the rectangle, and the median value within the box. Values of δ vary within the signal (which is given by particular realizations of the noise) and between the signals (which is due to fixing the SNR rather than the noise power)
Note that in practice, however, δ would have to take into account not only the (even unknown) noise level, but also the modeling error, since real signals do not follow the polynomial model exactly. A good choice of δ unfortunately requires a trial process.
The focus of the article is to study whether orthogonal polynomials lead to better breakpoint detection than non-orthogonal polynomials. To evaluate this, several values that indicate the quality of the breakpoint detection process were computed. These markers are based on the vector d.
But first, for each signal in the test, define two disjoint sets of indexes, chosen out of {1,…,N−1} (the index range of d):
Highest values (HV): Recall that each of the clean test signals contains five breakpoints. Note also that d defined by (17) is nonnegative. The HV group thus gathers the indexes of the five values in d that are likely to represent breakpoints. These five indexes are selected iteratively: At first, the largest value is chosen to belong to HV. Then, since it can happen that multiple high values sit next to each other, the two neighboring indexes to the left and two to the right are omitted from further consideration. The remaining four steps select the rest of the HV members in the same manner.
Other values (OV): The second group consists of the remaining indexes in d. The indexes excluded during the HV selection are not considered in OV. This way, the number of elements in OV is 274 at least and 289 at most, depending on the particular course of the HV selection process.
For each signal, the ratio of the average of the values belonging to HV to the average of the values in OV is computed; we denote this ratio the AAR. We also computed the MMR indicator, which we define as the ratio of the minimum of the HV values to the maximum of the OV values. Both these indicators, and especially the MMR, should be as high as possible to enable safe recognition of the breakpoints.
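A sketch of how the HV/OV split and the two ratios could be computed follows; the ± 2 guard interval is taken from the text, while the rest of the implementation and the names are ours.

```python
# Sketch of the HV/OV selection and the AAR and MMR indicators.
import numpy as np

def hv_ov_indicators(d, n_breakpoints=5, guard=2):
    d = np.asarray(d, dtype=float)
    available = np.ones(d.size, dtype=bool)
    hv = []
    for _ in range(n_breakpoints):
        idx = int(np.argmax(np.where(available, d, -np.inf)))    # largest remaining value
        hv.append(idx)
        available[max(0, idx - guard):idx + guard + 1] = False   # drop the +-guard neighbourhood
    ov = np.where(available)[0]                                  # remaining indexes form OV
    aar = d[hv].mean() / d[ov].mean()
    mmr = d[hv].min() / d[ov].max()
    return np.sort(hv), aar, mmr
```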
The next parameter in the evaluation was the number of correctly detected breakpoints (NoB). We are able to introduce the NoB in our report since the true positions of the breakpoints are known. The breakpoint positions are not always found exactly, especially due to the influence of the noise (as will be discussed later), and therefore, we consider a breakpoint as detected correctly if the indicated position lies within an interval of ± two indexes from the ground truth.
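The NoB count with the ± 2 tolerance can be implemented, for example, as below; matching each true breakpoint at most once is our assumption about the evaluation.

```python
# Sketch of the NoB evaluation: a detection counts if it lies within +-tol samples
# of a true breakpoint, and each true breakpoint is matched at most once (assumption).
import numpy as np

def count_nob(detected, true_positions, tol=2):
    true_positions = np.asarray(true_positions)
    used = np.zeros(true_positions.size, dtype=bool)
    nob = 0
    for p in detected:
        dist = np.abs(true_positions - p).astype(float)
        dist[used] = np.inf                    # do not reuse already matched breakpoints
        j = int(np.argmin(dist))
        if dist[j] <= tol:
            used[j] = True
            nob += 1
    return nob
```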
In addition, classical mean square error (MSE) has been involved to complete the analysis. The MSE measures the average distance of the denoised signal from the noiseless original and is defined as
$$ \text{MSE}(\mathbf{y}_{\text{denoised}},\mathbf{y}_{\text{clean}}) = \frac{1}{N}\|{\mathbf{y}_{\text{denoised}}-\mathbf{y}_{\text{clean}}\|}_{2}^{2}. $$
As ydenoised, two variants were considered: (a) the direct signal estimate computed as $\hat {\mathbf {y}}=\mathbf {P}\mathbf {W}\hat {\mathbf {x}}$, where $\hat {\mathbf {x}}$ is the solution to (8) and (b) the estimate where the ordinary least squares have been used separately on each of the detected segments with a polynomial of degree two.
Note that approach (b) is an instance of the so-called debiasing methods, which is sometimes done in regularized regression, based on the a priori knowledge that the regularizer biases the estimate. As an example, debiasing is commonly done in LASSO estimation [39, 41], where the biased solution is used only to fix the sparse vector support and least squares are then used to make a better fit on the reduced subset of regressors, see also related works [12, 33, 42].
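The least-squares refit of approach (b) can be sketched as follows; the segment handling and the use of np.polyfit are our illustrative choices, not the authors' implementation.

```python
# Sketch of the per-segment least-squares refit ("LS" debiasing) and the MSE (22).
import numpy as np

def ls_refit(y_noisy, breakpoints, K=2):
    edges = np.concatenate(([0], np.sort(breakpoints), [y_noisy.size]))
    y_fit = np.empty_like(y_noisy, dtype=float)
    for a, b in zip(edges[:-1], edges[1:]):
        t = np.arange(a, b)
        coeffs = np.polyfit(t, y_noisy[a:b], deg=K)     # ordinary least squares per segment
        y_fit[a:b] = np.polyval(coeffs, t)
    return y_fit

def mse(y_denoised, y_clean):
    return float(np.mean((y_denoised - y_clean) ** 2))  # formula (22)
```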
The results from approach (a) will be abbreviated "CA" (for "Condat Algorithm") in the following, and the results from the least squares adjustment (b) by "LS."
Using orthogonal bases results in significantly better performance than working with non-orthogonal bases. The improvement can be observed in all parameters under consideration. The AAR, MMR, and NoB indicators increase with orthogonal bases and the MSE decreases.
An example comparison of the three types of bases in terms of the AAR is depicted in Fig. 4. A larger AAR means that the averages of the HV and OV values, respectively, are more apart. Analogously, Fig. 5 shows an illustration of the performance in terms of the MMR. The MMR gets greater when the smallest value from HV is better separated from the greatest value from OV. This creates a means for correct detection of the breakpoints. From both figures, it is clear that R and O bases are preferable over N bases.
Results of the AAR indicator for test signal "1." Five different SNRs in use are indicated by the subscripts. The box plot shows the distribution of the AAR under 100 realizations of random noise. In terms of the AAR distribution, random bases R and the orthonormalized bases O perform better than the other two systems. Normalization of the B bases resulted in a slight decrease of the AAR variance
Results of the MMR indicator for test signal "4." Similar to Fig. 4, the box plots exhibit a clear superiority of R bases and O bases over the B bases and N bases in terms of the MMR distribution, although the respective worst results are comparable in value
The reader may have noticed that Figs. 4 and 5 do not show the comparison across all the test signals. The reason is that it is not possible to fairly fuse results for different signals, since the signal shape and the size of the jumps influence the values of the considered parameters. Another reason is that the energy of the noise differs across signals, even when the SNR is fixed (see the discussion of this effect above). However, looking at the complete list of figures, which are available at the accompanying webpage (Footnote 1), the same trend is observed in all of the figures: the orthogonal(ized) bases perform better than the non-orthogonal bases. At the same time, there is no clear suggestion whether R bases are better than O bases; while Fig. 5 shows superiority of R bases, other plots at the website contain various results.
The NoB is naturally the ultimate criterion for measuring the quality of segmentation. Histograms of the NoB parameter for one particular signal are shown in Fig. 6. From this figure, as well as from the supplementary material at the webpage, we can conclude that B bases are beaten by N bases. Most importantly, the two orthogonal classes of bases (O, R) perform better than the N bases in a majority of cases (although one finds situations when the systems perform on par). Looking closer to obtain a final statement whether O bases or R bases are preferable, we can see that R bases usually provide better detection of breakpoints; however, the difference is very small. This might be the result of the test database being too small.
Results in terms of the NoB indicator. The respective 3D histograms show the frequency of the number of correctly detected breakpoints when the SNR changes, here for signal "4". For each SNR and a specific basis type, 5000 experiments were performed (50 bases times 100 noise realizations). The expected trend is pronounced: decreasing the SNR lowers the number of correctly detected breakpoints, independently of the choice of the basis. The worst results are obtained using non-orthogonal bases (B)
Does the distribution of NoB in Fig. 6 also suggest that some of the bases may perform better than others within the same class, when the signal and the SNR are fixed? It is not fair to make such a conclusion based on the histograms; histograms cannot reveal whether the effect on NoB is due to the particular realization of noise or due to differences between the bases, regardless of noise. Let us examine the question more closely. Figures 7 and 8 show selected maps of NoB. It is clearly visible that for mild noise levels, there are bases that perform better than the others and that a few bases also perform significantly worse, in a uniform manner. In the low SNR regime, on the contrary, the horizontal structures in the images prevail, meaning that the specific noise realization takes over. This effect can be explained easily: the greater the amplitude of the noise, the greater the probability that an "undesirable" noise sample in the vicinity of a breakpoint spoils its correct identification.
Number of correctly identified breakpoints (NoB) for different SNRs. From left to right 15, 20, 25, 30, 35 dB, signal "3." In the horizontal direction are the fifty randomly generated orthobases (R bases). In the vertical direction are the hundred particular realizations of noise
Number of correctly identified breakpoints (NoB) for different SNRs. Analogously to Fig. 7, but now for signal "5"
In practice, nevertheless, the signal to be denoised/segmented is given including the noise. In light of the presented NoB analysis (Figs. 7 and 8 in particular), it means that (especially) when SNR is high, it may be beneficial to run the optimization task (8) multiple times, i.e., with different bases, fusing the particular results for a final decision.
The last measure of performance is the MSE. First, Fig. 9 presents an example of denoising using the direct and the least squares approach (these are described in Section 4.4). Figures 10 and 11 show favorable and less favorable results in terms of the MSE, respectively. While with signals "1" to "4" the orthobases improve the MSE, this is not the case for signal "5." It is interesting to note that signal "5" does not exhibit great performance in terms of the other indicators (AAR, MMR, NoB) either.
Example of time-domain reconstruction, test signal "1". The left plot shows the noiseless and noisy signals; the plot on the right presents the direct signal estimate $\hat{\mathbf{y}}=\mathbf{P}\mathbf{W}\hat{\mathbf{x}}$ (CA) and the respective least squares refit (LS), on top of the noiseless signal. Clearly, LS radically improves the adherence to the data (and thus improves the MSE). The bias of the CA is explained in Section 3.2
Results in terms of MSE for test signal "2." Left plot shows the case of direct signal estimates (CA), right plot shows the MSE for the least squares (LS). The plots have the same scale. While simple normalization (N bases) helps reducing the MSE, orthobases clearly bring an extra improvement
Results in terms of MSE for test signal "5," similar to Fig. 10. In this case, there is no significant improvement when O or R bases are introduced—there is even an increase in the MSE for LS version
The experiment has been done in MATLAB (2017a) on a PC with Intel i7 processor and with 16 GB of RAM. For some proximal algorithm components we benefited from using the flexible UnLocBox toolbox [43]. The code related to the experiments is available via the mentioned webpage.
It is computationally cheap to generate an orthogonal polynomial system, compared to the actual cost of iterations in the numerical algorithm. For N=300, convergence has been achieved after performing 2000 iterations of Algorithm 1. While one iteration takes about 0.5 ms, generation of one orthonormal basis (dominated by the SVD) takes up to 1 ms.
Experiment—the effect of jumps
Another experiment has been performed focusing on the sensitivity of the breakpoint detection in relation to the size of the jumps in the signal. For this study, we utilized a single signal, depicted in blue in Fig. 12; the signal was again of length N=300. It contains five segments of similar length, and quadratic polynomials are used, similar to the test signals in [9]. The signal is designed such that there are no jumps on the segment borders. Nine new signals were generated from this signal in such a way that segments two and four were raised by a constant value; nine constants uniformly ranging from 5 to 45 were applied. Each signal was corrupted by Gaussian noise 100 times independently, with 10 different variances. This way, 10 000 signals were available in this study in total.
Test signal with no jumps. In blue the clean signal, in red its particular noisy observation (SNR 14.2 dB, i.e., σ = 13.45), in green the recovery using the proposed method
As the polynomial systems, three O bases and three B bases were randomly chosen out of the set of 50 of the same kind from the experiment above. We ran the optimization problem (8) on the signals with δ set according to (21). Each solution was then transformed to the vector d (see formula (17)). The four largest elements in d (since there are five true segments, and hence four breakpoints) were selected and their positions were considered the estimated breakpoints. Evaluation of correctly detected breakpoints (NoB) was performed as in the above experiment, with the difference that positions within ± 4 samples from the true position were accepted as a successful detection.
Figure 13 shows the average results. It is clear that the presence of even small jumps favours the use of O bases, while, interestingly, in the case of little or no jump, B bases perform slightly better (note, however, that both systems perform badly in terms of NoB for such small jump levels).
The effect of jump size in signal. The plots show average NoB scores for B bases (left) and O bases (right). Color lines correspond to different σ (i.e., to the noise level), and the horizontal axis represents the size of jumps of both the second and fourth segments in signal from Fig. 12
We interpret these results as follows: although our model includes the case when the signal does not contain jumps, such cases could benefit from extending the model with the additional condition that the segments have to connect at the breakpoints. For small jumps, our model does not resolve the breakpoints correctly, independently of the choice of the basis.
The experiment confirmed that using orthonormal bases is highly preferable to using non-orthogonal bases when solving the piecewise-polynomial signal segmentation/denoising problem. It has been shown that the number of correctly detected breakpoints increases when orthobases are used. The other performance indicators also improve on average with orthobases, and the plots show that the improvement is more pronounced the higher the noise level is. The effect comes almost for free, since it is cheap to generate an orthogonal system relative to the cost of the numerical algorithm that utilizes the system. In addition, the new approach avoids the demanding hands-on setting of "normalization" weights that had previously been done both by us and by other researchers. The user still has to choose δ, the quantity which includes the noise level and the model error.
Our experiment revealed that some orthonormal bases are better than others in a particular situation; our results indicate that it could be beneficial to merge detection results of multiple runs with different bases. Such a fusion process could be an interesting direction of future research.
During the revision process of this article, our paper that generalizes the model (8) to two dimensions has been published, see [44]. It shows that it is possible to detect edges in images using this approach; however, it does not aim at comparing different polynomial bases.
http://www.utko.feec.vutbr.cz/~rajmic/sparsegment
AAR:
Average to average ratio, Section 4
B bases:
Non-orthogonal polynomial bases, Section 4
CA:
The Condat Algorithm, Sections 3 and 4
LS:
(Ordinary) Least squares, Section 4
MMR:
Maximum to minimum ratio, Section 4
MSE:
Mean square error, Section 4
N bases:
Normalized (nonorthogonal) polynomial bases, Section 4
NoB:
Number of correctly detected breakpoints, Section 4
O bases:
Orthonormalized N bases, Section 4
Proximal algorithms, Section 3
R bases:
Random orthogonal polynomial bases, Section 4
SVD:
Singular value decomposition, Section 4
P. Prandoni, M. Vetterli, Signal Processing for Communications, 1st ed. Communication and information sciences (CRC Press; EPFL Press, Boca Raton, 2008).
M. V. Wickerhauser, Mathematics for Multimedia (Birkhäuser, Basel, Birkhäuser, Boston, 2009).
M. Unser, Splines: a perfect fit for signal and image processing. IEEE Signal Process. Mag.16(6), 22–38 (1999). https://doi.org/10.1109/79.799930.
S. Redif, S. Weiss, J. G. McWhirter, Relevance of polynomial matrix decompositions to broadband blind signal separation. Signal Process.134(C), 76–86 (2017).
J. Foster, J. McWhirter, S. Lambotharan, I. Proudler, M. Davies, J. Chambers, Polynomial matrix qr decomposition for the decoding of frequency selective multiple-input multiple-output communication channels. IET Signal Process.6(7), 704–712 (2012).
G. G. Walter, X. Shen, Wavelets and Other Orthogonal Systems, Second Edition. Studies Adv. Math. (Taylor & Francis, CRC Press, Boca Raton, 2000).
F. Milletari, N. Navab, S. -A. Ahmadi, in 2016 Fourth International Conference on 3D Vision (3DV). V-net: fully convolutional neural networks for volumetric medical image segmentation, (2016), pp. 565–571. https://doi.org/10.1109/3DV.2016.79.
K. Fritscher, P. Raudaschl, P. Zaffino, M. F. Spadea, G. C. Sharp, R. Schubert, ed. by S. Ourselin, L. Joskowicz, M. R. Sabuncu, G. Unal, and W. Wells. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016 (Springer, Cham, 2016), pp. 158–165.
R. Giryes, M. Elad, A. M. Bruckstein, Sparsity based methods for overparameterized variational problems. SIAM J. Imaging Sci.8(3), 2133–2159 (2015).
S. Shem-Tov, G. Rosman, G. Adiv, R. Kimmel, A. M. Bruckstein, in Innovations for Shape Analysis. Mathematics and Visualization, ed. by M. Breuß, A. Bruckstein, and P. Maragos. On Globally Optimal Local Modeling: From Moving Least Squares to Over-parametrization (Springer, Berlin/New York, 2012), pp. 379–405.
P. Rajmic, M. Novosadová, M. Daňková, Piecewise-polynomial signal segmentation using convex optimization. Kybernetika. 53(6), 1131–1149 (2017). https://doi.org/10.14736/kyb-2017-6-1131.
M. Novosadová, P. Rajmic, in Proceedings of the 40th International Conference on Telecommunications and Signal Processing (TSP). Piecewise-polynomial signal segmentation using reweighted convex optimization (Brno University of Technology, Barcelona, 2017), pp. 769–774.
G. Ongie, M. Jacob, Recovery of discontinuous signals using group sparse higher degree total variation. Signal Process. Lett. IEEE. 22(9), 1414–1418 (2015). https://doi.org/10.1109/LSP.2015.2407321.
J. Neubauer, V. Veselý, Change point detection by sparse parameter estimation. INFORMATICA. 22(1), 149–164 (2011).
I. W. Selesnick, S. Arnold, V. R. Dantham, Polynomial smoothing of time series with additive step discontinuities. IEEE Trans. Signal Process.60(12), 6305–6318 (2012). https://doi.org/10.1109/TSP.2012.2214219.
B. Zhang, J. Geng, L. Lai, Multiple change-points estimation in linear regression models via sparse group lasso. IEEE Trans. Signal Process.63(9), 2209–2224 (2015). https://doi.org/10.1109/TSP.2015.2411220.
K. Bleakley, J. -P. Vert, The group fused Lasso for multiple change-point detection. Technical report (2011). https://hal.archives-ouvertes.fr/hal-00602121.
S. -J. Kim, K. Koh, S. Boyd, D. Gorinevsky, ℓ 1 trend filtering. SIAM Rev.51(2), 339–360 (2009). https://doi.org/10.1137/070690274.
I. W. Selesnick, Sparsity-Assisted Signal Smoothing. (R. Balan, M. Begué, J. J. Benedetto, W. Czaja, K. A. Okoudjou, eds.) (Springer, Cham, 2015). https://doi.org/10.1007/978-3-319-20188-7_6.
I. Selesnick, in Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference On. Sparsity-assisted signal smoothing (revisited) (IEEE, 2017), pp. 4546–4550. https://doi.org/10.1109/ICASSP.2017.7953017.
R. J. Tibshirani, Adaptive piecewise polynomial estimation via trend filtering. Annals Stat.42(1), 285–323 (2014). https://doi.org/10.1214/13-AOS1189.
M. Elad, P. Milanfar, R. Rubinstein, Analysis versus synthesis in signal priors. Inverse Probl.23(3), 947–968 (2007).
S. Nam, M. Davies, M. Elad, R. Gribonval, The cosparse analysis model and algorithms. Appl. Comput. Harmon. Anal.34(1), 30–56 (2013). https://doi.org/10.1016/j.acha.2012.03.006.
M. Unser, J. Fageot, J. P. Ward, Splines are universal solutions of linear inverse problems with generalized TV regularization. SIAM Rev.59(4), 769–793 (2017).
L. Condat, A direct algorithm for 1-D total variation denoising. Signal Process. Lett. IEEE. 20(11), 1054–1057 (2013). https://doi.org/10.1109/LSP.2013.2278339.
I. W. Selesnick, A. Parekh, I. Bayram, Convex 1-D total variation denoising with non-convex regularization. IEEE Signal Process. Lett.22(2), 141–144 (2015). https://doi.org/10.1109/LSP.2014.2349356.
M. Elad, J. Starck, P. Querre, D. Donoho, Simultaneous cartoon and texture image inpainting using morphological component analysis (mca). Appl. Comput. Harmon. Anal.19(3), 340–358 (2005).
K. Bredies, M. Holler, A TGV-based framework for variational image decompression, zooming, and reconstruction. part I. Siam J. Imaging Sci.8(4), 2814–2850 (2015). https://doi.org/10.1137/15M1023865.
M. Holler, K. Kunisch, On infimal convolution of TV-type functionals and applications to video and image reconstruction. SIAM J. Imaging Sci.7(4), 2258–2300 (2014). https://doi.org/10.1137/130948793.
F. Knoll, K. Bredies, T. Pock, R. Stollberger, Second order total generalized variation (TGV) for MRI. Magn. Reson. Med.65(2), 480–491 (2011). https://doi.org/10.1002/mrm.22595.
G. Kutyniok, W. -Q. Lim, Compactly supported shearlets are optimally sparse. J. Approximation Theory. 163(11), 1564–1589 (2011). https://doi.org/10.1016/j.jat.2011.06.005.
M. Novosadová, P. Rajmic, in Proceedings of the 8th International Congress on Ultra Modern Telecommunications and Control Systems. Piecewise-polynomial curve fitting using group sparsity (IEEE, Lisbon, 2016), pp. 317–322.
E. J. Candes, M. B. Wakin, S. P. Boyd, Enhancing sparsity by reweighted ℓ 1 minimization. J. Fourier Anal. Appl.14:, 877–905 (2008).
D. L. Donoho, M. Elad, Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ 1 minimization. Proc. Natl. Acad. Sci.100(5), 2197–2202 (2003).
M. Kowalski, B. Torrésani, in SPARS'09 – Signal Processing with Adaptive Sparse Structured Representations, ed. by R. Gribonval. Structured Sparsity: from Mixed Norms to Structured Shrinkage, (2009), pp. 1–6. Inria Rennes – Bretagne Atlantique. http://hal.inria.fr/inria-00369577/en/. Accessed 2 Jan 2018.
L. Condat, A generic proximal algorithm for convex optimization—application to total variation minimization. Signal Process. Lett. IEEE. 21(8), 985–989 (2014). https://doi.org/10.1109/LSP.2014.2322123.
L. Condat, A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms. J Optim. Theory Appl.158(2), 460–479 (2013). https://doi.org/10.1007/s10957-012-0245-9.
P. Rajmic, M. Novosadová, in Proceedings of the 9th International Conference on Telecommunications and Signal Processing. On the limitation of convex optimization for sparse signal segmentation (Brno University of Technology, Vienna, 2016), pp. 550–554.
T. Hastie, R. Tibshirani, M. Wainwright, Statistical Learning with Sparsity (CRC Press, Boca Raton, 2015).
P. Rajmic, in Electronics, Circuits and Systems, 2003. ICECS 2003. Proceedings of the 2003 10th IEEE International Conference On, vol. 2. Exact risk analysis of wavelet spectrum thresholding rules, (2003), pp. 455–458. https://doi.org/10.1109/ICECS.2003.1301820.
R. Tibshirani, Regression shrinkage and selection via the LASSO. J. R. Stat. Soc. Ser. B Methodol.58(1), 267–288 (1996).
M. Daňková, P. Rajmic, in ESMRMB 2016, 33rd Annual Scientific Meeting, Vienna, AT, September 29–October 1: Abstracts, Friday. Magnetic Resonance Materials in Physics Biology and Medicine. Low-rank model for dynamic MRI: joint solving and debiasing (Springer, Berlin, 2016), pp. 200–201.
N. Perraudin, D. I. Shuman, G. Puy, P. Vandergheynst, UnlocBox: A Matlab convex optimization toolbox using proximal splitting methods (2014). https://epfl-lts2.github.io/unlocbox-html/.
M. Novosadová, P. Rajmic, in Proceedings of the 12th International Conference on Signal Processing and Communication Systems (ICSPCS). Image edges resolved well when using an overcomplete piecewise-polynomial model, (2018). https://arxiv.org/abs/1810.06469.
The authors want to thank Vítězslav Veselý, Zdeněk Průša, Michal Fusek, and Nathanaël Perraudin for valuable discussion and comments and to the reviewers for their careful reading, their comments, and ideas that improved the article. The authors thank the anonymous reviewers for their suggestions that raised the level of both the theoretic and experimental parts.
Research described in this paper was financed by the National Sustainability Program under grant LO1401 and by the Czech Science Foundation under grant no. GA16-13830S. For the research, infrastructure of the SIX Center was used.
The accompanying webpage http://www.utko.feec.vutbr.cz/~rajmic/sparsegment contains Matlab code, input data and the full listing of figures. The Matlab code relies on a few routines from the UnlocBox, available at https://epfl-lts2.github.io/unlocbox-html/.
Signal Processing Laboratory (SPLab), Brno University of Technology, Technická 12, 616 00, Brno, Czech Republic
Michaela Novosadová
& Pavel Rajmic
The Czech Academy of Sciences, Institute of Information Theory and Automation, Pod Vodárenskou věží 4, Prague, 18208, Czech Republic
Michal Šorel
MN performed most of the MATLAB coding, experiments, and plotting results. PR wrote most of the article text and both theory and description of the experiments. MŠ cooperated on the design of experiments and critically revised the manuscript. All authors read and approved the final manuscript.
Correspondence to Pavel Rajmic.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Signal segmentation
Signal approximation
Denoising
Piecewise polynomials
Orthogonality
Sparsity
Proximal splitting
Remotely assessing tephra fall building damage and vulnerability: Kelud Volcano, Indonesia
George T. Williams (ORCID: orcid.org/0000-0002-5924-2499)1,2,
Susanna F. Jenkins1,2,
Sébastien Biass1,
Haryo Edi Wibowo3,4 &
Agung Harijoko3,4
Journal of Applied Volcanology volume 9, Article number: 10 (2020)
Tephra from large explosive eruptions can cause damage to buildings over wide geographical areas, creating a variety of issues for post-eruption recovery. This means that evaluating the extent and nature of likely building damage from future eruptions is an important aspect of volcanic risk assessment. However, our ability to make accurate assessments is currently limited by poor characterisation of how buildings perform under varying tephra loads. This study presents a method to remotely assess building damage to increase the quantity of data available for developing new tephra fall building vulnerability models. Given the large number of damaged buildings and the high potential for loss in future eruptions, we use the Kelud 2014 eruption as a case study. A total of 1154 buildings affected by falls 1–10 cm thick were assessed, with 790 showing signs that they sustained damage in the time between pre- and post-eruption satellite image acquisitions. Only 27 of the buildings surveyed appear to have experienced severe roof or building collapse. Damage was more commonly characterised by collapse of roof overhangs and verandas or damage that required roof cladding replacement. To estimate tephra loads received by each building we used Tephra2 inversion and interpolation of hand-contoured isopachs on the same set of deposit measurements. Combining tephra loads from both methods with our damage assessment, we develop the first sets of tephra fall fragility curves that consider damage severities lower than severe roof collapse. Weighted prediction accuracies are calculated for the curves using K-fold cross validation, with scores between 0.68 and 0.75 comparable to those for fragility curves developed for other natural hazards. Remote assessment of tephra fall building damage is highly complementary to traditional field-based surveying and both approaches should ideally be adopted to improve our understanding of tephra fall impacts following future damaging eruptions.
With populations surrounding volcanoes growing faster than the average global rate, eruptions will increasingly impact human settlements and livelihoods (Barclay et al., 2019; Freire et al., 2019). Tephra hazards can affect large areas and cause impacts to buildings, ranging from nuisance damage to non-structural features through to potentially lethal collapse (Blong, 1984; Blong et al., 2017b). Two measures that can be taken to reduce the impact of future eruptions are i) pre-event recovery planning, and ii) promoting the construction of buildings that have proven resilient in past eruptions (Spence et al., 2005; Jenkins et al., 2014). Both of these risk mitigation measures typically rely on observations of damage from past eruptions. However, comprehensive documentation of previous tephra impacts to buildings is rare, making accurate damage forecasts challenging (Jenkins et al., 2015; Wilson et al., 2017). To date, only three post-eruption surveys that quantitatively assess the relationship between tephra fall hazard intensity and building damage severity have been published. These surveys were carried out after the 1991 eruption of Pinatubo, Philippines (Spence et al., 1996), the 1994 eruption of Rabaul, Papua New Guinea (Blong, 2003) and the 2015 eruption of Calbuco, Chile (Hayes et al., 2019).
We develop a new method to remotely assess building damage so that this sparse, global data set of post-eruption tephra impacts can be expanded. Satellite images taken before and after the 2014 eruption of Kelud volcano in Java, Indonesia are compared to identify buildings damaged to different degrees across seven case study villages receiving various amounts of tephra. Then, comparing tephra fall hazard intensity with the damage assessment results, we develop vulnerability models (known as fragility curves) that translate hazard into likely damage, allowing for the impacts of potential future tephra falls to be forecast. These tephra fall fragility curves are the first developed specifically for Indonesian building types with curves fit directly from observations of damage. They are also the first set of tephra fall fragility curves to consider tephra loads causing building damage less than severe roof collapse.
Whilst remote sensing has been used to assess building damage caused by natural hazards, (e.g. Spence et al., 2003; Gamba et al., 2007) it has rarely been applied to post-eruption building damage assessment. Post-eruption field studies combined with manual inspection of pre- and post-eruption satellite imagery has been used to quantify damage caused by pyroclastic flows from the 2010 Merapi eruption (Jenkins et al., 2013a; Solikhin et al., 2015) and from the Fogo 2014–2015 lava flows (Jenkins et al., 2017). Magill et al. (2013) assessed tephra fall impacts from the 2011 Shinmoedake eruption in Japan using geospatial infrastructure and land cover data combined with semi-structured interviews regarding the impacts. Recently, Biass et al. (in press) used interferometric synthetic-aperture radar (InSAR) to assess building damage from the Kelud 2014 eruption, comparing the intensity of coherence loss between pre- and post-eruption InSAR scenes with the damage observations presented in the current study. This study differs from previous ones by extending remote damage assessment directly into the development of physical vulnerability models.
Case study eruption: Kelud 2014
Geological setting and eruption history
Kelud volcano is regarded as one of the most active and deadly volcanoes in Indonesia (Brown et al., 2015; Maeno et al., 2019b). Located in East Java, Kelud is a basaltic-andesite stratovolcano that forms part of the Sunda Arc subduction system (Fig. 1). Kelud has a complex morphology with two large landslide scars from previous sector collapses and multiple peaks made up of large remnant lava domes with the highest at an elevation of 1731 m asl (Wirakusumah, 1991; Jeffery et al., 2013). Kelud has had more than 30 eruptions over the past 1000 years and in the past century has produced four eruptions with a volcanic explosivity index (VEI) of 4 (GVP, 2014). Mass casualties, including the 10,000 fatalities in Kelud's 1586 VEI 5 eruption are attributed to large extensive lahars associated with breakouts from the summit crater lake (Bourdier et al., 1997). A series of drainage tunnels were constructed starting in 1919, dramatically reducing the lake's volume and potential for lahars to catastrophically affect large populations on the flanks of the volcano (Hizbaron et al., 2018). Activity over the past 100 years has been characterised by a cyclic pattern alternating between periods of effusive lava dome growth and subsequent dome destruction during explosive, Plinian eruptions that are typically short-lived and high intensity, and occur with relatively little precursory activity (Hidayati et al., 2019).
Location map for Kelud volcano, the three regencies and cities immediately surrounding Kelud and the major cities of Yogyakarta and Surabaya that received ash fall during the 2014 eruption
The Kelud 2014 eruption
This VEI 4 eruption began at 22:50 local time on 13 February 2014 with the main explosive phase beginning at 23:30 and lasting for about four hours (GVP, 2014). Around 166,000 people were evacuated from a 10 km radius exclusion zone before the eruption began, following the volcanic alert being raised to its highest level (4), "Awas" at 21:15 on the night of the eruption (Andreastuti et al., 2017). This was a short-duration, high intensity eruption, typical of Kelud, with an estimated eruption magnitude of 4.3 to 4.5 and intensity of 10.8 to 11.0 placing it between Merapi, 2010 and Pinatubo, 1991 in terms of eruption intensity (Caudron et al., 2015). A total of seven fatalities were recorded in this eruption, attributed to collapsing walls, ash inhalation and 'shortness of breath' (GVP, 2014). All these reported fatalities were from the Malang regency to the east of Kelud with at least four occurring within 7 km of the vent. The eruption produced pyroclastic density currents running out to 6 km and rain-triggered lahars that damaged buildings up to 35 km away from the vent. Wind shear at an altitude of ~ 5 km asl caused a bi-lobate tephra deposit (Maeno et al., 2019a). Tephra was dominantly dispersed westwards and caused an accumulation of 2 cm of ash on the major city of Yogyakarta more than 200 km from the vent. A secondary lobe produced trace amounts of ash (< 1 mm thickness) on Surabaya, ~ 80 km northeast of Kelud. Clasts 9 cm in diameter were dispersed to over 12 km and most reported building damage was constrained to within 40 km of the vent (Maeno et al., 2019a; Goode et al., 2018; Blake et al., 2015). The International Red Cross reported that > 11,000 buildings were 'completely damaged' with > 15,000 buildings experiencing 'light' to 'moderate' damage in the three regencies surrounding the volcano (IFRC, 2014). Interestingly, despite this widespread damage, post-eruption field surveys carried out soon after by Paripurno et al. (2015) and six months later by Blake et al. (2015) found that few buildings experienced severe damage (where 'severe damage' is defined by Spence et al. (1996) as complete failure of any principal roof support structure, such as trusses or columns or deformation/collapse to over half of the internal/external walls). This allowed building repairs to be completed swiftly, with reports of over 99% of damaged houses being repaired within less than a month of the eruption ending (Jakarta Post, 2014). These conflicting assessments of damage, the remarkably swift recovery and the high likelihood of damaging eruptions occurring again in the future (Maeno et al., 2019b), make the Kelud 2014 eruption a useful case study for remote surveying of tephra fall building damage.
Building characteristics and exposure
In November 2019, there were > 400,000 buildings within 30 km of Kelud's vent recorded in Open Street Map (OSM, 2019) with LandScan 2018 estimating that 2.6 million people lived within that same area (Rose et al., 2019). The majority of buildings in villages around Kelud can be described using the three key typologies identified in the field by Blake et al. (2015), which are also similar to those around Merapi volcano (Jenkins et al., 2013b). These typologies are differentiated by their external wall framing (reinforced masonry, brick or timber) but all share a clay-tiled roof, supported by timber or bamboo framing. Choice of roof design has implications for building vulnerability to tephra hazards and is also a key aspect of traditional Javanese architecture, with designs of increasing steepness reflecting higher social status of the owner (Idham, 2018). To support ventilation, a common design feature of clay-tiled roofs in Kelud is that they are steepest over the centre of the building, usually between 25 and 60° (e.g. Fig. 2a and b), with more shallowly pitched edges and eaves, usually ≤25° (Paripurno et al. 2015). Additionally, to shade buildings more efficiently, eaves often extend to overhang relatively far beyond a building's walls. Changes in roof slope promote tephra shedding from the centre towards the edges of roofs making these areas more prone to collapse, especially considering the timber or bamboo roof supports used for the overhangs of buildings are often thinner and therefore weaker than those in the main part of the roof (Prihatmaji et al., 2014). Roofs made using other covering materials such as corrugated asbestos or metal sheets are relatively uncommon around Kelud, especially for residential buildings. However, this is a popular form of construction for livestock shelters and small shops (e.g. Fig. 2c). As sheet roofs are lighter than tiled roofs and do not require timber battens, the spacing between framing members is typically wider for sheet roofs and therefore, they are potentially more prone to collapse under tephra fall loading (Spence et al., 1996; Blake et al., 2015).
Common roof types in regencies surrounding Kelud. Note that all photos are from after the 2014 eruption. a traditional Kampung style flared clay-tile roof, b traditional Joglo style flared clay-tile roof, c asbestos fibre sheet roof and d non-flared clay tile with asbestos and/or metal sheet veranda roof. a, c and d provided by Daniel Blake and Grant Wilson have all visibly received roof repairs. Photo B from Google StreetView
Tephra fall hazard characterisation
Although tephra fall loading, typically measured in kilo-pascals (kPa), is considered the most appropriate metric to quantify tephra fall hazard towards buildings, loading is rarely measured directly in the field. Instead, loading is estimated using thickness measurements that can later be combined with one or more laboratory measurements of deposit density, if samples are available. Otherwise, density is assumed, typically between the 600–1600 kg m− 3 range for naturally occurring, dry deposit densities (Macedonio and Costa, 2012). Here, we use the tephra thickness measured by Universitas Gadjah Mada (UGM) field teams within two-three days of the eruption in 81 locations within 2 to 60 km of the vent (Anggorowati and Harijoko, 2015). Thicknesses are converted to loads using a deposit bulk density of 1400 kg m− 3 measured by Maeno et al. (2019a). Maximum pumice diameters were also measured at 32 of the 81 tephra thickness measurement locations and these were used to identify areas where projectile impacts may have contributed to observed building damage. Individual thickness measurements were interpolated into a continuous deposit using two methods: i) inversion modelling using the Tephra2 model, and ii) the interpolation of hand drawn isopachs. Resulting deposits were used to estimate hazard intensity metrics over the entire study area.
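For illustration, the thickness-to-load conversion used throughout this study can be sketched as follows (a minimal Python sketch, assuming a dry deposit of uniform bulk density; the function name is ours).

def tephra_load(thickness_cm, bulk_density_kg_m3=1400.0):
    # load in kg/m^2 and the equivalent pressure in kPa
    load_kg_m2 = (thickness_cm / 100.0) * bulk_density_kg_m3
    load_kpa = load_kg_m2 * 9.81 / 1000.0
    return load_kg_m2, load_kpa

# e.g. 10.3 cm of dry deposit at 1400 kg/m^3 -> ~144 kg/m^2 (~1.4 kPa)
print(tephra_load(10.3))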
Inversion modelling method
Thickness measurements were inverted using the Tephra2 algorithm of Connor and Connor (2006) to estimate the eruption source parameters (ESP; e.g. plume height, tephra mass, total grain-size distribution) and empirical parameters (e.g. fall-time threshold and diffusion coefficient, see Bonadonna et al. (2005) and Biass et al. (2016)) that best reproduce observed measurements. Wind conditions were fixed and inferred from the wind profile for midnight of 13 February 2014 obtained from the European Centre for Medium-Range Weather Forecasts (ECMWF) Era-Interim Reanalysis dataset, which is available at six-hourly intervals (Dee et al., 2011). As most tephra was erupted over four hours in one main phase that began half an hour before midnight, interpolation between two time periods (18:00 and 00:00) was deemed unnecessary, so dispersion modelling was carried out using the single wind profile of 00:00 (Fig. 3).
Inferred wind conditions above Kelud volcano at midnight on 13 February 2014 from ERA-Interim reanalysis data (Dee et al. 2011). Note change in wind direction ~ 5 km asl and high-speed westerly winds ~ 15 km asl. 'Wind direction' refers to the direction wind is blowing towards
Inversion initially made use of our 81 field measurements combined with 56 measurements from Nakada et al. (2016) and Maeno et al. (2019a), taken over an area up to 200 km from the vent. Inverting multiple sets of measurements simultaneously resulted in an increased discrepancy between measured and modelled tephra thicknesses at the most important sites, close to villages where visible damage occurred. To reduce discrepancies in these key areas, inversion was repeated using only the UGM data as these were taken sooner after the eruption than others and prior to heavy rainfall on 18 February, which may have disturbed the deposits (Dibyosaputro et al., 2015; Blong et al., 2017a). Ranges of ESPs derived from literature on the 2014 eruption (Table 1) were used to inform our initial ESP ranges to which inversion modelling was applied. The optimised set of eruption source parameters that best reproduced tephra deposit measurements during Tephra2 forward modelling are provided in Table 1. In areas where some of the heaviest building damage was reported, the inversion optimised tephra dispersal underestimates the tephra thicknesses measured by UGM field teams by 25–45% (2–2.7 cm). Underestimation in proximal areas (within 10 km of the vent) has likely occurred as inversion optimisation does not account for additional sedimentation of tephra from the plume margins (Bonadonna et al., 2005).
Table 1 Ranges of eruption source parameters required for Tephra2 were derived from literature on the 2014 eruption and used to inform ranges for optimisation during inversion. Parameters with 'n.a' (not applicable), are those whose values were determined from inversion optimisation rather than from available literature on the Kelud 2014 eruption.a = Kristiansen et al. (2015), b = Maeno et al. (2019a), c = Goode et al. (2018)
Isopach interpolation method
To provide an alternative dispersal footprint that would not be subject to the same proximal underestimation, isopachs were manually contoured using the same 81 UGM measurements (Fig. 4) and interpolated using a multiple exponential segments method in TephraFits (Fierstein and Nathenson, 1992; Biass et al., 2019). The thickness intercept of the most proximal segment was used to estimate the theoretical maximum accumulation. This value, along with the isopachs, was interpolated using cubic splines in Matlab (i.e., the cubicinterp interpolant of the fit function of the Curve Fitting Toolbox). The resulting surface was exported at a resolution of 500 m. Interpolation of our manually contoured isopachs slightly but consistently overestimated tephra thicknesses measured in the field (Fig. 4D).
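A rough open-source analogue of this interpolation step is sketched below. This is not the Matlab Curve Fitting Toolbox workflow used here; it is a Python/SciPy sketch that assumes scattered points digitised along the hand-drawn isopachs (with the vent location carrying the theoretical maximum accumulation) and returns a gridded surface at the stated 500 m resolution.

import numpy as np
from scipy.interpolate import griddata

def isopach_surface(x, y, thickness_cm, resolution_m=500.0):
    # x, y: coordinates (m) of points along the digitised isopach contours
    # thickness_cm: thickness value of the contour each point belongs to
    xi = np.arange(x.min(), x.max() + resolution_m, resolution_m)
    yi = np.arange(y.min(), y.max() + resolution_m, resolution_m)
    XI, YI = np.meshgrid(xi, yi)
    ZI = griddata((x, y), thickness_cm, (XI, YI), method='cubic')
    return XI, YI, ZI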
Isopachs produced from A) Tephra2 inversion and B) interpolation of manually contoured isopachs. All contours are in cm. Plots C) and D) show agreement between measured and modelled tephra loads. The grey dashed line marks perfect agreement between measured and modelled tephra loads. The solid line is the linear best fit of the data. Red dashed lines mark ±30% deviation of modelled load from measured load. Field measurement data available in Additional file 1. A dry bulk density of 1400 kg m− 3 (Maeno et al. 2019a) was used to calculate loading from thickness measurements
Remotely assessing building damage
Where available, post-eruption impact assessments conducted in the field provide critical information, much of which cannot be obtained via remote sensing alone (Jenkins et al., 2013b). However, observing impacts in the field is complicated by a variety of logistical factors, deposit preservation issues as well as safety and ethical concerns (Wilson et al., 2012; Jenkins et al., 2014; Hayes et al., 2019). Following the Kelud 2014 eruption, carrying out a comprehensive field-based building damage assessment was complicated by the large numbers of buildings damaged across a widespread area, and by rapid building repairs that began less than a week after eruption onset. Despite this, at least two reports were made soon after the eruption documenting some details of the damage. One report from the International Red Cross, released 21 days after eruption onset, stated that in the three regencies surrounding Kelud, 11,903 buildings were 'completely damaged' by tephra and 11 buildings were destroyed by lahars (IFRC, 2014). Unfortunately, this report did not present their survey data or provide descriptions of what 'completely damaged' specifically refers to. Paripurno et al. (2015) also conducted a study of damage from the 2014 eruption, stating that four buildings were destroyed by lahars and that 8719 buildings were damaged by tephra to varying degrees. This study identified three tephra fall thickness range zones (2.5–5, 5–7.5 and 7.5–10 cm) and used these zones to categorise all buildings within that zone into 'light', 'medium' or 'heavy' damage categories respectively, based on the location of the village that a building was from. This included 6647 buildings from three villages that sustained 'heavy damage' (kerusakan berat). Considering nearly all buildings in Kediri regency (to the west of Kelud) were reportedly repaired within a month (Jakarta Post, 2014), it is unlikely that thousands of buildings experienced severe roof or building collapse. Also, media images and videos from some of the most heavily impacted villages do not feature any buildings that have experienced complete roof collapse (e.g. Berita Satu, 2014; Kreer, 2014). Inconsistencies between these reports, rapid building repairs and the wide area over which buildings were damaged all suggest remote sensing as an ideal method for assessing building damage from this eruption.
Damage assessment method
To remotely assess building damage, we manually assessed changes to 1154 structures in seven distinct areas (Fig. 5) using freely available pre- and post-eruption satellite images from Google Earth. Locations in which to assess building damage were selected based on the availability of high-resolution satellite imagery (30–70 cm pixels), the desire to record damage in areas where the most severe damage was reported to have occurred as well as across a wide range of tephra fall hazard intensities, as this is advised for accurate vulnerability assessment (Rossetto et al., 2014; Wilson et al., 2017). For the pre-eruption imagery, the date closest to eruption with freely available, cloud free imagery spanning the majority of the study area was taken on 26 December 2013, 49 days prior to the eruption. The appearance of each building's roof in these images was compared with its post-eruption appearance using images acquired either five, seven or eight days after the eruption (based on varying availability of cloud-free images soon after the eruption in different areas). These images were then compared with ones taken 98 days after the eruption on 19 May 2014 as this is the first freely available, cloud free acquisition covering the entire study area, taken after the majority of building repairs were completed. To facilitate rapid building repairs and minimise rain damage to building interiors, new clay tiles and blue tarpaulins were widely issued after the eruption. When old dark tiles were replaced with new ones this produced a stark colour change visible in both satellite images and those taken on the ground (e.g. Fig. 2A and Fig. 6). Tiles were occasionally replaced with a light grey roof cover which, based on field surveys, is likely to be asbestos fibre sheeting (Blake et al.,2015). Similarly, roofs that were grey to dark grey pre-eruption would often have a slight but noticeably lighter appearance after the eruption (e.g. Fig. 2C). Again, we interpret this as replacement of old, damaged or collapsed asbestos fibre sheets based on limited ground-truthing using media images and observations from Blake et al. (2015). These colour changes were used to infer the extent of repairs that were carried out after the eruption, as a proxy for damage. Commonly observed changes in roof appearance were used to develop a damage state scale and a single damage state was assigned to each assessed building (Fig. 7). Damage state descriptions and the observable changes used to assign damage states to buildings are given in Table 2. The four-tiered scale here differs from the six-tiered scales used in previous studies because the available satellite images could not be used to distinguish so many different levels of damage. Especially 'light' DS1 damage, which for example could include damage to water tanks or roof guttering, was not observable in satellite images.
Location of the 7 villages included in the damage survey including buildings captured in OpenStreetMap (OSM). Surveyed villages from northernmost to southernmost are numbered: 1, 2, 3 (NNW of vent), 4 (NE of vent), 5,6,7 (W-SW of vent)
Images of Pandansari Village (village number 4) taken A) within the village by Kiran Kreer and B) from a Maxar Technologies satellite, available on Google Earth. Photos taken 9 and 14 days after the eruption respectively
Examples of damage states assigned to buildings in (A) Puncu district, village number 2 and (B) Pandansari Village, village number 4, based on changes in appearance between 49 days pre-eruption and 98 days post-eruption. Green building footprints represent DS0/1, light blue DS2, orange DS3 and red DS4/5
Table 2 The 4-tiered damage state scale used in this study, based on changes in roof appearance
Damage assessment results
The heaviest estimated tephra loads experienced by buildings we assessed were 144 kg m− 2, equivalent to 10.3 cm of dry tephra from this eruption. In line with these relatively modest tephra loads, only 27 of the 1154 buildings we assessed (2.3%) showed signs that they had experienced severe roof or building collapse, despite deliberately assessing damage in villages reported to have sustained the heaviest damage. Grey roofed buildings, likely made of asbestos fibre roof sheets make up 11% (n = 127) of all those surveyed but a disproportionate 26% (n = 7) of the DS4/5 observations, implying that buildings with such roofs fail at lower loads than those with tile roofs. As the number of buildings observed as DS4/5 is relatively small, the loads leading to collapse may not be representative of the true collapse load for buildings in this region and may explain why the median tephra load for DS4/5 buildings is not higher than those with DS3 (Fig. 8). The majority of buildings we assessed appeared to have tiled roofs (89%, n = 1027). Two surprising damage patterns were displayed by buildings with tiled roofs and it is likely that these are applicable to many such buildings throughout Java. Firstly, of the 790 buildings displaying signs of damage, a large proportion (56%, n = 464) appear to have had their entire roof covering replaced with new clay tiles. Secondly, repairs for many buildings were concentrated along the edges of roofs and, in 172 cases, these were the only parts of the roof which appear to have received any repairs. The edges of these roofs are likely to be verandas or eaves overhanging the building's external walls (Idham, 2018), both of which have been identified as particularly vulnerable in previous eruptions in other countries (e.g Spence et al., 1996; Blong, 2003) and in this eruption (Blake et al., 2015). After these sections of roofs were repaired, they often occupied a larger area than they had prior to the eruption. Measuring the building footprints of 30 buildings randomly sampled from the 172 that had repairs along their edges, we found that 20 of these had increased their footprint size by an average of about 20% compared to pre-eruption. The median tephra fall thickness modelled for roof overhang and veranda damage in this eruption was 6.2 or 8.2 cm thick for the inverted and interpolated tephra hazard layers respectively.
Distribution of tephra loads for all surveyed buildings in each damage state for both hazard models. Number of observations for each group indicated above the plot. 89% of buildings have tiled roofs with the other 11% classified as grey roofs, which are assumed to be made of asbestos fibre sheeting. The boxes reflect the central 50% of values and the horizontal black line is the median. The edges of the whiskers extend up to 1.5 times the interquartile range. Any observations beyond these points are considered outliers that are shown as dots
The damage states assigned to buildings, together with the hazard intensities estimated to have caused the damage, constitute the raw data required to develop fragility curves. This section outlines the fragility curve fitting and cross-validation procedures and describes how the curves can be used to estimate or forecast damage. Fragility curves were fit for the two main groups of buildings identified: those with tiled roofs and those with grey roofs. Grey roofs could be made of various materials, including reinforced concrete and sheet metal, but based on field surveys and media photos we assume that the majority are made of corrugated asbestos fibre sheets, with both these and tiled roofs typically supported on timber framing.
Fragility curve fitting and prediction accuracy
For both recognised building types, two sets of fragility curves were fit, one for each hazard characterisation approach (Fig. 9). The curves were fit to data using a cumulative link model (CLM) and take the typical form of a log-normal cumulative density function used in the vast majority of published parametric fragility curves (Rossetto and Ioannou, 2018). A CLM is a type of generalised linear model that makes use of the ordinality of damage states (i.e. light damage < moderate damage < heavy damage). There are several benefits to using CLMs compared to other statistical approaches commonly used in the past (Lallemant et al., 2015; Williams et al., 2019). One key advantage is that individual damage state curves can be fit simultaneously using observations from the entire data set. This becomes important in the commonly arising situation where there are relatively few observations for a particular damage state (as is the case in this study for DS4/5). A second advantage is that when curves are fit using a CLM, curves for successive damage states cannot cross each other. This undesirable characteristic needs to be avoided if curves are to be used to forecast damage or if the prediction accuracy of the curves is to be assessed. The equation for fitting fragility curves using a CLM and the best fit curve parameters for the 12 new fragility curves from this study are given in the appendix.
Fragility curves fit using tephra loads from both hazard models for A) tiled roofs and B) grey roofs. Density plots show differing distributions of tephra loads across the 1154 buildings used to fit curves, and that all data are from hazard intensities below 200 kg m− 2. For comparison with published curves, the black dotted lines in A) and B) are the roof collapse curves for tiled and asbestos sheet roofs from Jenkins and Spence (2009). Their annotations, Dtf and Atf, reflect the curve labels from that study. R script, Microsoft Excel spreadsheet and raw data to fit curves are available on Github (https://github.com/flying-rock/kelud14). Curve parameters given in the Appendix
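The CLM fitting itself can be reproduced with standard ordinal-regression tools. The sketch below is our own, written in Python with statsmodels rather than the R script and spreadsheet released with this article; it fits an ordered probit to the logarithm of tephra load, which corresponds to the log-normal cumulative-density form of the curves. Column names and damage-state labels are assumed for illustration.

import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

DS_LEVELS = ['DS0/1', 'DS2', 'DS3', 'DS4/5']

def fit_fragility_clm(df):
    # df: one row per building, columns 'load' (kg/m^2) and 'ds' (damage state label)
    ds = pd.Series(pd.Categorical(df['ds'], categories=DS_LEVELS, ordered=True))
    exog = pd.DataFrame({'log_load': np.log(df['load'].to_numpy())})
    model = OrderedModel(ds, exog, distr='probit')  # probit on log(load) ~ log-normal CDF
    return model.fit(method='bfgs', disp=False)

def exceedance_probabilities(result, loads):
    # returns P(DS >= DS2), P(DS >= DS3), P(DS >= DS4/5) at the given loads
    newx = pd.DataFrame({'log_load': np.log(np.asarray(loads, dtype=float))})
    state_probs = np.asarray(result.predict(newx))   # P(DS = each state), one row per load
    cum = np.cumsum(state_probs, axis=1)
    return 1.0 - cum[:, :-1]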
We calculate prediction accuracies by conducting K-fold cross validation. K-fold cross validation requires partitioning damage data into K randomly sampled, equally sized groups, using one group as the test set (k) and the remaining groups as the training set to fit fragility curves. Damage states are then predicted using the hazard intensities from the test set and accuracy is calculated by comparing the predicted damage states of all buildings to their actual observed damage states. This process is repeated K times using a different group as the test set each time and the average model accuracy across all K validation tests is obtained. K-fold cross validation typically uses 5 or 10 folds. In this study we used a K value of 5 as splitting data into 10 groups would have a higher probability of producing some groups containing no DS4/5 observations. We iterated this 5-fold cross validation 50 times to produce a stable overall average accuracy score. To predict a discrete damage state using tephra load and fragility curves, a random number between 0 and 1 is generated for each building and compared to the exceedance probability at that building's tephra load. Starting with the highest damage state to the lowest, the predicted damage state for a building is the first one whose probability is higher than the randomly generated number. If the randomly generated number is higher than all three damage state probabilities, DS0/1 is assigned. For example, taking an inversion tephra load of 200 kg m− 2 on a tiled roof, the probabilities of reaching or exceeding DS4/5, DS3 and DS2 are 0.1, 0.75 and 0.9 respectively. Randomly generated numbers of 0.05, 0.5 or 0.85 would assign such a building as DS4/5, DS3 and DS2, respectively. In this way, with a sufficient number of buildings, the distribution of damage between the different damage states will be appropriately represented at any hazard intensity.
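The random-number assignment described above can be sketched as follows (illustrative Python; the exceedance probabilities would come from the fragility curves evaluated at the building's tephra load).

import numpy as np

DS_LEVELS = ['DS0/1', 'DS2', 'DS3', 'DS4/5']

def sample_damage_state(p_exceed, rng):
    # p_exceed: [P(>=DS2), P(>=DS3), P(>=DS4/5)] at the building's tephra load
    u = rng.random()
    for ds, p in zip(DS_LEVELS[:0:-1], p_exceed[::-1]):  # highest damage state first
        if u < p:
            return ds
    return 'DS0/1'

rng = np.random.default_rng(0)
# worked example from the text: P(>=DS2)=0.9, P(>=DS3)=0.75, P(>=DS4/5)=0.1
print([sample_damage_state([0.9, 0.75, 0.1], rng) for _ in range(5)])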
Using the approach above, accuracy can be calculated following Eq. 1.
$$ \mathrm{Exact}\ \mathrm{model}\ \mathrm{accuracy}=\frac{1}{K}\sum \limits_{k=1}^K\frac{n_{\mathrm{correct}\ \mathrm{predictions}}}{N_{\mathrm{test}\ \mathrm{set}}} $$
Where K is the number of groups the data is split into for K-fold cross validation, Ntest set is the number of buildings in the test set whose damage state is being predicted (roughly the total number of buildings divided by K) and ncorrect predictions is the number of buildings whose predicted damage state matches the observed damage state. This measure of accuracy has the advantage of being simple to calculate and interpret. However, when this measure of accuracy is used on ordinal models, as is the case here, its main shortcoming is that it does not make use of the ordered nature of damage states. For example, if our model misclassifies a building as being DS2 when it was observed to be DS0/1, this error is not as large as if the model had classified it as DS4/5. Unfortunately, this information is lost during simple accuracy calculations. To take the size of discrepancies between observed and predicted damage states into account, we adopt the approach proposed by Rennie and Srebro (2005) and Charvet et al. (2015) to calculate a weighted prediction accuracy. This requires calculating the level of misclassification (i.e. the absolute difference between predicted and observed damage levels) for each data point then dividing this value by the maximum possible difference (NDS − 1).
$$ \mathrm{Penalised}\ \mathrm{accuracy}=1-\frac{\left|{DS}_i-{\hat{DS}}_i\right|}{N_{DS}-1} $$
Where DSi and \( {\hat{DS}}_i \) are the observed and predicted damage states for the ith observation and NDS refers the total number of different damage states, which in this study is four (DS0/1 to DS4/5). Once the penalised accuracy has been calculated for each building using Eq. 2, the overall penalised accuracy of the model can be calculated during cross validation following Eq. 3.
$$ \mathrm{Penalised}\ \mathrm{model}\ \mathrm{accuracy}=\frac{1}{K}\sum \limits_{k=1}^K\left[\frac{1}{N_{\mathrm{test}\ \mathrm{set}}}\sum \limits_{i=1}^{N_{\mathrm{test}\ \mathrm{set}}}\left(1-\frac{\left|{DS}_i-{\hat{DS}}_i\right|}{N_{DS}-1}\right)\right] $$
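For completeness, Eqs. 1–3 amount to the following per-fold computation (a sketch assuming damage states are coded as integers 0–3 for DS0/1–DS4/5; Eq. 3 then averages the penalised score over the K folds).

import numpy as np

def exact_accuracy(observed, predicted):
    # Eq. 1, evaluated on one test fold
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return float(np.mean(observed == predicted))

def penalised_accuracy(observed, predicted, n_ds=4):
    # mean of Eq. 2 over the buildings in one test fold
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return float(np.mean(1.0 - np.abs(observed - predicted) / (n_ds - 1)))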
Exact and penalised accuracy scores for all four sets of fragility curves are given in Table 3. The various sets of fragility curves predict the exact observed damage state 40–44% of the time. Considering there are four possible damage state categories, a model that predicts damage states at random should make an exactly accurate prediction around 25% of the time. Comparatively, for the weighted prediction (penalised model) accuracy, a perfectly random predictor would have a score of 0.58 following Eq. 3, while our curves provide scores from 0.68 to 0.75. Figure 10 illustrates how the set of fragility curves with the highest accuracy perform compared to random prediction, with nearly 45% of predictions exactly matching the observed DS, a further 40% being within 1 damage state and approximately 15% being misclassified by > 1 damage state.
Table 3 Accuracy scores from fivefold-cross validation on all sets of fragility curves
Average number of buildings in each level of misclassification (subtracting predicted DS level from observed DS level) for the best performing set of cross-validated fragility curves (tiled roof, inversion hazard layer) compared to a perfectly random prediction. Labels above bars give percentage of buildings within each level of misclassification. This plot represents one test set, i.e. one-fifth (n = 206) of all the tiled roof buildings surveyed
At relatively low hazard intensities, the curves fit using inversion-derived hazard intensities model buildings as more vulnerable than the curves fit using interpolation. Conversely, at relatively high hazard intensities (> 150 kg m− 2), the inversion curves model buildings as less vulnerable and appear to unrealistically underestimate the likelihood of roof or building collapse (DS4/5). When both DS4/5 curves are compared to published fragility curves, the interpolated DS4/5 curves more closely approximate the two roof collapse curves from Jenkins and Spence (2009) that are likely to be most representative of buildings surrounding Kelud, namely tiled and asbestos roofs in 'average to good condition' (Fig. 9).
Lessons learnt from Kelud
Communities surrounding Kelud displayed a great capacity to recover following the 2014 eruption, repairing thousands of buildings within less than a month of evacuation orders being lifted. Rapid repairs were facilitated by the almost immediate provision of aid in the form of readily available roof cladding materials and military personnel. While it is important for communities to return home and quickly repair their buildings, two aspects of the recovery may have increased building vulnerability, going against one of the key priorities of the 2015–2030 Sendai Framework Disaster Risk Reduction to "Build Back Better" (United Nations, 2015). Firstly, the many buildings in Pandansari village with new tile roofs may now exhibit a marginally reduced vulnerability to tephra loading but an increased vulnerability to energetic impacts from large clasts. The clay tiles that were widely distributed after the eruption were 3 mm thinner than the typical, 15 mm thick tiles in place prior to the eruption. Reducing the thickness of tiles by 3 mm slightly decreases the load they place on a roof, presumably increasing the load of tephra the roof frame can support by an equal amount. However, 15 mm thick clay tiles are already exceptionally vulnerable to shattering under impact from large clasts (with a threshold of ~ 20 joules), and making tiles thinner will have reduced the impact energy and associated minimum clast size required to exceed damage thresholds (Osman et al., 2018; Williams et al., 2019). Secondly, vulnerability is also likely to have been increased in the many cases where repairs have markedly enlarged overhanging and veranda sections of roofs. These sections of roofs have been identified as vulnerable in previous studies but there are multiple reasons why the buildings in this region might be particularly susceptible to roof overhang collapse. Firstly, the flared multi-pitch design of many roofs surrounding Kelud promotes tephra shedding onto the more shallowly pitched roof overhangs. The heavy rain that fell four days after the eruption would likely have further increased tephra shedding onto roof overhangs (Hampton et al., 2015; Jones et al., 2017) and increased deposit bulk density (Macedonio and Costa, 2012). This, combined with accounts of some roofs not collapsing until after the heavy rainfall supports the hypothesis that many overhangs may have been damaged only after receiving additional water saturated tephra mobilised from steep upper sections of the roof catchment. Increasing the size of overhangs allows them to shade and cool the building more effectively while also providing additional living space. However, this is often an area where visitors are given a place to sleep (Idham, 2018), meaning the relatively high vulnerability of these sections of roofs likely has life-safety implications in future eruptions. Specifically, people who have evacuated to seek shelter in villages farther downwind, may find themselves staying beneath a section of roof that is highly prone to collapse.
Implications for damage and vulnerability assessment
We show that free media and satellite images can be used to assess the degree of repair buildings have received following a tephra fall. By assuming that the degree of visible repairs are a proxy for damage severity, damage can be compared with tephra fall hazard intensity to develop new building vulnerability models. Testing prediction accuracy is important for any vulnerability model and may be even more important when models are developed using unconventional methods such as these. The fragility curves we developed using remote surveys have accuracies comparable to those developed using field-based building damage surveys from other hazards. For example, curves developed by Macabuag et al. (2016) using data from the 2011 Tō hoku tsunami and the same cumulative link model curve fitting approach had penalised accuracy rates between 0.71–0.81, marginally higher but comparable to our 0.68–0.75. It is important to note that our fragility curve accuracies cannot be compared to those from any volcanic vulnerability study as no previously published studies have conducted fragility curve accuracy testing. This is likely due to a general perception that insufficient data are available to warrant the use of such tests (e.g. Wilson et al., 2017) and because there are few fragility curves published within volcanology to begin with (compared to other natural hazards). Remote sensing can greatly increase the amount of damage data available for vulnerability assessment, particularly in a post-disaster context where time to conduct field surveys can be highly constrained (e.g. Mas et al., 2020; Williams et al., 2020). Remote sensing enables rapid data collection for large numbers of buildings over relatively wide areas and allows surveys to be carried out years after the damage occurred, if adequate satellite imagery is available. Surveying a wide area remotely can also help to focus field missions, identifying areas that are important to survey in more detail. Damage surveys conducted remotely should be considered highly complementary to field-based surveys, which are capable of providing highly detailed information but usually for a relatively small number of buildings.
Past research on forecasting tephra fall impacts to buildings and all previously developed fragility curves, placed a focus on identifying loads likely to cause severe roof or building collapse, driven by life-safety concerns (Spence et al., 2005; Zuccaro et al., 2008; Jenkins and Spence, 2009). In any given tephra fall however, exponential thinning with distance means that light tephra falls cover a relatively large area and therefore buildings receiving relatively light damage are likely to far outnumber collapses (Blong et al., 2017b), as was the case in the Kelud 2014 eruption. In cases such as this, repair costs associated with non-collapse damage might contribute substantially to the total cost of recovery, so it is important for future studies of tephra fall impacts to buildings to determine under what hazard intensities relatively light – moderate damage occurs.
Limitations and future work
A major limitation of this study's remote damage survey is that, for most buildings, damage severity has not been directly observed but rather inferred based on the extent of visible repairs. This is problematic because aid for repairs is unlikely to have been evenly distributed amongst all regencies and is unlikely to be distributed in the same way in the future if a much larger number of buildings are damaged. One example of unequal distribution comes from, Tanggung Mulyo 10 km north north-west of Kelud. This village was initially not given free materials by the local government for repairs because the buildings there were deemed to be in poor condition prior to the eruption (Sutriyanto, 2014). Had this village not received building materials from other institutions two months after the eruption, the relative lack of visible roof cladding replacement might give the impression that these buildings were in good condition and highly resilient to DS2 and DS3 damage when in fact the opposite could be true.
If the fragility curves developed from this eruption are to be used in forecasting damage in future eruptions, either at Kelud or at other volcanoes, several issues need to be considered. Firstly, with only 27 DS4/5 roof or building collapses observed, comprising 20 tiled roofs and just 7 grey roofs, fragility curves produced for this damage state in particular should be used with caution. Secondly, considering the fragility curves derived using the interpolated and the inverted tephra loads differ from each other, either both sets will need to be used to give a range for the number of damaged buildings or a decision will need to be made on which set is more appropriate for use in the given study. The accuracy scores for the inversion derived curves were slightly higher than those from interpolation. However, at hazard intensities higher than those observed in this eruption, the inversion DS4/5 curves appear to more strongly underestimate roof collapse probabilities than the interpolation DS4/5 curves, when both are compared to previously published roof collapse curves for similar building types. Lastly, these fragility curves have assumed loads from dry tephra deposits, but some buildings were reported to have only sustained damage after heavy rainfall, which was likely to have increased tephra loads by up to 30% using the saturation assumption method of Macedonio and Costa (2012). Also, as previously noted, the style of roofs in this region may be unique to Java, with flatter roofs perhaps being more prone to severe collapse (DS4/5) and relatively less prone to overhang collapse (DS2) triggered by tephra shedding onto the overhangs. Future work could also focus on better constraining collapse loads for these buildings given the larger individual cost to replace collapsed roofs relative to just the overhang, and the associated life-safety concerns of total roof collapse.
In characterising tephra hazard, uncertainties are associated with the initial measurement of deposit thickness (Engwell et al., 2013), as well as the dispersal modelling and isopach drawing process (Scollo et al., 2008; Engwell et al., 2015; Yang and Bursik, 2016). The discrepancies between the two sets of fragility curves are solely driven by the differences between two methods of characterising the tephra dispersal for the same set of field measurements. Hayes et al. (2019) faced a similar issue in determining the tephra thicknesses that had fallen on buildings surrounding Calbuco using two separate sets of isopachs. The authors noted that in the future, uncertainty could be substantially reduced by taking hazard intensity measurements at the site of each damage observation. This measurement would make estimation of hazard intensity from dispersal modelling or isopach maps unnecessary. In addition to taking a tephra thickness measurement at each site, deposit bulk density should also be measured as density is likely to vary from site to site based on the deposit's grainsize distribution, compactness, and degree of saturation. In areas with steep or multi-pitched roofs, observations of any tephra shedding should be made and ideally measurements should be taken from the roof itself. Taking such measurements would be time consuming and/or potentially dangerous and may therefore be unrealistic unless field teams are well trained and have sufficient personnel. Also, this approach requires a field team whose primary aim is to assess building damage, which is often not the case. If remote damage surveys are instead using measurements taken by teams focused on physical volcanology research, vulnerability assessment could be improved by attempting to only fit fragility curves using damage observations made within a set distance of robust tephra deposit measurements.
The February 2014 eruption of Kelud produced tephra falls that damaged thousands of buildings around the volcano. A total of 1154 buildings were remotely surveyed and damage was categorised into one of three damage states. Relatively few buildings experienced severe roof or building collapse (DS4/5), likely because nearby villages were not exposed to tephra deposits > 10.3 cm thick, with an equivalent dry deposit load of 144 kg m− 2. We found that DS4/5 damage occurred at a minimum tephra thickness of 3.3 and 6.2 cm for the inverted and interpolated hazard layers, respectively. Data from the damage survey were used to produce new fragility curves. Their prediction accuracy was assessed and found to be only slightly lower than that of comparable fragility curves produced using field-based damage surveys for tsunami hazards. Our study highlights that the choice of interpolation method for tephra thickness field measurements influences the results of vulnerability assessment, which can then propagate into subsequent impact and risk assessment. This uncertainty can be reduced in future studies by taking a higher number of tephra deposit measurements and samples, ideally at the site of each damage observation as well as on the roof of the building if it is safe to do so. Of course, detailed field measurements such as these may be difficult to take in the immediate aftermath of an eruption and cannot be taken when damage surveys are conducted remotely months to years later. The opposing strengths and weaknesses of remote damage assessment and traditional, field-based damage surveys make these two approaches highly complementary to each other in efforts to deepen our understanding of tephra fall building vulnerability.
All tephra deposit measurements (.csv file), the R code (.Rmd file), an Excel spreadsheet (.xls) and survey data (.csv file) used to fit and cross-validate fragility curves are available in Additional files 1, 2, 3 and 4, respectively. They are also available on GitHub at https://github.com/flying-rock/kelud14.
DS:
Damage state
ECMWF:
European Centre for Medium-Range Weather Forecasts
ESP:
Eruption source parameters
GVP:
Global Volcanism Program
IFRC:
International Federation of the Red Cross
km asl:
kilometers above sea level
OSM:
OpenStreetMap
UGM:
Universitas Gadjah Mada
Andreastuti S, Paripurno E, Gunawan H, Budianto A, Syahbana D, Pallister J (2017) Character of community response to volcanic crises at Sinabung and Kelud volcanoes. J Volcanol Geothermal Res. https://doi.org/10.1016/j.jvolgeores.2017.01.022
Anggorowati A, Harijoko A (2015) Distribusi area, volume, serta karakteristik mineralogi dan geokimia endapan tefra jatuhan dari erupsi Gunung Kelud tahun 2014. In: Seminar Nasional Kebumian KE-8: Academia-Industry Linkage, pp 778–789 https://repository.ugm.ac.id/135520/
Barclay J, Few R, Armijos MT, Phillips JC, Pyle DM, Hicks A, Brown SK, Robertson REA (2019) Livelihoods, Wellbeing and the Risk to Life During Volcanic Eruptions. Front Earth Sci 7:1–15. https://doi.org/10.3389/feart.2019.00205
Biass S, Bonadonna C, Connor L, Connor C (2016) TephraProb: a Matlab package for probabilistic hazard assessments of tephra fallout. J Appl Volcanol 5:10. https://doi.org/10.1186/s13617-016-0050-5
Biass S, Bonadonna C, Houghton BF (2019) A step-by-step evaluation of empirical methods to quantify eruption source parameters from tephra-fall deposits. J Appl Volcanol 8:1–16. https://doi.org/10.1186/s13617-018-0081-1
Biass S, Jenkins S, Lallemant D, Lim TN, Williams G, Yun S-H (2021) Remote sensing of volcanic impacts. In: Papale P (ed) Forecasting and planning for volcanic hazards, risks, and disasters, Elsevier Inc. https://doi.org/10.1016/B978-0-12-818082-2.00012-3
Blake DM, Wilson G, Stewart C, Craig H, Hayes J, Jenkins SF, Wilson TM, Horwell CJ, Daniswara R, Ferdiwijaya D, Leonard GS, Hendrasto M, Cronin S (2015) Impacts of the 2014 eruption of Kelud volcano, Indonesia, on infrastructure, utilities, agriculture and health, pp 1–131
Blong R (1984) Volcanic hazards: a sourcebook on the effects of eruptions. Elsevier, p 424
Blong R (2003) Building damage in Rabaul, Papua New Guinea, 1994. Bull Volcanol 65:43–54. https://doi.org/10.1007/s00445-002-0238-x
Blong R, Enright N, Grasso P (2017a) Preservation of thin tephra. J Appl Volcanol 6. https://doi.org/10.1186/s13617-017-0059-4
Blong RJ, Grasso P, Jenkins SF, Magill CR, Wilson TM, McMullan K, Kandlbauer J (2017b) Estimating building vulnerability to volcanic ash fall for insurance and other purposes. J Appl Volcanol 6:2. https://doi.org/10.1186/s13617-017-0054-9
Bonadonna C, Connor CB, Houghton BF, Connor L, Byrne M, Laing A, Hincks TK (2005) Probabilistic modeling of tephra dispersal: Hazard assessment of a multiphase rhyolitic eruption at Tarawera, New Zealand. J Geophys Res B 110:1–21. https://doi.org/10.1029/2003JB002896
Bourdier J-L, Pratomo I, Thouret J-C, Boudon G, Vincent PM (1997) Observations, stratigraphy and eruptive processes of the 1990 eruption of Kelut volcano, Indonesia. J Volcanol Geothermal Res 79:181–203. https://doi.org/10.1016/S0377-0273(97)00031-0
Brown SK, Sparks RSJ, Mee K, Vye-Brown C, Ilyinskaya E, Jenkins SF, Loughlin SC (2015) Country and regional profiles of volcanic hazard and risk. Global Volcanic Hazards Risk:797. https://doi.org/10.1017/CBO9781316276273
Caudron C, Taisne B, Garcés M, Alexis LP, Mialle P (2015) On the use of remote infrasound and seismic stations to constrain the eruptive sequence and intensity for the 2014 Kelud eruption. Geophys Res Lett 42:6614–6621. https://doi.org/10.1002/2015GL064885
Charvet I, Suppasri A, Kimura H, Sugawara D, Imamura F (2015) A multivariate generalized linear tsunami fragility model for Kesennuma City based on maximum flow depths, velocities and debris impact, with evaluation of predictive accuracy. Nat Hazards 79:2073–2099
Connor L, Connor C (2006) Inversion is the key to dispersion: understanding eruption dynamics by inverting tephra fallout. In: Statistics in Volcanology
Dee DP, Uppala SM, Simmons AJ, Berrisford P, Poli P, Kobayashi S, Andrae U, Balmaseda MA, Balsamo G, Bauer P, Bechtold P, Beljaars ACM, van de Berg L, Bidlot J et al (2011) The ERA-interim reanalysis: Configuration and performance of the data assimilation system. Q J R Meteorol Soc 137:553–597. https://doi.org/10.1002/qj.828
Dibyosaputro S, Dipayana GA, Nugraha H, Pratiwi K, Valeda HP (2015) Lahar at Kali Konto after the 2014 Eruption of Kelud Volcano, East Java: Impacts and Risk. Forum Geografi 29:59–72. https://doi.org/10.23917/forgeo.v29i1.793
Engwell SL, Aspinall WP, Sparks RSJ (2015) An objective method for the production of isopach maps and implications for the estimation of tephra deposit volumes and their uncertainties. Bull Volcanol 77. https://doi.org/10.1007/s00445-015-0942-y
Engwell SL, Sparks RSJ, Aspinall WP (2013) Quantifying uncertainties in the measurement of tephra fall thickness. J Appl Volcanol 2. https://doi.org/10.1186/2191-5040-2-5
Fierstein J, Nathenson M (1992) Another look at the calculation of fallout tephra volumes. Bull Volcanol 54:156–167. https://doi.org/10.1007/BF00278005
Freire S, Florczyk A, Pesaresi M, Sliuzas R (2019) An Improved Global Analysis of Population Distribution in Proximity to Active Volcanoes, 1975–2015. ISPRS Int J Geo Information 8:341. https://doi.org/10.3390/ijgi8080341
Gamba P, Dell'Acqua F, Trianni G (2007) Rapid damage detection in the bam area using multitemporal SAR and exploiting ancillary data. IEEE Trans Geosci Remote Sens 45:1582–1589. https://doi.org/10.1109/TGRS.2006.885392
Goode LR, Handley HK, Cronin SJ, Abdurrachman M (2018) Insights into eruption dynamics from the 2014 pyroclastic deposits of Kelut volcano, Java, Indonesia, and implications for future hazards. J Volcanol Geothermal Res. https://doi.org/10.1016/j.jvolgeores.2018.02.005
GVP (2014) Report on Kelut (Indonesia). In: Wunderman R (ed) Bulletin of the Global Volcanism Network, Smithsonian Institution. https://doi.org/10.5479/si.GVP.BGVN201402-263280
Hampton SJ, Cole JW, Wilson G, Wilson TM, Broom S (2015) Volcanic ashfall accumulation and loading on gutters and pitched roofs from laboratory empirical experiments: Implications for risk assessment. J Volcanol Geothermal Res 304:237–252. https://doi.org/10.1016/j.jvolgeores.2015.08.012
Hayes JL, Calderón BR, Deligne NI, Jenkins SF, Leonard GS, McSporran AM, Williams GT, Wilson TM (2019) Timber-framed building damage from tephra fall and lahar: 2015 Calbuco eruption, Chile. J Volcanol Geothermal Res 374:142–159. https://doi.org/10.1016/j.jvolgeores.2019.02.017
Hidayati S, Triastuty H, Mulyana I, Adi S, Ishihara K, Basuki A, Kuswandarto H, Priyanto B, Solikhin A (2019) Differences in the seismicity preceding the 2007 and 2014 eruptions of Kelud volcano, Indonesia. J Volcanol Geother Res 382:50–67. https://doi.org/10.1016/j.jvolgeores.2018.10.017
Hizbaron DR, Hadmoko DS, Mei ETW, Murti SH, Laksani MRT, Tiyansyah AF, Siswanti E, Tampubolon IE (2018) Towards measurable resilience: Mapping the vulnerability of at-risk community at Kelud Volcano, Indonesia. Appl Geography 97:212–227. https://doi.org/10.1016/j.apgeog.2018.06.012
Idham NC (2018) Javanese vernacular architecture and environmental synchronization based on the regional diversity of Joglo and Limasan. Front Architec Res 7:317–333. https://doi.org/10.1016/j.foar.2018.06.006
IFRC (2014) Emergency plan of action (EPoA) Indonesia: Volcanic eruption - Mt Kelud. International Federation of Red Cross and Red Crescent Societies. http://reliefweb.int/report/indonesia/indonesia-volcanic-eruption-mt-kelud-emergency-plan-action-epoa-operation-n
Jakarta Post (2014) Buildings in Mt. Kelud shadow rise from ashes. The Jakarta Post. http://www.thejakartapost.com/news/2014/03/09/buildings-mt-kelud-shadow-rise-ashes.html
Jeffery AJ, Gertisser R, Troll VR, Jolis EM, Dahren B, Harris C, Tindle AG, Preece K, O'Driscoll B, Humaida H, Chadwick JP (2013) The pre-eruptive magma plumbing system of the 2007-2008 dome-forming eruption of Kelut volcano, East Java, Indonesia. Contrib Mineral Petrol 166:275–308. https://doi.org/10.1007/s00410-013-0875-4
Jenkins SF, Day SJ, Faria BVE, Fonseca JFBD (2017) Damage from lava flows: insights from the 2014–2015 eruption of Fogo, Cape Verde. J Appl Volcanol 6. https://doi.org/10.1186/s13617-017-0057-6
Jenkins SF, Komorowski JC, Baxter PJ, Spence R, Picquout A, Lavigne F, Surono (2013b) The Merapi 2010 eruption: An interdisciplinary impact assessment methodology for studying pyroclastic density current dynamics. J Volcanol Geothermal Res 261:316–329. https://doi.org/10.1016/j.jvolgeores.2013.02.012
Jenkins SF, Komorowski JC, Baxter PJ, Spence R, Picquout A, Lavigne F, Surono (2013a) The Merapi 2010 eruption: An interdisciplinary impact assessment methodology for studying pyroclastic density current dynamics. J Volcanol Geothermal Res 261:316–329. https://doi.org/10.1016/j.jvolgeores.2013.02.012
Jenkins SF, Spence RJS (2009) Vulnerability curves for buildings and agriculture: A report for MIA-VITA, p 61
Jenkins SF, Spence RJS, Fonseca JFBD, Solidum RU, Wilson TM (2014) Volcanic risk assessment: Quantifying physical vulnerability in the built environment. J Volcanol Geothermal Res 276:105–120. https://doi.org/10.1016/j.jvolgeores.2014.03.002
Jenkins SF, Wilson TM, Magill C, Miller V, Stewart C, Blong R, Marzocchi W, Boulton M, Bonadonna C, Costa A (2015) Volcanic ash fall hazard and risk, pp 173–222. https://doi.org/10.1017/CBO9781316276273.005
Jones R, Thomas RE, Peakall J, Manville V (2017) Rainfall-runoff properties of tephra: Simulated effects of grain-size and antecedent rainfall. Geomorphology 282:39–51. https://doi.org/10.1016/j.geomorph.2016.12.023
Kreer K (2014) Homeless after Mount Kelud volcanic eruption, Indonesia. http://www.imkiran.com/homeless-after-mount-kelud-eruption/ (accessed September 2020)
Kristiansen NI, Prata AJ, Stohl A, Carn SA (2015) Stratospheric volcanic ash emissions from the 13 February 2014 Kelut eruption. Geophys Res Lett 42:588–596
Lallemant D, Kiremidjian A, Burton H (2015) Statistical procedures for developing earthquake damage fragility curves. Int Assoc Earthquake Eng 44:1373–1389. https://doi.org/10.1002/eqe
Macabuag J, Rossetto T, Ioannou I, Suppasri A, Sugawara D, Adriano B, Imamura F, Eames I, Koshimura S (2016) A proposed methodology for deriving tsunami fragility functions for buildings using optimum intensity measures. Nat Hazards 84:1257–1285. https://doi.org/10.1007/s11069-016-2485-8
Macedonio G, Costa A (2012) Brief communication: Rain effect on the load of tephra deposits. Natural Hazards Earth Syst Sci 12:1229–1233. https://doi.org/10.5194/nhess-12-1229-2012
Maeno F, Nakada S, Yoshimoto M, Shimano T, Hokanishi N, Zaennudin A, Iguchi M (2019a) A sequence of a plinian eruption preceded by dome destruction at Kelud volcano, Indonesia, on February 13, 2014, revealed from tephra fallout and pyroclastic density current deposits. J Volcanol Geothermal Res. https://doi.org/10.1016/j.jvolgeores.2017.03.002
Maeno F, Nakada S, Yoshimoto M, Shimano T, Hokanishi N, Zaennudin A, Iguchi M (2019b) Eruption pattern and a long-term magma discharge rate over the past 100 years at Kelud volcano, Indonesia. J Disaster Res 14:27–39. https://doi.org/10.20965/jdr.2019.p0027
Magill C, Wilson T, Okada T (2013) Observations of tephra fall impacts from the 2011 Shinmoedake eruption, Japan. Earth Planets Space 65:677–698. https://doi.org/10.5047/eps.2013.05.010
Mas E, Paulik R, Pakoksung K, Adriano B, Moya L, Suppasri A, Muhari A, Khomarudin R, Yokoya N, Matsuoka M, Koshimura S (2020) Characteristics of tsunami fragility functions developed using different sources of damage data from the 2018 Sulawesi earthquake and tsunami. Pure Appl Geophys. https://doi.org/10.1007/s00024-020-02501-4
Nakada S, Zaennudin A, Maeno F, Yoshimoto M, Hokanishi N (2016) Credibility of volcanic ash thicknesses reported by the media and local residents following the 2014 eruption of Kelud volcano, Indonesia. J Disaster Res 11:53–59
OSM (2019) OpenStreetMap. http://www.openstreetmap.org/ (accessed 26 November 2019)
Osman S, Rossi E, Bonadonna C, Frischknecht C, Andronico D, Cioni R, Scollo S (2018) Exposure-based risk assessment and emergency management associated with the fallout of large clasts. Nat Hazards Earth Syst Sci Discuss:1–31. https://doi.org/10.5194/nhess-2018-91
Paripurno ET, Nugroho ARB, Ritonga M, Ronald D (2015) Hubungan Sebaran Endapan Piroklastika dan Tingkat Kerusakan Bangunan Permukiman pada Kasus Erupsi G. Kelud 2014 di Kabupaten Kediri, Provinsi Jawa Timur. In: PIT 2nd Association of Indonesia Disaster Experts (IABI), UGM, p 8
Prihatmaji YP, Kitamori A, Komatsu K (2014) Traditional javanese wooden houses (Joglo) damaged by may 2006 Yogyakarta earthquake, Indonesia. Int J Architec Heritage 8:247–268. https://doi.org/10.1080/15583058.2012.692847
Rennie J, Srebro N (2005) Loss functions for preference levels: Regression with discrete ordered labels. In: Workshop on Advances in Preference, p 6. http://ttic.uchicago.edu/~nati/Publications/RennieSrebroIJCAI05.pdf
Rose AN, McKee JJ, Urban ML, Bright EA, Sims KM (2019) LandScan 2018. https://landscan.ornl.gov/
Rossetto T, Ioannou I (2018) Empirical fragility and vulnerability assessment: not just a regression. Elsevier Inc., pp 79–103. https://doi.org/10.1016/B978-0-12-804071-3.00004-5
Rossetto T, Ioannou I, Grant DN, Maqsood T (2014) Guidelines for empirical vulnerability assessment. GEM Technical Report. https://old.globalquakemodel.org/media/publication/VULN-MOD-Empirical-vulnerability-201411-v01.pdf
Berita Satu (2014) Desa Dekat Gunung Kelud Berubah Jadi Kampung Mati. https://www.youtube.com/watch?v=ZMVbISUUnAI (accessed September 2020)
Scollo S, Tarantola S, Bonadonna C, Coltelli M, Saltelli A (2008) Sensitivity analysis and uncertainty estimation for tephra dispersal models. J Geophys Res Solid Earth 113. https://doi.org/10.1029/2007JB004864
Solikhin A, Thouret JC, Liew SC, Gupta A, Sayudi DS, Oehler JF, Kassouk Z (2015) High-spatial-resolution imagery helps map deposits of the large (VEI 4) 2010 Merapi volcano eruption and their impact. Bull Volcanol 77:1–23. https://doi.org/10.1007/s00445-015-0908-0
Spence R, Bommer J, Del Re D, Bird J, Aydinoǧlu N, Tabuchi S (2003) Comparing loss estimation with observed damage: A study of the 1999 Kocaeli earthquake in Turkey. Bull Earthq Eng 1:83–113. https://doi.org/10.1023/A:1024857427292
Spence RJS, Kelman I, Baxter PJ, Zuccaro G, Petrazzuoli S (2005) Residential building and occupant vulnerability to tephra fall. Nat Hazards Earth Syst Sci 5:477–494. https://doi.org/10.5194/nhess-5-477-2005
Spence RJS, Pomonis A, Baxter PJ, Coburn AW, White M, Dayrit M (1996) Building damage caused by the Mt. Pinatubo eruption of June 14–15, 1991. In: Newhall CG, Punongbayan R (eds) Fire and mud: eruptions and lahars of Mount Pinatubo. University of Washington Press, Philippines
Sutriyanto E (2014) Residents of Tanggung Mulya Need Home Improvement Materials (Warga Tanggung Mulya Perlu Bahan Perbaiki Rumah). Tribunnews.com. https://www.tribunnews.com/regional/2014/05/09/warga-tanggung-mulya-perlu-bahan-perbaiki-rumah (accessed February 2020)
United Nations (2015) Sendai Framework for Disaster Risk Reduction 2015–2030. http://www.ncbi.nlm.nih.gov/pubmed/12344081
Williams GT, Kennedy BM, Lallemant D, Wilson TM, Allen N, Scott A, Jenkins SF (2019) Tephra cushioning of ballistic impacts: Quantifying building vulnerability through pneumatic cannon experiments and multiple fragility curve fitting approaches. J Volcanol Geothermal Res 388:106711. https://doi.org/10.1016/j.jvolgeores.2019.106711
Williams JH, Paulik R, Wilson TM, Wotherspoon L, Rusdin A, Pratama GM (2020) Tsunami fragility functions for road and utility pole assets using field survey and remotely sensed data from the 2018 Sulawesi tsunami, Palu, Indonesia. Pure Appl Geophys 177:3545–3562. https://doi.org/10.1007/s00024-020-02545-6
Wilson G, Wilson TM, Deligne NI, Blake DM, Cole JW (2017) Framework for developing volcanic fragility and vulnerability functions for critical infrastructure. J Appl Volcanol 6:14. https://doi.org/10.1186/s13617-017-0065-6
Wilson TM, Stewart C, Sword-Daniels V, Leonard GS, Johnston DM, Cole JW, Wardman J, Wilson G, Barnard ST (2012) Volcanic ash impacts on critical infrastructure. Phys Chem Earth 45–46:5–23. https://doi.org/10.1016/j.pce.2011.06.006
Wirakusumah AD (1991) Some studies of volcanology, petrology and structure of Mt. Kelut, east Java, Indonesia. PhD Thesis: Victoria University Wellington, p 460
Yang Q, Bursik M (2016) A new interpolation method to model thickness, isopachs, extent, and volume of tephra fall deposits. Bull Volcanol 78. https://doi.org/10.1007/s00445-016-1061-0
Zuccaro G, Cacace F, Spence RJS, Baxter PJ (2008) Impact of explosive eruption scenarios at Vesuvius. J Volcanol Geotherm Res 178:416–453. https://doi.org/10.1016/j.jvolgeores.2008.01.005
We would like to thank the Center for Volcanology and Geothermal Hazard Mitigation (CVGHM) for their support and for valuable discussions around the Kelud 2014 eruption. We thank Daniel Blake for useful discussions surrounding impacts he researched on the Kelud 2014 eruption and for providing a selection of photos he and Grant Wilson took during a post-eruption field campaign in September 2014. We thank Adelina Geyer, one anonymous reviewer and editor, Sara Barsotti, for their detailed comments that allowed us to improve the manuscript. This work comprises Earth Observatory of Singapore contribution no. 302.
This research was supported by the Earth Observatory of Singapore via its funding from the National Research Foundation Singapore and the Singapore Ministry of Education under the Research Centres of Excellence initiative (GW, SJ, SB).
Earth Observatory of Singapore, Nanyang Technological University, 50 Nanyang Ave, Singapore, Singapore
George T. Williams, Susanna F. Jenkins & Sébastien Biass
Asian School of the Environment, Nanyang Technological University, 50 Nanyang Ave, Singapore, Singapore
George T. Williams & Susanna F. Jenkins
Departemen Teknik Geologi, Universitas Gadjah Mada, Jalan Grafika 2, Yogyakarta, Indonesia
Haryo Edi Wibowo & Agung Harijoko
Center for Disaster Studies, Universitas Gadjah Mada, Jalan Grafika 2, Yogyakarta, Indonesia
George T. Williams
Susanna F. Jenkins
Sébastien Biass
Haryo Edi Wibowo
Agung Harijoko
GTW drafted the manuscript with input from SJ, SB, HEW and AH. HEW and AH collected field data. SJ and SB instructed GTW in conducting tephra inversion modelling. All authors read and approved the final manuscript.
Correspondence to George T. Williams.
Additional file 1.
All fragility curves in this study were fit using a cumulative link model (CLM) with the following form:
$$ P\left( DS \ge ds_j \mid HIM_i \right) = \Phi\left( \hat{\beta}_j + \hat{\beta}_2\,\ln\left( HIM_i \right) \right), \quad j = 1, \dots, J-1 $$
where the probability (P) of equalling or exceeding a given damage state (dsj) is expressed in terms of a single hazard intensity metric (HIM), which in this study is tephra loading, measured in kg m− 2. Fragility curves in this study use the probit link function, whose inverse is the standard cumulative normal distribution (Φ). Damage state ordering allows fragility curves to be calculated for each damage state (j) simultaneously, where each damage state curve within a set has its own intercept (\( \hat{\beta}_j \)) but shares a common slope coefficient (\( \hat{\beta}_2 \)). The \( \hat{\beta}_j \) and \( \hat{\beta}_2 \) parameters for all curves are given in Table A1, along with the mean and standard deviation parameters required to reproduce all fragility curves using the NORM.DIST function within Microsoft Excel.
Table 4 Curve parameters for fragility curves fit using CLMs (\( \hat{\beta}_j \) and \( \hat{\beta}_2 \)) and the parameters required to produce the same fragility curves using Microsoft Excel's NORM.DIST function
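The released .Rmd file presumably implements this fit; as a stand-alone sketch (the data frame `survey` and the column names `ds` and `load_kgm2` are placeholders, and software parameterisations put the thresholds on the opposite side of the linear predictor to the equation above), a probit cumulative link model can be fitted and evaluated in R as follows.

# Sketch only, not the authors' released code.
library(ordinal)
# survey$ds must be an ordered factor, e.g. DS0 < DS1 < DS2/3 < DS4/5
fit <- clm(ds ~ log(load_kgm2), data = survey, link = "probit")
summary(fit)
# ordinal::clm models P(DS <= j) = pnorm(theta_j - beta * ln(load)), so the
# exceedance probability above threshold j (cf. Excel's NORM.DIST) is:
exceed_prob <- function(fit, load, j) {
  theta <- fit$alpha[j]                      # j-th threshold
  beta  <- coef(fit)[["log(load_kgm2)"]]     # common slope on ln(load)
  pnorm(beta * log(load) - theta)
}
exceed_prob(fit, load = 150, j = 2)          # e.g. exceedance probability at 150 kg m^-2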
Williams, G.T., Jenkins, S.F., Biass, S. et al. Remotely assessing tephra fall building damage and vulnerability: Kelud Volcano, Indonesia. J Appl. Volcanol. 9, 10 (2020). https://doi.org/10.1186/s13617-020-00100-5
Accepted: 16 October 2020
Vulnerability functions
Kelud 2014 eruption
Development and application of volcanic fragility and vulnerability functions
March 2020 Hitting probabilities of a Brownian flow with radial drift
Jong Jun Lee, Carl Mueller, Eyal Neuman
Ann. Probab. 48(2): 646-671 (March 2020). DOI: 10.1214/19-AOP1368
We consider a stochastic flow $\phi_{t}(x,\omega )$ in $\mathbb{R}^{n}$ with initial point $\phi_{0}(x,\omega )=x$, driven by a single $n$-dimensional Brownian motion, and with an outward radial drift of magnitude $\frac{F(\|\phi_{t}(x)\|)}{\|\phi_{t}(x)\|}$, with $F$ nonnegative, bounded and Lipschitz. We consider initial points $x$ lying in a set of positive distance from the origin. We show that there exist constants $C^{*},c^{*}>0$ not depending on $n$, such that if $F>C^{*}n$ then the image of the initial set under the flow has probability 0 of hitting the origin. If $0\leq F\leq c^{*}n^{3/4}$, and if the initial set has a nonempty interior, then the image of the set has positive probability of hitting the origin.
Jong Jun Lee. Carl Mueller. Eyal Neuman. "Hitting probabilities of a Brownian flow with radial drift." Ann. Probab. 48 (2) 646 - 671, March 2020. https://doi.org/10.1214/19-AOP1368
Received: 1 February 2018; Revised: 1 March 2019; Published: March 2020
First available in Project Euclid: 22 April 2020
Digital Object Identifier: 10.1214/19-AOP1368
Primary: 60H10
Secondary: 37C10, 60J45, 60J60
Keywords: Bessel process, hitting, Stochastic differential equations, stochastic flow
Rights: Copyright © 2020 Institute of Mathematical Statistics
Ann. Probab.
Vol.48 • No. 2 • March 2020
Impact of obesity on post-operative arrhythmias after congenital heart surgery in children and young adults
Andrew E. Radbill, Andrew H. Smith, Sara L. Van Driest, Frank A. Fish, David P. Bichell, Bret A. Mettler, Karla G. Christian, Todd L. Edwards, Prince J. Kannankeril
Journal: Cardiology in the Young , First View
Published online by Cambridge University Press: 06 January 2022, pp. 1-6
Obesity increases the risk of post-operative arrhythmias in adults undergoing cardiac surgery, but little is known regarding the impact of obesity on post-operative arrhythmias after CHD surgery.
Patients undergoing CHD surgery from 2007 to 2019 were prospectively enrolled in the parent study. Telemetry was assessed daily, with documentation of all arrhythmias. Patients aged 2–20 years were categorised by body mass index percentile for age and sex (underweight <5, normal 5–85, overweight 85–95, and obese >95). Patients aged >20 years were categorised using absolute body mass index. We investigated the impact of body mass index category on arrhythmias using univariate and multivariate analysis.
There were 1250 operative cases: 12% underweight, 65% normal weight, 12% overweight, and 11% obese. Post-operative arrhythmias were observed in 38%. Body mass index was significantly higher in those with arrhythmias (18.8 versus 17.8, p = 0.003). There was a linear relationship between body mass index category and incidence of arrhythmias: underweight 33%, normal 38%, overweight 42%, and obese 45% (p = 0.017 for trend). In multivariate analysis, body mass index category was independently associated with post-operative arrhythmias (p = 0.021), with odds ratio 1.64 in obese patients as compared to normal-weight patients (p = 0.036). In addition, aortic cross-clamp time (OR 1.007, p = 0.002) and maximal vasoactive–inotropic score in the first 48 hours (OR 1.03, p = 0.04) were associated with post-operative arrhythmias.
Body mass index is independently associated with incidence of post-operative arrhythmias in children after CHD surgery.
Australian square kilometre array pathfinder: I. system description
A. W. Hotan, J. D. Bunton, A. P. Chippendale, M. Whiting, J. Tuthill, V. A. Moss, D. McConnell, S. W. Amy, M. T. Huynh, J. R. Allison, C. S. Anderson, K. W. Bannister, E. Bastholm, R. Beresford, D. C.-J. Bock, R. Bolton, J. M. Chapman, K. Chow, J. D. Collier, F. R. Cooray, T. J. Cornwell, P. J. Diamond, P. G. Edwards, I. J. Feain, T. M. O. Franzen, D. George, N. Gupta, G. A. Hampson, L. Harvey-Smith, D. B. Hayman, I. Heywood, C. Jacka, C. A. Jackson, S. Jackson, K. Jeganathan, S. Johnston, M. Kesteven, D. Kleiner, B. S. Koribalski, K. Lee-Waddell, E. Lenc, E. S. Lensson, S. Mackay, E. K. Mahony, N. M. McClure-Griffiths, R. McConigley, P. Mirtschin, A. K. Ng, R. P. Norris, S. E. Pearce, C. Phillips, M. A. Pilawa, W. Raja, J. E. Reynolds, P. Roberts, D. N. Roxby, E. M. Sadler, M. Shields, A. E. T. Schinckel, P. Serra, R. D. Shaw, T. Sweetnam, E. R. Troup, A. Tzioumis, M. A. Voronkov, T. Westmeier
Journal: Publications of the Astronomical Society of Australia / Volume 38 / 2021
Published online by Cambridge University Press: 05 March 2021, e009
In this paper, we describe the system design and capabilities of the Australian Square Kilometre Array Pathfinder (ASKAP) radio telescope at the conclusion of its construction project and commencement of science operations. ASKAP is one of the first radio telescopes to deploy phased array feed (PAF) technology on a large scale, giving it an instantaneous field of view that covers $31\,\textrm{deg}^{2}$ at $800\,\textrm{MHz}$. As a two-dimensional array of 36 $\times$12 m antennas, with baselines ranging from 22 m to 6 km, ASKAP also has excellent snapshot imaging capability and 10 arcsec resolution. This, combined with 288 MHz of instantaneous bandwidth and a unique third axis of rotation on each antenna, gives ASKAP the capability to create high dynamic range images of large sky areas very quickly. It is an excellent telescope for surveys between 700 and $1800\,\textrm{MHz}$ and is expected to facilitate great advances in our understanding of galaxy formation, cosmology, and radio transients while opening new parameter space for discovery of the unknown.
Response of the trial innovation network to the COVID-19 pandemic
Rachel G. Greenberg, Lori Poole, Daniel E. Ford, Daniel Hanley, Harry P. Selker, Karen Lane, J. Michael Dean, Jeri Burr, Paul Harris, Consuelo H. Wilkins, Gordon Bernard, Terri Edwards, Daniel K. Benjamin, Jr
Journal: Journal of Clinical and Translational Science / Volume 5 / Issue 1 / 2021
Published online by Cambridge University Press: 20 April 2021, e100
The COVID-19 pandemic prompted the development and implementation of hundreds of clinical trials across the USA. The Trial Innovation Network (TIN), funded by the National Center for Advancing Translational Sciences, was an established clinical research network that pivoted to respond to the pandemic.
The TIN's three Trial Innovation Centers, Recruitment Innovation Center, and 66 Clinical and Translational Science Award Hub institutions, collaborated to adapt to the pandemic's rapidly changing landscape, playing central roles in the planning and execution of pivotal studies addressing COVID-19. Our objective was to summarize the results of these collaborations and lessons learned.
The TIN provided 29 COVID-related consults between March 2020 and December 2020, including 6 trial participation expressions of interest and 8 community engagement studios from the Recruitment Innovation Center. Key lessons learned from these experiences include the benefits of leveraging an established infrastructure, innovations surrounding remote research activities, data harmonization and central safety reviews, and early community engagement and involvement.
Our experience highlighted the benefits and challenges of a multi-institutional approach to clinical research during a pandemic.
A history of high-power laser research and development in the United Kingdom
60th Celebration of First Laser
Colin N. Danson, Malcolm White, John R. M. Barr, Thomas Bett, Peter Blyth, David Bowley, Ceri Brenner, Robert J. Collins, Neal Croxford, A. E. Bucker Dangor, Laurence Devereux, Peter E. Dyer, Anthony Dymoke-Bradshaw, Christopher B. Edwards, Paul Ewart, Allister I. Ferguson, John M. Girkin, Denis R. Hall, David C. Hanna, Wayne Harris, David I. Hillier, Christopher J. Hooker, Simon M. Hooker, Nicholas Hopps, Janet Hull, David Hunt, Dino A. Jaroszynski, Mark Kempenaars, Helmut Kessler, Sir Peter L. Knight, Steve Knight, Adrian Knowles, Ciaran L. S. Lewis, Ken S. Lipton, Abby Littlechild, John Littlechild, Peter Maggs, Graeme P. A. Malcolm, OBE, Stuart P. D. Mangles, William Martin, Paul McKenna, Richard O. Moore, Clive Morrison, Zulfikar Najmudin, David Neely, Geoff H. C. New, Michael J. Norman, Ted Paine, Anthony W. Parker, Rory R. Penman, Geoff J. Pert, Chris Pietraszewski, Andrew Randewich, Nadeem H. Rizvi, Nigel Seddon, MBE, Zheng-Ming Sheng, David Slater, Roland A. Smith, Christopher Spindloe, Roy Taylor, Gary Thomas, John W. G. Tisch, Justin S. Wark, Colin Webb, S. Mark Wiggins, Dave Willford, Trevor Winstone
Journal: High Power Laser Science and Engineering / Volume 9 / 2021
Published online by Cambridge University Press: 27 April 2021, e18
The first demonstration of laser action in ruby was made in 1960 by T. H. Maiman of Hughes Research Laboratories, USA. Many laboratories worldwide began the search for lasers using different materials, operating at different wavelengths. In the UK, academia, industry and the central laboratories took up the challenge from the earliest days to develop these systems for a broad range of applications. This historical review looks at the contribution the UK has made to the advancement of the technology, the development of systems and components and their exploitation over the last 60 years.
The Rapid ASKAP Continuum Survey I: Design and first results
Australian SKA Pathfinder
D. McConnell, C. L. Hale, E. Lenc, J. K. Banfield, George Heald, A. W. Hotan, James K. Leung, Vanessa A. Moss, Tara Murphy, Andrew O'Brien, Joshua Pritchard, Wasim Raja, Elaine M. Sadler, Adam Stewart, Alec J. M. Thomson, M. Whiting, James R. Allison, S. W. Amy, C. Anderson, Lewis Ball, Keith W. Bannister, Martin Bell, Douglas C.-J. Bock, Russ Bolton, J. D. Bunton, A. P. Chippendale, J. D. Collier, F. R. Cooray, T. J. Cornwell, P. J. Diamond, P. G. Edwards, N. Gupta, Douglas B. Hayman, Ian Heywood, C. A. Jackson, Bärbel S. Koribalski, Karen Lee-Waddell, N. M. McClure-Griffiths, Alan Ng, Ray P. Norris, Chris Phillips, John E. Reynolds, Daniel N. Roxby, Antony E. T. Schinckel, Matt Shields, Chenoa Tremblay, A. Tzioumis, M. A. Voronkov, Tobias Westmeier
Published online by Cambridge University Press: 30 November 2020, e048
The Rapid ASKAP Continuum Survey (RACS) is the first large-area survey to be conducted with the full 36-antenna Australian Square Kilometre Array Pathfinder (ASKAP) telescope. RACS will provide a shallow model of the ASKAP sky that will aid the calibration of future deep ASKAP surveys. RACS will cover the whole sky visible from the ASKAP site in Western Australia and will cover the full ASKAP band of 700–1800 MHz. The RACS images are generally deeper than the existing NRAO VLA Sky Survey and Sydney University Molonglo Sky Survey radio surveys and have better spatial resolution. All RACS survey products will be public, including radio images (with $\sim$ 15 arcsec resolution) and catalogues of about three million source components with spectral index and polarisation information. In this paper, we present a description of the RACS survey and the first data release of 903 images covering the sky south of declination $+41^\circ$ made over a 288-MHz band centred at 887.5 MHz.
A new frontier in laboratory physics: magnetized electron–positron plasmas
M. R. Stoneking, T. Sunn Pedersen, P. Helander, H. Chen, U. Hergenhahn, E. V. Stenson, G. Fiksel, J. von der Linden, H. Saitoh, C. M. Surko, J. R. Danielson, C. Hugenschmidt, J. Horn-Stanja, A. Mishchenko, D. Kennedy, A. Deller, A. Card, S. Nißl, M. Singer, M. Singer, S. König, L. Willingale, J. Peebles, M. R. Edwards, K. Chin
Journal: Journal of Plasma Physics / Volume 86 / Issue 6 / December 2020
Published online by Cambridge University Press: 18 November 2020, 155860601
We describe here efforts to create and study magnetized electron–positron pair plasmas, the existence of which in astrophysical environments is well-established. Laboratory incarnations of such systems are becoming ever more possible due to novel approaches and techniques in plasma, beam and laser physics. Traditional magnetized plasmas studied to date, both in nature and in the laboratory, exhibit a host of different wave types, many of which are generically unstable and evolve into turbulence or violent instabilities. This complexity and the instability of these waves stem to a large degree from the difference in mass between the positively and the negatively charged species: the ions and the electrons. The mass symmetry of pair plasmas, on the other hand, results in unique behaviour, a topic that has been intensively studied theoretically and numerically for decades, but experimental studies are still in the early stages of development. A levitated dipole device is now under construction to study magnetized low-energy, short-Debye-length electron–positron plasmas; this experiment, as well as a stellarator device that is in the planning stage, will be fuelled by a reactor-based positron source and make use of state-of-the-art positron cooling and storage techniques. Relativistic pair plasmas with very different parameters will be created using pair production resulting from intense laser–matter interactions and will be confined in a high-field mirror configuration. We highlight the differences between and similarities among these approaches, and discuss the unique physics insights that can be gained by these studies.
The IntCal20 Northern Hemisphere Radiocarbon Age Calibration Curve (0–55 cal kBP)
IntCal 20
Paula J Reimer, William E N Austin, Edouard Bard, Alex Bayliss, Paul G Blackwell, Christopher Bronk Ramsey, Martin Butzin, Hai Cheng, R Lawrence Edwards, Michael Friedrich, Pieter M Grootes, Thomas P Guilderson, Irka Hajdas, Timothy J Heaton, Alan G Hogg, Konrad A Hughen, Bernd Kromer, Sturt W Manning, Raimund Muscheler, Jonathan G Palmer, Charlotte Pearson, Johannes van der Plicht, Ron W Reimer, David A Richards, E Marian Scott, John R Southon, Christian S M Turney, Lukas Wacker, Florian Adolphi, Ulf Büntgen, Manuela Capano, Simon M Fahrni, Alexandra Fogtmann-Schulz, Ronny Friedrich, Peter Köhler, Sabrina Kudsk, Fusa Miyake, Jesper Olsen, Frederick Reinig, Minoru Sakamoto, Adam Sookdeo, Sahra Talamo
Journal: Radiocarbon / Volume 62 / Issue 4 / August 2020
Published online by Cambridge University Press: 12 August 2020, pp. 725-757
Print publication: August 2020
Radiocarbon (14C) ages cannot provide absolutely dated chronologies for archaeological or paleoenvironmental studies directly but must be converted to calendar age equivalents using a calibration curve compensating for fluctuations in atmospheric 14C concentration. Although calibration curves are constructed from independently dated archives, they invariably require revision as new data become available and our understanding of the Earth system improves. In this volume the international 14C calibration curves for both the Northern and Southern Hemispheres, as well as for the ocean surface layer, have been updated to include a wealth of new data and extended to 55,000 cal BP. Based on tree rings, IntCal20 now extends as a fully atmospheric record to ca. 13,900 cal BP. For the older part of the timescale, IntCal20 comprises statistically integrated evidence from floating tree-ring chronologies, lacustrine and marine sediments, speleothems, and corals. We utilized improved evaluation of the timescales and location variable 14C offsets from the atmosphere (reservoir age, dead carbon fraction) for each dataset. New statistical methods have refined the structure of the calibration curves while maintaining a robust treatment of uncertainties in the 14C ages, the calendar ages and other corrections. The inclusion of modeled marine reservoir ages derived from a three-dimensional ocean circulation model has allowed us to apply more appropriate reservoir corrections to the marine 14C data rather than the previous use of constant regional offsets from the atmosphere. Here we provide an overview of the new and revised datasets and the associated methods used for the construction of the IntCal20 curve and explore potential regional offsets for tree-ring data. We discuss the main differences with respect to the previous calibration curve, IntCal13, and some of the implications for archaeology and geosciences ranging from the recent past to the time of the extinction of the Neanderthals.
Modelling feeding strategies to improve milk production, rumen function and discomfort of the early lactation dairy cow supplemented with fodder beet
A. E. Fleming, D. Dalley, R. H. Bryant, G. R. Edwards, P. Gregorini
Journal: The Journal of Agricultural Science / Volume 158 / Issue 4 / May 2020
Print publication: May 2020
Feeding fodder beet (FB) to dairy cows in early lactation has recently been adopted by New Zealand dairy producers despite limited definition of feeding and grazing management practices that may prevent acute and sub-acute ruminal acidosis (SARA). This modelling study aimed to characterize changes of rumen pH, milk production and total discomfort from FB and define practical feeding strategies of a mixed herbage and FB diet. The deterministic, dynamic and mechanistic model MINDY was used to compare a factorial arrangement of FB allowance (FBA), herbage allowance (HA) and time of allocation. The FBA were 0, 2, 4 or 7 kg dry matter (DM)/cow/day (0FB, 2FB, 4FB and 7FB, respectively) and HA were 18, 24 or 48 kg DM/cow/day above ground. All combinations were offered either in the morning or afternoon or split across two equal meals. Milk production from 2FB diets was similar to 0FB but declined by 4 and 16% when FB increased to 4 and 7 kg DM, respectively. MINDY predicted that 7FB would result in SARA and that rumen conditions were sub-optimal even at moderate FBA (pH < 5.6 for 160 and 90 min/day, 7FB and 4FB respectively). Pareto front analysis identified the best compromise between high milk production and low total discomfort was achieved by splitting the 2FB diet into two equal meals fed each day with 48 kg DM herbage. However, due to low milk response and high risk of acidosis, it is concluded that FB is a poor supplement for lactating dairy cows.
Chapter 15 - SDG 15: Life on Land – The Central Role of Forests in Sustainable Development
By Jeffrey Sayer, Douglas Sheil, Glenn Galloway, Rebecca A. Riggs, Gavyn Mewett, Kenneth G. MacDicken, Bas Arts, Agni K. Boedhihartono, James Langston, David P. Edwards
Edited by Pia Katila, Carol J. Pierce Colfer, Wil de Jong, Kyoto University, Japan, Glenn Galloway, University of Florida, Pablo Pacheco, Georg Winkel
Book: Sustainable Development Goals: Their Impacts on Forests and People
SDG 15 requires the maintenance of life on land and endorses priorities already established through international conventions and agreements. The scale, and complexity, of tropical forest loss and biodiversity decline versus the limited resources for conservation and forestry pose many challenges. The main innovation of SDG 15 is that decision makers will see this goal as one to integrate with other SDGs; the risk is that short-term priorities and a 'business as usual' approach will undermine this. We examine these opportunities and challenges, the factors that impinge upon them and how they may play out over the next decade. There will be trade-offs between SDG 15 and other SDGs resulting from competition for land, but there are also synergies and opportunities that require recognition. We encourage conservation and development professionals to engage with those responsible for all the Agenda 2030 targets to ensure that SDG 15 is a priority in all SDG related processes.
Chapter 2 - The Intertidal Zone of the North-East Atlantic Region
By Stephen J. Hawkins, Kathryn E. Pack, Louise B. Firth, Nova Mieszkowska, Ally J. Evans, Gustavo M. Martins, Per Åberg, Leoni C. Adams, Francisco Arenas, Diana M. Boaventura, Katrin Bohn, C. Debora G. Borges, João J. Castro, Ross A. Coleman, Tasman P. Crowe, Teresa Cruz, Mark S. Davies, Graham Epstein, João Faria, João G. Ferreira, Natalie J. Frost, John N. Griffin, ME Hanley, Roger J. H. Herbert, Kieran Hyder, Mark P. Johnson, Fernando P. Lima, Patricia Masterson-Algar, Pippa J. Moore, Paula S. Moschella, Gillian M. Notman, Federica G. Pannacciulli, Pedro A. Ribeiro, Antonio M. Santos, Ana C. F. Silva, Martin W. Skov, Heather Sugden, Maria Vale, Kringpaka Wangkulangkul, Edward J. G. Wort, Richard C. Thompson, Richard G. Hartnoll, Michael T. Burrows, Stuart R. Jenkins
Edited by Stephen J. Hawkins, Marine Biological Association of the United Kingdom, Plymouth, Katrin Bohn, Louise B. Firth, University of Plymouth, Gray A. Williams, The University of Hong Kong
Book: Interactions in the Marine Benthos
Print publication: 29 August 2019, pp 7-46
The rocky shores of the north-east Atlantic have been long studied. Our focus is from Gibraltar to Norway plus the Azores and Iceland. Phylogeographic processes shape biogeographic patterns of biodiversity. Long-term and broadscale studies have shown the responses of biota to past climate fluctuations and more recent anthropogenic climate change. Inter- and intra-specific species interactions along sharp local environmental gradients shape distributions and community structure and hence ecosystem functioning. Shifts in domination by fucoids in shelter to barnacles/mussels in exposure are mediated by grazing by patellid limpets. Further south fucoids become increasingly rare, with species disappearing or restricted to estuarine refuges, caused by greater desiccation and grazing pressure. Mesoscale processes influence bottom-up nutrient forcing and larval supply, hence affecting species abundance and distribution, and can be proximate factors setting range edges (e.g., the English Channel, the Iberian Peninsula). Impacts of invasive non-native species are reviewed. Knowledge gaps such as the work on rockpools and host–parasite dynamics are also outlined.
The Breakthrough Listen search for intelligent life: Wide-bandwidth digital instrumentation for the CSIRO Parkes 64-m telescope
Danny C. Price, David H. E. MacMahon, Matt Lebofsky, Steve Croft, David DeBoer, J. Emilio Enriquez, Griffin S. Foster, Vishal Gajjar, Nectaria Gizani, Greg Hellbourg, Howard Isaacson, Andrew P. V. Siemion, Dan Werthimer, James A. Green, Shaun Amy, Lewis Ball, Douglas C.-J. Bock, Dan Craig, Philip G. Edwards, Andrew Jameson, Stacy Mader, Brett Preisig, Mal Smith, John Reynolds, John Sarkissian
Breakthrough Listen is a 10-yr initiative to search for signatures of technologies created by extraterrestrial civilisations at radio and optical wavelengths. Here, we detail the digital data recording system deployed for Breakthrough Listen observations at the 64-m aperture CSIRO Parkes Telescope in New South Wales, Australia. The recording system currently implements two modes: a dual-polarisation, 1.125-GHz bandwidth mode for single-beam observations, and a 26-input, 308-MHz bandwidth mode for the 21-cm multibeam receiver. The system is also designed to support a 3-GHz single-beam mode for the forthcoming Parkes ultra-wideband feed. In this paper, we present details of the system architecture, provide an overview of hardware and software, and present initial performance results.
Relationships between handling, behaviour and stress in lambs at abattoirs
P. H. Hemsworth, M. Rice, S. Borg, L. E. Edwards, E. N. Ponnampalam, G. J. Coleman
Journal: animal / Volume 13 / Issue 6 / June 2019
Published online by Cambridge University Press: 22 October 2018, pp. 1287-1296
Print publication: June 2019
There is community concern about the treatment of farm animals post-farm gate, particularly animal transport and slaughter. Relationships between lamb behavioural and physiological variables on farm, stockperson, dog and lamb behavioural variables pre-slaughter and plasma cortisol, glucose and lactate in lambs post-slaughter were studied in 400 lambs. The lambs were observed in three behavioural tests, novel arena, flight distance to a human and temperament tests, before transport for slaughter. Closed-circuit television video footage was used to record stockperson, dog and lamb behaviour immediately before slaughter. Blood samples for cortisol, glucose and lactate analyses were collected on farm following the three behavioural tests and immediately post-slaughter. The regression models that best predicted plasma cortisol, glucose and lactate concentrations post-slaughter included a mixture of stockperson and dog behavioural variables as well as lamb variables both on-farm and pre-slaughter. These regression models accounted for 33%, 34% and 44% of the variance in plasma cortisol, glucose and lactate concentrations post-slaughter, respectively. Some of the stockperson and dog behaviours pre-slaughter that were predictive of the stress and metabolic variables post-slaughter included the duration of negative stockperson behaviours such as fast locomotion and lifting/pulling lambs, and the duration of dog behaviours such as lunging and barking at the lamb, while some of the predictive lamb behaviour variables included the durations of jumping and fleeing. Some of the physiological and behavioural responses to the behavioural tests on farm were also predictive of the stress and metabolic variables post-slaughter. These relationships support the well-demonstrated effect of handling on fear and stress responses in livestock, and although not direct evidence of causal relationships, highlight the potential benefits of training stockpeople to reduce fear and stress in sheep at abattoirs.
VLBI Observations of Southern Gamma-Ray Sources. III
P. G. Edwards, R. Ojha, R. Dodson, J. E. J. Lovell, J. E. Reynolds, A. K. Tzioumis, J. Quick, G. Nicolson, S. J. Tingay
Published online by Cambridge University Press: 26 February 2018, e009
We report the results of Long Baseline Array observations made in 2001 of ten southern sources proposed by Mattox et al. as counterparts to EGRET >100 MeV gamma-ray sources. Source structures are compared with published data where available and possible superluminal motions identified in several cases. The associations are examined in the light of Fermi observations, indicating that the confirmed counterparts tend to have radio properties consistent with other identifications, including flat radio spectral index, high brightness temperature, greater radio variability, and higher core dominance.
The Unresolved National Question in South Africa
Left thought under apartheid and beyond
Edited by Edward Webster
John Mawbey, Jeremy Cronin, Alex Mohubetswane Mashilo, Robert van Niekerk, Luli Callinicos, B G Brown, M P Giyose, H J Peterson, C A Thomas, A R Zinn, Siphamandla Zondi, T Dunbar Moodie, Enver Motala, Salim Vally, Gerhard Maré, Xolela Mangcu, Shireen Hassim, Alec Erwin, Sian Byrne, Nicole Ulrich, Lucien van der Walt, Martin Legassick, Daryl Glaser
Published by: Wits University Press
Published online: 21 April 2018
The re-emergence of debates on the decolonisation of knowledge has revived interest in the National Question, which began over a century ago and remains unresolved. Tensions that were suppressed and hidden in the past are now being openly debated. Despite this, the goal of one united nation living prosperously under a constitutional democracy remains elusive. This edited volume examines the way in which various strands of left thought have addressed the National Question, especially during the apartheid years, and goes on to discuss its relevance for South Africa today and in the future. Instead of imposing a particular understanding of the National Question, the editors identified a number of political traditions and allowed contributors the freedom to define the question as they believed appropriate – in other words, to explain what they thought was the Unresolved National Question. This has resulted in a rich tapestry of interweaving perceptions. The volume is structured in two parts. The first examines four foundational traditions: Marxism-Leninism (the Colonialism of a Special Type thesis); the Congress tradition; the Trotskyist tradition; and Africanism. The second part explores the various shifts in the debate from the 1960s onwards, and includes chapters on Afrikaner nationalism, ethnic issues, black consciousness, feminism, workerism and constitutionalism. The editors hope that by revisiting the debates not popularly known among the scholarly mainstream, this volume will become a catalyst for an enriched debate on our identity and our future.
Book: The Unresolved National Question in South Africa
Print publication: 31 December 2017, pp v-vi
PART TWO - CONTINUITY AND RUPTURE
Print publication: 31 December 2017, pp vii-ix
PART ONE - KEY FOUNDATIONAL TRADITIONS
Print publication: 31 December 2017, pp 19-19
Thermogravimetric analysis, kinetic study, and pyrolysis–GC/MS analysis of 1,1ʹ-azobis-1,2,3-triazole and 4,4ʹ-azobis-1,2,4-triazole
Chenhui Jia, Yuchuan Li (ORCID: 0000-0002-4909-2853), Shujuan Zhang, Teng Fei & Siping Pang
In general, the greater the number of directly linked nitrogen atoms in a molecule, the better its energetic performance, while the stability will be accordingly lower. But 1,1ʹ-azobis-1,2,3-triazole (1) and 4,4ʹ-azobis-1,2,4-triazole (2) show remarkable properties, such as high enthalpies of formation, high melting points, and relatively high stabilities. In order to rationalize this unexpected behavior of the two compounds, it is necessary to study their thermal decompositions and pyrolyses. Although a great deal of research has been focused on the synthesis and characterization of energetic materials with 1 and 2 as the backbone, a complete report on their fundamental thermodynamic parameters and thermal decomposition properties has not been published.
Thermogravimetry–differential scanning calorimetry (TG–DSC) was used to obtain the thermal decomposition data for the title compounds. The Kissinger and Ozawa–Doyle methods, two non-isothermal model-free methods, were selected to analyse the solid-state kinetic data. Pyrolysis–gas chromatography/mass spectrometry (PY–GC/MS) was used to study the pyrolysis processes of the title compounds.
The DSC curves show that the thermal decompositions of 1 and 2 at the different heating rates each involved a single exothermic process. The TG curves provide insight into the total weight losses from the compounds associated with this process. The compositions and types of the pyrolysis products differ greatly between pyrolysis temperatures, and the pyrolysis reaction is more complete at 500 °C than at 400 °C.
The apparent activation energies (E) and pre-exponential factors (ln(A/s−1)) are 291.4 kJ mol−1 and 75.53 for 1, and 396.2 kJ mol−1 and 80.98 for 2 (Kissinger method); the corresponding E values from the Ozawa–Doyle method are 284.5 kJ mol−1 for 1 and 386.1 kJ mol−1 for 2. The critical temperature of thermal explosion (Tb) is evaluated as 187.01 °C for 1 and 282.78 °C for 2. Under the pyrolysis conditions the title compounds were broken into small fragment ions, which then might undergo a multitude of collisions and numerous other reactions, resulting in the formation of C2N2 (m/z 52), etc., before being analyzed by the GC/MS system.
Triazoles are a class of typical nitrogen-rich compounds, which have been widely used in novel energetic materials, medicine, catalysis, and other fields [1,2,3,4]. Over several decades, many studies have shown that azobis-triazole compounds have good energetic properties, owing to structures in which multiple nitrogen atoms are linked directly and the molecules contain many C–N and N–N single or double bonds [5,6,7,8,9,10,11]. Indeed, compared with a single triazole ring, energetic properties such as energy density, heat of formation, detonation velocity, and detonation pressure can be significantly improved [8, 12,13,14,15,16].
In general, the greater the number of directly linked nitrogen atoms in a molecule, the better its energetic performance, but stability will be accordingly lower [5, 12, 13, 17,18,19,20,21,22]. The title compounds 1,1ʹ-azobis-1,2,3-triazole (1) and 4,4ʹ-azobis-1,2,4-triazole (2) show remarkable properties, such as high enthalpies of formation, high melting points, and relatively high stabilities [5, 13, 14, 17]. In order to rationalize this unexpected behavior of the two compounds, it is necessary to study their thermal decompositions and pyrolyses. Although a great deal of research has been focused on the synthesis and characterization of energetic materials with 1 and 2 as the backbone [5, 13, 17, 19], a complete report on their fundamental thermodynamic parameters and thermal decomposition properties has not been published.
The stability of the title compounds can be quantitatively determined by studying their thermodynamic properties, such as apparent activation energy (E) and pre-exponential factor (A) of thermal decomposition, and the critical temperature of thermal explosion (T b ).
There are many methods for analyzing non-isothermal solid-state kinetic data from TG and DSC [23,24,25,26]. These methods can be divided into two types: model-fitting and model-free methods, as summarized in Table 1 [27]. Model-fitting methods have been widely used for solid-state reactions because of their ability to determine the kinetic parameters directly from a single TG measurement. However, these methods suffer from several shortcomings, such as an inability to uniquely determine the reaction model, and their application to non-isothermal data gives higher values of the kinetic parameters. Conversely, model-free methods require several kinetic curves to perform the analysis. Calculations from several curves at different heating rates are performed at the same value of conversion, which allows the activation energy to be calculated at each conversion point. Results from model-free methods tend to be more reliable and reasonable than those from model-fitting methods, especially when dealing with non-isothermal data [27].
Table 1 Common methods for studying non-isothermal solid-state kinetics
Thermal pyrolysis refers to chemical decomposition caused by heat, occurring when the thermal energy applied to the sample exceeds the chemical bond energies of the molecules. PY–GC/MS has been widely used to investigate decomposition processes and pyrolysis products [28]. Analytical pyrolysis thermally breaks a molecule into smaller fragments, which are then separated and analyzed by the GC/MS system, providing insight into the decomposition of the sample. PY–GC/MS can therefore probe the thermal pyrolysis of an energetic material and, through GC/MS analysis of the explosion reaction products, provide information about the reaction process. Identification of the explosive residue species can in turn be used to evaluate whether an explosive is environmentally friendly, and is also useful in military and counterterrorism practice.
Compound 1 and 2 can be prepared by oxidation of the N–NH2 moieties in 1-amino-1,2,3-triazole and 4-amino-1,2,4-triazole, respectively, with sodium dichloroisocyanurate (SDIC). In this study, the thermal decomposition processes of 1 and 2 have been investigated by dynamic TG–DSC under nitrogen atmosphere at different heating rates, and by PY–GC/MS under helium atmosphere at set temperatures of 400 and 500 °C, respectively. Kinetic parameters have been obtained by two model-free methods, the Kissinger method and the Ozawa–Doyle method combined with a kinetic compensation effect. The data reported herein are expected to be of broad interest to researchers engaged in the study and applications of 1 and 2.
Materials and methodology
1,1ʹ-Azobis-1,2,3-triazole (1) and 4,4ʹ-azobis-1,2,4-triazole (2) were prepared according to literature procedures [13, 14]. The sample purities were > 0.99 (w/w). IR spectra were recorded from solid samples in KBr pellets on a Bruker Tensor 27 spectrometer. Elemental analyses were consistent with the theoretical compositions. The samples were further purified by drying in vacuo at 60 °C for approximately 4 h. The structures of the compounds are shown in Fig. 1.
Structures of the title compounds
Synthesis of 1,1ʹ-azobis-1,2,3-triazole (1) [13]
1-Amino-1,2,3-triazole (1.26 g, 15 mmol) was dissolved in CH3CN (40 mL). The solution was cooled to – 5 to 0 °C and a solution of SDIC (3.33 g, 15 mmol) in water (10 mL) and CH3COOH (5 mL) was added dropwise. The reaction mixture was further stirred at 0 °C for 30 min. It was then neutralized with NaHCO3 and filtered. The filtrate was concentrated to afford the product, which was obtained as a slightly yellow solid after recrystallization from acetone. 1H NMR (400 MHz, [D6]DMSO, 25 °C, TMS): δ = 9.17 (2H, d), 8.21 ppm (2H, d); 13C NMR (100 MHz, [D6]DMSO, 25 °C, TMS): δ = 134.8, 118.0 ppm; IR (KBr): ν = 3128, 1625, 1482, 1320, 1225, 1171 cm−1; MS: m/z 165.0 [M + H]+; elemental analysis calcd. (%) for C4H4N8 (164): C, 29.27; H, 2.44; N, 68.29; found: C, 29.45; H, 2.32; N, 68.23.
Synthesis of 4,4ʹ-azobis-1,2,4-triazole (2) [14]
Acetic acid (5 mL) was added to a solution of SDIC (5.09 g, 23 mmol) in water (40 mL) with vigorous stirring at 30 °C. After 1 h, the mixture was cooled to 5 °C and a solution of 4-amino-1,2,4-triazole (2.07 g, 25 mmol) in water (10 mL) was added. The reaction mixture was vigorously stirred at 15 °C for 1 h. It was then cooled, and the precipitate that formed was collected by filtration and washed with water at 60 °C. After drying in vacuo, a white product was obtained. 1H NMR (400 MHz; D2O, 25 °C, TMS): δ = 7.22 ppm (4H, s); 13C NMR (100 MHz; D2O, 25 °C, TMS): δ = 138.57 ppm; IR (KBr): ν = 3114, 1493, 1368, 1317, 1180 cm−1; MS: m/z 164 (M+); elemental analysis calcd. (%) for C4H4N8 (164): C, 29.27; H, 2.44; N, 68.29; found: C, 29.32; H, 2.48; N, 68.2.
The thermal decompositions of 1 and 2 under flowing N2 were investigated using a thermogravimetric analyzer (Netzsch STA 449 C; Selb, Germany) and a differential scanning calorimeter (DSC Q2000; New Castle, USA). The TG conditions were as follows: sample mass, ca. 1.0 mg; heating rates, 5, 7, 10, 13, and 20 °C min−1 for 1, and 5, 7, 10, 15, and 20 °C min−1 for 2; atmosphere, N2 (flow rate 30 mL min−1); temperature range, 20–400 °C. The DSC conditions were as follows: heating rates, 5, 7, 10, 13, and 20 °C min−1 for 1, and 5, 7, 10, 15, and 20 °C min−1 for 2; atmosphere, N2 (flow rate 30 mL min−1); temperature range, 20–400 °C. All TG–DSC data were analyzed using Proteus Analysis software.
Thermal pyrolyses of 1 and 2 were investigated using a pyrolysis (EGA/PY-3030D, Fukushima-ken, Japan) gas chromatography–mass spectrometry (QP2010-Ultra, Kyoto, Japan) instrument (PY–GC/MS). The PY conditions were as follows: pyrolysis temperatures, 400 and 500 °C; pyrolysis time, 1 min; injection port temperature, 300 °C. The GC/MS conditions were as follows: capillary chromatographic column, ZB-5HT (30 m × 0.25 mm × 0.25 μm); heating program: 50 °C, holding for 3 min, then heating to 300 °C at a rate of 10 K min−1, holding for 5 min; injection port temperature, 300 °C; split injection; split ratio 100:1; carrier gas (high-purity helium, 99.9999%) flow rate 1.0 mL min−1; collector temperature, 280 °C; ion source temperature, 250 °C; ion source scan mode, full scan (40–100 m/z).
Model-free methods
By using these methods, the kinetic parameters of a solid-state reaction can be obtained without knowing the reaction mechanism.
In previous work [29,30,31,32], the Kissinger method has been widely used to determine activation energies for reaction processes that occur under linear heating-rate conditions. Although the method has some limitations, it is acceptable when an isoconversional method is used to verify its results [33]. However, because the decomposition of the title compounds is expected to be complex, the values of E α obtained with two isoconversional methods, the Kissinger–Akahira–Sunose [34] and Starink [35] methods, vary greatly and show no clear trend over the conversion range α = 0.05–0.95. Several other kinetic methods [33] were also tried to corroborate the Kissinger results, but none gave satisfactory agreement. Following the literature [29, 36,37,38,39], the Ozawa–Doyle method, which is commonly used alongside the Kissinger method in kinetic calculations on energetic materials and gives acceptable results, was therefore adopted as the corroborating method.
Kissinger method
In 1957, Kissinger [40] introduced a model-free non-isothermal method that allows kinetic parameters to be evaluated without calculating E for each conversion value of the solid-state reaction. The method is described as follows:
$$\ln \left( {\frac{\beta }{{T_{P}^{2} }}} \right) = \ln \frac{AR}{E} - \frac{E}{{RT_{P} }}$$
In this equation, T P is the peak temperature of the DSC curve. The apparent activation energy (E) and pre-exponential factor (A) can be obtained, respectively, from the slope (−E/R) and intercept (ln(AR/E)) of a plot of ln(β/T 2 P ) versus 1/T P .
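To make the fitting procedure concrete, the short Scala sketch below performs the ln(β/T P 2) versus 1/T P regression and recovers E and lnA from the slope and intercept. The peak temperatures and helper names are illustrative assumptions for this example, not data or code from this study; a real analysis would use the measured values in Table 2.

object KissingerFit {
  val R = 8.314 // gas constant, J mol-1 K-1

  // Ordinary least-squares fit of y = intercept + slope * x
  def linFit(x: Seq[Double], y: Seq[Double]): (Double, Double) = {
    val xm = x.sum / x.length
    val ym = y.sum / y.length
    val slope = (x zip y).map { case (xi, yi) => (xi - xm) * (yi - ym) }.sum /
                x.map(xi => (xi - xm) * (xi - xm)).sum
    (ym - slope * xm, slope)
  }

  def main(args: Array[String]): Unit = {
    // Illustrative peak temperatures (K) for heating rates of 5, 7, 10, 13, 20 K min-1
    val beta = Seq(5.0, 7.0, 10.0, 13.0, 20.0).map(_ / 60.0) // converted to K s-1
    val tP   = Seq(462.9, 464.6, 466.3, 468.2, 471.2)        // K
    val x = tP.map(1.0 / _)
    val y = (beta zip tP).map { case (b, t) => math.log(b / (t * t)) }
    val (intercept, slope) = linFit(x, y)
    val e   = -slope * R                  // apparent activation energy, J mol-1
    val lnA = intercept + math.log(e / R) // from intercept = ln(AR/E)
    println(f"E = ${e / 1000}%.1f kJ/mol, lnA = $lnA%.2f")
  }
}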
Ozawa–Doyle method
The Ozawa–Doyle method [41, 42] is simple and applicable to reactions that cannot be analyzed by other methods. It has been widely used to determine the apparent activation energy (E) alongside the Kissinger method. The Ozawa–Doyle equation is as follows:
$$\log \beta + \frac{0.4567E}{{RT_{P} }} = C$$
where T P is the peak temperature of the DSC curve and C is a constant. A plot of logβ versus 1/T P is linear with a slope of −0.4567E/R, from which the apparent activation energy (E) can be obtained. The value of E calculated by this method is independent of the mechanism of thermal decomposition.
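For reference, rearranging the Ozawa–Doyle equation above gives the working relation used to extract E from the fitted slope; this is a restatement of the equation, not an additional assumption:

$$E = -\frac{R}{0.4567}\,\frac{\mathrm{d}\left(\log \beta\right)}{\mathrm{d}\left(1/T_{P}\right)}$$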
Thermogravimetric analysis
The TG and DSC curves of 1 and 2 at different heating rates under N2 atmosphere are shown in Figs. 2 and 3, respectively. The DSC curves show that the thermal decompositions of 1 and 2 at different heating rates each involve a single exothermic process. In this process, the molecules of 1 and 2 are broken into smaller fragments by cleavage of the N=N bond linking the two triazole rings and by opening of the triazole rings. The TG curves provide insight into the total weight losses of the compounds associated with this process.
TG traces of compound 1 (L) and compound 2 (R) obtained at different heating rates
DSC traces of compound 1 (L) and compound 2 (R) obtained at different heating rates
Characteristic temperatures at different heating rates in the TG–DSC curves of 1 and 2 are shown in Table 2. From Table 2, it can be seen that at 5 °C min−1, thermal decompositions of 1 and 2 started at 176.1 and 281.7 °C, respectively. At higher heating rates, the initial temperature (T 0 ), the extrapolated onset temperature (T e ), and the peak temperature (T P ) of the DSC curves shifted from 176.1, 188.5, and 189.7 °C at 5 °C min−1 to 189.8, 199.7, and 198.0 °C at 20 °C min−1, respectively, for 1. For 2, T 0 , T e , and T P shifted from 281.7, 288.8, and 312.7 °C at 5 °C min−1 to 302.1, 315.1, and 322.7 °C at 20 °C min−1, respectively. These data show that with increased heating rate, the values of T 0 , T e , and T P increase. This behavior can be attributed to heat transfer between the sample and the instrument.
Table 2 The characteristic temperatures of the title compounds at different heating rates and the kinetic parameters
Kinetic analysis
From the thermogravimetric analysis results, we can calculate the kinetic parameters according to the model-free methods. The activation energy (E) and pre-exponential factor (A) were obtained using the Kissinger and Ozawa–Doyle methods.
From the original data of the exothermic peak temperature measured at five different heating rates of 5, 7, 10, 13, and 20 °C min−1 for 1, and 5, 7, 10, 15, and 20 °C min−1 for 2, the apparent activation energies E K and E O , the pre-exponential factors A K , and the linear coefficients r K and r O were determined, as shown in Table 2.
From Table 2, it can be seen that the apparent activation energies (E) for 1 and 2 obtained by the Kissinger method are very close to the values obtained by the Ozawa–Doyle method. The minor differences are assumed to stem from limitations of the method itself and errors in calculation. Moreover, the absolute values of the linear correlation coefficients (r) for 1 and 2 in Table 2 are close to 1, which indicates that the kinetic parameters were obtained with high accuracy.
The Arrhenius equations of the title compounds can be expressed as follows (E is the average of E K and E O ):
$$\begin{aligned}\ln k &= 75.53 - 287.95 \times 10^{3} /\left( {RT} \right) \\ &\quad\quad\quad\quad{\text{ for }}1,1^{\prime}{\text{-azobis-}}1,2,3{\text{-triazole}} \end{aligned}$$
$$\begin{aligned}\ln k &= 80.98 - 391.15 \times 10^{3} /\left( {RT} \right)\\ &\quad\quad\quad\quad{\text{ for }}4,4^{\prime}{\text{-azobis-}}1,2,4{\text{-triazole}} \end{aligned}$$
The values of T 00 , T e0 , and T P0 corresponding to β → 0, obtained from Eq. (5), are shown in Table 2.
$$T_{\left( 0,e,p \right)i} = T_{\left( 00,e0,p0 \right)i} + b\beta_{i} + c\beta_{i}^{2} + d\beta_{i}^{3}, \quad i = 1,2,3,4,5$$
where b, c, and d are coefficients.
The critical temperatures of thermal explosion (T b ) were obtained according to Eq. (6) as 187.01 °C for 1 and 282.78 °C for 2, respectively [43, 44].
$$T_{b} = \frac{{E_{O} - \sqrt {E_{O}^{2} - 4E_{O} RT_{e0} } }}{2R}$$
where E O is the value of E obtained by the Ozawa–Doyle method.
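A minimal Scala sketch of Eq. (6) is given below (Scala 3-style top-level definitions). The value of T e0 is not quoted in the text, since it comes from the β → 0 extrapolation in Eq. (5) and Table 2, so the input used here is an illustrative assumption chosen only to show the order of magnitude of the result.

def criticalTemperature(eO: Double, tE0: Double): Double = {
  val R = 8.314 // J mol-1 K-1
  (eO - math.sqrt(eO * eO - 4.0 * eO * R * tE0)) / (2.0 * R) // Eq. (6), result in K
}

// E_O = 284.5 kJ mol-1 for compound 1; assuming T_e0 of about 454 K (an illustrative value)
// gives T_b of roughly 460 K, i.e. close to the reported 187 °C
val tB1 = criticalTemperature(284.5e3, 454.0)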
Evidently, the values of apparent activation energy (E), extrapolated onset temperature (T e ), and critical temperature of thermal explosion (T b ) for 2 are consistently higher than those for 1, indicating greater thermodynamic stability of the former. Comparing the values of T b with those for other common energetic compounds: CL-20 (202.07 °C), HMX (267.43 °C), RDX (209.32 °C), NTO (265.53 °C), ENTO (227.44 °C), KNTO (226.32 °C) [33], ZTO (282.21 °C), ATO (299.64 °C), GZTO·H2O (237.74 °C) [45], and KZTO·H2O (275.08 °C) [30], the thermodynamic stability sequence of these compounds can be expressed as: 1 < CL-20 < RDX < KNTO ≈ ENTO < GZTO·H2O < NTO ≈ HMX < KZTO·H2O < ZTO ≈ 2 < ATO.
Thermal pyrolysis analysis
Pyrolysis–gas chromatography–mass spectrometry (PY–GC/MS) can be used to qualitatively analyze pyrolysis products. The pyrolysis chamber was heated to the preset temperature and the sample was then introduced; after 3 s, a fast heating step was applied and the pyrolysis process was carried out. Fragments were then separated by the GC column and their structures were identified by the MS system. For unknown compounds, one can obtain important information, such as their composition and microstructure. For known compounds, one can determine the pyrolysis products, and thereby infer the pyrolysis reaction pathways of the compound. This method has many advantages, including a very small injection volume, suitability for a broad range of samples, rapid analysis, and good reproducibility.
Because the molecules of 1 and 2 contain many C–N/N–N single and double bonds, their critical temperatures of thermal explosion (T b ) are below 300 °C. Hence, the explosion reaction must happen during the pyrolysis process, and the actual temperature at the reaction center may briefly reach thousands of degrees Celsius. In the pyrolysis of 1 and 2, following the initial decomposition of the reactive molecule, the pyrolysis products could undergo a multitude of collisions and numerous other reaction processes prior to collection and analysis by the GC/MS system. Hence, investigation of the rapid explosion reaction is a very difficult task, and the mechanistic interpretation inferred by us in this work should be placed in the context of the difficulty in truly isolating microscopic pathways of the explosion reaction.
The pyrolysis–mass spectrometric traces of 1 and 2 at 400 and 500 °C are shown in Figs. 4, 5, 6, 7, respectively. From these figures, it can be seen that at different pyrolysis temperatures, the compositions and types of the pyrolysis products differ greatly. At 400 °C, the retention times of the first three pyrolysis products in the chromatogram were similar to those at 500 °C, but fewer pyrolysis products were observed at 500 °C, indicating that the pyrolysis reaction at 500 °C is more complete than that at 400 °C.
PY–GC/MS total ion chromatogram of 1,1ʹ-azobis-1,2,3-triazole at 400 °C
We could estimate the relative contents of the pyrolysis products from a qualitative comparison of the masses of all components using the peak area normalization method. The structures and the relative contents of the pyrolysis products from 1 and 2 are shown in Tables 3 and 4, respectively.
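As a brief illustration of the peak-area normalization used here, the relative content of each product is its integrated peak area divided by the sum of all peak areas. The areas in the sketch below are made-up placeholders, not measured values (Scala 3-style top-level code).

// Hypothetical integrated GC peak areas for four pyrolysis products
val peakAreas = Seq(1250.0, 430.0, 310.0, 85.0)
val totalArea = peakAreas.sum
val relativeContents = peakAreas.map(a => 100.0 * a / totalArea) // percent of total ion current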
Table 3 Pyrolysis spectral peaks and product structural assignments for 1,1′-azobis-1,2,3-triazole
Table 4 Pyrolysis spectral peaks and product structural assignments for 4,4ʹ-azobis-1,2,4-triazole
From Table 3, it is clear that with increasing pyrolysis temperature, the numbers of different pyrolysis products were significantly reduced, and the structures were also much simpler. At 400 °C, there were about 24 species among the pyrolysis products of 1, and the differences between the proportions of the different types of pyrolysis products were relatively large. From the pyrolysis results, it is clear that it is difficult to create all of the fragments through direct cleavage of the starting molecules. Hence, we assume that in the explosion reaction, numerous molecules were extensively broken into small fragment ions, as shown in Fig. 8, and these fragments underwent secondary reactions, such as coupling, rearrangement, addition, and elimination of hydrogen atoms, prior to detection by the GC/MS system.
The explosion reaction of 1,1ʹ-azobis-1,2,3-triazole at the pyrolysis temperature
The earliest species observed in the pyrolysis at 400 °C were those with m/z 52, m/z 51, and m/z 53, and these species were the sole products at 500 °C. Among these, that at m/z 52 may have arisen from coupling of two m/z 27 (CH=N) fragments via C–C single-bond formation with the elimination of two hydrogen atoms. In the same way, the fragments with m/z 51 and m/z 53 could have been created from that of m/z 26 (CH=CH) coupling with that of m/z 27 (CH=N) through the formation of a C–C single bond, the former with concurrent elimination of two hydrogen atoms. The larger molecular mass of 103 may have been created by hydrogenation of the fragment with m/z 96. All of the other fragment ions can reasonably be formed from the small fragment ions (as shown in Fig. 8) originating from the explosion reaction of 1, through coupling and rearrangement reactions, sometimes with the concurrent addition or elimination of hydrogen atoms, as shown in Fig. 9.
Several secondary reactions of the initial fragment ions to form the observed fragments
From Table 4, the same conclusion can be reached. In the explosion reaction process of 2, numerous molecules were extensively broken into much smaller fragment ions, as shown in Fig. 10. In the same way, these fragments could undergo a multitude of secondary reactions prior to detection by the GC/MS system.
The principal pyrolysis products of 2 were also those with m/z 52, m/z 51, and m/z 53, where that at m/z 52 can be created by coupling of two fragments with m/z 27 (CH=N) through C–C single-bond formation with the elimination of two hydrogen atoms. In the same way, those at m/z 51 and m/z 53 can be created by coupling of two fragments with m/z 13 (CH) through C=C double-bond formation creating a fragment with m/z 26 (HC=CH), followed by coupling with a fragment of m/z 27 (CH=N) through formation of a C–C single bond, the former with the concurrent elimination of two hydrogen atoms. The fragment with m/z 81 may have been created as a trimer of that with m/z 27 (CH=N), and that at m/z 141 may have been created by a fragment with m/z 96 coupling with a fragment with m/z 41 (N–N=CH) and the addition of hydrogen atoms, as shown in Fig. 11.
Nevertheless, some fragments are yet unaccounted for, such as those with m/z 77 (Table 3, entries 8, 9, 11), m/z 154 (entries 21, 22), m/z 256 (entry 23), and m/z 175 (entry 24) for 1, and m/z 77 (Table 4, entry 8) for 2.
Experimental kinetic studies on the thermal decomposition processes of two typical nitrogen-rich energetic materials (1 and 2) were described, in which kinetic parameters, namely the apparent activation energy (E) and pre-exponential factor (lnA), were determined by the Kissinger and Ozawa–Doyle methods.
By the Kissinger method, values of E as 291.4 and 396.2 kJ mol−1 were obtained for 1 and 2, respectively, with lnA(s−1) values of 75.53 and 80.98, respectively. By the Ozawa–Doyle method, the values of E were 284.5 and 386.1 kJ mol−1 for 1 and 2, respectively, showing good agreement. The linear correlation coefficients (r) were close to 1, validating the results.
The critical temperatures of thermal explosion (T b ) were determined as 187.01 °C for 1 and 282.78 °C for 2. From the values of E, A, and T b , 2 is clearly more thermodynamically stable than 1. Critical temperatures of thermal explosion follow the sequence: 1 < CL-20 < RDX < KNTO ≈ ENTO < GZTO·H2O < NTO ≈ HMX < KZTO·H2O < ZTO ≈ 2 < ATO.
By PY–GC/MS, the thermal pyrolyses of 1 and 2 at 400 and 500 °C generated numerous fragment species, with fewer species formed at 500 °C than at 400 °C. By analysis of the possible structures of the pyrolysis products, some conclusions about the pyrolysis pathways of 1 and 2 were drawn. The fragments detected by GC/MS following the pyrolyses of 1 and 2 were likely due to numerous secondary reactions, such as coupling, rearrangement, and addition or elimination of hydrogen atoms, of the smaller ion fragments derived from the explosion reactions (Additional file 1).
Kumar D, Imler GH, Parrish DA, Shreeve JM (2017) A highly stable and insensitive fused triazole–triazine explosive (TTX). Chem Eur J 23:1743–1747
Piercey DG, Chavez DE, Scott BL, Imler GH, Parrish DA (2016) An energetic triazolo-1,2,4-triazine and its N-oxide. Angew Chem Int Ed 55:15315–15318
Wang XS, Huang BS, Liu XY, Zhan P (2016) Discovery of bioactive molecule from CuAAC click-chemistry-based combinatorial libraries. Drug Discov Today 21:118–132
Li PZ, Wang XJ, Liu J, Lin JS, Zou RQ, Zhao YL (2016) A triazole-containing metal-organic framework as a highly effective and substrate size-dependent catalyst for CO2 conversion. J Am Chem Soc 138:2142–2145
Singh RP, Gao HX, Meshri DT, Shreeve JM (2007) Nitrogen-rich heterocycles. Struct Bond 125:35–83
Klapötke TM, Sabaté CM (2008) Bistetrazoles: nitrogen-rich, high-performing, insensitive energetic compounds. Chem Mater 20:3629–3637
Tao GH, Guo Y, Parrish DA, Shreeve JM (2010) Energetic 1,5-diamino-4H-tetrazolium nitro-substituted azolates. J Mater Chem 20:2999–3005
Gao Y, Gao HX, Twamley B, Shreeve JM (2007) Energetic nitrogen rich salts of N,N-bis[1(2)H-tetrazol-5-yl]amine. Adv Mater 19:2884–2888
Joo YH, Shreeve JM (2010) High-density energetic mono- or bis(oxy)-5-nitroiminotetrazoles. Angew Chem 122:7478–7481
Huynh MH, Hiskey MA, Hartline EL, Montoya DP, Gilardi R (2004) Polyazido high-nitrogen compounds: hydrazoand azo-1,3,5-triazine. Angew Chem Int Ed 43:4924–4928
Klapötke TM, Piercey DG (2011) 1,1′-Azobis(tetrazole): a high energetic nitrogen-rich compound with a N10 chain. Inorg Chem 50:2732–2734
Qi C, Li SH, Li YC, Wang Y, Zhao XX, Pang SP (2012) Synthesis and promising properties of a new family of high-nitrogen compounds: polyazido- and polyamino-substituted N,N′-azo-1,2,4-triazoles. Chem Eur J 18(51):16562–16570
Li YC, Qi C, Li SH, Zhang HH, Sun CH, Yu YZ, Pang SP (2010) 1,1′-Azobis-1,2,3-triazole: a high-nitrogen compound with stable N8 structure and photochromism. J Am Chem Soc 132:12172–12173
Liu W, Li SH, Li YC, Yang YZ, Yu Y, Pang SP (2014) Nitrogen-rich salts based on polyamino substituted N,N′-azo-1,2,4-triazole: a new family of high-performance energetic materials. J Mater Chem A 2:15978–15986
Sivabalan R, Anniyappan M, Pawar SJ, Talawar MB, Gore GM, Venugopalan S, Gandhe BR (2006) Synthesis, characterization and thermolysis studies on triazole and tetrazole based high nitrogen content high energy materials. J Hazard Mater A137:672–680
Li ZM, Zhang JG, Cui Y, Zhang TL, Shu YJ, Sinditskii VP, Serushkin VV, Egorshin VY (2010) A novel nitrogen-rich cadmium coordination compound based on 1,5-diaminotetrazole: synthesis, structure investigation, and thermal properties. J Chem Eng Data 55:3109–3116
Qi C, Li SH, Li YC, Wang Y, Chen XK, Pang SP (2011) A novel stable high-nitrogen energetic material: 4,4′-azobis(1,2,4-triazole). J Mater Chem 21:3221
Zhang QH, Shreeve JM (2013) Growing catenated nitrogen atom chains. Angew Chem Int Ed 52:8792–8794
Yin P, Parrish DA, Shreeve JM (2014) Bis(nitroamino-1,2,4-triazolates): N-bridging strategy toward insensitive energetic materials. Angew Chem Int Ed 53:12889–12892
Politzer P, Lane P, Murray JS (2014) Some interesting aspects of N-oxides. Mol Phys 112:719–725
Fabian J, Lewars E (2004) Azabenzenes (azines)—the nitrogen derivatives of benzene with one to six N atoms: stability, homodesmotic stabilization energy, electron distribution, and magnetic ring current; a computational study. Can J Chem 82:50–69
Politzer P, Lane P, Murray JS (2013) Computational analysis of relative stabilities of polyazine N-oxides. Struct Chem 24:1965–1974
Simon P (2004) Isoconversional methods—fundamentals, meaning and application. J Therm Anal Calorim 76:123–132
Fan MH, Panezai H, Sun JH, Bai SY, Wu X (2014) Thermal and kinetic performance of water desorption for N2 absorption in Li-LSX zeolite. J Phys Chem C 118:23761–23767
Li KY, Huang XY, Fleischmann C, Rein G, Ji J (2014) Pyrolysis of medium-density fiberboard: optimized search for kinetics scheme and parameters via a genetic algorithm driven by Kissinger's method. Energy Fuel 28:6130–6139
Sbirrazzuoli N, Vincent L, Mija A, Guigo N (2009) Integral, differential and advanced isoconversional methods. Complex mechanisms and isothermal predicted conversion-time curves. Chemom Intell Lab Syst 96:219–226
Slopiecka K, Bartocci P, Fantozzi F (2012) Thermogravimetric analysis and kinetic study of poplar wood pyrolysis. Appl Energy 97:491–497
Zhu P, Sui S, Wang B (2004) A study of pyrolysis and pyrolysis products of flame-retardant cotton fabrics by DSC, TGA, and PY–GC–MS. J Anal Appl Pyrolysis 71:645
Wu BD, Wang SW, Yang L, Zhang TL, Zhang JG, Zhou ZN, Yu KB (2011) Preparation, crystal structures, thermal decomposition and explosive properties of two novel energetic compounds M(IMI)4(N3)2 (M = CuII and NiII, IMI = Imidazole): the new high-nitrogen materials (N > 46%). Eur J Inorg Chem 16:2616–2623
Ma C, Huang J, Ma HX, Xu KZ, Lv XQ, Song JR, Zhao NN, He JY, Zhao YS (2013) Preparation, crystal structure, thermal decomposition, quantum chemical calculations on [K(ZTO)·H2O]∞ and its ligand ZTO. J Mol Struct 1036:521–527
Chauhan NP, Mozafari M, Ameta R, Punjabi PB, Ameta SC (2015) Spectral and thermal characterization of halogen-bonded novel crystalline oligo(p-bromoacetophenone formaldehyde). J Phys Chem B 119:3223–3230
Tan CC, Dalapati GK, Tan HR, Bosman MB, Hui HK, Tripathy S, Chi D (2015) Crystallization of sputter-deposited amorphous (FeSi2)1−xAlx thin films. Cryst Growth Des 15:1692–1696
Vyazovkin S, Burnham AK, Criado JM, Pérez-Maqueda LA, Popescu C, Sbirrazzuoli N (2011) ICTAC Kinetics Committee recommendations for performing kinetic computations on thermal analysis data. Thermochim Acta 520:1–19
Akahira T, Sunose T (1971) Method of determining activation deterioration constant of electrical insulating materials. Res Rep Chiba Inst Technol Sci Technol 16:22–31
Starink MJ (2003) The determination of activation energy from linear heating rate experiments: a comparison of the accuracy of isoconversion methods. Thermochim Acta 404:163–176
Chen HY, Zhang TL, Zhang JG, Qiao XJ, Yu KB (2006) Crystal structure, thermal decomposition mechanism and explosive properties of [Na(H2TNPG)(H2O)]n. J Hazard Mater A129:31–36
Li Y, Wu BD, Zhang TL, Liu ZH, Zhang JG (2010) Preparation, crystal structure, thermal decomposition, and explosive properties of [Cd(en)(N3)2]n. Propellants Explos Pyrotech 35:521–528
Wu BD, Li Y, Wang SW, Zhang TL, Zhang JG, Zhou ZN, Yu KB (2011) Preparation, crystal structure, thermal decomposition, and explosive properties of a novel energetic compound [Zn(N2H4)2(N3)2]n: a new high-nitrogen material (N = 65.60%). Z Anorg Allg Chem 637:450–455
Ma C, Huang J, Zhong YT, Xu KZ, Song JR, Zhang Z (2013) Preparation, structural investigation and thermal decomposition behavior of two high-nitrogen energetic materials: ZTO·2H2O and ZTO(phen)·H2O. Bull Korean Chem Soc 34:2086–2092
Kissinger HE (1957) Reaction kinetics in differential thermal analysis. J Anal Chem 29:1702–1706
Doyle CD (1961) Kinetic analysis of thermogravimetric data. J Appl Polym Sci 15:285–292
Ozawa T (1965) A new method of analyzing thermogravimetric data. Bull Chem Soc Jpn 38:1881–1885
Hu RZ, Gao SL, Zhao FQ, Shi QZ, Zhang TL, Zhang JJ (2008) Thermal analysis kinetics, 2nd edn. Science Press, Beijing
Zhang TL, Hu RZ, Xie Y (1994) The estimation of critical temperatures of thermal explosion for energetic materials using non-isothermal DSC. Thermochim Acta 244:171–176
Zhong YT, Huang J, Song JR, Xu KZ, Zhao D, Wang LQ, Zhang XY (2011) Synthesis, crystal structure and thermal behavior of GZTO·H2O. Chin J Chem 29:1672–1676
CJ, YL and SP conceived and designed the experiments. CJ and SZ performed the experiments. CJ, YL, SZ and TF analyzed the data. CJ completed the manuscript. All authors read and approved the final manuscript.
The authors gratefully acknowledge the financial support from the National Natural Science Foundation of China (21576026 and U153062).
The experiment data supporting the conclusions of this article are included within the article and its additional file.
All authors approved and consented to the publication.
National Natural Science Foundation of China (21576026 and U153062).
School of Materials Science and Engineering, Beijing Institute of Technology, Beijing, 100081, China
Chenhui Jia, Yuchuan Li, Shujuan Zhang, Teng Fei & Siping Pang
Chenhui Jia
Yuchuan Li
Shujuan Zhang
Teng Fei
Siping Pang
Correspondence to Yuchuan Li.
The purity of the title compounds.
Jia, C., Li, Y., Zhang, S. et al. Thermogravimetric analysis, kinetic study, and pyrolysis–GC/MS analysis of 1,1ʹ-azobis-1,2,3-triazole and 4,4ʹ-azobis-1,2,4-triazole. Chemistry Central Journal 12, 22 (2018). https://doi.org/10.1186/s13065-018-0381-x
DOI: https://doi.org/10.1186/s13065-018-0381-x
1,1ʹ-Azobis-1,2,3-triazole
Thermal decomposition
Kinetic study
Thermogravimetric–differential scanning calorimetry (TG–DSC)
Pyrolysis–gas chromatography/mass spectrometry (PY–GC/MS)
Disciplined Inconsistency
Brandon Holt, James Bornholt, Irene Zhang, Dan Ports, Mark Oskin, Luis Ceze
{bholt,bornholt,iyzhang,drkp,oskin,luisceze}@cs.uw.edu
Technical Report UW-CSE-16-06-01
Abstract. Distributed applications and web services, such as online stores or social networks, are expected to be scalable, available, responsive, and fault-tolerant. To meet these steep requirements in the face of high round-trip latencies, network partitions, server failures, and load spikes, applications use eventually consistent datastores that allow them to weaken the consistency of some data. However, making this transition is highly error-prone because relaxed consistency models are notoriously difficult to understand and test.
In this work, we propose a new programming model for distributed data that makes consistency properties explicit and uses a type system to enforce consistency safety. With the Inconsistent, Performance-bound, Approximate (IPA) storage system, programmers specify performance targets and correctness requirements as constraints on persistent data structures and handle uncertainty about the result of datastore reads using new consistency types. We implement a prototype of this model in Scala on top of an existing datastore, Cassandra, and use it to make performance/correctness tradeoffs in two applications: a ticket sales service and a Twitter clone. Our evaluation shows that IPA prevents consistency-based programming errors and adapts consistency automatically in response to changing network conditions, performing comparably to weak consistency and 2-10$\times$ faster than strong consistency.
1. Introduction
To provide good user experiences, modern datacenter applications and web services must balance the competing requirements of application correctness and responsiveness. For example, a web store double-charging for purchases or keeping users waiting too long (every additional millisecond of latency matters [26, 36]) can translate to a loss in traffic and revenue. Worse, programmers must maintain this balance in an unpredictable environment where a black and blue dress [42] or Justin Bieber [38] can change application performance in the blink of an eye.
Recognizing the trade-off between consistency and performance, many existing storage systems support configurable consistency levels that allow programmers to set the consistency of individual operations [4, 11, 34, 58]. These allow programmers to weaken consistency guarantees only for data that is not critical to application correctness, retaining strong consistency for vital data. Some systems further allow adaptable consistency levels at runtime, where guarantees are only weakened when necessary to meet availability or performance requirements (e.g., during a spike in traffic or datacenter failure) [59, 61]. Unfortunately, using these systems correctly is challenging. Programmers can inadvertently update strongly consistent data in the storage system using values read from weakly consistent operations, propagating inconsistency and corrupting stored data. Over time, this undisciplined use of data from weakly consistent operations lowers the consistency of the storage system to its weakest level.
In this paper, we propose a more disciplined approach to inconsistency in the Inconsistent, Performance-bound, Approximate (IPA) storage system. IPA introduces the following concepts:
Consistency Safety, a new property that ensures that values from weakly consistent operations cannot flow into stronger consistency operations without explicit endorsement from the programmer. IPA is the first storage system to provide consistency safety.
Consistency Types, a new type system in which type safety implies consistency safety. Consistency types define the consistency and correctness of the returned value from every storage operation, allowing programmers to reason about their use of different consistency levels. IPA's type checker enforces the disciplined use of IPA consistency types statically at compile time.
Error-bounded Consistency. IPA is a data structure store, like Redis [54] or Riak [11], allowing it to provide a new type of weak consistency that places numeric error bounds on the returned values. Within these bounds, IPA automatically adapts to return the strongest IPA consistency type possible under the current system load.
We implement an IPA prototype based on Scala and Cassandra and show that IPA allows the programmer to trade off performance and consistency, safe in the knowledge that the type system has checked the program for consistency safety. We demonstrate experimentally that these mechanisms allow applications to dynamically adapt correctness and performance to changing conditions with three applications: a simple counter, a Twitter clone based on Retwis [55] and a Ticket sales service modeled after FusionTicket [1].
2. The Case for Consistency Safety
Unpredictable Internet traffic and unexpected failures force modern datacenter applications to trade off consistency for performance. In this section, we demonstrate the pitfalls of doing so in an undisciplined way. As an example, we describe a movie ticketing service, similar to AMC or Fandango. Because ticketing services process financial transactions, they must ensure correctness, which they can do by storing data in a strongly consistent storage system. Unfortunately, providing strong consistency for every storage operation can cause the storage system and application to collapse under high load, as several ticketing services did in October 2015, when tickets became available for the new Star Wars movie [21].
Figure 1. Ticket sales service. To meet a performance target in displayEvent, developer switches to a weak read for getTicketCount, not realizing that this inconsistent read will be used elsewhere to compute the ticket price.
To allow the application to scale more gracefully and handle traffic spikes, the programmer may choose to weaken the consistency of some operations. As shown in Figure 1, the ticket application displays each showing of the movie along with the number of tickets remaining. For better performance, the programmer may want to weaken the consistency of the read operation that fetches the remaining ticket count to give users an estimate, instead of the most up-to-date value. Under normal load, even with weak consistency, this count would often still be correct because propagation is typically fast compared to updates. However, eventual consistency makes no guarantees, so under heavier traffic spikes, the values could be significantly incorrect and the application has no way of knowing by how much.
While this solves the programmer's performance problem, it introduces a data consistency problem. Suppose that, like Uber's surge pricing, the ticket sales application wants to raise the price of the last 100 tickets for each showing to $15. If the application uses a strongly consistent read to fetch the remaining ticket count, then it can use that value to compute the price of the ticket on the last screen in Figure 1. However, if the programmer reuses getTicketCount, which used a weak read, to calculate the price, then this count could be arbitrarily wrong. The application could then over- or under-charge some users depending on the consistency of the returned value. Worse, the theater expects to make $1500 for those tickets with the new pricing model, which may not happen with the new weaker read operation. Thus, programmers need to be careful in their use of values returned from storage operations with weak consistency. Simply weakening the consistency of an operation may lead to unexpected consequences for the programmer (e.g., the theater not selling as many tickets at the higher surge price as expected).
In this work, we propose a programming model that can prevent using inconsistent values where they were not intended, as well as introduce mechanisms that allow the storage system to dynamically adapt consistency within predetermined performance and correctness bounds.
3. Programming Model
We propose a programming model for distributed data that uses types to control the consistency–performance trade-off. The Inconsistent, Performance-bound, Approximate (IPA) type system helps developers trade consistency for performance in a disciplined manner. This section presents the IPA programming model, including the available consistency policies and the semantics of operations performed under the policies. §4 will explain how the type system's guarantees are enforced.
3.1. Overview
ADT / Method          Consistency(Strong)    Consistency(Weak)        LatencyBound(_)      ErrorTolerance(_)
Counter.read()        Consistent[Int]        Inconsistent[Int]        Rushed[Int]          Interval[Int]
Set.size()            Consistent[Int]        Inconsistent[Int]        Rushed[Int]          Interval[Int]
Set.contains(x)       Consistent[Bool]       Inconsistent[Bool]       Rushed[Bool]         N/A
List[T].range(x,y)    Consistent[List[T]]    Inconsistent[List[T]]    Rushed[List[T]]      N/A
UUIDPool.take()       Consistent[UUID]       Inconsistent[UUID]       Rushed[UUID]         N/A
UUIDPool.remain()     Consistent[Int]        Inconsistent[Int]        Rushed[Int]          Interval[Int]
Table 1. Example ADT operations; consistency policies determine the consistency type of the result.
The IPA programming model consists of three parts:
Abstract data types (ADTs) implement common data structures (such as Set[T]) on distributed storage.
Consistency policies on ADTs specify the desired consistency level for an object in application-specific terms (such as latency or accuracy bounds).
Consistency types track the consistency of operation results and enforce consistency safety by requiring developers to consider weak outcomes.
Programmers annotate ADTs with consistency policies to choose their desired level of consistency. The consistency policy on the ADT operation determines the consistency type of the result. Table 1 shows some examples; the next few sections will introduce each of the policies and types in detail. Together, these three components provide two key benefits for developers. First, the IPA type system enforces consistency safety, tracking the consistency level of each result and preventing inconsistent values from flowing into consistent values. Second, the programming interface enables performance–correctness trade-offs, because consistency policies on ADTs allow the runtime to select a consistency level for each individual operation that maximizes performance in a constantly changing environment. Together, these systems allow applications to adapt to changing conditions with the assurance that the programmer has expressed how it should handle varying consistency.
3.2. Abstract Data Types
The base of the IPA type system is a set of abstract data types (ADTs) for distributed data structures. ADTs present a clear abstract model through a set of operations that query and update state, allowing users and systems alike to reason about their logical, algebraic properties rather than the low-level operations used to implement them. Though the simplest key-value stores only support primitive types like strings for values, many popular datastores have built-in support for more complex data structures such as sets, lists, and maps. However, the interface to these datatypes differs: from explicit sets of operations for each type in Redis, Riak, and Hyperdex [11, 25, 31, 54] to the pseudo-relational model of Cassandra [32]. IPA's extensible library of ADTs allows it to decouple the semantics of the type system from any particular datastore, though our reference implementation is on top of Cassandra, similar to [57].
Besides abstracting over storage systems, ADTs are an ideal place from which to reason about consistency and system-level optimizations. The consistency of a read depends on the write that produced the value. Annotating ADTs with consistency policies ensures the necessary guarantees for all operations are enforced, which we will expand on in the next section.
Custom ADTs can express application-level correctness constraints. IPA's Counter ADT allows reading the current value as well as increment and decrement operations. In our ticket sales example, we must ensure that the ticket count does not go below zero. Rather than forcing all operations on the datatype to be linearizable, this application-level invariant can be expressed with a more specialized ADT, such as a BoundedCounter, giving the implementation more latitude for enforcing it. IPA's library is extensible, allowing custom ADTs to build on common features; see §5.
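As a rough illustration of what such a specialized ADT interface might look like, consider the Scala trait below. The method names are our assumption for this sketch, not IPA's actual API.

// Sketch of an application-level bounded counter; names are illustrative only.
trait BoundedCounter {
  def read(): Int                 // current value, possibly weakly consistent
  def incr(n: Int = 1): Unit      // produces additional decrement capacity
  def decr(n: Int = 1): Boolean   // returns false rather than letting the value go below zero
}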
3.3. Consistency Policies
Previous systems [4, 11, 34, 58, 61] require annotating each read and write operation with a desired consistency level. This per-operation approach complicates reasoning about the safety of code using weak consistency, and hinders global optimizations that can be applied if the system knows the consistency level required for future operations. The IPA programming model provides a set of consistency policies that can be placed on ADT instances to specify consistency properties for the lifetime of the object. Consistency policies come in two flavors: static and dynamic.
Static policies are fixed, such as Consistency(Strong) which states that operations must have strongly consistent behavior. Static annotations provide the same direct control as previous approaches but simplify reasoning about correctness by applying them globally on the ADT.
Dynamic policies specify a consistency level in terms of application requirements, allowing the system to decide at runtime how to meet the requirement for each executed operation. IPA offers two dynamic consistency policies:
A latency policy LatencyBound(x) specifies a target latency for operations on the ADT (e.g., 20 ms). The runtime can choose the consistency level for each issued operation, optimizing for the strongest level that is likely to satisfy the latency bound.
An accuracy policy ErrorTolerance(x%) specifies the desired accuracy for read operations on the ADT. For example, the size of a Set ADT may only need to be accurate within 5% tolerance. The runtime can optimize the consistency of write operations so that reads are guaranteed to meet this bound.
Dynamic policies allow the runtime to extract more performance from an application by relaxing the consistency of individual operations, safe in the knowledge that the IPA type system will enforce safety by requiring the developer to consider the effects of weak operations.
Static and dynamic policies can apply to an entire ADT instance or on individual methods. For example, one could declare List[Int] with LatencyBound(50 ms), in which case all read operations on the list are subject to the bound. Alternatively, one may wish to declare a Set with relaxed consistency for its size but strong consistency for its contains predicate. The runtime is responsible for managing the interaction between these policies. In the case of a conflict between two bounds, the system can be conservative and choose stronger policies than specified without affecting correctness.
In the ticket sales application, the Counter for each event's tickets could have a relaxed accuracy policy, ErrorTolerance(5%), allowing the system to quickly read the count of tickets remaining. An accuracy policy is appropriate here because it expresses a domain requirement—users want to see accurate ticket counts. As long as the system meets this requirement, it is free to relax consistency and maximize performance without violating correctness. The List ADT used for events has a latency policy that also expresses a domain requirement—that pages on the website load in reasonable time.
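A sketch of how the ticket application might declare these policies is shown below. The constructor and policy names mirror the paper's terminology, but the concrete API is our assumption, written as a self-contained Scala snippet rather than IPA's real declarations.

object TicketPolicies {
  sealed trait Policy
  case object Strong extends Policy
  case object Weak extends Policy
  final case class LatencyBound(millis: Int) extends Policy
  final case class ErrorTolerance(fraction: Double) extends Policy

  // Placeholder ADT handles carrying a per-instance policy
  final class Counter(val key: String, val policy: Policy)
  final class EventList(val key: String, val policy: Policy)

  val ticketCounter = new Counter("event:starwars:tickets", ErrorTolerance(0.05)) // accuracy requirement
  val eventListing  = new EventList("events:upcoming", LatencyBound(50))          // page-load requirement
}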
3.4. Consistency Types
The key to consistency safety in IPA is the consistency types—enforcing type safety directly enforces consistency safety. Read operations of ADTs annotated with consistency policies return instances of a consistency type. These consistency types track the consistency of the results and enforce a fundamental non-interference property: results from weakly consistent operations cannot flow into computations with stronger consistency without explicit endorsement. This could be enforced dynamically, as in dynamic information flow control systems, but the static guarantees of a type system allow errors to be caught at compile time.
Figure 2. IPA Type Lattice parameterized by a type T.
The consistency types encapsulate information about the consistency achieved when reading a value. Formally, the consistency types form a lattice parameterized by a primitive type T, shown in Figure 2. Strong read operations return values of type Consistent[T] (the top element), and so (by implicit cast) behave as any other instance of type T. Intuitively, this equivalence is because the results of strong reads are known to be consistent, which corresponds to the control flow in conventional (non-distributed) applications. Weaker read operations return values of some type lower in the lattice (weak consistency types), reflecting their possible inconsistency. The bottom element Inconsistent[T] specifies an object with the weakest possible (or unknown) consistency. The other consistency types follow a subtyping relation $\prec$ as illustrated in Figure 2.
The only possible operation on Inconsistent[T] is to endorse it. Endorsement is an upcast, invoked by Consistent(x), to the top element Consistent[T] from other types in the lattice:
\[\inferrule{\Gamma \vdash e_1 : \tau[T] \\ T \prec \tau[T]}{\Gamma \vdash \operatorname{Consistent}(e_1) : T}\]
The core type system statically enforces safety by preventing weaker values from flowing into stronger computations. Forcing developers to explicitly endorse inconsistent values prevents them from accidentally using inconsistent data where they did not determine it was acceptable, essentially inverting the behavior of current systems where inconsistent data is always treated as if it was safe to use anywhere. However, endorsing values blindly in this way is not the intended use case; the key productivity benefit of the IPA type system comes from the other consistency types which correspond to the dynamic consistency policies in §3.3 which allow developers to handle dynamic variations in consistency, which we describe next.
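The non-interference property can be illustrated with a toy Scala encoding; this is our simplification of the idea, not IPA's implementation, and the surge-pricing threshold is taken from the ticket example above.

object ConsistencyToy {
  final case class Consistent[T](value: T)   // result of a strong read
  final case class Inconsistent[T](value: T) // result of a weak read

  // Explicit endorsement: the programmer vouches for the weak value
  def endorse[T](x: Inconsistent[T]): Consistent[T] = Consistent(x.value)

  // Pricing requires a consistent count, mirroring the surge-pricing example
  def ticketPrice(count: Consistent[Int]): Int =
    if (count.value <= 100) 15 else 10

  def demo(): Unit = {
    val weakCount: Inconsistent[Int] = Inconsistent(97)
    // ticketPrice(weakCount)                 // rejected by the type checker
    println(ticketPrice(endorse(weakCount)))  // compiles only after explicit endorsement
  }
}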
3.4.1. Rushed types
The weak consistency type Rushed[T] is the result of read operations performed on an ADT with consistency policy LatencyBound(x). Rushed[T] is a sum (or union) type, with one variant per consistency level available to the implementation of LatencyBound. Each variant is itself a consistency type (though the variants obviously cannot be Rushed[T] itself). The effect is that values returned by a latency-bound object carry with them their actual consistency level. A result of type Rushed[T] therefore requires the developer to consider the possible consistency levels of the value.
For example, a system with geo-distributed replicas may only be able to satisfy a latency bound of 50 ms with a local quorum read (that is, a quorum of replicas within a single datacenter). In this case, Rushed[T] would be the sum of three types Consistent[T], LocalQuorum[T], and Inconsistent[T]. A match statement destructures the result of a latency-bound read operation:
set.contains() match {
  case Consistent(x)   => print(x)
  case LocalQuorum(x)  => print(x + ", locally")
  case Inconsistent(x) => print(x + "???")
}
The application may want to react differently to a local quorum as opposed to a strongly or weakly consistent value. Note that because of the subtyping relation on consistency types, omitted cases can be matched by any type lower in the lattice, including the bottom element Inconsistent(x); other cases therefore need only be added if the application should respond differently to them. This subtyping behavior allows applications to be portable between systems supporting different forms of consistency (of which there are many).
3.4.2. Interval types
Tagging values with a consistency level is useful because it helps programmers tell which operation reorderings are possible (e.g. strongly consistent operations will be observed to happen in program order). However, accuracy policies provide a different way of dealing with inconsistency by expressing it in terms of value uncertainty. They require knowing the abstract behavior of operations in order to determine the change in abstract state which results from each reordered operation (e.g., reordering increments on a Counter has a known effect on the value of reads).
The weak consistency type Interval[T] is the result of operations performed on an ADT with consistency policy ErrorTolerance(x%). Interval[T] represents an interval of values within which the true (strongly consistent) result lies. The interval reflects uncertainty in the true value created by relaxed consistency, in the same style as work on approximate computing [15].
The key invariant of the Interval type is that the interval must include the result of some linearizable execution. Consider a Set with 100 elements. With linearizability, if we add a new element and then read the size (or if this ordering is otherwise implied), we must get 101 (provided no other updates are occurring). However, if size is annotated with ErrorTolerance(5%), then it could return any interval that includes 101, such as $[95,105]$ or $[100,107]$, so the client cannot tell if the recent add was included in the size. This frees the system to optimize to improve performance, such as by delaying synchronization. While any partially-ordered domain could be represented as an interval (e.g., a Set with partial knowledge of its members), in this work we consider only numeric types.
In the ticket sales example, the counter ADT's accuracy policy means that reads of the number of tickets return an Interval[Int]. If the entire interval is above zero, then users can be assured that there are sufficient tickets remaining. In fact, because the interval could represent many possible linearizable executions, in the absence of other user actions, a subsequent purchase must succeed. On the other hand, if the interval overlaps with zero, then there is a chance that tickets could already be sold out, so users could be warned. Note that ensuring that tickets are not over-sold is a separate concern requiring a different form of enforcement, which we describe in §5. The relaxed consistency of the interval type allows the system to optimize performance in the common case where there are many tickets available, and dynamically adapt to contention when the ticket count diminishes.
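A small sketch of how a client might consume an Interval result in the ticket application follows; the Interval fields (min and max) are our stand-in for whatever accessors IPA actually exposes (Scala 3-style top-level definitions).

final case class Interval[T](min: T, max: T) // stand-in for IPA's interval type

def describeAvailability(remaining: Interval[Int]): String =
  if (remaining.min > 0) s"At least ${remaining.min} tickets left"  // a purchase is safe
  else if (remaining.max <= 0) "Sold out"
  else "Almost sold out - availability uncertain"                   // interval overlaps zero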
4. Enforcing consistency policies
The consistency policies introduced in the previous section allow programmers to describe application-level correctness properties. Static consistency policies (e.g. Strong) are enforced by the underlying storage system; the annotated ADT methods simply set the desired consistency level when issuing requests to the store. The dynamic policies each require a new runtime mechanism to enforce them: parallel operations with latency monitoring for latency bounds, and reusable reservations for error tolerance. But first, we briefly review consistency in Dynamo-style replicated systems.
To be sure of seeing a particular write, strong reads must coordinate with a majority (quorum) of replicas and compare their responses. For a write and read pair to be strongly consistent (in the CAP sense [17]), the replicas acknowledging the write ($W$) plus the replicas contacted for the read ($R$) must be greater than the total number of replicas ($W + R > N$). This can be achieved, for example, by writing to a quorum ($(N+1)/2$) and reading from a quorum (QUORUM in Cassandra), or writing to $N$ (ALL) and reading from 1 (ONE) [22]. To support the Consistency(Strong) policy, the designer of each ADT must choose consistency levels for its operations which together enforce strong consistency.
4.1. Latency bounds
The time it takes to achieve a particular level of consistency depends on current conditions and can vary over large time scales (minutes or hours), but it can also vary significantly between individual operations. During normal operation, strong consistency may have acceptable performance, while at peak traffic times it could cause the application to fall over. Latency bounds specified by the application allow the system to adjust dynamically and maintain comparable performance under varying conditions.
Our implementation of latency-bound types takes a generic approach: it issues read requests at different consistency levels in parallel. It composes the parallel operations and returns a result either when the strongest operation returns, or with the strongest available result at the specified time limit. If no responses are available at the time limit, it waits for the first to return.
This approach makes no assumptions about the implementation of read operations, making it easily adaptable to different storage systems. Some designs may permit more efficient implementations: for example, in a Dynamo-style storage system we could send read requests to all replicas, then compute the most consistent result from all responses received within the latency limit. However, this requires deeper access to the storage system implementation than is traditionally available.
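The parallel-read strategy can be sketched with plain Scala futures, as below. The two read functions are stand-ins with artificial delays; the real implementation issues datastore requests at different consistency levels (Scala 3-style top-level definitions).

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

def strongRead(): Int = { Thread.sleep(80); 101 } // slower but consistent (placeholder)
def weakRead(): Int   = { Thread.sleep(5); 97 }   // fast but possibly stale (placeholder)

def latencyBoundRead(bound: FiniteDuration): (String, Int) = {
  val strong = Future(strongRead())
  val weak   = Future(weakRead())
  try ("strong", Await.result(strong, bound)) // use the strong result if it meets the bound
  catch {
    case _: java.util.concurrent.TimeoutException =>
      ("weak", Await.result(weak, Duration.Inf)) // otherwise fall back to the weaker result
  }
}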
4.1.1. Monitors
The main problem with our approach is that it wastes work by issuing parallel requests. Furthermore, if the system is responding more slowly due to a sudden surge in traffic, it is essential that our efforts not place additional burden on the system. In these cases, we should back off and only attempt weaker consistency. To do this, the system monitors current traffic and predicts the latency of different consistency levels.
Each client in the system has its own Monitor (though multi-threaded clients can share one). The monitor records the observed latencies of reads, grouped by operation and consistency level. The monitor uses an exponentially decaying reservoir to compute running percentiles weighted toward recent measurements, ensuring that its predictions continually adjust to current conditions.
Whenever a latency-bound operation is issued, it queries the monitor to determine the strongest consistency likely to be achieved within the time bound, then issues one request at that consistency level and a backup at the weakest level, or only weak if none can meet the bound. In §6.2.1 we show empirically that even simple monitors allow clients to adapt to changing conditions.
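A simplified monitor along these lines might look as follows (all names are illustrative; recency weighting here uses exponential decay over raw samples rather than a true decaying reservoir):
import math, time
from collections import defaultdict

class LatencyMonitor:
    def __init__(self, alpha=0.02, max_samples=1000):
        self.alpha = alpha
        self.max_samples = max_samples
        self.samples = defaultdict(list)              # (op, level) -> [(timestamp, latency_s)]

    def record(self, op, level, latency_s):
        buf = self.samples[(op, level)]
        buf.append((time.time(), latency_s))
        del buf[:-self.max_samples]                   # keep only the most recent samples

    def predict(self, op, level, q=0.95):
        # Recency-weighted percentile of observed latencies for this operation and level.
        now = time.time()
        pts = sorted((lat, math.exp(-self.alpha * (now - t)))
                     for t, lat in self.samples[(op, level)])
        if not pts:
            return float('inf')                       # no data: assume the level misses any bound
        total, acc = sum(w for _, w in pts), 0.0
        for lat, w in pts:
            acc += w
            if acc >= q * total:
                return lat
        return pts[-1][0]

    def strongest_within(self, op, levels, bound_s):
        # levels ordered weakest -> strongest; fall back to the weakest when nothing fits.
        fits = [lvl for lvl in levels if self.predict(op, lvl) <= bound_s]
        return fits[-1] if fits else levels[0]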
4.2. Error bounds
We implement error bounds by building on the concepts of escrow and reservations [27, 44, 48, 50]. These techniques have been used in storage systems to enforce hard limits, such as an account balance never going negative, while permitting concurrency. The idea is to set aside a pool of permissions to perform certain update operations (we'll call them reservations or tokens), essentially treating operations as a manageable resource. If we have a counter that should never go below zero, there could be a number of decrement tokens equal to the current value of the counter. When a client wishes to decrement, it must first acquire sufficient tokens before performing the update operation, whereas increments produce new tokens. The insight is that the coordination needed to ensure that there are never too many tokens can be done off the critical path: tokens can be produced lazily if there are enough around already, and most importantly for this work, they can be distributed among replicas. This means that replicas can perform some update operations safely without coordinating with any other replicas.
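The escrow idea for a non-negative counter can be sketched in a few lines (a single-node toy with invented names; IPA splits the token pool across reservation servers):
class EscrowCounter:
    def __init__(self, initial=0):
        self.value = initial
        self.decrement_tokens = initial      # one token per unit the counter may safely lose

    def increment(self, n=1):
        self.value += n
        self.decrement_tokens += n           # each increment mints a decrement permission

    def decrement(self, n=1):
        if self.decrement_tokens < n:
            raise RuntimeError("insufficient tokens: would violate the lower bound")
        self.decrement_tokens -= n
        self.value -= n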
4.2.1. Reservation Server
Reservations require mediating requests to the datastore to prevent updates from exceeding the available tokens. Furthermore, each server must locally know how many tokens it has without synchronizing. We are not aware of a commercial datastore that supports custom mediation of requests and replica-local state, so we need a custom middleware layer to handle reservation requests, similar to other systems which have built stronger guarantees on top of existing datastores [8, 10, 57].
Any client requests requiring reservations are routed to one of a number of reservation servers. These servers then forward operations when permitted along to the underlying datastore. All persistent data is kept in the backing store; these reservation servers keep only transient state tracking available reservations. The number of reservation servers can theoretically be decoupled from the number of datastore replicas; our implementation simply colocates a reservation server with each datastore server and uses the datastore's node discovery mechanisms to route requests to reservation servers on the same host.
4.2.2. Enforcing error bounds
Reservations have been used previously to enforce hard global invariants in the form of upper or lower bounds on values [10], integrity constraints [9], or logical assertions [37]. However, enforcing error tolerance bounds presents a new design challenge because the bounds are constantly shifting. Consider a Counter with a 10% error bound, shown in Figure 3. If the current value is 100, then 10 increments can be done before anyone must be told about it. However, we have 3 reservation servers, so these 10 reservations are distributed among them, allowing each to do some increments without synchronizing. If only 10 outstanding increments are allowed, reads are guaranteed to maintain the 10% error bound.
Figure 3. Reservations enforcing error bounds.
In order to perform more increments after a server has exhausted its reservations, it must synchronize with the others, sharing its latest increments and receiving any changes of theirs. This is accomplished by doing a strong write (ALL) to the datastore followed by a read. Once that synchronization has completed, those 3 tokens become available again because the reservation servers all temporarily agree on the value (in this case, at least 102).
Read operations for these types go through reservation servers as well: the server does a weak read from any replica, then determines the interval based on how many reservations there are. For the read in Figure 3, there are 10 reservations total, but Server B knows that it has not used its local reservations, so it knows that there cannot be more than 6 and can return the interval $[100,106]$.
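The interval computation for such a read is then simple arithmetic over the token counts. The numbers below assume Server B holds 4 of the 10 tokens, which is consistent with the example but not stated explicitly:
def read_interval(weak_value, total_tokens, local_unused):
    # A weak read of an increment-only counter can miss at most the increments hidden behind
    # tokens held by *other* servers; this server's unused tokens cannot be in flight.
    return (weak_value, weak_value + total_tokens - local_unused)

print(read_interval(100, 10, 4))   # -> (100, 106)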
4.2.3. Narrowing bounds
Error-tolerance policies give an upper bound on the amount of error; ideally, the interval returned will be more precise than the maximum error when conditions are favorable. The error bound determines the maximum number of reservations that can be allocated per instance. To allow a variable number of tokens, each ADT instance keeps a count of tokens allocated by each server, and when servers receive write requests, they increment their count to allocate tokens to use. Allocating must be done with strong consistency to ensure all servers agree, which can be expensive, so we use long leases (on the order of seconds) to allow servers to cache their allocations. When a lease is about to expire, it preemptively refreshes its lease in the background so that writes do not block.
For each type of update operation there may be a different pool of reservations. Similarly, there could be different error bounds on different read operations. It is up to the designer of the ADT to ensure that all error bounds are enforced with appropriate reservations. Consider a Set with an error tolerance on its size operation. This requires separate pools for add and remove to prevent the overall size from deviating by more than the bound in either direction, so the interval is $[v-\texttt{remove.delta},v+\texttt{add.delta}]$ where $v$ is the size of the set and delta computes the number of outstanding operations from the pool. In some situations, operations may produce and consume tokens in the same pool – e.g., increment producing tokens for decrement – but this is only allowable if updates propagate in a consistent order among replicas, which may not be the case in some eventually consistent systems.
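As a one-line illustration of that interval (sketch only; the delta arguments stand for the outstanding operations in each pool):
def size_interval(weak_size, add_delta, remove_delta):
    # Error-bounded Set.size: separate add/remove pools bound the drift in each direction.
    return (weak_size - remove_delta, weak_size + add_delta)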
5. Implementation
IPA is implemented mostly as a client-side library to an off-the-shelf distributed storage system, though reservations are handled by a custom middleware layer which mediates accesses to any data with error tolerance policies. Our implementation is built on top of Cassandra, but IPA could work with any replicated storage system that supports fine-grained consistency control, which many commercial and research datastores do, including Riak [11].
IPA's client-side programming interface is written in Scala, using the asynchronous futures-based Phantom [45] library for type-safe access to Cassandra data. Reservation server middleware is also built in Scala using Twitter's Finagle framework [63]. Communication is done between clients and Cassandra via prepared statements, and between clients and reservation servers via Thrift remote-procedure-calls [6]. Due to its type safety features, abstraction capability, and compatibility with Java, Scala has become popular for web service development, including widely-used frameworks such as Akka [35] and Spark [5], and at established companies such as Twitter and LinkedIn [2, 18, 29].
The IPA type system, responsible for consistency safety, is also simply part of our client library, leveraging Scala's sophisticated type system. The IPA type lattice is implemented as a subclass hierarchy of parametric classes, using Scala's support for higher-kinded types to allow them to be destructured in match statements, and implicit conversions to allow Consistent[T] to be treated as type T. We use traits to implement ADT annotations; e.g. when the LatencyBound trait is mixed into an ADT, it wraps each of the methods, redefining them to have the new semantics and return the correct IPA type.
Figure 4. Some of the reusable components provided by IPA and an example implementation of a Counter with error bounds.
IPA comes with a library of reference ADT implementations used in our experiments, but it is intended to be extended with custom ADTs to fit more specific use cases. Our implementation provides a number of primitives for building ADTs, some of which are shown in Figure 4. To support latency bounds, there is a generic LatencyBound trait that provides facilities for executing a specified read operation at multiple consistency levels within a time limit. For implementing error bounds, IPA provides a generic reservation pool which ADTs can use. Figure 4 shows how a Counter with error tolerance bounds is implemented using these pools. The library of reference ADTs includes:
Counter based on Cassandra's counter, supporting increment and decrement, with latency and error bounds
BoundedCounter CRDT from [10] that enforces a hard lower bound even with weak consistency. Our implementation adds the ability to bound error on the value of the counter and set latency bounds.
Set with add, remove, contains and size, supporting latency bounds, and error bounds on size.
UUIDPool generates unique identifiers, with a hard limit on the number of IDs that can be taken from it; built on top of BoundedCounter and supports the same bounds.
List: thin abstraction around a Cassandra table with a time-based clustering order, supports latency bounds.
Figure 4 shows Scala code using reservation pools to implement a Counter with error bounds. The actual implementation splits this functionality between the client and the reservation server.
6. Evaluation
The goal of the IPA programming model and runtime system is to build applications that adapt to changing conditions, performing nearly as well as weak consistency but with stronger consistency and safety guarantees. To that end, we evaluate our prototype implementation under a variety of network conditions using both a real-world testbed (Google Compute Engine [28]) and simulated network conditions. We start with simple microbenchmarks to understand the performance of each of the runtime mechanisms independently. We then study two applications in more depth, exploring qualitatively how the programming model helps avoid potential programming mistakes in each and then evaluating their performance against strong and weakly consistent implementations.
6.1. Simulating adverse conditions
To control for variability, we perform our experiments with a number of simulated conditions, and then validate our findings against experiments run on globally distributed machines in Google Compute Engine. We use a local test cluster with nodes linked by standard ethernet and Linux's Network Emulation facility [62] (tc netem) to introduce packet delay and loss at the operating system level. We use Docker containers [24] to enable fine-grained control of the network conditions between processes on the same physical node.
Table 2 shows the set of conditions we use in our experiments to explore the behavior of the system. The uniform 5ms link simulates a well-provisioned datacenter; slow replica models contention or hardware problems that cause one replica to be slower than others, and geo-distributed replicates the latencies between virtual machines in the U.S., Europe, and Asia on Amazon EC2 [3]. These simulated conditions are validated by experiments on Google Compute Engine with virtual machines in four datacenters: the client in us-east, and the storage replicas in us-central, europe-west, and asia-east. We elide the results for Local (same rack in our testbed) except in Figure 11 because the differences between policies are negligible, so strong consistency should be the default there.
Network condition, with latencies (ms) from the client to Replica 1, Replica 2 and Replica 3:
Simulated:
Uniform / High load: 5, 5, 5
Slow replica: 10, 10, 100
Geo-distributed (EC2): 1 ± 0.3, 80 ± 10, 200 ± 50
Actual:
Local (same rack): <1, <1, <1
Google Compute Engine: 30 ± <1, 100 ± <1, 160 ± <1
Table 2. Network conditions for experiments: latency from the client to each replica, with standard deviation if high.
6.2. Microbenchmark: Counter
We start by measuring the performance of a simple application that randomly increments and reads from a number of counters with different IPA policies. Random operations (incr(1) and read) are uniformly distributed over 100 counters from a single multithreaded client (allowing up to 4000 concurrent operations).
6.2.1. Latency bounds
Figure 5. Counter: latency bounds, mean latency. Beneath each bar is the % of strong reads. Strong consistency is never possible for the 10ms bound, but 50ms bound achieves mostly strong, only resorting to weak when network latency is high.
A fusion approach based on infrared finger vein transmitting model by using multi-light-intensity imaging
Liukui Chen1,
Hsing-Chung Chen ORCID: orcid.org/0000-0002-5266-99752,3,
Zuojin Li1 &
Ying Wu1
Human-centric Computing and Information Sciences volume 7, Article number: 35 (2017)
An infrared transmitting model of the finger, estimated from finger vein images observed under multi-light-intensity imaging, is proposed in this paper. The model is fitted from the values of many pixels recorded under different light intensities in the same scene. Because the resulting fusion method can be applied in a biometric system, the finger vein images captured by the proposed system are normalized while preserving the intact vein patterns of the biometric data taken from the subject. From the pixels observed under multiple light intensities, the curve of the transmitting model is recovered by sliding the sampled curve segments and applying curve fitting. A pixel-level weighted fusion method based on the proposed transmitting model curve is then adopted, combining spatial smoothing with an estimate of block quality. The results show that our approach is a convenient and practicable method for infrared image fusion and the subsequent processing required by biometric applications.
Finger vein authentication is highly accurate and convenient because it relies on an individual's unique biological characteristics. Vascular patterns are unique to each person; even identical twins have different patterns. Finger vein authentication is based on the unique vein patterns in the superficial subcutaneous tissue of the finger [1,2,3]. Vein authentication has three main advantages: (1) Because the finger veins are hidden inside the body, the risk of forgery or theft in daily activities is very small, and surface conditions of the finger skin, e.g. dryness or wetness, have no effect on authentication. (2) Finger vein imaging is non-invasive and contactless, which is convenient and hygienic for users. (3) The stability and complexity of finger vein patterns are better than those of other biometric features of the human body, which gives a higher security level for personal identification [4].
Physiological information extracted from the human body, including features of the face, palm print or fingerprint, hand shape, skin, temperature and arterial pulse, is used for personal identification and the diagnosis of some diseases. This information, together with the subcutaneous superficial vein pattern, can be extracted and digitized as biometric data, which can then be represented as a typical pattern for identifying an individual [5,6,7,8,9]. It is convenient to use the identified biometric as an access credential, and the related applications usually focus on remote access control for websites, e.g. finance or banking sites. However, biometric image data are sensitive to physiological conditions and to the environment. For example, when facial features are captured, the illumination distribution and direction should be corrected or normalized before the data are stored; otherwise the captured image may contain many shadows or much noise, and the extracted features will be strongly influenced by them [10]. Non-uniform illumination also increases interference and redundant information, or submerges some patterns, which distorts the feature representation. It is therefore very important to normalize the captured biometric information before keeping it in the storage of a biometric system [11, 12]. Similar problems also appear in the finger vein image capturing process [13–18]. The width of a vein in the captured image changes under near-infrared light of different intensities. Because the thickness of each finger is different, under- or over-exposure may appear in the thick or thin areas of the finger when a single fixed light intensity is used, and the vein pattern in these areas is washed out. Since the integrity of the vein pattern is very important for a biometric system, it is necessary to normalize the illumination during vein image capture before storing the images in the biometric databases.
The first task of finger vein authentication is to collect the data, namely finger vein images. The quality of these images directly affects recognition accuracy and speed. This paper presents a detailed analysis of infrared finger vein images. In addition, a transmitting model is built from the observed data, i.e. the multi-light-intensity vein images. Finally, a pixel-level fusion method based on the transmitting model together with spatial smoothing is proposed.
The remainder of this paper is organized as follows. In the "The infrared light transmission model of the finger" section, we introduce the infrared light transmission model of the finger. In the "Multi-light-intensity finger vein images' fusion based on the transmitting model" section, we formalize the fusion of multi-light-intensity finger vein images based on the transmitting model. Next, we present examinations and discussions in the "Examinations and discussions" section. Finally, we draw our conclusions and outline further work in the "Conclusions and further works" section.
The infrared light transmission model of the finger
This model is extended and modified from Ref. [3]. The basic steps from bioinformation to biometric data are described in the "Basic works from bioinformation to the biometric data" section, and the single infrared transmitting model itself is described in the "A single infrared transmitting model" section.
Basic works from bioinformation to the biometric data
Applications of biometric data include personal identification and disease diagnosis. The system architecture from bioinformation to biometric data for a single infrared transmitting model in a biometric system is shown in Fig. 1. Clearly, the methods for capturing, digitizing and normalizing bioinformation should be efficient, so that the complete pattern or texture information is recorded with uniform gray distribution and contrast before it is used. This paper presents a single transmitting model of finger vein imaging in a biometric system and uses it to fuse the multi-light-intensity finger vein images into one image, which integrates the vein pattern information of each source image and keeps the vein pattern complete.
The system architecture
A single infrared transmitting model
The single infrared transmitting model is described in this subsection. It is common to use near-infrared (NIR) light transmitted through the finger to achieve angiogram imaging. Because the oxyhemoglobin (HbO) content of venous blood is far higher than that of arterial blood and of other tissue such as fat and muscle, a wavelength at which the transmitted light is strongly absorbed should be used. Thus, the 760–1100 nm band is suitable for angiogram imaging, according to the absorption rates of water, oxyhemoglobin (HbO) and deoxyhemoglobin (Hb) shown in Fig. 2. This higher absorption of HbO means that the vein pattern region is darker than the surrounding region after the NIR light passes through the finger. This technology is widely used in vascular imaging of the breast and brain.
The absorptivity of water, Hb, HbO in the finger's vein [19]
The tissue optical properties have been modeled based upon photon diffusion theory. The epidermis (the outermost layer of skin) accounts for only 6% of scattering and can be regarded as a primarily absorptive medium. Therefore, a simplified model of the reflectance of blood and tissue considers only the reflectance from the scattering tissue beneath the epidermis [12]. The skin is assumed to be a semi-infinite homogeneous medium under uniform, diffusive illumination. A photon has a relatively long residence time, which allows it to undergo a random walk within the medium. Photon diffusion depends on the absorption and scattering properties of the skin, whose penetration depth for different wavelengths is shown in Fig. 3.
The penetration depth of different wavelengths [12]
Considering all these factors, namely the tissue (water, Hb and HbO) absorption in the vein and the depth of penetration, the infrared waveband used for finger vein imaging is about 850 nm in practice.
Because the thickness of the finger varies nonlinearly, it is hard to perform vein imaging at 850 nm with a single fixed light intensity. Overexposure and underexposure therefore often appear in infrared finger vein images. These over- or under-exposed areas cannot be enhanced, which causes missing vein patterns in the biometric data extraction. An infrared multi-light-intensity finger vein imaging technique is used in [13] to solve this problem by extending the dynamic range of infrared vein imaging [14]. The complementary vein information must then be fused in a subsequent step. This paper presents a method for computing the infrared finger vein transmitting model from multi-light-intensity imaging. The model describes the monotonically increasing nonlinear relationship between light intensity and pixel gray value; it can be built with a genetic algorithm and used to estimate imaging quality in the pixel-level fusion of infrared multi-light-intensity finger vein images.
The infrared finger vein transmitting model [3] is defined as:
$$B = f(X)$$
\(X\) is the irradiance of the infrared light transmitted through the finger, and \(B\) is the resulting pixel gray value. Generally, the gray level of a pixel is 8 bits. The infrared finger vein transmitting model function [3] is written explicitly as
$$B = \begin{cases} B_{\text{min}} = 0, & \text{if } X \le X_{\text{min}} \\ f(X), & \text{if } X_{\text{min}} < X < X_{\text{max}} \\ B_{\text{max}} = 255, & \text{if } X_{\text{max}} \le X \end{cases}$$
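Equation (2) is simply a saturating response; a small sketch of it (assuming a fitted, vectorized model f) is:
import numpy as np

def transmit_response(X, f, X_min, X_max):
    # Clamped response of Eq. (2): saturate to 0 / 255 outside [X_min, X_max], f(X) in between.
    X = np.asarray(X, dtype=float)
    return np.where(X <= X_min, 0.0, np.where(X >= X_max, 255.0, f(X)))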
Assume there are \(N\) vein images captured under increasing light intensities \(X_{p}, p = 1, \ldots ,N\). The size of each image is \(m \times n\); denote \(K = m \times n\). The qth pixel of the pth light-intensity image is denoted \(B_{pq}\), and the set \(\{ B_{pq} \}, p = 1, \ldots ,N\) and \(q \in \{ 1, \ldots ,K\}\), represents the known observations. The goal is to determine the underlying light values or irradiances, denoted by \(X_{q}\), that gave rise to the observations \(B_{pq}\). The \(N\) vein images have been properly registered at the pixel level, so that for a particular \(q\), the light value \(X_{q}\) contributes to \(B_{pq}, p = 1, \ldots ,N\). For this work, a normalized cross-correlation function is used as the matching criterion to register images to 1/2-pixel resolution [15].
The model can be rewritten as:
$$B_{pq} = f_{q} (X_{pq}), \quad p = 1, \ldots ,N, \quad q \in \{ 1, \ldots ,K\}.$$
This means the transmitting model differs from position to position. Nevertheless, because the shape of each model is similar, this gives an easy way to estimate the transmitting model for each pixel in the application.
Since f is a monotonic and invertible function, its inverse function could be represented as \(g\).
$$X_{pq} = g_{q} (B_{pq}), \quad p = 1, \ldots ,N, \quad q \in \{ 1, \ldots ,K\}.$$
It is necessary to recover the function \(g\) and the irradiances \(X_{p}, p = 1, \ldots ,N\), which satisfy the set of equations arising from Eq. (4) in a least-squared-error sense. Recovering the function \(g\) only requires recovering the finite number of values that \(g(B)\) can take, since its domain, the pixel brightness values, is finite. Letting \(B_{\text{min}}\) and \(B_{\text{max}}\) be the least and greatest pixel values (integers), \(q\) be the number of pixel locations and \(N\) be the number of photographs, we formulate the problem as one of finding the \(B_{\text{max}} - B_{\text{min}} + 1\) values of \(g(B)\) and the \(q\) values of \(X\) that minimize the following quadratic objective function [3]:
$$\xi = \sum\limits_{i = 1}^{N} {\sum\limits_{j = 1}^{q} {[g(B_{ij} ) - X_{i} ]^{2} } } + \lambda \sum\limits_{{b = B_{\text{min} } + 1}}^{{b = B_{\text{max} } - 1}} {(g''(b))^{2} }$$
The first term ensures that the solution satisfies the set of equations arising from Eq. (4) in a least squares sense. The second term is a smoothness term on the sum of squared values of the second derivative of \(g\) to ensure that the function \(g\) is smooth; in this discrete setting, the second part can be calculated by the formula (6).
$$g'' = g(b + 1) + g(b - 1) - 2g(b)$$
This smoothness term is essential to the formulation in that it couples the values \(g(b)\) in the minimization. The scalar \(\lambda\) weights the smoothness term relative to the data-fitting term and should be chosen according to the amount of noise expected in the \(B_{ij}\) measurements.
Because it is quadratic in the \(X_{p}\) and \(g\left( z \right)\)'s, minimizing \(\xi\) is a straightforward linear least squares problem. The overdetermined system of linear equations is robustly solved using the singular value decomposition (SVD) method. An intuitive explanation of the procedure may be found in "The infrared light transmission model of the finger" section and Fig. 2 of reference paper [15].
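For reference, the objective of Eqs. (5) and (6) can be evaluated directly from a candidate lookup table for g and candidate irradiances (a NumPy sketch; assembling and solving the linear system by SVD is not shown):
import numpy as np

def objective_xi(g, X, B, lam):
    # g: length-256 lookup table for g(b); X: the N candidate irradiances;
    # B: (N, q) integer array of observed gray values; lam: smoothness weight.
    data_term = float(np.sum((g[B] - X[:, None]) ** 2))
    g2 = g[2:] - 2 * g[1:-1] + g[:-2]          # discrete second derivative of Eq. (6)
    return data_term + lam * float(np.sum(g2 ** 2))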
In reference [16], the noise in \(X_{p}\) is an independent Gaussian random variable with variance \(\sigma^{2}\), and the joint probability density function can be written as:
$$P(X_{B}) \propto \exp\left\{ -\frac{1}{2}\sum_{p,q} w_{pq} (I_{B_{pq}} - X_{pq})^{2} \right\}$$
A maximum-likelihood (ML) approach is taken to find the high dynamic range image values. The maximum likelihood solution finds the values \(X_{q}\) that maximize the probability in Eq. (7). Maximizing Eq. (7) is equivalent to minimizing the negative of its natural logarithm, which leads to the following objective function to be minimized:
$$\xi (X) = \sum\limits_{p,q} {w_{pq} (I_{Bpq} - X_{pq} )^{2} }$$
Even with the simplifying Gaussian approximation, the noise variances \(\sigma_{pq}^{2}\) would be difficult to characterize accurately: detailed knowledge of the image capture process would be required, and the noise characterization would have to be repeated each time a different image is captured on a device.
Equation (8) can be minimized by setting the gradient of \(\xi(X)\) equal to zero. If the \(X_{p}\) were unknown at each pixel, one could jointly estimate \(X_{p}\) and \(X_{q}\) by arbitrarily fixing one of the q positions and then iteratively optimizing Eq. (8) with respect to both \(X_{p}\) and \(X_{q}\). However, it is difficult to solve for these estimates without an analytic expression for the transmitting model.
From the observed pixels, this paper estimates the transmitting model curve by sliding the sampled curve segments and blending them into a monotonically increasing curve using a genetic algorithm. Once the blended curve has been built or fitted, the curve for another pixel can be redrawn from a few sample points. The recovery of the blended curve is illustrated in Fig. 4. The complete blended curve \(g\) can then be used to obtain the transmitting model function \(f\), which is shown in Fig. 5.
Sliding and blending the sampled curves into one complete curve [15]. a Three curves from the observed three points under five different irradiation conditions. b Sliding the curve and blending them to one curve
A transmitting model curve of a finger
Multi-light-intensity finger vein images' fusion based on the transmitting model
This section presents a fusion algorithm for the multi-light-intensity finger vein images based on the transmitting model. In pixel-level image fusion, estimating the imaging quality of each pixel is very important. In the previous section, the transmitting model was established from the observed data; its derivative curve is shown in Fig. 6. The value of \(\Delta B\) is clearly about zero in the underexposed and overexposed ranges, which means that the infrared light intensity in these ranges is not suitable for finger vein imaging. Conversely, the value of \(\Delta B\) can be used to evaluate how suitable a given infrared irradiance is.
The derivatives function curve to Fig. 5
The fusion method in this paper operates at the pixel level. First, the infrared multi-light-intensity finger vein images are divided into R independent blocks by column.
Denote each block as \(T_{rp}, r = 1,2, \ldots ,R\) and \(p = 1,2, \ldots ,N\), where \(r\) is the block index and \(p\) is the image number. To estimate the quality of each \(T_{rp}\), the average gray value of the block is calculated as \(\overline{g}_{rp} = mean2(T_{rp}), r = 1,2, \ldots ,R\) and \(p = 1,2, \ldots ,N\). Then \(\overline{g}_{rp}\) is evaluated on the derivative of the curve in Fig. 5 to obtain the value \(\Delta B_{{\overline{g}_{rp} }}\) used in the next fusion step. The fusion weight of block \(T_{rp}\) is defined [3] as:
$$S_{{rp}} = \exp \left[ {\alpha \cdot\Delta B_{{\bar{g}_{{rp}} }} } \right]$$
The constant parameter \(\alpha\) is the smoothing coefficient. To avoid checkerboard edges between two adjacent blocks, an additional spatial smoothing weight \(G_{rp}\) is defined:
$$G_{rp} (x,y) = \exp\left [ - \frac{{(y - y_{c} )^{2} }}{{2\sigma^{2} }}\right]$$
The constant parameter \(\sigma\) is the standard deviation of the Gaussian weighting. \(x\) is the row number and \(y\) is the column number in the finger vein image, and \(y_{c}\) is the column number of the block center.
The weighting is the joint value of the gray information coefficient \(S_{rp}\) and spatial smoothing coefficient \(G_{rp}\). The joint weighting is defined as:
$$\omega_{rp} = G_{rp} *S_{rp}$$
Its normalized value is defined as:
$$\varpi_{rp} = \omega_{rp} \bigg {/}\left( {\sum\limits_{p = 1}^{N} {\omega_{rp} } } \right)$$
In the fusion, each fused block \(I_{r} , \, r = 1,2, \ldots ,R\) is calculated by Eq. (13) [3]:
$$I_{r} = \sum\limits_{p = 1}^{N} {(I_{rp} *\omega_{rp} } ), \begin{array}{*{20}c} & {r = 1,2, \ldots ,R} \\ \end{array}$$
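One possible reading of Eqs. (9)-(13) as code is sketched below; delta_B stands for the derivative curve of Fig. 6 supplied as a callable, and the exact block and normalization conventions here are assumptions rather than the authors' implementation:
import numpy as np

def fuse_multi_intensity(images, delta_B, alpha=1.0, sigma=20.0, R=10):
    # Every column block r of every source image p contributes to each pixel with weight
    # G_r(y) * S_rp; the result is normalized by the total weight.
    imgs = [np.asarray(im, dtype=float) for im in images]
    rows, cols = imgs[0].shape
    bounds = np.linspace(0, cols, R + 1, dtype=int)
    y = np.arange(cols, dtype=float)
    num = np.zeros((rows, cols))
    den = np.zeros((rows, cols))
    for r in range(R):
        lo, hi = bounds[r], bounds[r + 1]
        y_c = (lo + hi - 1) / 2.0
        G = np.exp(-(y - y_c) ** 2 / (2.0 * sigma ** 2))        # Eq. (10), spatial smoothing
        for im in imgs:
            S = np.exp(alpha * delta_B(im[:, lo:hi].mean()))    # Eq. (9), gray-quality weight
            w = G * S                                           # Eq. (11), per column
            num += im * w[None, :]
            den += w[None, :]
    return np.clip(num / den, 0, 255).astype(np.uint8)          # Eqs. (12)-(13), normalized blend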
Examinations and discussions
A sample of infrared multi-light-intensity finger vein images is shown in Fig. 7; they were captured by a self-developed platform, shown in Fig. 8. The infrared light intensity depends on the duty cycle of the PWM signal that drives the infrared LED. The transmitting model is shown in Fig. 9 and its differential curve in Fig. 10.
The finger vein images captured through ten different light intensities. a Duty of PWM is 10%. b Duty of PWM is 20%. c Duty of PWM is 30%. d Duty of PWM is 40%. e Duty of PWM is 50%. f Duty of PWM is 60%. g Duty of PWM is 70%. h Duty of PWM is 80%. i Duty of PWM is 90%. j Duty of PWM is 100%
The infrared multi-light-intensity finger's vein image capturing platform
The transmitting model curve of the samples
The differential curve of Fig. 9
In the fusion step, three finger vein images, Fig. 7c–e, are selected for weighted fusion [17, 18]. Each of them is divided into ten blocks by column, as shown in Fig. 11. According to the transmitting model curve, the most suitable blocks are blended into one finger vein image, shown in Fig. 12. The weight \(S_{rp}\) is calculated by Eq. (9) and shown in Fig. 13; the weight \(G_{rp}\) is calculated by Eq. (10) and shown in Fig. 14; the joint weight \(\omega_{rp}\) is calculated by Eq. (11) and shown in Fig. 15. The fused finger vein image, blended according to Eq. (12), is shown in Fig. 16.
The three finger vein images are selected from Fig. 7, and each of them has been divided into 10 blocks
The blending of one finger's vein image from the most suitable blocks from Fig. 11a–c
The weighting value of \(S_{rp}\) to Fig. 11
The weighting value of \(G_{rp}\) to Fig. 11
The weighting value of \(\omega_{rp}\) to Fig. 11
The fused finger vein image of Fig. 11
Two other fusion methods are tested for performance comparison in this paper: the discrete wavelet transform (DWT) and the contrast pyramid; their flow charts are shown in Fig. 17. In the DWT method, the source images are decomposed by the discrete wavelet transform, and the maximum coefficient at each pixel is chosen before the image is rebuilt. In the contrast pyramid method, the source images are pyramid-decomposed by down-sampling, the contrast at each pixel is calculated, and the pyramid layer with the maximum contrast value is chosen before the pyramid image is rebuilt.
The flow charts of the DWT and contrast pyramid fusion methods. a Discrete wavelet transform fusion flow chart. b Contrast pyramid fusion flow chart
The fusion performance is evaluated with the following statistics [3]. The standard deviation of an image is defined in formula (14), where \(\mu\) is the mean value of the image \(I\) of size \(m \times n\) and \(\sigma\) is the standard deviation.
$$\begin{aligned} \mu = &\,\, \frac{1}{{m*n}}\sum\limits_{{x = 1}}^{m} {\sum\limits_{{y = 1}}^{n} {{\text{I}}(x,y){\mkern 1mu} } } \\ \sigma =& \, \sqrt {\frac{1}{{m*n}}\sum\limits_{{x = 1}}^{m} {\sum\limits_{{y = 1}}^{n} {({\text{I}}(x,y) - \mu )} } } \end{aligned}$$
The Shannon information entropy of the image is defined in formula (15), where \(P(gray)\) is the probability of gray level \(gray\) in the image \(I\):
$$H(\text{I} ) = - \sum\limits_{gray = 1}^{255} {P(gray)\log_{2} [P(gray)]}$$
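Formulas (14) and (15) correspond to the following straightforward computation on an 8-bit image (an illustrative sketch):
import numpy as np

def image_stats(img):
    # Standard deviation (Eq. 14) and Shannon entropy (Eq. 15) of an 8-bit gray image.
    img = np.asarray(img)
    mu = img.mean()
    sigma = np.sqrt(((img - mu) ** 2).mean())
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum()
    return sigma, entropy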
The standard deviation and information entropy of the multi-light-intensity finger vein images, together with those of the image fused by the proposed method, are shown in Table 1 [3]. However, the standard deviation and information entropy of the fused image are less than those of the three source images, which means that the gray uniformity and consistency of the fused image are better. For the image captured at the lowest intensity, the gray contrast is quite low, since that image is nearly underexposed.
Table 1 The pixel level statistics of the multi-light-intensity images and proposed fused image
The degree of dependence between the source images and the fused image can be measured by the fusion mutual information (FMI), which is calculated by formula (16):
$$FMI = \sum\limits_{i = 1}^{4} {MI(I_{i} ,I_{f} )}$$
In the formula (16), \(MI(I_{i} ,I_{f} )\) is defined as formula (17), and the joint histogram between the source image \(I_{i}\) and the fused image \(I_{f}\) is defined as \(h(I_{i} ,I_{f} )\).
$$MI(I_{i} ,I_{f} ) = \sum\limits_{x = 1}^{m} {\sum\limits_{y = 1}^{n} {h(I_{i} (x,y),I_{f} (x,y))} \cdot } \log_{2}\left (\frac{{h(I_{i} (x,y),I_{f} (x,y))}}{{h(I_{i} (x,y)) \cdot h(I_{f} (x,y))}}\right)$$
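Equation (17) can be computed from the joint gray-level histogram as sketched below; the FMI of Eq. (16) is then the sum of this quantity over the source images:
import numpy as np

def mutual_information(a, b, bins=256):
    # MI of Eq. (17) from the joint gray-level histogram of a source image and the fused image.
    joint, _, _ = np.histogram2d(np.ravel(a), np.ravel(b), bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# fmi = sum(mutual_information(src, fused) for src in source_images)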
The results of the fusion mutual information (MI) between the source images and the fused image are shown in Table 2 [3]. The FMI between the three source images and the fused image is the sum of the MI between each source image and the fused image.
Table 2 The fusion mutual information between the source image and the fused images
The amount of information transferred from the source images can be measured by the fusion quality index (FQI), which is calculated by Eq. (18).
$$FQI = \sum\limits_{w \in W} {c(w)\left(\sum\limits_{i - 1}^{4} {\lambda (i)QI(I_{i} ,I_{f} |w)} \right)} ,$$
where \(\lambda_{i}\) is computed over a window \(w\), which can be calculated by the formula (19):
$$\lambda_{i} = \sigma_{{_{{I_{i} }} }}^{2} \bigg /\sum\limits_{i = 1}^{4} {\sigma_{{_{{I_{i} }} }}^{2} }$$
\(c\left( w \right)\) is a normalized version of \(C\left( w \right)\), which can be calculated by the formula (20):
$$C(w) = \hbox{max} (\sigma_{{_{{I_{1} }} }}^{2} ,\sigma_{{_{{I_{2} }} }}^{2} , \ldots ,\sigma_{{_{{I_{4} }} }}^{2} )$$
\(QI\left( {I_{i} ,I_{f} \left| w \right.} \right)\) is the quality index over a window for a given source image and fused image.
In the test, the size of the window is 8 × 8. The FQI values of the fusion quality index are shown in Table 3 [3].
Table 3 The fused mutual information between the source image and the fused images
To compare the fusion performance further, the structural similarity index measure (SSIM) is also applied in this test. The results are shown in Table 4 [3].
The results in Tables 1, 2, 3 and 4 show that the proposed fusion method, based on column-wise blocking of the image, is effective when applied to infrared multi-light-intensity finger vein images.
Conclusions and further works
An infrared finger transmitting model is proposed in this paper; it can easily be built from the observed data of multiple light-intensity images. This model provides a better way to obtain intact vein patterns from the captured vein biometric data. The quality of the captured images is estimated, and the images are fused, using the model's differential curve. The examinations in this paper show that this is an efficient and practical fusion method based on the infrared transmitting model and that it is suitable for fusing infrared images in a biometric system. Detailed applications and analyses of applying the transmitting-model-based fusion of multi-light-intensity finger vein images to big data environments will be presented in future work.
Shin KY, Park YH, Nguyen DT (2014) Finger-Vein image enhancement using a fuzzy-based fusion method with gabor and retinex filtering. Sensors 14(2):3095–3129
Tistarelli M, Schouten B (2011) Biometrics in ambient intelligence. J Ambient Intell Human Comput 2(2):113–126
Liukui C, Zuojin L, Ying W, Lixiao F (2014) A principal component analysis fusion method on infrared multi-light-intensity finger vein images, BWCCA. pp 281–286
Kikuchi H, Nagai K, Ogata W, Nishigaki M (2010) Privacy-preserving similarity evaluation and application to remote biometrics authentication. Soft Comput 14(5):529–536
Greene CS, Tan J, Ung M, Moore JH, Cheng C (2014) Big data bioinformatics. J Cell Physiol 229(12):1896–1900
Ogiela MR, Ogiela L, Ogiela U (2015) Biometric methods for advanced strategic data sharing protocols. In: Barolli L, Palmieri F, Silva HDD, et al. (eds) 9th international conference on innovative mobile and internet services in ubiquitous computing (IMIS), Blumenau. pp 179–183
Ogiela MR, Ogiela U, Ogiela L (2012) Secure information sharing using personal biometric characteristics. In: Kim TH, Kang JJ, Grosky WI, et al. (eds) 4th international mega-conference on future generation information technology (FGIT 2012), Korea Woman Train Ctr, Kangwondo, South Korea Dec 16–19, 2012, Book series: Communications in computer and information science, vol. 353 pp 369–373
Ogiela L, Ogiela MR (2016) Bio-inspired cryptographic techniques in information management applications. In: Barolli L, Takizawa M, Enokido T, et al. (eds) IEEE 30th international conference on advanced information networking and applications (IEEE AINA), Switzerland Mar 23-25, 2016, Book series: International conference on advanced information networking and applications. pp 1059–1063
Chen HC, Kuo SS, Sun SC, Chang CH (2016) A distinguishing arterial pulse waves approach by using image processing and feature extraction technique. J Med Syst 40:215. doi:10.1007/s10916-016-0568-4
Chen W, Er MJ, Wu S (2006) Illumination compensation and normalization for robust face recognition using discrete cosine transform in logarithm domai. IEEE Trans Syst Man Cybern B (Cybernetics) 36(2):458–466
Wu X, Zhu X, Wu GQ, Ding W (2014) Data mining with big data. IEEE Trans Knowl Data Eng 26(1):97–107
Urbach R (1969) The biologic effects of ultraviolet radiation. Pergamon Press, New York. http://www.inchem.org/documents/ehc/ehc/ehc23.htm#SubSectionNumber:2.2.1
Chen LK, Li ZJ, Wu Y, Xiang Y (2013) Dynamic range extend on finger vein image based on infrared multi-light-intensity vascular imaging. MEIMEI2013. ChongQing, vol. 427–429, pp 1832–1835
Jacobs K, Loscos C, Ward G (2008) Automatic high-dynamic range image generation for dynamic scenes. IEEE Comput Gr Appl 28(2):84–93
Debevec PE, Malik J (1997) Recovering high dynamic range radiance maps from photographs. In: Whitted T, Mones-Hattal B, Owen SG (eds) Proc. of the ACM SIGGRAPH. ACM Press, New York, pp 369–378
Rovid A, Hashimoto T, Varlaki P (2007) Improved high dynamic range image reproduction method. In: Fodor J, Prostean O (eds) Proc. of the 4th Int'l Symp. on applied computational intelligence and informatics, IEEE Computer Society, Washington. pp 203–207
Yang J, Shi Y (2014) Towards finger-vein image restoration and enhancement for finger-vein recognition. Inf Sci 1(268):33–52
Zhang J, Dai X, Sun QD, Wang BP (2011) Directly fusion method for combining variable exposure value images (in Chinese). J Software 22(4):813–825 (in Chinese)
Delpy DT, Cope M (1997) Quantification in tissue near-infrared spectroscopy. Philos Trans R Soc B Biol Sci 352:649–659
The authors' contributions are summarized below. LC made substantial contributions to conception and design and was involved in drafting the manuscript. ZL and YW carried out the acquisition, analysis and interpretation of data. The critically important intellectual content of this manuscript was revised by HCC. All authors read and approved the final manuscript.
This study was funded in part by the Natural Science Foundation Project of CQ CSTC (cstc2011jjA40012), Foundation and Frontier Project of CQ CSTC (cstc2014jcyjA40006), and Campus Research Foundation of Chongqing University of Science and Technology (CK2011B09, CK2011B05). This work was also supported in part by Asia University, Taiwan, and China Medical University Hospital, China Medical University, Taiwan, under Grant ASIA-105-CMUH-04.
This article does not contain any studies with human participants or animals performed by any of the authors.
Chongqing University of Science and Technology, Huxi Street 200#, Chongqing, 401331, China
Liukui Chen, Zuojin Li & Ying Wu
Dept. of Computer Science and Information Engineering, Asia University, Taichung, 41354, Taiwan
Hsing-Chung Chen
Dept. of Medical Research, China Medical University Hospital, China Medical University, Taichung, 404, Taiwan
Liukui Chen
Zuojin Li
Ying Wu
Correspondence to Hsing-Chung Chen.
Chen, L., Chen, HC., Li, Z. et al. A fusion approach based on infrared finger vein transmitting model by using multi-light-intensity imaging. Hum. Cent. Comput. Inf. Sci. 7, 35 (2017). https://doi.org/10.1186/s13673-017-0110-9
Vein image
Multi-light-intensity
Transmitting model
Image fusion
How far can the convergence of Taylor series be extended?
Taylor series can diverge and have only a limited radius of convergence, but it seems that often this divergence is more a result of summation being too narrow rather than the series actually diverging.
For instance, take $$\frac{1}{1-x} = \sum_{n=0}^\infty x^n$$ at $x=-1$. This series is considered to diverge, but smoothing the sum gives $\sum_{n=0}^\infty (-1)^n = \frac{1}{2}$, which agrees with $\frac{1}{1-(-1)}$. Similarly, $$\sum_{n=0}^\infty nx^n = x\frac{d}{dx}\frac{1}{1-x} = \frac{x}{(1-x)^2}$$ normally diverges at $x=-1$, but smoothing the sums gives $-\frac{1}{4}$, which agrees with the function. This continues to hold for any number of iterations of taking the derivative and then multiplying by $x$.
One can extend series of these types even further. Taking $$\frac{1}{1+x} =\sum_{n=0}^\infty (-x)^n = \sum_{n=0}^\infty (-1)^ne^{\ln(x)n} = $$ $$\sum_{n=0}^\infty (-1)^n\sum_{m=0}^\infty \frac{\left(\ln(x)n\right)^m}{m!} = \sum_{m=0}^\infty \sum_{n=0}^\infty (-1)^n\frac{\left(\ln(x)n\right)^m}{m!} = $$ $$\sum_{m=0}^\infty\frac{\ln(x)^m}{m!} \sum_{n=0}^\infty (-1)^n n^m = 1 -\sum_{m=0}^\infty\frac{\ln(x)^m}{m!} \eta(-m)$$
This sum converges for all values of $x>0$ and converges to $\frac{1}{1+x}$
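(As a quick numeric sanity check of this, using mpmath's altzeta for $\eta$: at $x=3$, well outside the original disc, the partial sums already match $1/(1+x)$ to many digits.)
import mpmath as mp

mp.mp.dps = 30
x = mp.mpf(3)            # well outside the |x| < 1 disc of the original geometric series
eta_sum = mp.fsum(mp.log(x)**m / mp.factorial(m) * mp.altzeta(-m) for m in range(60))
print(1 - eta_sum, 1 / (1 + x))   # the two values agree to many digits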
In general, one can transform any sum of the form $$ \sum_{k=1}^\infty a_k x^k = -\sum_{k=0}^\infty \left(\sum_{n=-\infty}^\infty b_k x^k\right) (-1)^n x^n =$$ $$ -\sum_{k=0}^\infty b_k \sum_{m=0}^\infty\frac{\ln(x)^m}{m!}\eta(-(n+k)) $$
In general, how much is it possible to extend the range of convergence, simply by overloading the summation operation (here I assign values based on using the eta functions, but I can imagine using something like Abel regularization or other regularization)? Are there any interesting results that come from extending the range of convergence to the Taylor series?
One theory I had was that the Taylor series should converge for any alternating series which has a monotonic $|a_k|$. Is this true? Are there any series that are impossible to increase their radius of convergence by changing the sum operation? In short, my goal is to find a way of overloading the sum operation that widens the radius of convergence of the Taylor series as far as possible while still agreeing with the function.
Edit: I wanted to add that it might be useful in looking at this question to know that $$ \sum_{m=0}^\infty\frac{\ln(x)^m}{m!}\eta(-(m-w))= -Li_w(-x) $$ where $Li_w(-x)$ is the polylogarithm of order $w$. So the sum method I provided transforms a sum into an (in)finite series of polylogarithms.
This is sort of tangential, but I was reflecting a bit more on this problem, and it seems to me that a Taylor series has enough information to be able to extend a sum until the next real singularity. For instance, if I graph a function in the complex plane, then the radius of convergence is the distance to the nearest singularity. In this image, I have a few different singularities (shown as dots). But, since the original Taylor series (call it $T_1(x)$, it's shown as the large blue circle in the image) will exactly match the function within this disk, it's possible to get another Taylor series (call it $T_2(x)$, it's shown as the small blue circle) centered around another point while only relying on derivatives of $T_1(x)$. From $T_2(x)$, it's possible to get $T_3(x)$ (the purple circle), which extends the area of convergence even further. I think this shows that the information contained by the Taylor series is enough to extend all the way to a singularity on the real line, rather than being limited by the distance to a singularity in the complex plane. The only time this wouldn't work is if the set of points on the boundary circle where the function diverges is infinite and dense enough to not allow any circles to 'squeeze through' using the previous method. So in theory, it seems like any Taylor series which has a non-zero radius of convergence and a non-dense set of singularities uniquely determines a function that can be defined on the entire complex plane (minus singularities).
Edit 2: This is what I have so far for iterating the Taylor series. If the Taylor series for $f(x)$ is $$T_1(x)=\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!} x^n$$ and the radius of convergence is R, then we can center $T_2(x)$ around $x=\frac{9}{10}R$, since that is within the area of convergence. We get that $$T_2(x) = \sum_{n=0}^\infty \frac{T_1^{(n)}\left(\frac{9}{10}R\right)}{n!} x^n = $$ $$T_2(x) = \sum_{n=0}^\infty \left(\frac{\sum_{k=n}^\infty \frac{f^{(k)}\left(0\right)}{k!} \frac{k!}{(k-n)!} \left(\frac{9}{10}R\right)^{k-n}}{n!}\right) x^n = \sum_{n=0}^\infty \left(\sum_{k=n}^\infty f^{(k)}\left(0\right) \frac{1}{(k-n)!n!} \left(\frac{9}{10}R\right)^{k-n}\right) x^n $$ and in general centered around $x_{center}$ $$T_{w+1}(x) = \sum_{n=0}^\infty \frac{T_w^{(n)}\left(x_{center}\right)}{n!} x^n$$ I tested this out for ln(x), and it seemed to work well, but I suppose it could fail if taking the derivative of $T_w(x)$ too many times causes $T_w(x)$ to no longer match $f(x)$ closely.
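Here is a small numeric version of that re-centering, computed purely from the original coefficients; I use mpmath because the re-expanded coefficients cancel heavily, and the truncation lengths are just ones that happened to work for this example:
import mpmath as mp
mp.mp.dps = 60     # the re-expansion suffers heavy cancellation, so work in high precision

def recenter(a, c, n_new):
    # b_n = sum_k a_k C(k,n) c^(k-n): Taylor coefficients about x = c from the coefficients
    # `a` about 0 (truncated, so c must sit safely inside the original disc of convergence).
    return [mp.fsum(a[k] * mp.binomial(k, n) * c**(k - n) for k in range(n, len(a)))
            for n in range(n_new)]

a = [mp.mpf(1)] * 300            # 1/(1-x): a_k = 1, radius of convergence 1
c = mp.mpf('-0.5')               # step away from the singularity at x = 1
b = recenter(a, c, 40)
x = mp.mpf('-1.2')               # outside the original disc, inside the new one (radius 1.5)
print(mp.fsum(bn * (x - c)**n for n, bn in enumerate(b)), 1/(1 - x))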
I tested out this method of extending a function with Desmos, here is the link if you would like to test it out: https://www.desmos.com/calculator/fwvuasolla
Edit 3: I looked in analytic continuation some, and it looks like the method I was thinking of that extends the range of convergence using the old coefficients to get new ones is already a known method, though it uses Cauchy's differentiation formula so it is able to avoid some of the convergence problems that I was worried about before that comes from repeated differentiation. So, it appears there should exist some way to overload the sum operation that achieves this same continuation. I suppose there's the trivial option of defining summation as the thing which returns the same values as continuation would return, but that's a very unsatisfying solution. It would be interesting to see if it's possible to create a natural generalization of summation that agrees with this method of continuation.
Edit 4: I think I may have found a start for how to extend the convergence of all Taylor series which can be extended. Based on my above argument of recursively applying the Taylor series, one algorithm could go as follows:
1. Get the Taylor series at the starting point (call it $x_0$)
2. Use this Taylor series to get all the points between $[x_0,x_0+dt]$
3. Recenter the Taylor series at $x_0 + dt$ to update the derivatives
4. Repeat from 1 to continue expanding the convergence
So long as $dt$ is smaller than the distance from the nearest singularity to the real line, all Taylor series will converge. This method isn't all that useful by itself, since it requires many recursive steps, but it sets the stage for what I think is a natural way to extend summation to perform analytic continuation.
Instead of thinking of the values of the derivatives as values to be summed together as a polynomial, view the set of derivatives at a point as seed values for applying Euler's method. My motivation for this is that as $dt$ becomes very small, the Euler method should become a successively better and better approximation of the above algorithm. When $dt$ is sufficiently small, the values of the Euler method around $x_0$ should converge uniformly to the values that the Taylor series would give. It seems to me that this should also hold for the derivatives of the Taylor series. My main concern with this method is that each step introduces a small error and that eventually this error would become impossible to contain, but I'm not sure how to prove or disprove this.
Based on this, could one view the Taylor series as instead providing a differential equation with infinitely many initial conditions? Does the theory of differential equations say anything about the existence and uniqueness of solutions to equations of this type? Does the Euler method actually work to extend the radius of convergence of a power series?
I've attached some code that runs the Euler method on different functions. I've been able to extend the convergence of a number of functions, but it is quite hard to extend it much beyond about 2~3 times the regular convergence range, since the size of the terms grows factorially, so it takes a very long time to run past that range. In the following code, I'm extending the function with the seed $a_n = n!$, which corresponds to $\frac{1}{1-x}$.
import math
from decimal import Decimal, getcontext
import matplotlib.pyplot as plt

getcontext().prec = 1000  # need LOTS of precision to extend the range; even 1000 digits becomes a problem with large factorials

def iterate(L, m):
    # One Euler step: each stored derivative is advanced using the next-higher derivative.
    R = []
    for i in range(len(L) - 1):
        R.append(L[i] + m * L[i + 1])
    R.append(L[len(L) - 1])
    return R

def createL(S):
    # Seed values a_n = n!, the derivatives of 1/(1-x) at 0.
    L = []
    for i in range(S):
        L.append(Decimal(math.factorial(i)))
    return L

def createCorrectDeriv(S, x):
    # Exact derivatives of 1/(1-x) at the point x, for comparison.
    L = []
    for i in range(S):
        L.append(pow(Decimal(1 - x), -1 - i) * math.factorial(i))
    return L

def runEuler():
    DT = -.002
    dt = Decimal(DT)
    W = int(-3 / DT)
    print(W)
    print("range converge should be: " + str(abs(DT * W)))
    print("Size list: " + str(W))
    L = createL(int(W))
    L_val = []
    Y_val = []
    X_val = []
    for i in range(W):
        L_val.append(float(L[0]))
        L = iterate(L, Decimal(dt))
        X_val.append(float(-dt * i))
        Y_val.append(1.0 / (1 - (DT * i)))   # exact value 1/(1-x) at x = i*DT for comparison
    plt.plot(X_val, L_val)
    plt.plot(X_val, Y_val)
    plt.show()

runEuler()
Final Edit: I think I figured it out! The result is (for analytic f(x)) that $$f(x) = \lim_{dt \to 0}\sum_{n=0}^\frac{x}{dt} \frac{(dt)^n}{n!} f^{(n)}(0) \prod_{k=0}^{n-1} \left(\frac{x}{dt}-k\right)$$ For $f(x) = \frac{1}{1-x}$ this becomes $$\lim_{dt \to 0} e^{\frac{1}{dt}}dt^{-\frac{x}{dt}}\int_{\frac{1}{dt}}^{\infty}w^{\left(\frac{\left(dt-x\right)}{dt}-1\right)}e^{-w}dw$$ which does indeed converge to $\frac{1}{1+x}$ for $(-1,\infty)$. Thanks for everyone's help in getting here! I'm going to work next on seeing if allowing dt to be a function of the iterations allows one to extend this method to get analytic continuations of functions which have boundaries that are dense but not 100% dense, since my thought is that maybe after an infinite number of steps it's possible to squeeze through the 'openings' in the dense set.
summation taylor-expansion riemann-zeta divergent-series
Caleb Briggs
Read about analytic continuation. en.wikipedia.org/wiki/….
As @YvesDaoust says, analytic continuation is what you're looking for.... and hopping from one expansion point to the next, as your image shows, is a way this can be made precise. As you point out, if the singularities in the complex plane are bounded away from the real line, then you can uniquely define the entire function on the real line in this way just given its Taylor series at the origin.
– mjqxxxx
There is a summation method (though not exactly "explicit") that sums up the series in the largest star domain (with respect to the point at which the Taylor expansion is applied) to which the function can be continued analytically. It is hard to go far beyond that because then the natural question is "which branch do you want it to sum to?".
– fedja
I just wanted to compliment the OP on rediscovering analytic continuation! This shows that you have excellent mathematical taste. The question itself is too broad for me to attempt to answer (though I won't discourage others); en.wikipedia.org/wiki/Divergent_series might be a good starting point.
@GottfriedHelms I addressed that a little bit with my argument that one can't extend the series past areas where the set of singularities is too dense. If I understand gap series, those are series that have a set of singularities covering the entire unit disk, so it's impossible to extend the Taylor series past that boundary.
– Caleb Briggs
There exist powerful methods that allow you to do these sorts of computations in an efficient way. These methods are often used in theoretical physics, but usually not in a mathematically rigorous way. We then don't attempt to extend the radius of convergence in a step-by-step way as described in the problem, because typically we have to deal with an asymptotic expansion that has zero radius of convergence. In some cases the problem is to find the limit as the expansion parameter tends to infinity, or even to find the first few terms of the expansion around infinity.
Given a function $f(x)$ defined by a series:
$$f(x) = \sum_{n=0}^{\infty} c_n x^n$$
one can consider operations on the function yielding another function $g(x)$
$$g(x) = \phi[f(x)]$$
where $\phi$ is an operator that can involve algebraic operations on $f(x)$ and its derivatives. Methods such as Padé approximants and differential approximants fall into this category. A different class of methods uses a transform on the argument of the function:
$$g(z) = f\left[\phi(z)\right]$$
These so-called conformal mappings have the advantage that they can move singularities in the complex plane and thereby enlarge the radius of convergence without having to be fine-tuned to the function $f(x)$ to cancel out such singularities. What makes these methods powerful is that they can be used when only a finite number of terms of the expansion are known. One then needs to choose the conformal mapping $\phi(z)$ such that $\phi(0) = 0$.
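A textbook example of the effect (my illustration, not yet the order-dependent version): the logarithm's series has radius 1, but the substitution $w = x/(1+x)$ pushes the singularity at $x=-1$ away and the re-expanded series converges for all $x > -1/2$:
import math

def log1p_direct(x, terms=60):
    # Maclaurin series of log(1+x): radius of convergence 1
    return sum((-1) ** (n + 1) * x ** n / n for n in range(1, terms))

def log1p_mapped(x, terms=60):
    # Same function after the argument mapping w = x/(1+x): log(1+x) = -log(1-w) = sum w^n/n,
    # which converges for all x > -1/2 because the mapping moved the singularity.
    w = x / (1.0 + x)
    return sum(w ** n / n for n in range(1, terms))

x = 4.0                                      # far outside the original radius of convergence
print(log1p_mapped(x), math.log(1 + x))      # close agreement
print(log1p_direct(x))                       # blows up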
Order dependent mapping method
The question is then how to choose the mapping $\phi(z)$ when we have limited knowledge of the properties of $f(x)$. A method that works well is the so-called order dependent mapping method developed by Zinn-Justin. This involves using a suitably chosen conformal mapping that includes a parameter, and then choosing that parameter such that the last known term of the series becomes zero. The idea here is that typically you get the best results when summing a series to the term with the least absolute value. The error when summing an asymptotic series this way is beyond all orders, and is for this reason known as the superasymptotic approximation.
The order dependent mapping method thus amounts to tuning the conformal mapping such that the superasymptotic approximation becomes the sum of all the known terms. Zinn-Justin's method uses conformal mappings of a special form and prescribes how to choose the parameter from the set of all solutions that makes the last term equal to zero. I have improved and generalized this method to more general choices for the mapping which can include more than one parameter.
To illustrate how this method works in practice, let's consider estimating $\lim_{x\to\infty}\arctan(x)$ using the series expansion of $\arctan(x)$ to order 15. So, we have:
$$f(x) = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \frac{x^9}{9} - \frac{x^{11}}{11} + \frac{x^{13}}{13} - \frac{x^{15}}{15} +\mathcal{O}\left(x^{17}\right)$$
We pretend that we don't know what the next terms of the series expansion are, but we do know that $\lim_{x\to\infty} f(x)$ exists and that an expansion around infinity in positive integer powers of $x^{-1}$ exists.
The first step is to rewrite the function to get to a series with both even and odd powers: $$g(x) = \frac{f(\sqrt{x})}{\sqrt{x}} = 1 - \frac{x}{3} + \frac{x^2}{5} - \frac{x^3}{7} + \frac{x^4}{9} - \frac{x^{5}}{11} + \frac{x^{6}}{13} - \frac{x^{7}}{15} +\mathcal{O}\left(x^{8}\right)\tag{1}$$
The function $g(x)$ will then, for large $x$, have an expansion in powers of $x^{-1/2}$, with the first term proportional to $x^{-1/2}$. We want to find the coefficient of $x^{-1/2}$ of this expansion. We can then choose the following conformal mapping:
$$x = \phi(z) = \frac{p z}{(1-z)^2}$$
The point at infinity is then mapped to $z = 1$. If we put $z = 1-\epsilon$, then we see that $\phi(1-\epsilon)\sim \epsilon^{-2}$. Since $g(x)$ has an expansion for large $x$ in powers of $x^{-1/2}$, this becomes an expansion in positive powers of $\epsilon$, which is consistent with what we're bound to get when re-expanding the series (1) and then substituting $z = 1-\epsilon$ there. To extract the coefficient of $x^{-1/2}$ we need to consider the expansion of $\sqrt{p}\,g\left[\phi(z)\right]/(1-z)$. It follows from (1) that:
$$\frac{g\left[\phi(z)\right]}{1-z} = 1+\left(1-\frac{p}{3}\right) z+\left(\frac{p^2}{5}-p+1\right) z^2+\left(-\frac{p^3}{7}+p^2-2 p+1\right) z^3+\left(-\frac{p^3}{7}+p^2+\frac{1}{63} \left(7 p^4-54 p^3+126 p^2-84 p\right)-2 p+1\right) z^4+\left(-\frac{p^3}{7}+p^2+\frac{1}{63} \left(7 p^4-54 p^3+126 p^2-84 p\right)+\frac{1}{99} \left(-9 p^5+88 p^4-297 p^3+396 p^2-165 p\right)-2 p+1\right) z^5+\left(\frac{p^6}{13}-\frac{10 p^5}{11}+4 p^4-\frac{57 p^3}{7}+8 p^2+\frac{1}{63} \left(7 p^4-54 p^3+126 p^2-84 p\right)+\frac{1}{99} \left(-9 p^5+88 p^4-297 p^3+396 p^2-165 p\right)-4 p+1\right) z^6+\left(\frac{p^6}{13}-\frac{10 p^5}{11}+4 p^4-\frac{57 p^3}{7}+8 p^2+\frac{1}{63} \left(7 p^4-54 p^3+126 p^2-84 p\right)+\frac{1}{99} \left(-9 p^5+88 p^4-297 p^3+396 p^2-165 p\right)+\frac{1}{195} \left(-13 p^7+180 p^6-975 p^5+2600 p^4-3510 p^3+2184 p^2-455 p\right)-4 p+1\right) z^7+O\left(z^8\right)$$
We then set the coefficient of $z^7$ equal to zero and choose the solution for $p$ for which the absolute value of the coefficient of $z^6$ is the least. In the case at hand this also corresponds to choosing the solution for $p$ with the largest magnitude. As shown by Zinn-Justin, this choice is optimal because the mapping is proportional to $p$ and then larger $p$ leads to a larger radius of convergence. However, this logic cannot be generalized for more general conformal mappings while minimizing the modulus of the coefficient of the next largest power of $z$ will always work and will usually lead to the best choice.
We then find that the optimal solution is $p = 3.9562952014676\ldots$. If we then set $p$ to this value, put $z = 1$ in the above expansion, and multiply that by $\sqrt{p}$, we find the estimate $1.5777\ldots$ for the limit. This is not extremely accurate, but then we are trying to extract the value of a function at infinity from only 8 terms of a series with a radius of convergence of 1.
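If you want to reproduce these numbers, here is a minimal sketch (Python with SymPy; my own illustration, not part of the original derivation) that builds the truncated series (1), substitutes the mapping $x = pz/(1-z)^2$, solves for the $p$ that kills the coefficient of $z^7$, and evaluates the result at $z = 1$:
import sympy as sp
z, p = sp.symbols('z p')
# Truncated series (1): g(x) = sum_{k=0}^{7} (-1)^k x^k / (2k+1)
x = p * z / (1 - z)**2                      # the conformal mapping phi(z)
g = sum((-1)**k * x**k / (2*k + 1) for k in range(8))
# Re-expand g(phi(z))/(1-z) in powers of z up to z^7
expansion = sp.series(g / (1 - z), z, 0, 8).removeO()
poly = sp.Poly(expansion, z)
c7 = poly.coeff_monomial(z**7)
c6 = poly.coeff_monomial(z**6)
# Roots of c7(p) = 0; keep the one minimizing |c6(p)|
roots = sp.Poly(c7, p).nroots()
p_opt = min(roots, key=lambda r: abs(sp.N(c6.subs(p, r))))
# Estimate of the limit: sqrt(p) times the expansion evaluated at z = 1
estimate = sp.sqrt(p_opt) * sp.N(expansion.subs({p: p_opt, z: 1}))
print(p_opt, estimate)   # should give p near 3.956 and an estimate near 1.578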
We can improve on this result by considering conformal transforms with more parameters and then equating the last few terms of the series to zero and picking the optimal solution as the one that minimizes the norm of the coefficient of the highest power of z that was not set to zero. However, while this does lead to very accurate results, this is a tour de force.
Obtaining a better estimate by taking linear combinations of solutions
There exists a simpler method that leads to reasonably accurate results; it involves using the other solutions of the equation obtained when we set the coefficient of $z^7$ equal to zero. We then consider the linear combinations of such solutions that make the coefficients of lower powers of $z$ zero, and take the corresponding linear combinations of the estimates for the limit. For the problem at hand, this works as follows:
Let's denote the coefficient of $z^r$ by $k_r(p)$. Let $p_i$ for $0\leq i\leq 6$ be the solutions such that $|k_6\left(p_i\right)|$ is increasing as a function of $i$. For $n=1\cdots 6$ we compute coefficients $a^{(n)}_i$ with $1\leq i\leq n$ such that:
$$k_r\left(p_0\right)+\sum_{j =1}^n a^{(n)}_{j} k_r\left(p_j\right)=0$$
for $r = 6, 5,\cdots, 7-n$. Let's denote by $u(p)$ the series evaluated at $z = 1$ as a function of $p$, multiplied by $\sqrt{p}$, which we use to estimate the limit. We then evaluate the expressions:
$$A_n = \frac{u\left(p_0\right)+\sum_{j =1}^n a^{(n)}_{j} u\left(p_j\right)}{1+\sum_{j =1}^n a^{(n)}_{j}}$$
and we put $A_0 = u\left(p_0\right)$. This yields successive approximants for the limit that at first become better and then start to become worse. We then find: $$ \begin{split} A_0 &= 1.577742424\\ A_1 &= 1.570563331\\ A_2 &= 1.570839165\\ A_3 &= 1.570774607\\ A_4 &= 1.570822307\\ A_5 &= 1.570720746\\ A_6 &= 1.571483896 \end{split} $$
The simplest way to choose the most accurate one without cheating is to look at the successive differences and choose the one where the difference with the next value is the smallest. A better way is to construct a new series:
$$h(x) = A_0 + \sum_{k = 0}^5 \left(A_{k+1}-A_k\right)x^{k+1} + \mathcal{O}\left(x^7\right)$$
which we want to evaluate at $x = 1$. We can then start a new round of the same method as we applied to the original series. If we want to use the conformal mapping method, then we must omit the $A_0$ term for the series, because we need the series coefficients to be smooth. In contrast, Padé approximants can be used directly on the series. The best Padé approximant for a series of order $2n$ is usually the diagonal $\{n,n\}$ Padé approximant. In this case we find that the $\{3,3\}$ Padé approximant is given by $1.570799\cdots$ while the exact answer is $\frac{\pi}{2}= 1.570796\cdots$.
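If you want to check that last number, here is a small sketch (NumPy; my own illustration of the standard Padé construction, using the $A_k$ values from the table above) that builds $h(x)$ from the successive differences and evaluates its $\{3,3\}$ Padé approximant at $x = 1$:
import numpy as np
# A_0 ... A_6 from the table above
A = np.array([1.577742424, 1.570563331, 1.570839165, 1.570774607,
              1.570822307, 1.570720746, 1.571483896])
# Series coefficients of h(x): c_0 = A_0, c_{k+1} = A_{k+1} - A_k
c = np.concatenate(([A[0]], np.diff(A)))
# {3,3} Pade approximant h(x) ~ P(x)/Q(x) with deg P = deg Q = 3 and Q(0) = 1
L, M = 3, 3
# Denominator coefficients q_1..q_M solve sum_j q_j c_{L+k-j} = -c_{L+k}, k = 1..M
Cmat = [[c[L + k - j] for j in range(1, M + 1)] for k in range(1, M + 1)]
q = np.concatenate(([1.0], np.linalg.solve(Cmat, -c[L + 1:L + M + 1])))
# Numerator coefficients p_k = sum_{j <= min(k,M)} q_j c_{k-j}
pcoef = [sum(q[j] * c[k - j] for j in range(min(k, M) + 1)) for k in range(L + 1)]
x = 1.0
print(np.polyval(pcoef[::-1], x) / np.polyval(q[::-1], x))   # approximately 1.5708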
The method I presented here involves applying a conformal transform containing $r$ parameters, setting the coefficients of the highest $r$ powers of $z$ equal to zero, and then choosing the solution for which the coefficient of the highest power of $z$ that was not set to zero has the least absolute value. The enormous complexity of the equations when more than one parameter is used can be a problem. One can instead use linear combinations of the different solutions to set the coefficients of the lower powers of $z$ to zero and consider the corresponding linear combinations of the estimates.
As pointed out above, the reason why setting the highest known coefficient of $z$ to zero works well, is because this makes the summation of the known terms correspond to the optimal truncation rule, which in the theory of asymptotic expansions is known as the superasymptotic approximation, as the error is then beyond all powers of the expansion parameter.
But the reason why choosing the optimal value of the solution to be the one that minimizes the norm of the coefficient of the next highest power and why with more parameters one should set the coefficients of the next highest powers to zero needs more explanation. We can argue heuristically as follows. Suppose we have a conformal transform $x = \phi(z)$ such that the $r$ highest known series expansion coefficients of $g(z) = f\left[\phi(z)\right]$ are zero. The inverse mapping from $g(z)$ to $f(x)$ reproduces $f(x)$, but note that because the $r$ highest known powers of $z$ are zero, the $r$ highest powers of $x$ in $f(x)$ are generated using only the lower powers of $g(z)$. The coefficients of these lower powers follow in turn from the lower powers of $f(x)$.
So, the conformal mapping constructed by setting the coefficients of the $r$ highest powers of $z$ to zero ends up being a predictive tool for the coefficients of the $r$ highest powers of $x$ in $f(x)$. What then typically happens is that the coefficients of the next powers of $x$ are very accurately reproduced when we simply assume that the coefficients of the corresponding powers of $z$ are zero in $g(z)$.
Padé approximants can also be considered in this way. Given a function $f(x)$ for which we know the series expansion up to $n$th order, we can consider the series expansion of $p(x) = q(x) f(x)$ to $n$th order for $q(x)$ an $r$th degree polynomial. We can then choose $q(x)$ such that the highest $r$ powers of $p(x)$ vanish. This thus yields the $\{n-r,r\}$ Padé approximant as
$$f(x) = \frac{p(x)}{q(x)} + \mathcal{O}\left(x^{n+1}\right)$$
Comparing this to the order dependent mapping method, $q(x)$ is analogous to the conformal mapping and $p(x)$ is analogous to the series obtained after applying the conformal mapping. The analogous reasoning is that because this correctly reproduces $f(x)$ to $n$th order while the highest $r$ orders of $p(x)$ are missing, we can assume that setting the next highest orders of $p(x)$ to zero for the same $q(x)$ will approximately reproduce the coefficients of the powers of $f(x)$ higher than $n$ as well. This predictive power is indeed a well-known feature of Padé approximants.
Saibal Mitra
$\begingroup$ This is a very interesting answer. I will think about this and respond with some questions once I have studied and understood this. Thank you for sharing this interesting method! $\endgroup$
$\begingroup$ @CalebBriggs Thanks! I expanded the text, added more explanations and added a conclusion where I added a bit about Padé approximants. $\endgroup$
– Saibal Mitra
Bijen Patel
ISLR Chapter 9 - Support Vector Machines
Summary of Chapter 9 of ISLR. Support vector machines are one of the best classifiers in the binary class setting.
9 Aug 2020 • 12 min read
Support vector machines (SVMs) are often considered one of the best "out of the box" classifiers, though this is not to say that another classifier such as logistic regression couldn't outperform an SVM.
The SVM is a generalization of a simple classifier known as the maximal margin classifier. The maximal margin classifier is simple and intuitive, but cannot be applied to most datasets because it requires classes to be perfectly separable by a boundary. Another classifier known as the support vector classifier is an extension of the maximal margin classifier, which can be applied in a broader range of cases. The support vector machine is a further extension of the support vector classifier, which can accommodate non-linear class boundaries.
SVMs are intended for the binary classification setting, in which there are only two classes.
Maximal Margin Classifier
What is a Hyperplane?
In a \( p \)-dimensional space, a hyperplane is a flat subspace of dimension \( p - 1 \). For example, in a two-dimensional setting, a hyperplane is a flat one-dimensional subspace, which is also simply known as a line. A hyperplane in a \( p \)-dimensional setting is defined by the following equation:
\[ \beta_{0} + \beta_{1}X_{1} + \beta_{2}X_{2} +\ ...\ + \beta_{p}X_{p} = 0 \]
Any point \( X = (X_{1},\ X_{2},\ ...,\ X_{p})^{T} \) in \( p \)-dimensional space that satisfies the equation is a point that lies on the hyperplane. If some point \( X \) results in a value greater than or less than \( 0 \) for the equation, then the point lies on one of the sides of the hyperplane.
In other words, a hyperplane essentially divides a \( p \)-dimensional space into two parts.
Classification Using a Hyperplane
Suppose that we had a training dataset with \( p \) predictors and \( n \) observations, each of which is associated with one of two classes. Suppose that we also had a separate test dataset. Our goal is to develop a classifier based on the training data, and to use it to classify the test data. How can we do this based on the concept of a separating hyperplane?
Assume that it is possible to create a hyperplane that separates the training data perfectly according to their class labels. We could use this hyperplane as a natural classifier. An observation from the test dataset would be assigned to a class, depending on which side of the hyperplane it is located. The determination is made by plugging the test observation into the hyperplane equation. If the value is greater than \( 0 \), it is assigned to the class corresponding to that side. If the value is less than \( 0 \), then it is assigned to the other class.
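As a concrete illustration, classifying with a given hyperplane only requires checking the sign of the linear score; the following minimal sketch (Python/NumPy, with made-up coefficients that are not from the text) shows the idea:
import numpy as np
# Hypothetical hyperplane in p = 2 dimensions: beta0 + beta1*X1 + beta2*X2 = 0
beta0, beta = 1.0, np.array([-2.0, 3.0])
def classify(X):
    # Assign +1 or -1 depending on which side of the hyperplane each row of X lies
    score = beta0 + X @ beta
    return np.where(score > 0, 1, -1)
X_test = np.array([[1.0, 2.0],    # score = 1 - 2 + 6 = 5     -> class +1
                   [2.0, 0.5]])   # score = 1 - 4 + 1.5 = -1.5 -> class -1
print(classify(X_test))           # [ 1 -1]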
However, when we can perfectly separate the classes, many possibilities exist for the hyperplane. The following chart shows an example where two hyperplanes separate the classes perfectly.
This is where the maximal margin classifier helps determine the hyperplane to use.
The maximal margin classifier is a separating hyperplane that is farthest from the training observations.
The method involves determining the perpendicular distance from each training observation to some hyperplane. The smallest such distance is known as the margin. The maximal margin classifier settles on the hyperplane for which the margin is largest. In other words, the chosen hyperplane is the one that has the farthest minimum distance to the training observations.
The maximal margin classifier is often successful, but can lead to overfitting when we have a lot of predictors in our dataset.
The points that end up supporting the maximal margin hyperplane are known as support vectors. If these points are moved even slightly, the maximal margin hyperplane would move as well. The fact that the maximal margin hyperplane depends only on a small subset of observations is an important property that will also be discussed in the sections on support vector classifiers and support vector machines.
Construction of the Maximal Margin Classifier
The maximal margin hyperplane is the solution to an optimization problem with three components:
Maximize \( M \)
Subject to:
\( \sum_{j=1}^{p}\beta_{j}^{2} = 1 \)
\( y_{i}(\beta_{0} + \beta_{1}x_{i1} + \beta_{2}x_{i2} +\ \dots\ + \beta_{p}x_{ip}) \geq M\ \forall\ i=1,\ \dots,\ n \)
\( M \) is the margin of the hyperplane. The second component is a constraint that ensures that the perpendicular distance from any observation to the hyperplane is given by the following:
\[ y_{i}(\beta_{0} + \beta_{1}x_{i1} + \beta_{2}x_{i2} +\ \dots\ + \beta_{p}x_{ip}) \]
The third component guarantees that each observation will be on the correct side of the hyperplane, with some cushion \( M \).
Non-separable Case
The maximal margin classifier is a natural way to perform classification, but only if a separating hyperplane exists. However, that is usually not the case in real-world datasets.
The concept of the separating hyperplane can be extended to develop a hyperplane that almost separates the classes. This is done by using a soft margin. The generalization of the maximal margin classifier to the non-separable case is known as the support vector classifier.
Support Vector Classifier
In most cases, we usually don't have a perfectly separating hyperplane for our datasets. However, even if we did, there are cases where it wouldn't be desirable. This is due to sensitivity issues from individual observations. For example, the addition of a single observation could result in a dramatic change in the maximal margin hyperplane.
Therefore, it is usually a good idea to consider a hyperplane that does not perfectly separate the classes. This provides two advantages:
Greater robustness to individual observations
Better classification of most of the training observations
In other words, it is usually worthwhile to misclassify a few training observations in order to do a better job of classifying the other observations. This is what the support vector classifier does. It allows observations to be on the wrong side of the margin, and even the wrong side of the hyperplane.
Details of the Support Vector Classifier
The support vector classifier will classify a test observation depending on what side of the hyperplane that it lies. The hyperplane is the solution to an optimization problem that is similar to the one for the maximal margin classifier.
\( y_{i}(\beta_{0} + \beta_{1}x_{i1} + \beta_{2}x_{i2} +\ \dots\ + \beta_{p}x_{ip}) \geq M(1 - \epsilon_{i}) \)
\( \epsilon_{i} \geq 0 \)
\( \sum_{i=1}^{n}\epsilon_{i} \leq C \)
\( \epsilon_{i} \) is a slack variable that allows observations to be on the wrong side of the margin or hyperplane. It tells us where the \( i^{th} \) observation is located, relative to the hyperplane and margin.
If \( \epsilon_{i} = 0 \), the observation is on the correct side of the margin
If \( \epsilon_{i} > 0 \), the observation is on the wrong side of the margin
If \( \epsilon_{i} > 1 \), the observation is on the wrong side of the hyperplane
\( C \) is a nonnegative tuning parameter that bounds the sum of the \( \epsilon_{i} \) values. It determines the number and severity of violations to the margin and hyperplane that will be tolerated.
In other words, \( C \) is a budget for the amount that the margin can be violated by the \( n \) observations. If \( C = 0 \), then there is no budget, and the result would simply be the same as the maximal margin classifier (if a perfectly separating hyperplane exists). If \( C > 0 \), no more than \( C \) observations can be on the wrong side of the hyperplane because \( \epsilon_{i} > 1 \) in those cases, and the constraint from the fourth component doesn't allow for it. As the budget increases, more violations to the margin are tolerated, and so the margin becomes wider.
It should come as no surprise that \( C \) is usually chosen through cross-validation. \( C \) controls the bias-variance tradeoff. When \( C \) is small, the classifier is highly fit to the data, resulting in high variance. When \( C \) is large, the classifier may be too general and oversimplified for the data, resulting in high bias.
In support vector classifiers, the support vectors for the hyperplane are a bit different than the ones from the maximal margin hyperplane. They are the observations that lie directly on the margin and the wrong side of the margin. The larger the value of \( C \), the more support vectors there will be.
The fact that the support vector classifier is based only on a small subset of the training data means that it is robust to the behavior of observations far from the hyperplane. This is different from other classification methods such as linear discriminant analysis, where the mean of all observations within a class help determine the boundary. However, support vector classifiers are similar to logistic regression because logistic regression is not very sensitive to observations far from the decision boundary.
Support Vector Machines
First, we will discuss how a linear classifier can be converted into a non-linear classifier. Then, we'll talk about support vector machines, which do this in an automatic way.
Classification with Non-linear Decision Boundaries
The support vector classifier is a natural approach for classification in the binary class setting, if the boundary between the classes is linear. However, there are many cases in practice where we need a non-linear boundary.
In chapter 7, we were able to extend linear regression to address non-linear relationships by enlarging the feature space by using higher-order polynomial functions, such as quadratic and cubic terms. Similarly, non-linear boundaries can be created through the use of higher-order polynomial functions. For example, we could fit a support vector classifier using each predictor and its squared term:
\[ X_{1}, X_{1}^2, X_{2}, X_{2}^2, ... X_{p}, X_{p}^2 \]
This would change the optimization problem to become the following:
\( \sum_{j=1}^{p}\sum_{k=1}^{2}\beta_{jk}^{2} = 1 \)
\( y_{i}(\beta_{0} + \sum_{j=1}^{p}\beta_{j1}x_{ij} + \sum_{j=1}^{p}\beta_{j2}x_{ij}^2) \geq M(1 - \epsilon_{i}) \)
However, the problem with enlarging the feature space is that there are many ways to do so. We could use cubic or even higher-order polynomial functions. We could add interaction terms. Many possibilities exist, which could lead to inefficiency in computation. Support vector machines allow for enlarging the feature space in a way that leads to efficient computations.
The support vector machine is an extension of the support vector classifier that enlarges the feature space by using kernels. Before we talk about kernels, let's discuss the solution to the support vector classifier optimization problem.
Solution to Support Vector Classifier Optimization Problem
The details of how the support vector classifier is computed are highly technical. However, it turns out that the solution only involves the inner products of the observations, instead of the observations themselves. The inner product of two vectors is illustrated as follows:
\[ a=\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}b=\begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix} \]
\[ \langle a,b \rangle = (1\cdot 4) + (2\cdot 5) + (3\cdot 6) = 32\]
The linear support vector classifier can be represented as:
\[ f(x) = \beta_{0} + \sum_{i=1}^{n}\alpha_{i}\langle x, x_{i} \rangle \]
There are \( n \) parameters \( \alpha_{i} \), one per training observation. The parameters are estimated using the inner products between all pairs of training observations.
To evaluate the support vector classifier function \( f(x) \), we compute the inner product between a new observation \( x \) and each training observation \( x_{i} \). However, the \( \alpha_{i} \) parameters are nonzero only for the support vectors. In other words, if an observation is not a support vector, then its \( \alpha_{i} \) is zero. If we represent \( S \) as the collection of the support vectors, then the solution function can be rewritten as the following:
\[ f(x) = \beta_{0} + \sum_{i \in S}^{}\alpha_{i}\langle x, x_{i} \rangle \]
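To see this concretely, here is a short scikit-learn sketch (an illustration added here, not part of the original text) showing that a fitted linear support vector classifier's decision function can be rebuilt from its support vectors alone, exactly in the displayed form above:
import numpy as np
from sklearn.svm import SVC
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.array([-1] * 20 + [1] * 20)
X[y == 1] += 1.5                        # shift one class so the problem is learnable
clf = SVC(kernel="linear", C=1.0).fit(X, y)
# dual_coef_ stores the nonzero alpha_i (times y_i) for the support vectors only
x_new = rng.normal(size=(1, 2))
from_svs = (clf.dual_coef_ @ clf.support_vectors_ @ x_new.T + clf.intercept_).item()
print(from_svs, clf.decision_function(x_new)[0])   # the two values agree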
A kernel is a function that quantifies the similarity of two observations, and is a generalization of the inner product. The kernel function used in the support vector classifier is simply
\[ K(x_{i}, x_{i^{\prime}}) = \sum_{j=1}^{p}x_{ij}x_{i^{\prime}j} \]
This is known as a linear kernel because the support vector classifier results in a linear boundary. The linear kernel determines the similarity of observations by using the Pearson correlation.
Instead of using the linear kernel, we could use a polynomial kernel:
\[ K(x_{i}, x_{i^{\prime}}) = (1 + \sum_{j=1}^{p}x_{ij}x_{i^{\prime}j})^{d} \]
Using a nonlinear kernel results in a non-linear decision boundary. When the support vector classifier is combined with a nonlinear kernel, it results in the support vector machine. The support vector machine takes on the form:
\[ f(x) = \beta_{0} + \sum_{i \in S}\alpha_{i}K(x, x_{i}) \]
The polynomial kernel is just one example of a nonlinear kernel. Another common choice is the radial kernel, which takes the following form:
\[ K(x_{i}, x_{i^{\prime}}) = \mathrm{exp}(-\gamma\sum_{j=1}^{p}(x_{ij} - x_{i^{\prime}j})^{2}) \]
The advantage to using kernels is that the computations can be performed without explicitly working in the enlarged feature space. For example, with the polynomial kernel, we simply compute the inner product and raise the result to the power $d$, rather than ever constructing the enlarged feature space explicitly. This is known as the kernel trick.
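To make the kernel trick tangible, the following sketch (Python/NumPy, an added illustration independent of the R code below) checks that the degree-2 polynomial kernel of two vectors equals an ordinary inner product in an explicitly enlarged feature space, which is exactly the computation the kernel lets us skip:
import numpy as np
def poly_kernel(x, y, d=2):
    # Polynomial kernel K(x, y) = (1 + <x, y>)^d
    return (1 + x @ y) ** d
def phi(x):
    # Explicit degree-2 feature map for a 2-dimensional input
    x1, x2 = x
    return np.array([1.0, np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1 ** 2, x2 ** 2, np.sqrt(2) * x1 * x2])
x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])
print(poly_kernel(x, y))   # (1 + 3 - 2)^2 = 4
print(phi(x) @ phi(y))     # same value, computed in the 6-dimensional enlarged space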
SVMs with More than Two Classes
The concept of separating hyperplanes upon which SVMs are based does not lend itself naturally to more than two classes. However, there are a few approaches for extending SVMs beyond the binary class setting. The most common approaches are one-versus-one and one-versus-all.
One-Versus-One Classification
A one-versus-one or all-pairs approach develops multiple SVMs, each of which compares a pair of classes. Test observations are classified in each of the SVMs. In the end, we count the number of times that the test observation is assigned to each class. The class that the observation was assigned to most is the class assigned to the test observation.
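A minimal sketch of one-versus-one voting (Python/scikit-learn, an added illustration; the library can also handle multi-class SVMs internally) could look like this:
import numpy as np
from itertools import combinations
from sklearn.svm import SVC
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=150, centers=3, random_state=0)
classes = np.unique(y)   # labels are 0, 1, 2 here, so they double as vote indices
# Fit one binary SVM per pair of classes
pair_models = {}
for a, b in combinations(classes, 2):
    mask = np.isin(y, [a, b])
    pair_models[(a, b)] = SVC(kernel="linear").fit(X[mask], y[mask])
def predict_ovo(x):
    # Each pairwise SVM casts a vote; the class with the most votes wins
    votes = np.zeros(len(classes), dtype=int)
    for model in pair_models.values():
        votes[model.predict(x.reshape(1, -1))[0]] += 1
    return classes[np.argmax(votes)]
print(predict_ovo(X[0]), y[0])   # the vote winner should usually match the true label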
One-Versus-All Classification
The one-versus-all approach develops multiple SVMs, each of which compares one class to all of the other classes. Assume that each SVM resulted in the following parameters from comparing some class \( k \) to all of the others:
\[ \beta_{0k},\ \beta_{1k},\ \dots,\ \beta_{pk} \]
Let \( x \) represent a test observation. The test observation is assigned to the class for which the following is the largest:
\[ \beta_{0k} + \beta_{1k}(x_{1}) + \dots + \beta_{pk}(x_{p}) \]
SVMs vs Logistic Regression
As previously mentioned, only the support vectors end up playing a role in the support vector classifier that is obtained. This is because the loss function is exactly zero for observations that are on the correct side of the margin.
The loss function for logistic regression is not exactly zero anywhere. However, it is very small for observations that are far from the decision boundary.
Due to the similarities between their loss functions, support vector classifiers and logistic regression often give similar results. However, when the classes are well separated, support vector machines tend to perform better. In cases where there is more overlap, logistic regression tends to perform better. In any case, both should always be tested, and the method that performs best should be chosen.
ISLR Chapter 9 - R Code
Support Vector Classifiers
library(ISLR)
library(MASS)
library(e1071)
# We will generate a random dataset of observations belonging to 2 classes
set.seed(1)
x=matrix(rnorm(20*2), ncol=2)
y=c(rep(-1, 10), rep(1, 10))
x[y==1,]=x[y==1,] + 1
plot(x, col=(3-y))
# To use SVM, the response must be encoded as a factor variable
data = data.frame(x=x, y=as.factor(y))
# Fit a Support Vector Classifier with a cost of 10
# The scale argument is used to scale predictors
# In this example, we will not scale them
svmfit = svm(y~., data=data, kernel="linear", cost=10, scale=FALSE)
# Plot the fit
plot(svmfit, data)
# Determine which observations are the support vectors
svmfit$index
# Fit an SVM with a smaller cost of 0.1
svmfit = svm(y~., data=data, kernel="linear", cost=0.1, scale=FALSE)
# The e1071 library contains a tune function
# The function performs cross-validation with different cost values
tune.out = tune(svm, y~., data=data, kernel="linear", ranges=list(cost=c(0.001, 0.01, 0.1, 1, 5, 10)))
# Check the summary to see the error rates of the different models
# The model with a cost of 0.1 has the lowest error
summary(tune.out)
# Choose the best model
bestmod = tune.out$best.model
summary(bestmod)
# The predict function can be used to predict classes on a set of test observations
xtest = matrix(rnorm(20*2), ncol=2)
ytest = sample(c(-1, 1), 20, rep=TRUE)
xtest[ytest==1,]=xtest[ytest==1,] + 1
testdata=data.frame(x=xtest, y=as.factor(ytest))
ypred = predict(bestmod, testdata)
table(predict=ypred, truth=testdata$y)
# Now, we will fit a Support Vector Machine model
# We can do this by simply using a non-linear kernel in the svm function
# Generate a dataset with a non-linear class boundary
x=matrix(rnorm(200*2), ncol=2)
x[1:100,]=x[1:100,]+2
x[101:150,]=x[101:150,]-2
y=c(rep(1, 150), rep(2, 50))
plot(x, col=y)
# Rebuild the data frame with the new observations and encode the response as a factor
data = data.frame(x=x, y=as.factor(y))
# Split the data into training and test sets
train = sample(200, 100)
# Fit an SVM with a radial kernel
svmfit=svm(y~., data=data[train,], kernel="radial", gamma=1, cost=1)
plot(svmfit, data[train,])
# Perform cross-validation using the tune function to test different choices for cost
tune.out = tune(svm, y~., data=data[train,], kernel="radial",
ranges=list(cost=c(0.1, 1, 10, 100, 1000),
gamma=c(0.5, 1, 2, 3, 4)))
# Cost of 1 and Gamma of 2 has the lowest error
# Test the model on the test dataset
table(true=data[-train,"y"], pred=predict(tune.out$best.model, newdata=data[-train,]))
Cryptography Stack Exchange is a question and answer site for software developers, mathematicians and others interested in cryptography.
What is so special about elliptic curves?
There seems to be sources like this, this also, and some introductions that discuss elliptic curves in general and how they're used. But what I'd like to know is why these particular curves are so important in cryptography as opposed to, let's say, any other polynomial degree $\gt$ 2 which you can then mod over some group. It seems like once a modulus is applied then other function types should be acceptable as well.
It seems even less intuitive when just looking at the bubble vs curve as here:
Since there are other curves (let's say anything from a sin wave to the $x^3 + x$ or even just some unusually shaped contour) that could do the job. It seems like they would provide much more surface area to get a larger space in $\mathbb{Z}_p$ or really just more possible combinations of connecting lines from some arbitrary $P$ and $Q$ to get $R$ as opposed to something as restrictive (on the graph) as beginning from some bubble (which would seem to unnecessarily reduce the possible combinations) and then use a modulus to implement the discrete logarithm problem.
Sorry if this seems a little naive of a question, I'm trying to write an implementation right now and just to understand it fully even if that means asking something that is taken for granted. Perhaps just walking through a simple example (most of the ones I've searched are anything but), just a few sentences, would be rather helpful, from "A wants to talk to B" all the way up to "now E can't listen in between A and B".
So it seems like this is the version of elliptic curves over a finite field:
Yes that looks pretty random. But I'm still not really seeing why they are the only equations that have cryptographic significance. It's difficult to imagine that if you simply took some other higher degree equation and applied modulus (to place within a group), then it seems like it would make sense that you'd get something that's also comparatively random.
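One quick way to see this for yourself is to list the points of a small curve over a finite field; the sketch below (Python, with toy parameters picked only for illustration, far too small for real use) does exactly that, and the resulting $(x, y)$ pairs look scattered rather than anything like the smooth real picture:
# Enumerate the affine points of y^2 = x^3 + a*x + b over F_p (toy parameters)
p, a, b = 97, 2, 3
points = [(x, y) for x in range(p) for y in range(p)
          if (y * y - (x ** 3 + a * x + b)) % p == 0]
print(len(points))    # roughly p points, as Hasse's bound predicts
print(points[:10])    # scattered-looking pairs, no smooth 'bubble' in sight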
elliptic-curves discrete-logarithm finite-field
stackuser
Elliptic curves are not the only curves that have a group structure, or uses in cryptography. But they hit the sweet spot between security and efficiency better than pretty much all others.
For example, conic sections (quadratic equations) do have a well-defined geometric addition law: given $P$ and $Q$, trace a line through them, and trace a parallel line that goes through the identity element. Here's a handy picture for one of the best known conics, the unit circle $x^2 + y^2 = 1$:
If you take the identity element to be $(1, 0)$, then you get the very simple addition formula (modulo your favorite prime)
$$ (x_3, y_3) = (x_1x_2 - y_1y_2, x_1y_2 + x_2y_1)$$
This is much faster than regular elliptic curve formulas, so why not use this? Well, the problem with conics is that the discrete logarithm in this group is no stronger than the discrete logarithm over the underlying field! So we would need very large keys, like prime-field discrete logarithms, without any advantage. That's not good.
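For the curious, here is a tiny sketch (Python, my own illustration with a toy prime) of that circle group law modulo a prime; the operation really is an associative group law, and that is exactly why its weak discrete logarithm is such a pity:
p = 10007                      # a small prime with p = 3 mod 4, for illustration only
def add(P, Q):
    # Group law on x^2 + y^2 = 1 mod p, with identity (1, 0)
    (x1, y1), (x2, y2) = P, Q
    return ((x1 * x2 - y1 * y2) % p, (x1 * y2 + x2 * y1) % p)
def sqrt_mod(a):
    # Square root mod p (valid since p = 3 mod 4), or None if a is a non-residue
    r = pow(a % p, (p + 1) // 4, p)
    return r if r * r % p == a % p else None
# Find two points on the circle by solving y^2 = 1 - x^2 mod p
points, x = [], 2
while len(points) < 2:
    y = sqrt_mod(1 - x * x)
    if y is not None:
        points.append((x, y))
    x += 1
P, Q = points
print(add(P, Q))                                  # another point on the circle
print(add(add(P, Q), P) == add(P, add(Q, P)))     # True: the law is associative here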
So we move on to elliptic curves, which do not have reductions to the logarithm on the underlying field.
But wait, we can generalize elliptic curves to higher degrees. In fact,
$$ y^2 = x^{2g+1} + \ldots $$
when $g > 1$ and some restrictions are respected, is called a hyperelliptic curve, and we can work on it too. But for these curves there does not exist a nice geometric rule to add points, like in conics and elliptic curves. So we are forced to work in the Jacobian group of these curves, which is not the group of points anymore, but of divisors (which are kind of like polynomials of points, if that makes any sense). This group has size $\approx p^g$, when working modulo a prime $p$.
Hyperelliptic curves do have some advantages: since the group size is much larger than the prime, we can work modulo smaller primes for the same cryptographic strength. But ultimately, hyperelliptic curves fall prey to index calculus as well when $g$ starts to grow. In practice, only $g \in \{2,3\}$, i.e., polynomials of degree $5$ or $7$ offer similar security as elliptic curves. To add insult to injury, as Watson said, the addition formulas also get much more complicated as $g$ grows.
There are also further generalizations of hyperelliptic curves, like superelliptic curves, $C_{a,b}$ curves, and so on. But similar comments apply: they simply do not bring advantages in either speed or security over elliptic curves.
Samuel Neves
$\begingroup$ So elliptic curves are like the sweet spot over a continuum of equations that might be used over a group. That continuum seems like it ranges from "way too expensive for computation" (hyper/super elliptic curves) to "the DLP is not difficult enough" (putting P and Q over a conic section or circle). Although your conic section made me wonder why a sphere in $R^{3}$ wouldn't work, too expensive perhaps. +1 and accepted. $\endgroup$ – stackuser Nov 6 '13 at 15:55
$\begingroup$ A sphere ($x^2 + y^2 + z^2 = 1$) would still not be secure; however, if you intersect two quadric surfaces (i.e. surfaces defined by quadratic polynomials) you actually can get a secure curve, which --- guess what --- is actually an elliptic curve! The Jacobi intersection curves are an example of this. $\endgroup$ – Samuel Neves Nov 8 '13 at 2:01
Elliptic curves have a number of nice features that make them good for cryptography. One could write a whole book on the topic (as some have), so I'll highlight a few points.
The points on an elliptic curve over a finite field form a group. The same is not true for the ideas you mentioned.
Discrete log on many of these EC groups is hard. In fact, there are no subexponential algorithms to solve the DLP in these groups, as there are for other groups we often use in crypto (e.g., $\mathbb{Z}_p$). This means we have smaller key sizes and faster operations.
Elliptic curves have been successfully applied to cryptanalytic problems such as factoring.
We have been able to do some other cool things with elliptic curves such as pairings that we haven't gotten in any other setting.
mikeazo
$\begingroup$ 1. Wouldn't any curve be able to form a group if some modulus is applied? Many ec's don't even have the bubble unless $a \lt 0$ and $b \lt 1$ so it seems like any other wavy line in that case. $\endgroup$ – stackuser Nov 5 '13 at 2:37
$\begingroup$ The ECs that you see that look nice are over the reals. The elliptic curves over finite fields do not look that way at all. In fact they look pretty random which is another reason they are good for cryptography. AFAIK other functions over finite fields do not form a group. $\endgroup$ – mikeazo Nov 5 '13 at 3:03
$\begingroup$ OK so it makes more sense now, after seeing the before and after transformation of it going over the finite field (like caterpillar to butterfly). I added an EDIT as to what I'm still not really getting about why the ec's are so special in that way. It just seems like applying modulus (placing within a group) most equations with 2 higher degree variables would have a comparable effect to create something random enough for cryptographic purposes. $\endgroup$ – stackuser Nov 5 '13 at 3:46
$\begingroup$ @stackuser While you can define some kind of "addition" for most types of curves using a similar geometric construction like the one for elliptic curves, it is not automatically given that this operation is associative and has neutral and inverse elements, i.e. forms a group. $\endgroup$ – Paŭlo Ebermann Nov 5 '13 at 7:31
$\begingroup$ Neither points themselves, nor "other functions", form a group; points together with an operation do, and that operation can be chosen creatively, subject to associativity and the existence of neutral and inverse elements, as Paŭlo Ebermann says. $\endgroup$ – Vadym Fedyukovych Feb 7 '18 at 23:17
You are not wrong: given any variety $V$, we can form the Jacobian $J(V)$ as an abelian variety, in particular an abelian group over which we could use the Diffie-Hellman problem. However, there are several details that get in the way of doing this. First, it is necessary to compute the order of the Jacobian. We only know how to do this for elliptic curves. Secondly for higher genus there are various reductions to the case of lower genus. Lastly the greater the genus, and hence degree, the more complex the formulas that have to be used to do these calculations. The Handbook of Elliptic and Hyperelliptic Curve Cryptography is an excellent reference on these issues.
Watson Ladd
International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems
IPMU 2022: Information Processing and Management of Uncertainty in Knowledge-Based Systems, pp 681–695
Handling Disagreement in Hate Speech Modelling
Petra Kralj Novak ORCID: orcid.org/0000-0003-3385-643012,13,
Teresa Scantamburlo ORCID: orcid.org/0000-0002-3769-887414,
Andraž Pelicon ORCID: orcid.org/0000-0002-2060-667013,15,
Matteo Cinelli ORCID: orcid.org/0000-0003-3899-459216,
Igor Mozetič ORCID: orcid.org/0000-0002-5466-060813 &
Fabiana Zollo ORCID: orcid.org/0000-0002-0833-538814
First Online: 04 July 2022
Part of the Communications in Computer and Information Science book series (CCIS,volume 1602)
Hate speech annotation for training machine learning models is an inherently ambiguous and subjective task. In this paper, we adopt a perspectivist approach to data annotation, model training and evaluation for hate speech classification. We first focus on the annotation process and argue that it drastically influences the final data quality. We then present three large hate speech datasets that incorporate annotator disagreement and use them to train and evaluate machine learning models. As the main point, we propose to evaluate machine learning models through the lens of disagreement by applying proper performance measures to evaluate both annotators' agreement and models' quality. We further argue that annotator agreement poses intrinsic limits to the performance achievable by models. When comparing models and annotators, we observed that they achieve consistent levels of agreement across datasets. We reflect upon our results and propose some methodological and ethical considerations that can stimulate the ongoing discussion on hate speech modelling and classification with disagreement.
Annotator agreement
Diamond standard evaluation
The authors acknowledge financial support from the EU REC Programme (2014–2020) project IMSyPP (grant no. 875263), the Slovenian Research Agency (research core funding no. P2-103), and from the project "IRIS: Global Health Security Academic Research Coalition".
Modern research in machine learning (ML) is driven by large datasets annotated by humans via crowdsourcing platforms or spontaneous online interactions [5]. Most annotation projects assume that a single preferred or even correct annotation exists for each item—the so-called "gold standard". However, this reflects an idealisation of how humans perceive and categorize the world. Virtually, all annotation projects encounter numerous cases in which humans disagree. The reasons behind disagreement can be various. For example, people can disagree because of accidental mistakes or misunderstandings experienced during the annotation process. In other cases, disagreement can originate from the inherent ambiguity of the annotation task or the annotators' subjective beliefs.
When labels represent different (subjective) views, ignoring this diversity creates an arbitrary target for training and evaluating models: If humans cannot agree, why would we expect the correct answer from a machine to be any different [7]? And, if the machine is able to learn an artificial gold standard, would it make it a perfect (infallible) predictor? The acknowledgement of multiple perspectives in the production of ground truth stimulated a reconsideration of the classical gold standard and the growth of a new research field developing alternative approaches. A recent work proposed a data perspectivist approach to ground truthing and suggested a spectrum of possibilities ranging from the traditional gold standard to the so-called "diamond standard", in which multiple labels are kept throughout the whole ML pipeline [3]. It has also been observed that training directly from soft labels (i.e., distributions over classes) can achieve higher performance than training from aggregated labels under certain conditions (e.g., large datasets and high quality annotators) [24]. Studies in hate speech classification came to similar conclusions and showed that supervised models informed by different perspectives on the target phenomena outperform a baseline represented by models trained on fully aggregated data [1].
In this paper, we focus on hate and offensive speech detection, which, similarly to other tasks like sentiment analysis, is inherently subjective. Thus, a disagreement between human annotators is not surprising. In sentiment analysis, disagreement ranges between 40–60% for low quality annotations, and between 25–35% even for high quality annotations [13, 17]. Until recently, the subjectivity factor has been largely ignored in favor of a gold standard [26, 27]. This led to a dramatic overestimation of model performance on human-facing ML tasks [12]. Here we investigate the specifics of hate speech annotation and modelling through the development of three large hate speech datasets and respective ML models. We present the process for data collection and annotation, the training of state-of-the-art ML models and the results achieved during the evaluation step. Our approach is characterized by two elements. First, we embrace disagreement among annotators in all phases of the ML pipeline and use a diamond standard for model training and evaluation. Second, we evaluate annotators' and models' performance through the lens of disagreement by applying the same performance measures to different comparisons (inter-annotator, self-agreement, and annotator vs model). Our experience led us to reflect and discuss a variety of methodological and ethical implications of handling multiple (conflicting) perspectives in hate speech classification. We conclude that disagreement is a genuine and crucial component of hate speech modelling and needs greater consideration within the ML community. A carefully designed annotation procedure supports the study of annotators' disagreement, discerns authentic dissent from spurious differences, and collects additional information that could possibly justify or contextualize the annotators' opinion. Moreover, a greater awareness of disagreement in hate speech datasets can generate more realistic expectations on the performance and limits of the ML models used to make decisions about the toxicity of online contents.
The paper is structured as follows. Section 2 presents the annotation process resulting in three large diamond standard hate speech datasets. Section 3 describes our training and evaluation of neural network-based models from diamond standard data, and reports the results by comparing the models' performance to the annotators' agreement. Finally, in Sect. 4, starting from our own results and experience, we discuss some implications of addressing disagreement in hate speech.
2 Data Selection and Annotation
Annotation campaign design and management drastically influences the quality of the annotated data. In this section, we first introduce the annotation schema used for annotating over 180,000 social media items in three different languages (English, Italian, and Slovenian). Then, we describe our annotation campaign and describe the procedure used to monitor and evaluate the annotation progress.
2.1 Annotation Schema
A simple and intuitive annotation schema facilitates the annotation efforts, and reduces possible errors and misunderstandings. However, since the definition of hate speech is a subtle issue there are other possible categorizations—see [18] for a systematic review. The annotation schema presented in this paper is adapted from the OLID [26] and FRENK [16] schemas, yet it is simpler, while retaining most of their expressiveness. The annotation procedure consists of two steps: first, the type of hate speech is determined, then the target of hate speech, when relevant, is identified. We distinguish between the following four speech types:
Acceptable: does not present inappropriate, offensive or violent elements.
Inappropriate: contains terms that are obscene or vulgar; but the text is not directed at any specific target.
Offensive: includes offensive generalizations, contempt, dehumanization, or indirect offensive remarks.
Violent: threatens, indulges, desires or calls for physical violence against a target; it also includes calling for, denying or glorifying war crimes and crimes against humanity.
In the case of offensive or violent speech, the annotation schema also includes a target. There are twelve pre-specified targets: Racism, Migrants, Islamophobia, Antisemitism, Religion (other), Homophobia, Sexism, Ideology, Media, Politics, Individual, and Other. For Italian, an additional "North vs. South" target was included (see Sect. 4.1).
Table 1. Description of the datasets used for model training and evaluation. There are data sources, topics covered, timeframe, and the number of annotated items in the training and evaluation sets.
2.2 Data Selection and Annotation Setup
For each language, we selected two separate sets of data for annotation to be used for training and evaluating machine learning models. To overcome the class-imbalance problem (most hate speech datasets are highly unbalanced [20], see also Table 2), the training data selection was optimized to get hate speech-rich training datasets. This was achieved by selecting the data from large collections based on simple classifiers trained on publicly available hate speech data: we used the FRENK data [16] for Slovenian and English, and a dataset of hate speech against immigrants for Italian [22]. This led to training datasets with about two times more violent hate speech (the minority class) than we would get from a random sample. The evaluation dataset was randomly sampled from a period strictly following the training data time-span.
Table 2. Distribution of hate speech classes across the three application datasets. There is the total size of the collected data, and the classes assigned by the hate speech classification models.
Annotators were recruited and selected in Slovenia and Italy. Excellent knowledge of the target language (native speakers of Slovenian and Italian and proficient users of English) as well as an interest in social media and hate speech problems were required. Annotators were provided with written annotation guidelines in their mother tongue. The guidelines included a description of the labels and instructions on how to select them. They also provided practical information about the annotation interface and contact information to be used in case of doubts or requests. We provided continuous support to the annotators through online meetings and a dedicated group on Facebook.
Table 3. The annotator agreement and overall model performance. Two measures are used: Krippendorff's (ordinal) \( Alpha \) and accuracy (\( Acc \)). The first column is the self-agreement of individual annotators (available for Twitter data only), and the second column is the aggregated inter-annotator agreement between different annotators. The last two columns are the model evaluation results, on the training and the out-of-sample evaluation sets, respectively. Note that the overall model performance is comparable to the inter-annotator agreement.
Based on the number of annotators, we distributed the data according to the following constraints:
Each social media item should be annotated twice.
Each annotator gets roughly the same number of items.
All pairs of annotators have approximately the same overlap (in the number of items) for pair-wise annotator agreement computation.
For Twitter, each annotator is assigned some items (tweets) twice to compute self-agreement.
For YouTube: a) Threads (all comments to a video) are kept intact; b) Each annotator is assigned both long and short threads.
Such a careful distribution of work enables continuous monitoring and evaluation of the annotation progress and quality. The annotators were working remotely on their own schedule. Internal deadlines were set to discourage procrastination. We monitored the annotation progress by keeping track of the number of completed annotations and evaluating the self- and inter-annotator agreement measures (see Sect. 3.1). Agreement between (pairs of) annotators (see Table 3) was regularly computed during the process, enabling early detection of poorly-performing annotators, i.e., annotators disagreeing systematically with other annotators, either due to misunderstanding of the task, not following the guidelines or not devoting enough attention.
We used the described schema and protocol for developing three diamond standard datasets, and made them available on the Clarin repository: English YouTube, Italian YouTube, and Slovenian Twitter, summarized in Table 1. In the Slovenian dataset, the tweets are annotated independently, while the English and Italian datasets include contextual information in the form of threads of YouTube comments: Every comment is annotated for hate speech, yet the annotators were also given the context of discussion threads. Furthermore, the YouTube datasets are focused on the COVID-19 pandemic topic.
3 Model Training and Evaluation
We used the three diamond standard datasets to train and evaluate machine learning hate speech models. For each dataset, a state-of-the-art neural model based on a Transformer language model was trained end-to-end [6] to distinguish between the four speech classes. The models were trained directly on the diamond standard data, i.e., the training examples were repeated with several equal or disagreeing labels. For Italian, we used AlBERTo [19], a BERT-based language model pre-trained on a collection of tweets in the Italian language. For English, the base version of English BERT with 12 Transformer blocks [6] was used. For Slovenian, a trilingual CroSloEng-BERT [23], which was jointly pretrained on Slovenian, Croatian and English languages, was used. All three models are available at the IMSyPP project HuggingFace repository.
We used the Italian and Slovenian models in two previous analytical studies on hate speech in social media. The Italian model was used in a work investigating relationships between hate speech and misinformation sources on the Italian YouTube [4]. The Slovenian model was used to perform an analysis on the evolution of retweet communities, hate speech and topics on the Slovenian Twitter during 2018–2020 [8,9,10].
3.1 Evaluation Measures
A distinctive aspect of our approach is to apply the same measures a) to estimate the agreement between the human annotators and b) to estimate the agreement between the results of model classification and the manually annotated data. There are several measures of agreement, and to get robust estimates from different problem perspectives, we apply three well-known measures from the fields of inter-rater agreement and machine learning: Krippendorff's \( Alpha \), accuracy (\( Acc \)) and \(F_{1}\) score.
There are several properties of hate speech modelling that require special treatment: i) The four speech types are ordered, from normal to the most hateful, violent speech, and therefore disagreements have very different magnitudes, thus we use ordinal Krippendorff's \( Alpha \); ii) The four speech classes are severely imbalanced, a further reason to use Krippendorff's \( Alpha \); iii) Since we also need a class-specific measure of (dis)agreement, \(F_{1}\) is used.
The speech types are modelled by a discrete, ordered 4-valued variable \(c \in C\), where \(C = \{A, I, O, V\}\), and \(A \prec I \prec O \prec V\). The values of c denote acceptable speech (abbreviated A), inappropriate (I), offensive (O) or violent (V) hate speech. The data items that are labelled by speech types are either individual YouTube comments or Twitter posts. The data labeled by different annotators is represented in a reliability data matrix. The data matrix is a n-by-m matrix, where n is the number of items labeled, and m is the number of annotators. An entry in the matrix is a label \(c_{iu} \in C\), assigned by the annotator \(i \in \{1,\ldots ,m\}\) to the item \(u \in \{1,\ldots ,n\}\). The data matrix does not have to be full, i.e., some items might not be labelled by all the annotators.
A coincidence matrix is constructed from the reliability data matrix. It tabulates all the combined values of c from two different annotators. The coincidence matrix is a k-by-k square matrix, where \(k = |C|\), the number of possible values of C, and has the following form:
$$ \begin{array}{c|ccc|c} & & c' & & \sum \\ \hline & \cdot & \cdot & \cdot & \cdot \\ c & \cdot & N(c,c') & \cdot & N(c) \\ & \cdot & \cdot & \cdot & \cdot \\ \hline \sum & \cdot & N(c') & \cdot & N \\ \end{array} $$
An entry \(N(c,c')\) accounts for all coincidences from all pairs of annotators for all the items, where one annotator has assigned a label c and the other \(c'\). N(c) and \(N(c')\) are the totals for each label, and N is the grand total. The coincidences \(N(c,c')\) are computed as:
$$ N(c,c') = \sum _u\frac{N_u(c,c')}{m_u-1} \quad c,c' \in C $$
where \(N_u(c,c')\) is the number of \((c,c')\) pairs for the item u, and \(m_u\) is the number of labels assigned to the item u. When computing \(N_u(c,c')\), each pair of annotations is considered twice, once as a \((c,c')\) pair, and once as a \((c',c)\) pair. The coincidence matrix is therefore symmetrical around the diagonal, and the diagonal contains all the matching labelling.
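As a concrete illustration of the construction above, the following is a minimal Python sketch, assuming the reliability data is given as one label list per item (with None for a missing label); the class set and ordering follow the definition of C.

```python
# Minimal sketch of building the coincidence matrix from a reliability data
# matrix, given as one label list per item, with None for missing labels.
CLASSES = ["A", "I", "O", "V"]

def coincidence_matrix(reliability):
    k = len(CLASSES)
    idx = {c: i for i, c in enumerate(CLASSES)}
    N = [[0.0] * k for _ in range(k)]
    for row in reliability:
        labels = [c for c in row if c is not None]
        m_u = len(labels)
        if m_u < 2:                      # single-labelled items contribute no pairs
            continue
        for i, a in enumerate(labels):   # every ordered pair from two different annotators
            for j, b in enumerate(labels):
                if i != j:
                    N[idx[a]][idx[b]] += 1.0 / (m_u - 1)
    return N

# three toy items; the second one is labelled by only two annotators
rows = [["A", "A", "I"], ["O", "V", None], ["A", "A", "A"]]
for line in coincidence_matrix(rows):
    print(line)
```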
We can now define the three evaluation measures that we use to quantify the agreement between the annotators, as well as the agreement between the model and the annotators. Since the annotators might disagree on the labels, there is no "gold standard". The performance of the model can thus only be compared to a (possibly inconsistent) labelling by the annotators.
Krippendorff's \( Alpha \)[14] is defined as follows:
$$ Alpha = 1 - \frac{D_{o}}{D_{e}} \,, $$
where \(D_{o}\) is the actual disagreement between the annotators, and \(D_{e}\) is disagreement expected by chance. When annotators agree perfectly, \( Alpha \) \(=1\), when there is a baseline agreement as expected by chance, \( Alpha \) \(=0\), and when the annotators disagree systematically, \( Alpha \) \(<0\). The two disagreement measures, \(D_{o}\) and \(D_{e}\), are defined as:
$$ D_{o} = \frac{1}{N} \sum _{c,c'} N(c,c') \cdot \delta ^2(c,c') \;,\;\;\;\;\; D_{e} = \frac{1}{N(N-1)} \sum _{c,c'} N(c) \cdot N(c') \cdot \delta ^2(c,c') \,. $$
The arguments \(N(c,c'), N(c), N(c')\) and N refer to the values in the coincidence matrix, constructed from the labeled data.
\(\delta (c,c')\) is a difference function between the values of c and \(c'\), and depends on the type of decision variable c (nominal, ordinal, interval, etc.). In our case, c is an ordinal variable, and \(\delta \) is defined as:
$$ \delta (c,c') = \sum _{i=c}^{c'} N(i) - \frac{N(c) + N(c')}{2} \,, \qquad c, c', i \in \{1^{st}, 2^{nd}, 3^{rd}, 4^{th}\} \;. $$
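Following these formulas, ordinal Krippendorff's Alpha can be computed directly from a coincidence matrix. The sketch below uses a small matrix of made-up toy counts for the four ordered classes; it is not derived from the datasets.

```python
# Minimal sketch of ordinal Krippendorff's Alpha computed from a coincidence
# matrix. The 4x4 matrix holds made-up toy counts for the ordered classes
# A < I < O < V (it is not taken from the datasets).
N = [[10, 2, 1, 0],
     [ 2, 6, 2, 0],
     [ 1, 2, 4, 1],
     [ 0, 0, 1, 2]]

def ordinal_alpha(N):
    k = len(N)
    totals = [sum(N[c]) for c in range(k)]            # marginals N(c)
    grand = sum(totals)                               # grand total N

    def delta(c, cp):                                  # ordinal difference function
        lo, hi = min(c, cp), max(c, cp)
        return sum(totals[lo:hi + 1]) - (totals[c] + totals[cp]) / 2.0

    D_o = sum(N[c][cp] * delta(c, cp) ** 2
              for c in range(k) for cp in range(k)) / grand
    D_e = sum(totals[c] * totals[cp] * delta(c, cp) ** 2
              for c in range(k) for cp in range(k)) / (grand * (grand - 1))
    return 1.0 - D_o / D_e

print(round(ordinal_alpha(N), 3))
```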
Accuracy (\( Acc \)) is a common, and the simplest, measure of performance of the model which measures the agreement between the model and the "gold standard". However, it can be also used as a measure of agreement between two annotators. \( Acc \) is defined in terms of the observed disagreement \(D_{o}\):
$$ Acc = 1 - D_{o} = \frac{1}{N} \sum _{c} N(c,c) \,. $$
Accuracy does not account for the (dis)agreement by chance, nor for the ordering of hate speech classes. Furthermore, it can be deceiving in the case of unbalanced class distribution.
F-score (\(F_{1}\)) is an instance of a well-known effectiveness measure in information retrieval [25] and is useful for binary classification. In the case of multi-class problems, it can be used to measure the performance of the model to identify individual classes. \(F_{1}(c)\) is the harmonic mean of precision (\( Pre \)) and recall (\( Rec \)) for class c:
$$ F_{1}(c) = 2 * \frac{Pre(c) * Rec(c)}{Pre(c) + Rec(c)} \,. $$
In the case of a coincidence matrix, which is symmetric, the 'precision' and 'recall' are equal, since false positives and false negatives are both cases of disagreement. \(F_{1}(c)\) thus degenerates into:
$$ F_{1}(c) = \frac{N(c,c)}{N(c)} \,. $$
In terms of the annotator agreement, \(F_{1}(c)\) is the fraction of equally labelled items out of all the items with label c.
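The same coincidence matrix also yields accuracy and the class-specific \(F_{1}\) scores defined above. A minimal sketch, again on made-up toy counts:

```python
# Minimal sketch of accuracy and class-specific F1 from a symmetric coincidence
# matrix, following Acc = (1/N) sum_c N(c,c) and F1(c) = N(c,c) / N(c).
# The toy counts are the same made-up example used for Alpha above.
N = [[10, 2, 1, 0],
     [ 2, 6, 2, 0],
     [ 1, 2, 4, 1],
     [ 0, 0, 1, 2]]

def acc_and_f1(N):
    k = len(N)
    grand = sum(sum(row) for row in N)
    acc = sum(N[c][c] for c in range(k)) / grand
    f1 = [N[c][c] / sum(N[c]) if sum(N[c]) else float("nan") for c in range(k)]
    return acc, f1

acc, f1 = acc_and_f1(N)
print(round(acc, 3), [round(x, 3) for x in f1])
```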
3.2 Annotator Agreement and Model Performance
For the evaluation, we use the same measures to estimate the agreement between the human annotators, and the agreement between the model classification and the manually annotated diamond standard data. Table 3 summarizes the overall annotator agreement and the models' performance in terms of Krippendorff's (ordinal) \( Alpha \) and accuracy (\( Acc \)) on all three datasets.
The annotators agree on the hate speech label on nearly 80% of the data points (\( Acc \) = 0.78–0.79). Our models agree with at least one annotator in over 80% of the cases (\( Acc \) = 0.80–0.84). Considering the high class imbalance and the ordering of the hate speech classes, a comparison in terms of Krippendorff's (ordinal) \( Alpha \) is more appropriate: Table 3 shows a very consistent agreement of about 0.6 (\( Alpha \) = 0.55–0.60) both between the annotators and the models on all three datasets.
How misleading the performance estimates computed by accuracy can be is evident from Table 4. We consider two cases of binary classification. In the first case, all three types of speech which are not acceptable (i.e., inappropriate, offensive, or violent) are merged into a single, unacceptable class. In the second case, all types of speech which are not violent (i.e., acceptable, inappropriate, or offensive) are merged into a non-violent class. The performance of such binary classification is then estimated by \( Alpha \) and \( Acc \). The estimates in the first case are comparable to the results in Table 3. In the second case, however, the \( Alpha \) values drop considerably, while the \( Acc \) scores rise to almost 100% (\( Acc \) = 0.97–0.99). This is due to a high imbalance of the non-violent vs. violent items, with a respective ratio of more than 99:1. The \( Alpha \) score, on the other hand, indicates that the model performance is low, barely above the level of classification by chance (\( Alpha \) = 0.26–0.39 on the evaluation set).
Table 4. The annotator agreement and model performance for two cases of binary classification: Acceptable (A) vs. Unacceptable class (either I, O, or V), and Violent (V) vs. Non-violent class (either A, I, or O). The performance is measured by the \( Alpha \) and accuracy (\( Acc \)) scores. Note the very high and misleading \( Acc \) scores for the second case, where the class distribution between the Violent and Non-violent classes is highly imbalanced. The \( Alpha \) scores, on the other hand, are very low, barely above the level of classification by chance.
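To see concretely how the merging in Table 4 is performed, and why accuracy can look flattering while \( Alpha \) stays low, classes can be collapsed directly in the coincidence matrix and both scores recomputed. The sketch below reuses the toy matrix N and the helpers ordinal_alpha and acc_and_f1 from the previous sketches; the resulting numbers are purely illustrative.

```python
# Sketch of the binary merging used in Table 4, reusing the toy matrix N and
# the helpers ordinal_alpha and acc_and_f1 defined in the sketches above.
def merge(N, group):
    """Collapse the classes whose indices are in `group` into one class and the
    remaining classes into another; return the resulting 2x2 coincidence matrix."""
    other = [c for c in range(len(N)) if c not in group]
    def block_sum(rows, cols):
        return sum(N[r][c] for r in rows for c in cols)
    return [[block_sum(group, group), block_sum(group, other)],
            [block_sum(other, group), block_sum(other, other)]]

# Violent (index 3) vs. the rest: accuracy is dominated by the large
# Non-violent class, while Alpha also discounts agreement expected by chance.
binary = merge(N, group=[3])
acc, _ = acc_and_f1(binary)
alpha = ordinal_alpha(binary)
print(round(acc, 3), round(alpha, 3))
```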
Class-specific results comparing the model and the annotator agreement in terms of \(F_{1}\) are available in Table 5. The \(F_{1}\) scores of the models would not, in an absolute sense, be considered high. Yet they are comparable to, and in many cases even higher than, the \(F_{1}\) scores between the annotators. The only exception (still consistent across all three datasets) is the relatively low models' performance for the violent class. This is consistent with the binary classification results (Non-violent vs. Violent) in Table 4. We hypothesise, with a high degree of confidence, that the poor identification of the violent class is due to the scarcity of training examples.
Table 5. Class-specific annotator agreement and model performance. The classification is done into four hate speech classes (A, I, O, V), and the performance is measured by the \(F_{1}\) score for each class individually. Note a relatively low model performance for the Violent class (\(F_{1}\)(V)).
4 Discussion
Given the intrinsically subjective nature of judging offensive and violent content, it might be argued that a diamond standard should be preferred in this and other similar contexts through all the phases of the machine learning pipeline. In the following, we discuss the methodological and ethical implications of this approach.
4.1 Methodological Implications
Working with diamond standard data influences the data annotation, machine learning training and evaluation. We argue that selecting the data to be annotated, setting up the annotation campaign, monitoring its execution and evaluating the quality of annotations during and after the annotation campaign are crucial steps that influence the final quality of the annotated data. Yet, the importance of annotation campaigns is often neglected in machine learning pipelines. An important practical dilemma when building diamond standard datasets is still to be investigated: when faced with an intrinsically subjective task (e.g., hate speech detection, sentiment analysis), how should one decide how many facets a diamond should have vs. how large it should be? More diamond facets (i.e., more labels per item) ensure better data quality and enable the identification of ambiguous cases. Yet, when limited by the number of labels an annotation campaign can afford, is it better to have more data items labeled (thus a larger dataset with more variety) or more labels for the same items? Is this trade-off the same for the training as well as for the evaluation set?
Our second focus is on model evaluation: we propose a perspectivist view, as we evaluate model performance through the lens of disagreement by applying the same, proper performance measures to evaluate the annotator agreement and the model quality. Standard metrics assume a different meaning in a context where the same object can be assigned to multiple legitimate labels. For example, precision and recall lose the asymmetry that is implicitly assumed between the outcome retrieved from direct observation (also called 'real' outcome) and the prediction provided by the ML models, as we show in Sect. 3.1. In the case of ordered labels (e.g., our speech labels), mutual information, proposed by [24] as a good evaluation measure when learning with disagreement, is not appropriate as it neglects the labels' ordering. Proper performance measures in our case include ordinal Krippendorff's \( Alpha \), which accommodates both the ordered nature of the labels (from normal to the most hateful, violent speech, and consequently a varying magnitude of disagreements), and class imbalance (where the Violent class is underrepresented). Furthermore, we use \(F_{1}\) for the estimation of class-specific disagreement and misclassification, but not macro-\(F_{1}\). Macro-\(F_{1}\) is not an appropriate measure to aggregate individual \(F_{1}\) scores to estimate the overall model performance [11].
In our perspectivist view on model evaluation, model performance is closely tied to the agreement between annotators. This means that annotator agreement poses intrinsic limits to the performance achievable by the ML models. This is implemented by the use of the same measures for all comparisons (i.e., between the annotators, and between the annotators and the model). We observed that the level of agreement between our models and the annotators reaches the inter-annotator agreement when applying the overall performance measure (ordinal Krippendorff's \( Alpha \)). This indicates that the model is limited by the annotator agreement and cannot be drastically improved. However, when considering the class-specific \(F_{1}\) values, the model reaches the inter-annotator agreement in all classes except for the minority class (i.e., Violent). Without a comparison to the \(F_{1}\) scores of the annotators, or the binary Non-violent vs. Violent classification, this shortcoming of the classification model would not have been detected.
4.2 Ethical Implications
The problem of ground truthing in hate speech modelling has also some ethical and legal implications. Even though the perception and interpretation of offensive and violent speech can vary among people and cultures, it is also true that the lack of respect is a moral violation and can have tangible negative effects on subjects. Some people, for example, can suffer from depression or even physical injuries after being largely exposed to violent and offensive communication [21]. In this regard, many countries impose restrictions to protect individuals from discriminatory and threatening content and digital platforms strive for the limitation of hate speech.
Defining hate speech subsumes important decisions about the ethical and legal boundaries of public debates and bears responsibility for limiting the right of freedom of expression, thereby including or excluding people from democratic participation. Not surprisingly, the introduction of legal boundaries to remove hate speech from the public sphere has raised various criticisms. For example, some consider hate speech bans as a form of paternalism, incompatible with the assumption that humans are responsible and autonomous individuals, while others fear that the power of judging hate speech would put the state in a position to decide what can or cannot be said [2].
The tension between the right to safety and the right to freedom of expression becomes even more controversial when one deals with ML models for hate speech detection and removal. In this context, the decision whether to accept or reject potentially harmful content leverages the capacity of ML algorithms to make accurate predictions. However, our results and other studies (e.g. [12]) suggest that measuring hate speech classification in terms of prediction accuracy can be elusive when annotators disagree: a classifier cannot be accurate when the data is inconsistent due to many conflicting views. Deliberating upon items that cannot be classified in a clear-cut way is a questionable practice and requires greater scrutiny among ML developers, managers and policy makers. Achieving a consensus in predictive tasks might not necessarily be an ideal outcome. On the contrary, diversity can improve collective predictions [15]. Moreover, if predictions are accompanied by additional information, including the reasons behind the predictions, cultivating a positive disagreement can foster more fruitful judgments.
5 Conclusions and Future Work
In this paper, we adopt a perspectivist approach to data annotation, model training and evaluation of hate speech classification. Our first emphasis is on the annotation process leading to the diamond standard data, as we argue that it influences the final data quality, and thereby the machine learning model quality. As the main point, we propose a perspectivist view on model evaluation, as we evaluate model performance through the lens of disagreement by applying the same, proper performance measures to evaluate the annotator agreement and the model quality. We argue that annotator agreement poses intrinsic limits to the performance achievable by models. By following the same annotation protocol, model training and evaluation, we developed three large-scale hate speech datasets and the corresponding machine learning models. All our results are consistent across the three datasets: trained and reliable annotators disagree in about 20% of the cases, model performance reaches the annotator agreement in the overall evaluation, while for the minority class (Violent) there is still some room for improvement. A broad reflection on the role of disagreement in hate speech detection leads us to consider some methodological and ethical implications that could stimulate the ongoing debate, not limited to hate speech modelling but extending to subjective classification tasks where disagreement is likely to arise and make a difference.
Footnote 1: Hate speech annotation guidelines in English are available as part of IMSyPP D2.1: http://imsypp.ijs.si/wp-content/uploads/IMSyPP-D2.1-Hate-speech-DB-2.pdf, starting from page 16.
Footnote 2: English dataset: https://www.clarin.si/repository/xmlui/handle/11356/1454.
Footnote 3: Italian dataset: https://www.clarin.si/repository/xmlui/handle/11356/1450.
Footnote 4: Slovenian dataset: https://www.clarin.si/repository/xmlui/handle/11356/1398.
Footnote 5: IMSyPP HuggingFace model repository: https://huggingface.co/IMSyPP.
Akhtar, S., Basile, V., Patti, V.: Modeling annotator perspective and polarized opinions to improve hate speech detection. In: Proceedings AAAI Conference on Human Computation and Crowdsourcing, vol. 8, pp. 151–154 (2020)
Anderson, L., Barnes, M.: Hate speech. In: Zalta, E.N. (ed.) The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab Stanford University (2022)
Basile, V., Cabitza, F., Campagner, A., Fell, M.: Toward a perspectivist turn in ground truthing for predictive computing. arXiv:2109.04270 (2021)
Cinelli, M., Pelicon, A., Mozetič, I., Quattrociocchi, W., Novak, P.K., Zollo, F.: Dynamics of online hate and misinformation. Sci. Rep. 11(1), 1–12 (2021). https://doi.org/10.1038/s41598-021-01487-w
Cristianini, N., Scantamburlo, T., Ladyman, J.: The social turn of artificial intelligence. AI Soc. 1–8 (2021). https://doi.org/10.1007/s00146-021-01289-8
Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805 (2018)
Dumitrache, A., Aroyo, L., Welty, C.: A crowdsourced frame disambiguation corpus with ambiguity. In: Proceedings of NAACL (2019)
Evkoski, B., Ljubešić, N., Pelicon, A., Mozetič, I., Kralj Novak, P.: Evolution of topics and hate speech in retweet network communities. Appl. Netw. Sci. 6(1), 1–20 (2021). https://doi.org/10.1007/s41109-021-00439-7
Evkoski, B., Mozetič, I., Ljubešić, N., Novak, P.K.: Community evolution in retweet networks. PLoS ONE 16(9), e0256175 (2021). https://doi.org/10.1371/journal.pone.0256175. Non-anonymized version available at arXiv:2105.06214
Evkoski, B., Pelicon, A., Mozetič, I., Ljubešić, N., Novak, P.K.: Retweet communities reveal the main sources of hate speech. PLoS ONE 17(3), e0265602 (2022). https://doi.org/10.1371/journal.pone.0265602
Flach, P., Kull, M.: Precision-recall-gain curves: PR analysis done right. In: Cortes, C., Lawrence, N.D., Lee, D.D., Sugiyama, M., Garnett, R. (eds.) Advances in Neural Information Processing Systems, pp. 838–846. Curran Associates (2015)
Gordon, M.L., Zhou, K., Patel, K., Hashimoto, T., Bernstein, M.S.: The disagreement deconvolution: bringing machine learning performance metrics in line with reality. In: Proceedings CHI Conference on Human Factors in Computing Systems, pp. 1–14 (2021)
Kenyon-Dean, K., et al.: Sentiment analysis: It's complicated! In: Proceedings of NAACL, pp. 1886–1895 (2018)
Krippendorff, K.: Content Analysis, An Introduction to its Methodology. Sage Publications, 4th edn. (2018)
Landemore, H., Page, S.E.: Deliberation and disagreement: problem solving, prediction, and positive dissensus. Politics Philos. Econ. 14(3), 229–254 (2015)
Ljubešić, N., Fišer, D., Erjavec, T.: The FRENK datasets of socially unacceptable discourse in Slovene and English (2019), arXiv:1906.02045
Mozetič, I., Grčar, M., Smailović, J.: Multilingual Twitter sentiment classification: the role of human annotators. PLoS ONE 11(5), e0155036 (2016). https://doi.org/10.1371/journal.pone.0155036
Poletto, F., Basile, V., Sanguinetti, M., Bosco, C., Patti, V.: Resources and benchmark corpora for hate speech detection: a systematic review. Lang. Res. Eval. 55(2), 477–523 (2020). https://doi.org/10.1007/s10579-020-09502-8
Polignano, M., Basile, P., De Gemmis, M., Semeraro, G., Basile, V.: AlBERTo: Italian BERT language understanding model for NLP challenging tasks based on tweets. In: Italian Conference on Computational Linguistics, vol. 2481, pp. 1–6 (2019)
Rathpisey, H., Adji, T.B.: Handling imbalance issue in hate speech classification using sampling-based methods. In: IEEE International Conference on Science in Information Technology, pp. 193–198 (2019)
Saha, K., Chandrasekharan, E., De Choudhury, M.: Prevalence and psychological effects of hateful speech in online college communities. In: Proceedings 10th ACM Conference on Web Science, pp. 255–264 (2019)
Sanguinetti, M., Poletto, F., Bosco, C., Patti, V., Stranisci, M.: An Italian Twitter corpus of hate speech against immigrants. In: Proceedings of 11th International Conference on Language Resources and Evaluation (2018)
Sojka, P., Kopeček, I., Pala, K., Horák, A. (eds.): TSD 2020. LNCS (LNAI), vol. 12284. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58323-1
Uma, A.N., Fornaciari, T., Hovy, D., Paun, S., Plank, B., Poesio, M.: Learning from disagreement: a survey. Artif. Intell. Res. 72, 1385–1470 (2021)
Van Rijsbergen, C.: Information Retrieval. Butterworth, 2nd edn. (1979)
Zampieri, M., Malmasi, S., Nakov, P., Rosenthal, S., Farra, N., Kumar, R.: Predicting the type and target of offensive posts in social media. In: Proceedings of NAACL-HLT, pp. 1415–1420 (2019)
Zampieri, M., Nakov, P., Rosenthal, S., Atanasova, P., Karadzhov, G., Mubarak, H., Derczynski, L., Pitenis, Z., Çöltekin, Ç.: SemEval-2020 task 12: Multilingual offensive language identification in social media. arXiv:2006.07235 (2020)
Central European University, Vienna, Austria
Petra Kralj Novak
Jožef Stefan Institute, Ljubljana, Slovenia
Petra Kralj Novak, Andraž Pelicon & Igor Mozetič
Ca' Foscari University, Venice, Italy
Teresa Scantamburlo & Fabiana Zollo
Jožef Stefan International Postgraduate School, Ljubljana, Slovenia
Andraž Pelicon
Sapienza University, Rome, Italy
Matteo Cinelli
Teresa Scantamburlo
Igor Mozetič
Fabiana Zollo
Correspondence to Fabiana Zollo.
About this paper
Kralj Novak, P., Scantamburlo, T., Pelicon, A., Cinelli, M., Mozetič, I., Zollo, F. (2022). Handling Disagreement in Hate Speech Modelling. In: , et al. Information Processing and Management of Uncertainty in Knowledge-Based Systems. IPMU 2022. Communications in Computer and Information Science, vol 1602. Springer, Cham. https://doi.org/10.1007/978-3-031-08974-9_54
DOI: https://doi.org/10.1007/978-3-031-08974-9_54
Publisher Name: Springer, Cham
Online ISBN: 978-3-031-08974-9
Is every ''group-completion'' map an acyclic map?
I start with a longer discussion which will result in a precise version of the question. I am puzzled about an issue with the Quillen plus construction. I have seen outstanding experts being confused about this point. There are the following different ways of calling a map $f:X \to Y$ a homology equivalence:
$f_*:H_*(X;\mathbb{Z}) \to H_*(Y;\mathbb{Z})$ is an isomorphism ("weak homology equivalence").
For each abelian system of local coefficients $A$ on $Y$ ($\pi_1 (Y)$ acts through an abelian group), the induced map $H_* (X;f^* A) \to H_* (Y;A)$ is an isomorphism ("strong homology equivalence").
For each system of local coefficients $A$ on $Y$, the induced map $H_* (X;f^* A) \to H_* (Y;A)$ is an isomorphism ("acyclic map").
The third condition is equivalent to each of
3'. The homotopy fibres of $f$ are acyclic. 3'''. $f$ can be identified with the Quillen plus construction.
EDIT: Before, I included the statement ''3''. $f$ is weak homology equivalence, $\pi_1 (f)$ is epi and $ker(\pi_1 (f))$ is perfect.'' This is false (does not imply the other two conditions); in my answer to Spaces with same homotopy and homology groups that are not homotopy equivalent? I gave an example of a weak homology equivalence that is even an isomorphism on $\pi_1$, but whose homotopy fibre is not acyclic. END EDIT
The implications $(3)\Rightarrow (2)\Rightarrow ( 1)$ hold. If all components of $Y$ are simply connected, then all these notions coincide; if $\pi_1 (Y)$ is abelian (each component), then $(2)\Rightarrow(3)$. In that case, $\pi_1 (X)$ is quasiperfect (i.e., its commutator subgroup is perfect). If $\pi_1 (Y)$ is nonabelian, then $(2)$ does not imply $(3)$ (take the inclusion of the basepoint into a noncontractible acyclic space). Even if $Y$ is an infinite loop space, a weak homology equivalence does not have to be strong: Take $X=BSL_2 (Z)$, $Y=Z/12$. The abelianization of $SL_2 (Z)$ is $Z/12$, and the map $SL_2 (Z) \to Z/12$ is a weak homology equivalence. The kernel, however, is a free group on two generators.
Now, many cases of such maps arise in the process of ''group completion''. Here are some examples
$X=K_0 (R) \times BGL (R)$ for a ring and $Y=\Omega B (\coprod_{P} B Aut (P))$ ($P$ ranges over all finitely generated projective $R$-modules). The commutator subgroup is perfect due to the Whitehead lemma.
$X=\mathbb{Z} \times B \Sigma_{\infty}$; $Y=QS^0$. The alternating groups are perfect.
$X=\mathbb{Z} \times B \Gamma_{\infty}$ (the stable mapping class group); $Y$ the Madsen-Weiss infinite loop space. Here there is no problem, $\Gamma_g$ is perfect for large $g$.
$X=\mathbb{Z} \times B Out(F_{\infty})$ (outer automorphisms of the free group), $Y=Q S^0$. Galatius proves that this is a weak homology equivalence and he states implicitly this map is a strong homology equivalence.
I explain why I am interested: if you only look at the homology of $X$ and $Y$, this is only an aesthetical question. I want to take homotopy fibres and as explained above, the distinction is essential and a mistake here can ruin any argument.
In all these cases, there is a topological monoid $M$ (for example $\coprod_{P} B Aut (P)$) and $X$ is the limit $M_{\infty}$ obtained by multiplying with a fixed element. There is an identification $\Omega BM$ with $Y$ that results from geometric arguments and does not play a role in this discussion.
There is a map $\phi:M_{\infty} \to \Omega BM$, which is the subject of the ''group-completion theorem'', see the paper "Homology fibrations and the ''group completion'' theorem" by McDuff-Segal. The map arises from letting $M$ act on $M_{\infty}$ and forming the Borel construction $EM \times_M M_{\infty} \to BM$. The point-preimage is $M_{\infty}$, the space $EM \times_M M_{\infty}$ is contractible and so the homotopy fibre is $\Omega BM$. $\phi$ is the ''geometric-fibre-to-homotopy-fibre'' map.
What Segal and McDuff prove is that if the action is by weak homology equivalences, then $\phi$ is a weak homology equivalence. This is what is typically used to established the above results. To prove that 1,2,3 are strong homology equivalences, one can invoke an extra argument which is specific to each case.
Now, in McDuff-Segal, I find the claim (Remark 2) that their methods give that $M_{\infty} \to \Omega BM$ is a strong homology equivalence and I want to understand this.
I convinced myself that the whole argument goes through with strong homology equivalences (and the corresponding notion of "strong homology fibration"). Proposition 2 loc.cit. then has the assumption that $M$ acts on $M_{\infty}$ by strong homology equivalences (one needs the notion of homology equivalences one wants to prove in the end - which I find plausible).
This amounts, say in example 4, to prove that the stable stabilization map $B Out(F_{\infty}) \to B Out(F_{\infty})$ is a homology equivalence in the strong sense. For "weak homology equivalence", one invokes the usual homology stability theorem (Hatcher-Vogtmann-Wahl). But it seems that for the map being a strong homology equivalence, one needs a stronger homological stability result. I can imagine how the homological stability arguments can be modified to include abelian coefficient system, but that is not a satisfying solution.
Here are, finally, some questions:
McDuff and Segal refer to an ''argument by Wagoner'' in his paper ''Delooping classifying spaces in algebraic K-Theory''. I am unable to find an argument in Wagoner's paper that proves quasiperfectness under general assumptions. What argument do McDuff and Segal refer to?
If $M$ is a topological monoid and if $M_{\infty} \to \Omega BM$ is a weak homology equivalence, is it always a strong homology equivalence?
If not, do you know a counterexample?
If 2 is not true, is there a useful general criterion to prove that the group completion map is acyclic (besides the trivial case $H_1 (M_{\infty})=0$ and besides proving quasiperfectness of $\pi_1 (M_{\infty})$ by hands).
A related, but not central question:
What are good counterexamples to the ''group-completion'' theorem in general that explain why the hypothesis is essential?
at.algebraic-topology kt.k-theory-and-homology fundamental-group homotopy-theory algebraic-k-theory
Johannes Ebert
$\begingroup$ I don't have time for a long answer, but the standard notion of group completion in infinite loop space theory is an H-map X >--> Y, where \pi_0(Y) is a group and (to avoid an unconvincing morass in the literature) X and Y are homotopy associative and commutative such that \pi_0(X) >--> \pi_0(Y) is a group completion in the obvious sense and for every commutative ring (not just Z) the map H_*(X;R) >--> H_*(Y;R) is a localization of graded rings obtained by inverting the elements of \pi_0(X) [these elements being an R-basis for H_0(X;R)]. McDuff somewhere published corrections to M-Segal. $\endgroup$
– Peter May
$\begingroup$ @Peter May: Am I overlooking something here, or is it just a reformulation of the problem? I guess the important case is where $R$ is the group ring of the fundamental group of $Y$. But verifying the hypothesis that $\pi_0$ is central in this ring needs (say for $Out (F_{\infty})$) homological stability with abelian coefficients, just in the same way the argument works for constant coefficients. With the ''standard notion of group completion'', the question becomes: how do I see that the above maps are group completions? $\endgroup$
– Johannes Ebert
$\begingroup$ Sorry, I was just explaining the term ``group completion'' for those readers who might not know. Your question is all about isomorphisms, so perhaps doesn't make that clear. From the point of view of group completion as I defined it, introducing $M_{\infty}$ is irrelevant: if $M$ is a topological monoid, $\pi_0(M)$ is homotopically central in $M$, and $\pi_0(\Omega BM)$ is homotopically central in $\Omega BM$, then the natural map $M\to \Omega BM$ is a group completion. In that generality, $M_{\infty}$ as you define it using just one element can be misleading. I doubt this is helpful to you. $\endgroup$
$\begingroup$ I have written up what I know about this problem, and it is available at dpmms.cam.ac.uk/~or257/GCrem.pdf. $\endgroup$
– Oscar Randal-Williams
$\begingroup$ The corrections to the McDuff-Segal paper Peter May mentioned can be found as Lemma 3.1 in D.McDuff - "The homology of some groups of diffeomorphisms.". $\endgroup$
– archipelago
I think I have been able to reproduce the "argument by Wagoner" (perhaps it was removed from the published version?). It certainly holds in more generality than what I have written below, using the notion of "direct sum group" in Wagoner's paper (which unfortunately seems to be a little mangled).
Let $M$ be a homotopy commutative topological monoid with $\pi_0(M)=\mathbb{N}$. Choose a point $1 \in M$ in the correct component and let $n \in M$ be the $n$-fold product of 1 with itself, and define $G_n = \pi_1(M,n)$. The monoid structure defines homomorphisms $$\mu_{n,m} : G_n \times G_m \longrightarrow G_{n+m}$$ which satisfy the obvious associativity condition. Let $\tau : G_n \times G_m \to G_m \times G_n$ be the flip, and $$\mu_{m,n} \circ \tau : G_n \times G_m \longrightarrow G_{n+m}$$ be the opposite multiplication. Homotopy commutativity of the monoid $M$ does not ensure that these two multiplications are equal, but it ensures that there exists an element $c_{n,m} \in G_{n+m}$ such that $$c_{n,m}^{-1} \cdot \mu_{n,m}(-) \cdot c_{n,m} = \mu_{m,n} \circ \tau(-).$$
Let $G_\infty$ be the direct limit of the system $\cdots \to G_n \overset{\mu_{n,1}(-,e)}\to G_{n+1} \overset{\mu_{n+1,1}(-,e)}\to G_{n+2} \to \cdots$.
Theorem: the derived subgroup of $G_\infty$ is perfect.
Proof: Let $a, b \in G_n$ and consider $[a,b] \in G'_\infty$. Let me write $a \otimes b$ for $\mu_{n,m}(a, b)$ when $a \in G_n$ and $b \in G_m$, for ease of notation, and $e_n$ for the unit of $G_n$.
In the direct limit we identify $a$ with $a \otimes e_n$ and $b$ with $b \otimes e_n$, and we have $$b \otimes e_n = c_{n,n}^{-1} (e_n \otimes b) c_{n,n}$$ so $b \otimes e_n = [c_{n,n}^{-1}, (e_n \otimes b)] (e_n \otimes b)$. Thus $$[a \otimes e_n, b \otimes e_n] = [a \otimes e_n, [c_{n,n}^{-1}, (e_n \otimes b)] (e_n \otimes b)]$$ and because $e_n \otimes b$ commutes with $a \otimes e_n$ this simplifies to $$[a \otimes e_n, [c_{n,n}^{-1}, (e_n \otimes b)]].$$ We now identify this with $$[a \otimes e_{3n}, [c_{n,n}^{-1}, (e_n \otimes b)] \otimes e_{2n}]$$ and note that $a \otimes e_{3n} = c_{2n,2n}^{-1}(e_{2n} \otimes a \otimes e_{n})c_{2n,2n} = [c_{2n,2n}^{-1}, (e_{2n} \otimes a \otimes e_{n})]\cdot (e_{2n} \otimes a \otimes e_{n})$. Again, as $(e_{2n} \otimes a \otimes e_{n})$ commutes with $[c_{n,n}^{-1}, (e_n \otimes b)] \otimes e_{2n}$ the whole thing becomes $$[a,b]=[[c_{2n,2n}^{-1}, (e_{2n} \otimes a \otimes e_{n})], [c_{n,n}^{-1}, (e_n \otimes b)] \otimes e_{2n}],$$ a commutator of commutators.
Oscar Randal-Williams
$\begingroup$ Great! Thanks! Do you see whether you can generalize it to the ''multi-object-case'' as in GMTW or the noncommutative case? I don't. $\endgroup$
$\begingroup$ No, I don't, and I invested considerable fruitless effort in trying to find a counterexample. $\endgroup$
Friday, September 14, 2018
Workshop | September 14 – 15, 2018 every day | 9 a.m.-6 p.m. | Stephens Hall, Geballe Room, 220
Sponsors: The Program in Critical Theory, Department of Comparative Literature, Center for Race and Gender, Department of English, Haas Institute for a Fair and Inclusive Society, Department of French, Department of Political Science, Townsend Center for the Humanities, Department of Rhetoric, Letters & Science Division of Arts & Humanities
[This is a multiday event. Please see below for important details regarding participation and schedule. Event flyer is also included below.]
Articulating and performing a mode of reading that responds to the challenges of the present has been a constant endeavor not only in literary studies, but in all academic disciplines. Technological and scientific developments require us to... More >
Attendance restrictions: Seminar participation requires completing preselected readings beforehand.
Representation Learning with Contrastive Predictive Coding
Seminar | September 14 | 11 a.m.-12 p.m. | 310 Sutardja Dai Hall
Speaker: Aaron van den Oord, Research Scientist, Deepmind
Sponsor: BAIR | CPAR | EECS
While supervised learning has enabled great progress in many applications, unsupervised learning has not seen such widespread adoption, and remains an important and challenging endeavor for artificial intelligence. In this work, we propose a universal unsupervised learning approach to extract useful representations... More >
Preparing for the job search: Environmental Engineering Seminar
Seminar | September 14 | 12-1 p.m. | 534 Davis Hall
Speaker/Performer: Dr. Andrew Green, PhD Counselor, Berkeley Career Center
Sponsor: Civil and Environmental Engineering (CEE)
The Struggle for Labour's Soul: A Book Talk with Matt Beech
Presentation | September 14 | 12-1 p.m. | 201 Moses Hall
Speaker/Performer: Matt Beech, IES Senior Fellow and Director of the Centre for British Politics at the University of Hull
Sponsors: Institute of European Studies, Anglo-American Studies Program
IES Senior Fellow and Director of the Centre for British Politics at the University of Hull, Dr. Matt Beech FRHistS, FRSA, will speak about his new book.
The Struggle for Labour's Soul: Understanding Labour's Political Thought Since 1945 (Routledge, 2018) is a revised second edition of this well known and highly regarded volume by politicians and leading scholars of the British Labour... More >
Yoga for Tension and Stress Relief (BEUHS664)
Workshop | September 14 | 12:10-1 p.m. | 251 Hearst Gymnasium
Speaker: Laurie Ferris, Yoga Instructor, Be Well at Work - Wellness Program
Sponsor: Be Well at Work - Wellness
Practicing yoga can release tension in your joints, give you greater range of movement, soothe your back, and grant you increased comfort in all aspects of your life. Learn how pranayama breathing can enhance your practice, and help liberate your mind in surprising ways. Yoga mats are provided, or you can bring your own. Comfortable clothing and bare feet recommended.
The Info-Metrics Framework in Pictures
Seminar | September 14 | 12:10-1:30 p.m. | 248 Giannini Hall
Speaker/Performer: Amos Golan, American University
Sponsor: Agricultural & Resource Economics
Info-metrics is the science of modeling, reasoning, and drawing inferences under conditions of noisy and insufficient information. It is at the intersection of information theory, statistical inference, and decision-making under uncertainty. It plays an important role in helping make informed decisions even when there is inadequate or incomplete information because it provides a... More >
Harnessing U.S. Health and Retirement Survey Data for Research in the Social Sciences: A Technical Workshop
Workshop | September 14 | 1-4 p.m. | 2232 Piedmont, Seminar Room
Moderator: Ryan Edwards, Research Associate, UC Berkeley Population Center
Sponsors: Population Science, Department of Demography
This workshop will focus on harnessing the powerful HRS dataset to assess an array of compelling questions in the social sciences. The objective is to stimulate new research efforts by expanding awareness of the content and structure of the HRS, with an emphasis on inquiries made possible by the secure computing environment.
Solid State Technology and Devices Seminar: Quantum Computing versus Classical Analog Algorithms
Seminar | September 14 | 1-2 p.m. | Cory Hall, 521 Hogan Room
Speaker/Performer: Eli Yablonovitch, Electrical Engineering and Computer Sciences Dept., University of California, Berkeley
Sponsor: Electrical Engineering and Computer Sciences (EECS)
Recently, at least six well-funded quantum computing startups have emerged, in addition to some large internal efforts in major companies on quantum information processing . It appears that the initial emphasis is not on the Shor Algorithm, which would require billions of qubits, but rather on optimization algorithms that aim to solve the Ising problem, that could possibly be done with as few as... More >
Student Probability/PDE Seminar: Metastability for Diffusions
Seminar | September 14 | 2:10-3:30 p.m. | 891 Evans Hall
Speaker: Fraydoun Rezakhanlou, UC Berkeley
It is well-known that diffusions with gradient drifts exhibit metastable behavior. The large deviation estimates of Wentzel-Freidlin and classical Eyring-Kramers Formula give a precise description for such metastable behavior. For non-gradient models, the large-deviation techniques are still applicable, though no rigorous analog of Eyring-Kramers Formula is available. In this talk I give an... More >
MENA Salon: Saudi Arabia: Image and Reality in an Era of Reform
Workshop | September 14 | 3-4 p.m. | 340 Stephens Hall
Sponsor: Center for Middle Eastern Studies
Since Crown Prince Mohammed Bin Salman took power in June 2017, Saudi Arabia has become the center of international media attention on its new found reform agenda–from granting women the right to drive, effective in June 2018, to restricting the powers of the religious police, to new found connections and investments between the Kingdom and Silicon Valley. These reforms have not come without... More >
The School's First 50 Years: 1918–1968
Seminar | September 14 | 3:10-5 p.m. | 205 South Hall
Speaker: Michael Buckland
Sponsor: Information, School of
By 1900 there was an acute need for qualified librarians and no program in the western states to prepare them. At Berkeley, California's land-grant university, President Wheeler accepted the need but prevented action. The dramatic expansion of public library services around 1910 induced the California State Library to start a school in Sacramento until Berkeley could take over. In Fall 1918 the... More >
EECS Career Fair and Resume Workshop
Workshop | September 14 | 3:30-5 p.m. | Soda Hall, 306 HP Auditorium
Learn how to prepare for & navigate an internship or career fair from Katie Crawford, an Engineering focused Career Counselor.
Bring your laptop to work on your résumé as we go.
https://app.joinhandshake.com/events/206344/share_preview
Logic Colloquium: Elimination and consistency checking for difference equations (even though the theory is undecidable!)
Colloquium | September 14 | 4-5 p.m. | 60 Evans Hall
Speaker: Thomas Scanlon, UC Berkeley
In practice, an algebraic difference equation (of $N$ variables) is given by a set Σ of polynomials in the variables $\{ x_{i,j} ~:~ 1 \leq i \leq N, j \in \mathbb N \}$ and one looks for sequences of $N$-tuples of numbers $((a_{1,j})_{j=0}^\infty , …, (a_{N,j})_{j=0}^\infty )$ as solutions in the sense that for each $P \in \Sigma $, the equations $P(a_{1,j}, …, a_{n,j}; a_{1,j+1}, …,... More >
Dalton Seminar in Inorganic Chemistry: The Chemistry of Nanoscale Phosphides: Building Complex Inorganic "Molecules" with Atom-Level Precision
Seminar | September 14 | 4-5 p.m. | 120 Latimer Hall
Featured Speaker: Brandi Cossairt, Department of Chemistry, University of Washington
Our research focuses on solution-processable chemical systems capable of sunlight absorption, color-pure emission, charge transfer, and fuels generation. Towards this end we seek to address fundamental challenges in the field of inorganic chemistry, specifically controlling the composition, structure and function of nanoscale light absorbers and molecular catalysts, and controlling the... More >
Student Arithmetic Geometry Seminar: The p-adic Cohomology of Drinfeld half-spaces
Speaker: Koji Shimizu, UC Berkeley
This is a survey talk on the paper "Cohomology of p-adic Stein spaces" by Colmez, Dospinescu and Niziol. Drinfeld half-spaces are a p-adic analogue of the complex upper half-plane, and our goal is to describe p-adic (pro-)etale cohomology of these rigid analytic spaces in terms of Steinberg representations of the general linear group. In this talk, I will explain several key ingredients in... More >
Colloquium: Jonathan Glasser, College of William and Mary: The Muslim-Jewish Musical Question in Algeria and Its Borderlands
Colloquium | September 14 | 4:30 p.m. | 128 Morrison Hall
Sponsor: Department of Music
Jonathan Glasser is a historical anthropologist whose work focuses on modern North Africa, with particular attention to Algeria and Morocco. His first book, The Lost Paradise: Andalusi Music in Urban North Africa (University of Chicago Press, 2016) explored questions of revival and transmission in an urban performance practice in northwestern Algeria and eastern Morocco. His current project looks... More >
Annales Universitatis Mariae Curie-Skłodowska, sectio A – Mathematica
General Lebesgue integral inequalities of Jensen and Ostrowski type for differentiable functions whose derivatives in absolute value are h-convex and applications
Sever Dragomir
Some inequalities related to Jensen and Ostrowski inequalities for general Lebesgue integral of differentiable functions whose derivatives in absolute value are h-convex are obtained. Applications for f-divergence measure are provided as well.
Keywords: Ostrowski's inequality; Jensen's inequality; f-divergence measures.
Estimating transmission dynamics and serial interval of the first wave of COVID-19 infections under different control measures: a statistical analysis in Tunisia from February 29 to May 5, 2020
Khouloud Talmoudi1,2,
Mouna Safer1,2,
Hejer Letaief1,2,
Aicha Hchaichi1,2,
Chahida Harizi3,
Sonia Dhaouadi1,
Sondes Derouiche1,
Ilhem Bouaziz1,
Donia Gharbi1,
Nourhene Najar4,
Molka Osman1,
Ines Cherif4,
Rym Mlallekh4,
Oumaima Ben-Ayed4,
Yosr Ayedi4,
Leila Bouabid1,
Souha Bougatef1,
Nissaf Bouafif ép Ben-Alaya1,2,4 &
Mohamed Kouni Chahed4
Describing the transmission dynamics of an outbreak and the impact of intervention measures is critical to planning responses to future outbreaks and to providing timely information that guides policy makers' decisions. We estimated the serial interval (SI) and the temporal reproduction number (Rt) of SARS-CoV-2 in Tunisia.
We collected data from investigations and contact tracing carried out between March 1, 2020 and May 5, 2020, as well as illness onset data covering February 29–May 5, 2020, from the National Observatory of New and Emerging Diseases of Tunisia. A maximum likelihood (ML) approach was used to estimate the dynamics of Rt.
Four hundred ninety-one infector–infectee pairs were involved, with 14.46% of pairs showing pre-symptomatic transmission. The SI follows a gamma distribution with mean 5.30 days [95% Confidence Interval (CI) 4.66–5.95] and standard deviation 0.26 [95% CI 0.23–0.30]. We also estimated large changes in Rt in response to the combined lockdown interventions: Rt moved from 3.18 [95% Credible Interval (CrI) 2.73–3.69] to 1.77 [95% CrI 1.49–2.08] with the curfew, and fell below the epidemic threshold (0.89 [95% CrI 0.84–0.94]) with the national lockdown.
Overall, our findings highlight the contribution of these interventions to interrupting transmission of SARS-CoV-2 in Tunisia.
Since December 2019, the epidemic of the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the causative agent of coronavirus disease 2019 (COVID-19), has been spreading. Initially detected in Wuhan, China [1], it was officially declared a pandemic on March 11, 2020 by the World Health Organization [2].
In Tunisia, as of January 22, 2020, the government implemented early prevention measures, including screening at points of entry and systematic 14-day isolation of travelers returning from risk areas. The first confirmed case, an international traveler from Italy, was reported on March 2, 2020. One week later, the Tunisian government reinforced its suppression strategy with additional preventive measures. Following the reporting of 13 new cases on March 12, 2020, closure of school and university facilities was announced. The government announced further prevention measures, notably closure of the border with Italy as of March 14. On March 17, 2020, a nationwide curfew starting on March 18, 2020 was decided, and the closure of all sea and air borders was applied as of the same date. On March 20, a national lockdown, with a ban on transport between governorates, was announced effective March 22. Finally, the transition to risk level 3 was announced on March 22, 2020.
Data on the transmission of the COVID-19 virus in Tunisia are accumulating daily. These data are vitally important for controlling the spread of the virus and containing the current pandemic. Determination of the serial interval (SI), the time between symptom onset in the primary patient (infector or index case) and symptom onset in the patient receiving the infection from the infector (the infectee or secondary case), is fundamental for estimating the basic reproduction number (R0), i.e. the number of infectees resulting from one infector throughout the entire infectious period [3, 4].
Besides, the temporal reproduction number Rt is one of the key parameters in public health because it determines the extent of the epidemic: it characterizes the number of people infected by a contagious person during the period of his or her infection, summarizes the potential transmissibility of the disease and indicates whether an epidemic is under control.
Up to now, the reproduction number (Rt) has only been estimated retrospectively, for periods in which all secondary cases had been detected. In terms of policy making and evaluation during outbreaks, obtaining estimates of the temporal tendency of the reproduction number covering as recent a time as possible is critical [5].
Rt is the only reproduction number easily estimated in real time [6]. Moreover, effective control measures undertaken at time t are expected to result in a sudden decrease in Rt. Hence, assessing the impact of public health interventions to mitigate the disease is easier by using estimates of Rt. For these reasons, we focus on estimating the instantaneous reproduction number Rt in Tunisia.
Data on SARS-CoV-2 cases were collected from the National Observatory of New and Emerging Diseases of Tunisia. The first dataset consists of the time series of symptom onsets reported from February 29, 2020 to May 5, 2020. The second dataset was obtained from contact tracing between March 1 and May 5, 2020. It was screened for clearly identified transmission events, i.e. known pairs of index and secondary cases with dates of symptom onset for both. Data were anonymized for this study; we report only the confirmed infector/infectee pairs observed during the study period.
Inference methods
A two-step procedure is used to estimate Rt. It combines data informing the SI with the daily time series of symptom onsets [7]. The first step uses data on known pairs of index (infector) and secondary (infectee) cases to estimate the SI distribution; the second step estimates the time-varying reproduction number jointly from the disease onset time series and from the SI distribution fitted in the first step.
Estimation of the serial interval distribution
The serial interval distribution can be estimated during an ongoing outbreak using interval-censored line-list data, i.e. the lower and upper limits of the dates of symptom onset in index and secondary cases [8]. For each infector/infectee pair, the delay between the date of symptom onset reported by the infector and the date of symptom onset reported by the infected person is calculated [9]. In some cases the infected person develops symptoms before the person transmitting the virus; the difference between the two dates is then negative.
Maximum likelihood (ML) estimates and the Akaike information criterion (AIC) are used to evaluate widely used parametric candidate models for the SARS-CoV-2 serial interval distribution, namely the normal, lognormal, Weibull and gamma distributions. Since our SI data include a considerable number of non-positive values, we fit the four distributions both to the positive values only (truncated data) and to shifted data, in which 12 days are added to each observation [9]. Assessments and projections based on the truncated data should nevertheless be treated with caution, and we do not believe there is cause for excluding the non-positive data.
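As a minimal illustration of this model-selection step (not the authors' analysis code), the sketch below fits the four candidate distributions by maximum likelihood with scipy and compares them by AIC for both the truncated and the 12-day-shifted data; the `si_days` array is a hypothetical toy sample, not the Tunisian line list.

```python
# Minimal sketch, not the authors' code: ML fits of candidate serial-interval
# distributions compared by AIC, for truncated and shifted data.
import numpy as np
from scipy import stats

si_days = np.array([4, 6, 2, -1, 5, 7, 3, 0, 9, 5, 4, -2, 6, 8, 3])  # toy delays (days)

def aic_for(dist, data, n_free, **fit_kwargs):
    """Fit `dist` by maximum likelihood and return its AIC."""
    params = dist.fit(data, **fit_kwargs)
    loglik = np.sum(dist.logpdf(data, *params))
    return 2 * n_free - 2 * loglik

candidates = {
    "normal":    (stats.norm,        2, {}),
    "lognormal": (stats.lognorm,     2, {"floc": 0}),
    "weibull":   (stats.weibull_min, 2, {"floc": 0}),
    "gamma":     (stats.gamma,       2, {"floc": 0}),
}

for label, data in [("truncated", si_days[si_days > 0].astype(float)),
                    ("shifted", si_days.astype(float) + 12.0)]:
    aics = {name: aic_for(dist, data, k, **kw)
            for name, (dist, k, kw) in candidates.items()}
    best = min(aics, key=aics.get)
    print(label, {n: round(a, 1) for n, a in aics.items()}, "-> best:", best)
```

With real delays in place of the toy array, the printed AIC comparison mirrors the kind of summary reported in Table S1.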
Estimation of the temporal reproduction number
At the beginning of an epidemic, when the whole population is susceptible (i.e. not immune), this number takes on a particular value denoted R0 and called basic reproduction number [10].
The calculation of R0 is based on three underlying assumptions as follows:
The screening strategy in Tunisia is assumed to be constant,
Spatial structure is neglected,
The incidences used are those available from February 29, 2020 until March 18, 2020 (date of the curfew) for R0, and until May 5, 2020 for the temporal reproduction number.
During the outbreak, when the proportion of immunized persons becomes large enough to slow the transmission of the virus (by an effect similar to a reduction in the number of individuals still susceptible), we speak of the effective, or temporal, reproduction number, denoted Rt [11].
Analyses for estimating the reproduction number were conducted using the EpiEstim package [7, 12, 13] in the R statistical software (version 3.6.3) [14]. This package addresses the situation where the epidemic under study is still ongoing, in particular when the effectiveness of control measures has to be evaluated, so that the total number of infections caused by the most recently detected cases is not yet known. The approach implemented in EpiEstim yields the instantaneous reproduction number, which is prospective: its calculation is based on the potential number of secondary infections that a cohort of cases could have caused if the conditions of transmissibility had remained the same as at the time of their detection.
Let us denote by It the total number of cases (local and imported) with symptom onset at time step t. Following [6, 7], the time-dependent reproduction number Rt is defined as the ratio of the number of new infected cases at time t, It, to the total infection potential across all infected individuals at time t, Λt. If there is a single serial interval distribution ωs (s = 1, 2, ...), representing the probability of a secondary case arising a time period s after the primary case, each incident case that appeared at a previous time step t-s contributes to the current infectiousness at a relative level given by ωs. Therefore, conditional on ωs, Λt can be computed as follows:
$$ \Lambda_t(\omega_s)=\sum_{s=1}^{t} I_{t-s}\,\omega_s $$
Formally, the EpiEstim package maximizes the likelihood of the incidence data (seen as Poisson counts) observed over a time window of size τ ending at t. The assumption made here is that the reproduction number is constant over this time window [t-τ, t]. The estimate of the reproduction number for the time interval [t-τ, t], denoted Rt,τ, satisfies:
$$ R_{t,\tau}=\underset{R_t}{\operatorname{argmax}}\;\prod_{k=t-\tau}^{t}\frac{\left(R_t\,\Lambda_k(\omega_s)\right)^{I_k}\exp\!\left(-R_t\,\Lambda_k(\omega_s)\right)}{I_k!} $$
Thereby, an estimate of Rt is obtained from both the incidence and the serial interval data, from which the mean and 95% intervals of Rt can be computed. This formulation can also be used for early detection of the effect of control measures on the incidence of new cases.
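A minimal sketch of these two formulas follows, with illustrative toy numbers rather than the Tunisian incidence, and not the EpiEstim implementation itself: for a Poisson likelihood with mean RtΛk, the maximizer over the window [t-τ, t] has the closed form R̂ = ΣIk / ΣΛk.

```python
# Minimal sketch with toy numbers: infection potential Lambda_t and the
# windowed ML estimate of R (closed form: sum of cases / sum of Lambda).
import numpy as np

incidence = np.array([0, 1, 1, 2, 3, 5, 8, 12, 17, 20, 25, 28, 30, 27, 24])  # I_t (toy)
w = np.array([0.05, 0.20, 0.30, 0.25, 0.12, 0.05, 0.03])  # discretized SI, sums to 1

def infection_potential(t):
    """Lambda_t = sum_{s=1}^{t} I_{t-s} * w_s (w_s taken as 0 beyond its support)."""
    return sum(incidence[t - s] * w[s - 1] for s in range(1, t + 1) if s <= len(w))

def windowed_R(t, tau=6):
    """ML estimate of R over the weekly window [t - tau, t]."""
    ks = range(max(t - tau, 1), t + 1)
    num = sum(incidence[k] for k in ks)
    den = sum(infection_potential(k) for k in ks)
    return num / den if den > 0 else float("nan")

for t in range(7, len(incidence)):
    print(t, round(windowed_R(t), 2))
```

In EpiEstim itself a gamma prior on R yields an analytical posterior with credible intervals; the closed-form ratio above corresponds to the maximum-likelihood point estimate of the same quantity.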
For the remainder of the document, Rt is denoted R for simplicity. If R > 1, one person infects more than one person on average and the epidemic is growing. As the epidemic spreads, R decreases as an increasing proportion of the population becomes immune. When the threshold for herd immunity is exceeded, R drops below 1, an epidemic peak is reached and the epidemic declines. Public health control measures can also decrease R and thus bring the epidemic peak forward, before the threshold of population immunity is reached. Therefore, knowing the value of R at time t is essential to determine the status of the epidemic.
Moreover, the overall infectivity due to previously infected individuals at time t, denoted λt, is a relative measure of the current force of infection. It is calculated as the sum of the previously infected individuals, weighted by their infectivity at time t, and is given by:
$$ \lambda_t=\sum_{k=1}^{t-1} I_{t-k}\,\omega_k $$
The critical input for these calculations is the distribution of the SI. A falling λt indicates a declining force of infection; a rising λt indicates that transmission is still growing.
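Reusing the toy `incidence` and `w` arrays from the previous sketch, the overall infectivity can be computed directly from this formula:

```python
# Minimal sketch: overall infectivity lambda_t = sum_{k=1}^{t-1} I_{t-k} * w_k.
def overall_infectivity(t):
    return sum(incidence[t - k] * w[k - 1] for k in range(1, t) if k <= len(w))

lam = [overall_infectivity(t) for t in range(1, len(incidence))]
# A falling lambda_t indicates a declining force of infection; a rising one, growth.
```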
Distribution of serial interval
Contact tracing data collected in the study period (between February 29 and May 5, 2020) included 188 unique infectors, with 117 index cases (infectors) who infected multiple people and 39 individuals appearing as both infector and infectee. Notably, 71 of the 491 (14.46%) reported pairs have a negative number of days separating the infector's and the infectee's symptom onset dates, indicating that the infected case developed symptoms earlier than the infector. The results thus suggest that transmission during the asymptomatic or pre-symptomatic phase of the infected source may be occurring, i.e. infected persons may be infectious before their symptoms appear.
Fitting the four parametric distributions both to the positive values (truncated data) and to the shifted data shows that the gamma distribution provides the best fit for the truncated data (followed closely by the Weibull and lognormal). Fitted distributions can be found in the Additional file: Fig. S1 and Table S1. The SI was also estimated using the full dataset; again, the gamma distribution provides the best fit (shifted or truncated) and is therefore the distribution we recommend for future epidemiological assessments and planning. The fitted gamma distribution gives a mean SI of 5.30 days [95% CI 4.66–5.95] with a standard deviation (SD) of 0.26 [95% CI 0.23–0.30] for SARS-CoV-2 in Tunisia.
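If these reported values are taken as the mean and SD of the fitted gamma distribution itself (an assumption; this is a minimal check, not part of the authors' analysis), the corresponding shape and scale parameters follow from the standard moment relations and can be used, for instance, to discretize the SI for the R computation sketched earlier:

```python
# Recover gamma shape/scale from the reported mean and SD (assumed to describe
# the fitted distribution itself): shape = (mean/sd)^2, scale = sd^2/mean.
from scipy import stats

mean_si, sd_si = 5.30, 0.26
shape = (mean_si / sd_si) ** 2
scale = sd_si ** 2 / mean_si

si_gamma = stats.gamma(a=shape, scale=scale)
print(round(si_gamma.mean(), 2), round(si_gamma.std(), 2))  # -> 5.3, 0.26
```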
Calculating R
As of May 5, 2020, 1028 SARS-CoV-2 cases with symptom onset data were reported, 246 (23.9%) of them imported. In Tunisia the epidemic started with one imported case from Italy, followed by cases with travel history and known contacts with imported cases (Fig. 1a). In March (weeks 1 to mid-5), most cases were local with no travel activity (353 of 562, or 63%). For this wave of the epidemic (Fig. 1b), R for total cases was above the epidemic threshold (> 1). For each day t of the epidemic, we estimated R over the weekly window ending on that day (Fig. 1b). Estimates are not shown for the very beginning of the epidemic because precise estimation is not possible in that period. R initially increased from a median value of 2.06 [95% CrI 1.36–2.97] early in the second week to 3.46 [95% CrI 2.70–4.35] at the end of the same week. R increased further in the third week, reaching a peak of 5.01 [95% CrI 4.03–6.13] at the beginning of that week. Note that in this wave the reproduction number estimated for local cases only (2.25 [95% CrI 1.62–3.03] at the beginning of week 3) is, as expected, much lower than the one obtained when assuming that all cases were linked by local transmission (5.01 [95% CrI 4.03–6.13]). The increase in R from week 2 to week 3, for both all cases and local cases, suggests increasing transmissibility, possibly explained by the existence of early "superspreaders".
Instantaneous effective reproduction number for SARS-CoV-2 by symptom onset date in Tunisia. The first panel shows the daily symptom onset time series for coronavirus from February 29, 2020 to May 5, 2020. The second panel shows the estimated reproduction number over sliding weekly windows (posterior mean and 95% credible interval, with estimates for a time window plotted at the end of the window); blue refers to all cases and red to local cases; the solid lines show the posterior means and the shaded zones the 95% credible intervals; the horizontal dashed red line indicates the threshold value R = 1
At the start of week 4, R decreased to 1.69 [95% CrI 1.49–1.90]. In April (mid-week 5 to week 9), almost all cases were locally transmitted, with few having a travel history (413 of 450, or 92%). R continued to decrease until the middle of week 7, with values below the epidemic threshold (< 1). It then increased again (mid-week 7 to mid-week 8), up to 1.40 [95% CrI 1.21–1.60] by April 17, 2020. Finally, R decreased again until the end of the study period, down to 0.68 [95% CrI 0.52–0.87]. The weekly estimates of R can be found in the Additional file: Table S2. This could reflect the impact of control measures or could be due to the depletion of susceptibles in the Tunisian population.
Besides, to reinforce our findings and provide more information on the epidemic situation in Tunisia, we computed the overall infectivity: it increased over the first part of the study period, from February 29, 2020 to April 8, 2020, remained stable until April 18, 2020, peaked again between April 22 and April 24, 2020, and finally decreased during the last part of the study period. The overall infectivity is illustrated in the Additional file: Fig. S2.
Impact of curfew and lockdown prevention measures on R
In Tunisia, the curfew was applied on March 18, 2020 and the lockdown on March 22, 2020. We estimated large changes in R in response to the combined interventions. Our results suggest that the lockdown was effective in reducing transmissibility (Fig. 2), as the estimated reproduction number during the lockdown was significantly lower than in the pre-intervention period. R moved from 3.18 [95% CrI 2.73–3.69] to 1.77 [95% CrI 1.49–2.08] with the curfew, meaning that transmissibility was reduced but the risk of contagion remained alarming. With the national lockdown, this value dropped to 0.89 [95% CrI 0.84–0.94] (< 1), indicating the substantial impact of this measure in reducing transmission of the epidemic.
Impact of interventions on the estimates of the reproduction number R during the study period in Tunisia. The first intervention was the curfew, applied on March 18, 2020, and the second was the national lockdown, applied on March 22, 2020
We analyzed the transmission dynamics of SARS-CoV-2 infection in Tunisia in the first 3 months of the epidemic, during which all prevention measures, especially the curfew and the national lockdown, were implemented.
Our results rely on a likelihood-based method to estimate the initial Rt and the SI. Using temporal symptom onset data and contact tracing, we provide robust estimates of the transmissibility parameters of SARS-CoV-2 during the first wave experienced in Tunisia.
Our estimates of the SI for SARS-CoV-2 in Tunisia are best described by a gamma distribution with an estimated mean of 5.30 days [95% CI 4.66–5.95]. This mean SI is higher than the published estimate of 4.6 days [95% CI: 3.5, 5.9] obtained from 18 certain pairs at an early stage of the COVID-19 epidemic [4], indicating that SARS-CoV-2 infection in Tunisia leads to slower cycles of transmission from one generation of cases to the next. Recent estimates of the mean serial interval of COVID-19 range from 3.9 days [95% CI 2.7–73] [15] and 4.0 days [95% CI 3.1–4.9] [4] to 7.5 days [95% CI 5.3–19] [3], based on data from 21, 28 and 6 pairs, respectively. The mean SI in Tunisia is also considerably lower than the reported mean serial intervals of 8.4 days for SARS [16] and 12.6 days [17]–14.6 days [18] for MERS, which indicates that calculations made using the SARS SI may introduce bias.
At least two sources of bias, both likely to cause underestimation of the SARS-CoV-2 serial interval, can be considered in our estimates. First, the distribution of the SI varies during an epidemic, with the time separating successive cases tending to shorten close to the epidemic peak [9]: a susceptible person is likely to become infected more quickly when surrounded by more than one infected person. Since our estimates are based on transmission events reported during the early stages of the outbreak, such compression is not explicitly accounted for, and we interpret the estimates as basic serial intervals at the beginning of an epidemic. However, our estimates may reflect effective serial intervals if some of the reported infections occurred amid growing clusters, as would be expected during a period of epidemic growth. Second, the date of symptom onset of each infector was likely based on individual recall of past activities. If the accuracy of recall decreases with time, recent encounters (short serial intervals) are more likely to be attributed as the source of infection than more distant encounters (longer serial intervals). This information is self-reported by infected cases, so an information or reporting bias may occur. In contrast, the reported serial intervals may be biased upwards by travel-related delays in transmission from primary cases infected in other countries before returning to Tunisia: if their infectious period began while still traveling, early transmission events with shorter serial intervals are unlikely to be observed. Given the diversity in type and reliability of these sources of bias, our findings should be interpreted with caution. They provide working hypotheses regarding the infectivity of the coronavirus in Tunisia, which will need to be validated as new data become available. In our study we used the symptom onset dates for both infector and infected confirmed cases; the dates of last exposure to infected person(s) for infector/infectee pairs were not considered because of missing data. Such dates would be useful to establish whether infection occurs during the asymptomatic phase. However, the negative delays between the infector's and the infectee's symptom onset dates strongly suggest that infection during the asymptomatic phase of SARS-CoV-2 does occur in Tunisia. Although contamination from asymptomatic persons had not been proven before, recent studies have highlighted the existence of such transmission [19, 20], albeit at a lower rate than from symptomatic cases [21]. The potential implications for the control of SARS-CoV-2 are mixed: while our lower estimates of R suggest that easing the lockdown may be feasible, asymptomatic transmission events remain a concern. Future work will provide more information on transmission of SARS-CoV-2 during the asymptomatic phase of infected sources in Tunisia.
In Tunisia, massive prevention strategies were applied at an early stage of the SARS-CoV-2 epidemic, including travel restrictions, isolation of infected individuals, active contact tracing, compulsory quarantine and lockdown [22]. Because all interventions were implemented within a short period, some separated by only a short time interval, their individual effects are by definition unidentifiable. Nevertheless, while individual impacts cannot be determined, their estimated joint impact is strongly empirically justified [23]. Our results suggest that the lockdown was effective in mitigating transmissibility of the disease, as the estimated reproduction number during the lockdown was significantly lower than in the pre-intervention period. We showed that, less than 5 weeks after the beginning of the epidemic, R dropped below the epidemic threshold. This corresponds to a decline in new cases, resulting from herd immunity together with public health control measures. However, although infectious diseases depend on climate factors [24], there is so far no evidence supporting an impact of warmer climate in reducing the transmissibility of the disease [25].
Likewise, we estimated an early decrease in R for the coronavirus outbreak in Tunisia. However, estimating R alone does not allow us to determine whether this reflects a true reduction in transmissibility, possibly due to the national lockdown, or the depletion of susceptibles. Furthermore, with R values dropping significantly, the acquisition of herd immunity will slow down rapidly, indicating the ability of the virus to spread rapidly should interventions be lifted. Interestingly, just after the first peak of incidence, R decreased below 1 for almost 2 weeks, rose above 1 for 5 days, and decreased below 1 again, indicating that the epidemic was not yet over; and indeed, a second peak was still to come.
Some limitations of this study should be highlighted. First, we presume that all infected cases are known and consistently reported over the study period. Violation of this assumption may bias our estimates of the SI and the reproduction number [26]. Similarly, variation in case tracking over time may bias our estimates of the time variation of Rt; in general, higher reporting rates can be expected in the early phase of the epidemic, with reporting fatigue becoming a factor in the late phase. Second, our estimates do not incorporate the regional heterogeneity that probably exists in transmission patterns, nor do they assess its impact on overall measures of the reproduction number.
Despite these limitations, the estimates reported in this study add to the general knowledge of coronavirus transmissibility parameters, which was previously dominated by estimates from other countries, or occasionally from other epidemics, often based on preliminary data. It remains important that revised parameters, based on exhaustive data from different geographical scales, be integrated into the planning of mitigation strategies for future pandemics. Moreover, the methods used in this study can be adapted to generate real-time estimates for future epidemics. As we continue to build epidemiological capacity in our country, urgent improvements need to be implemented, such as digitizing contact tracing and enabling rapid assessment of the transmissibility, in addition to the severity, of novel pathogens, so as to better inform public health interventions.
The datasets generated and/or analyzed in this study are not publicly available out of respect for patient privacy, but are available from the corresponding author upon reasonable request.
AIC: Akaike information criterion
COVID-19: Coronavirus disease 2019
ML: Maximum likelihood
R0: Basic reproduction number
Rt: Temporal reproduction number
SARS-CoV-2: Severe acute respiratory syndrome coronavirus 2
SI: Serial interval
Chinese Center for Disease Control and Prevention. Epidemic update and risk assessment of 2019 novel coronavirus 2020; 2020. [Available from: http://www.chinacdc.cn/yyrdgz/202001/P020200128523354919292.pdf]. Accessed 18 Feb 2020.
World Health Organization. Statement on the second meeting of the international health regulations (2005) emergency committee regarding the outbreak of novel coronavirus (2019-nCoV); 2020. [Available from: https://www.who.int/news-room/detail/30-01-2020-statement-on-the-second-meeting-of-the-international-health-regulations-(2005)-emergency-committee-regarding-the-outbreak-of-novel-coronavirus-(2019-ncov)]. Accessed 18 Feb 2020.
Li Q, Guan X, Wu P, Wang X, Zhou L, Tong Y. Early transmission dynamics in Wuhan, China, of novel coronavirus–infected pneumonia. N Engl J Med. 2020;382(13):1199–207.
Nishiura H, Linton NM, Akhmetzhanov AR. Serial interval of novel coronavirus (COVID-19) infections. Int J Infect Dis. 2020;93:284–6.
Cauchemez S, Boëlle PY, Donnelly CA, Ferguson NM, Thomas G, Leung GM, et al. Real-time estimates in early detection of SARS. Emerg Infect Dis. 2006;12:110–3.
Cori A, Nouvellet P, Garske T, Bourhy H, Nakouné E, Jombart T. A graph-based evidence synthesis approach to detecting outbreak clusters: an application to dog rabies. PLoS Comp Biol. 2018;14:e1006552.
Thompson RN, Stockwin JE, van Gaalen RD, Polonsky JA, Kamvar ZN, Demarsh PA, et al. Improved inference of time-varying reproduction numbers during infectious disease outbreaks. Epidemics. 2019;29:100356.
Cowling BJ, Fang VJ, Riley S, Peiris JM, Leung GM. Estimation of the serial interval of influenza. Epidemiology. 2009;20:344–7.
Du Z, Xu X, Wu Y, Wang L, Cowling BJ, Meyers L. Report: the serial interval of COVID-19 from publicly reported confirmed cases. Emerg Infect Dis. 2020;26(6):1341–3.
Modeling group of the ETE team (MIVEGEC Laboratory, CNRS, IRD, University of Montpellier). Report: estimation du nombre de reproduction temporel; 2020. [Available from: http://bioinfo-shiny.ird.fr:3838/Rt/]. Accessed 17 Apr 2020.
Cori A, Ferguson NM, Fraser C, Cauchemez S. A new framework and software to estimate time-varying reproduction numbers during epidemics. Am J Epidemiol. 2013;178(9):1505–12.
Cori A, Cauchemez S, Ferguson NM, Fraser C, Dahlqwist E, Demarsh PA, et al. EpiEstim: estimate time varying reproduction numbers from epidemic curves. R package version: 2.2–1; 2019. [Available from: https://CRAN.R-project.org/package=EpiEstim]. Accessed 8 Jul 2019.
Wallinga J, Teunis P. Different epidemic curves for severe acute respiratory syndrome reveal similar impacts of control measures. Am J Epidemiol. 2004;160(6):509–16.
R Development Core Team. R: a language and environment for statistical computing. R Foundation for statistical computing, Vienna, Austria; 2017. [Available from: http://www.r-project.org].
Zhao S, Gao D, Zhuang Z et al. Estimating the serial interval of the novel coronavirus disease (COVID-19): A statistical analysis using the public data in Hong Kong from January 16 to February 15, 2020, 13 May 2020, PREPRINT (Version 2). [Available from: https://doi.org/10.21203/rs.3.rs-18805/v2].
Lipsitch M, Cohen T, Cooper B, Robins JM, Ma S, James L, et al. Transmission dynamics and control of severe acute respiratory syndrome. Science. 2003;300(5627):1966–70.
Cowling BJ, Park M, Fang VJ, Wu P, Leung GM, Wu JT. Preliminary epidemiological assessment of MERS-CoV outbreak in South Korea. Eurosurveillance. 2015;20(25):7–13.
Park SH, Kim Y-S, Jung Y, Choi SY, Cho N-H, Jeong HW, et al. Outbreaks of Middle East respiratory syndrome in two hospitals initiated by a single patient in Daejeon, South Korea. Infect Chemother. 2016;48(2):99–107.
World Health Organization. Transmission of SARS-CoV-2: implications for infection prevention precautions; 2020. [Available from: https://www.who.int/news-room/commentaries/detail/transmission-of-sars-cov-2-implications-for-infection-prevention-precautions]. Accessed 9 July 2020.
Wei WE, Li Z, Chiew CJ, Yong SE, Toh MP, Lee VJ. Presymptomatic transmission of SARS-CoV-2_Singapore, January 23-march 16, 2020. Morb Mortal Wkly Rep. 2020;69(14):411.
Oran DP, Topol EJ. Prevalence of asymptomatic SARS-CoV-2 infection: a narrative review. Ann Intern Med. 2020;173(5):362. https://doi.org/10.7326/M20-3012.
Fang Y, Nie Y, Penny M. Transmission dynamics of the COVID-19 outbreak and effectiveness of government interventions: a data-driven analysis. J Med Virol. 2020;92:645–65.
Flaxman S, Mishra S, Gandy A, et al. Report 13: Estimating the number of infections and the impact of non-pharmaceutical interventions on COVID-19 in 11 European countries. Imperial College COVID-19 Response Team 2020. doi: https://doi.org/10.25561/77731.
Talmoudi K, Bellali H, Ben-Alaya N, Saez M, Malouche D, Chahed MK. Modeling zoonotic cutaneous leishmaniasis incidence in Central Tunisia from 2009-2015: forecasting models using climate variables as predictors. PLoS Negl Trop Dis. 2017;11(8):e0005844.
Xie J, Zhu Y. Association between ambient temperature and COVID-19 infection in 122 cities from China. Sci Total Environ. 2020;724(2020):138201.
White LF, Pagano M. Reporting errors in infectious disease outbreaks, with an application to pandemic influenza a/H1N1. Epidemiol Perspect Innov. 2010;7:12.
Talmoudi K, Safer M, Letaief H, Hchaichi A, Harizi C, Dhaouadi S, et al. Data from: estimating transmission dynamics and serial interval of the first wave of COVID-19 infections under different control measures: a statistical analysis in Tunisia from February 29 to May 5, 2020. Dryad. 2020. doi: https://doi.org/10.5061/dryad.b8gtht799.
We acknowledge the health teams involved in investigation and case tracing, admin team of National observatory of new and emerging diseases of Tunisia, the field teams, members of the regional monitoring units, and all lab team contributing to the laboratory diagnosis based on the real-time RT-PCR test. The authors thank World Health Organization (WHO) office of Tunis, Tunisia for assistance with data collection. We acknowledge Dryad for publishing the datasets [27].
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
National Observatory of New and Emerging Diseases, Tunis, Tunisia
Khouloud Talmoudi, Mouna Safer, Hejer Letaief, Aicha Hchaichi, Sonia Dhaouadi, Sondes Derouiche, Ilhem Bouaziz, Donia Gharbi, Molka Osman, Leila Bouabid, Souha Bougatef & Nissaf Bouafif ép Ben-Alaya
Research laboratory "Epidemiology and Prevention of Cardiovascular Diseases in Tunisia", Tunis, Tunisia
Khouloud Talmoudi, Mouna Safer, Hejer Letaief, Aicha Hchaichi & Nissaf Bouafif ép Ben-Alaya
Department of Epidemiology and Statistics, Abderrahman Mami Hospital, Ariana, Tunisia
Chahida Harizi
Department of Epidemiology and Public Health, Faculty of Medicine of Tunis, Tunis El Manar University, Tunis, Tunisia
Nourhene Najar, Ines Cherif, Rym Mlallekh, Oumaima Ben-Ayed, Yosr Ayedi, Nissaf Bouafif ép Ben-Alaya & Mohamed Kouni Chahed
Khouloud Talmoudi
Mouna Safer
Hejer Letaief
Aicha Hchaichi
Sonia Dhaouadi
Sondes Derouiche
Ilhem Bouaziz
Donia Gharbi
Nourhene Najar
Molka Osman
Ines Cherif
Rym Mlallekh
Oumaima Ben-Ayed
Yosr Ayedi
Leila Bouabid
Souha Bougatef
Nissaf Bouafif ép Ben-Alaya
Mohamed Kouni Chahed
MC and NBA conceived the study. KT, MS, NBA and MC carried out the analysis. MS and HL collected and managed contact tracing. HA and NBA checked and validated symptom onset data. KT drafted the first manuscript. MS, AH and SD provided helpful information. CH, IB, DG, NN, MO, IC, RM, OB-A, YA, LB, SB, [SD]1 and [SD]2 participated in data collection and ensured contact tracing. NBA and MC provided guidance and carefully revised the manuscript. All authors discussed the results, critically read and revised the manuscript, and gave final approval for publication.
Correspondence to Khouloud Talmoudi.
The follow-up data of individual patients were collected from the National Observatory of New and Emerging Diseases of Tunis, Tunisia. Administrative permissions were required to access and use the meta-data described in our study; permission was granted by the National Observatory of New and Emerging Diseases of Tunis, Tunisia. Data were anonymized for this study. Neither ethical approval nor individual consent was applicable.
Authors declare no conflict of interest.
Figure S1. Maximum likelihood distributions fit to transformed COVID-19 serial intervals (491 reported transmission events in Tunisia between March 1, 2020 and May 5, 2020). To evaluate several positive-valued distributions (lognormal, gamma and Weibull), we took two approaches to addressing the negative-valued data. First, we left-truncated the data (i.e., removed all non-positive values) for (A) all infection events. Second, we shifted the data by adding 12 days to each reported serial interval for (B) all infection events. Table S1. Model comparison for COVID-19 serial intervals based on all 491 reported transmission events in Tunisia between March 1, 2020 and May 5, 2020. Table S2. Weekly window estimates of the reproduction number (R) during the study period in Tunisia. Figure S2. Overall infectivity between February 29, 2020 and May 5, 2020.
Talmoudi, K., Safer, M., Letaief, H. et al. Estimating transmission dynamics and serial interval of the first wave of COVID-19 infections under different control measures: a statistical analysis in Tunisia from February 29 to May 5, 2020. BMC Infect Dis 20, 914 (2020). https://doi.org/10.1186/s12879-020-05577-4
Reproduction number
Healthcare-associated infection control
Person profile:
V. Chepyzhov
Authors: Chepyzhov V., Chechkin G.A., Pankratov L.S.
Homogenization of trajectory attractors of Ginzburg-Landau equations with randomly oscillating terms. Discrete and Continuous Dynamical Systems Series B. V.23. 2018. N. 3. P. 1133-1154.
Authors: Chepyzhov V., Ilyin A.A., Zelik S.V.
Vanishing viscosity limit for global attractors for the damped Navier-Stokes system with stress free boundary conditions. Physica D. V. 376-377. 2018. P. 31-38.
Authors: Bekmaganbetov K.A., Chechkin G.A., Chepyzhov V.
Weak convergence of attractors of reaction–diffusion systems with randomly oscillating coefficients. Applicable Analysis. 2017.
Authors: Chepyzhov V., Conti M., Pata V.
Averaging of equations of viscoelasticity with singularly oscillating external forces. Journal de Mathématiques Pures et Appliquées. V.108. 2017. N.6. P.841-868.
Authors: Chepyzhov V., Ilyin A.A.
On strong convergence of attractors of Navier–Stokes equations in the limit of vanishing viscosity. Mathematical Notes, March 2017, Volume 101, Issue 3–4, pp 746–750.
Authors: Chepyzhov V., Ilyin A.A., Zelik S.V.
Strong trajectory and global W1,p -attractors for the damped-driven Euler system in R2. Discrete and Continuous Dynamical Systems B. 2017. V. 22. N.5. P.1835-1855.
Authors: Bekmaganbetov K.A., Chechkin G.A., Chepyzhov V., Goritsky A.Yu.
Homogenization of Trajectory Attractors of 3D Navier--Stokes system with Randomly Oscillating Force. Discrete and Continuous Dynamical Systems A. V.37. 2017. N. 5. P. 2375-2393.
Homogenization of Random Attractors for Reaction-Diffusion Systems. Comptes Rendus Mecaniques. V.344. 2016. N. 11-12. P.753–758.
Authors: Chepyzhov V.
Approximating the trajectory attractor of the 3D Navier-Stokes system using various $\alpha$-models of fluid dynamics. Sbornik: Mathematics (2016), 207(4): 610.
Authors: Chepyzhov V., A.A. Bedrintsev
Design Space Description by Extremal Ellipsoids in Data Representation Problems
Authors: A.A. Bedrinsev, Chepyzhov V., S.S. Shernova
Extreme ellipsoids as approximations of design space in data predictive metamodeling problems. 2015. N. 2. P. 95-104.
Authors: Chepyzhov V., S.V. Zelik
Infinite energy solutions for Dissipative Euler equations in $\R^2$. Journal of Mathematical Fluid Mechanics. 2015. V. 17. P.513-532.
Trajectory attractors for non-autonomous dissipative 2d Euler equations. Discrete and Continuous Dynamical Systems B. 2015. V. 20. N.3. P.811-832.
Strong trajectory and global $\mathbf{W^{1,p}}$-attractors for the damped-driven Euler system in $\mathbb R^2$. ArXiv.org e-Print archive, 1511.0387sv1, 2015, pp. 1-26.
Totally dissipative dynamical processes and their uniform global attractors. Communications on Pure and Applied Analysis. 2014. V.13, N 5, pp.1989-2004.
Authors: Zelik S.V., Chepyzhov V.
Regular Attractors of Autonomous and Nonautonomous Dynamical Systems. Doklady Mathematics, 2014, Vol. 89, No. 1, pp. 92–97.
Uniform attractors of dynamical processes and non-autonomous equations of mathematical physics. Russian Math. Surveys, 68:2, 349–382
Authors: Vishik M., Zelik S.V., Chepyzhov V.
Regular attractors and their nonautonomous perturbations // Mat. Sb., 204:1 (2013), 3–46
Trajectory attractors for equations of mathematical physics // Abstracts of the International Conference "Differential Equations and Applications" in Honour of Mark Vishik, Moscow, June 4-7, 2012. P.9.
A minimal approach to the theory of global attractors // Discrete and Continuous Dynamical Systems. 2012. V. 32. N.6. P.2079-2088.
Attractors for Autonomous and Non-autonomous Navier-Stokes Systems. Abstracts of 7th International Congress on Industrial and Applied Mathematics, July 18-22, 2011, Vancouver, Canada. P.88.
Strong trajectory attractors for 2D Euler equations with dissipation. Abstracts of the International Mathematical Conference "50 Years of IPPI", July 25-27, 2011, Moscow. P.1-3.
Authors: Vishik M., Chepyzhov V.
Trajectory attractors of equations of mathematical physics. Uspekhi Mat. Nauk V. 66. N.4. P.3–102.
Authors: Chepyzhov V., Vishik M., S.V.Zelik
Strong trajectory attractors for dissipative Euler equations. Journal de Mathematiques Pures et Appliquees. V.96. 2011. P.395-407.
On trajectory attractors for non-autonomous 2D Navier-Stokes system in the Nicolskij space. Modern problems in analysis and in mathematical education. Proceedings. International conference dedicated to the 105-th anniversary of academician S.M.Nicolskij, 17-19 May 2010, MSU, Moscow. P.57-58.
Trajectory attractor for a system of two reaction-diffusion equations with diffusion coefficient δ(t) → 0+ as t → + ∞. Doklady Mathematics, Vol. 81, 2010, No. 2. P.196–200.
Authors: Chepyzhov V., Vishik M.
Trajectory attractor for reaction-diffusion system with diffusion coefficient vanishing in time. Discrete and Continuous Dynamical Systems A. V.27. 2010. N.4. P.1493-1509.
Authors: Chepyzhov V., Pata V., Vishik M.
Averaging of 2D Navier-Stokes equations with singularly oscillating forces. Nonlinearity. V.22. 2009. No. 2. P.351-370.
Trajectory attractor for reaction-diffusion system with a series of zero diffusion coefficients. Russian Journal of Mathematical Physics. V.16. 2009. N.2 P.208-227.
Trajectory attractors of reaction-diffusion systems with small diffusion. Sbornik: Mathematics. V.200. 2009. N.4. P.471–497.
Trajectory attractor for reaction-diffusion system containing a small diffusion coefficient. Doklady Mathematics, Vol. 79, 2009, No. 2. P.443–446.
Averaging of nonautonomous damped wave equations with singularly oscillating external forces. Journal de Mathematiques Pures et Appliquees. V.90. 2008. P.469-491.
Authors: Vishik M., Pata V., Chepyzhov V.
Time Averaging of Global Attractors for Nonautonomous Wave Equations with Singularly Oscillating External Forces. Doklady Mathematics, 2008, Vol. 78, No. 2, pp. 689–692.
Attractors for nonautonomous Navier–Stokes system and other partial differential equations. In the book: Instability in Models Connected with Fluid Flows, I. (C.Bardos, A.Fursikov eds.), International Mathematical Series, V.6, Springer. 2008, P.135-265.
Trajectory attractors for dissipative 2d Euler and Navier-Stokes equations. Russian Journal of Mathematical Physics. V.15. 2008. N.2. P.156-170.
Authors: Vishik M., Titi E.S., Chepyzhov V.
On convergence of trajectory attractors of the 3D Navier–Stokes-α model as α approaches 0. Sbornik: Mathematics V.198. 2007. N.12. P.1703–1736.
Non-autonomous 2D Navier-Stokes system with singularly oscillating external force and its global attractor. Journal of Dynamics and Differential Equations. V.19. 2007. N.3. P.655-684.
Trajectory Attractor for the 2d Dissipative Euler Equations and Its Relation to the Navier–Stokes System with Vanishing Viscosity. Doklady Mathematics, Vol. 76, 2007, No. 3, pp. 856–860.
The Global Attractor of the Nonautonomous 2D Navier–Stokes System with Singularly Oscillating External Force. Doklady Mathematics, Vol. 75, 2007, No. 2, pp. 236–239.
Authors: Chepyzhov V., Titi E.S., Vishik M.
On the convergence of solutions of the Leray-alpha model to the trajectory attractor of the 3D Navier-Stokes system. Discrete and Continuous Dynamical Systems. 17. 2007. N.3. P.481-500.
April 2019, Volume 25, Issue 3, pp 405–417
Comparison of physicochemical, sorption and electrochemical properties of nitrogen-doped activated carbons obtained with the use of microwave and conventional heating
Justyna Kaźmierczak-Raźna
Paulina Półrolniczak
Krzysztof Wasiński
Robert Pietrzak
A series of new nitrogen-doped activated carbons has been obtained via reaction with urea and chemical activation of Polish brown coal. In order to obtain nitrogen groups bonded in different ways to the carbonaceous matrix, the modification with urea was performed at two different stages of processing, i.e. on the precursor or on the char. Additionally, the effects of conventional and microwave heating on the physicochemical parameters, sorption abilities and capacitance behaviour of the prepared carbons were tested. All the materials under investigation were characterized by elemental analysis, surface area measurements and estimation of the number of surface functional groups. The sorption properties of the materials were tested towards methylene blue at 25 °C. Moreover, symmetric supercapacitors containing an organic electrolyte and the prepared carbons were tested in Swagelok® type cells using CV, GCD and EIS methods. Depending on the variant of preparation, the final products were micro/mesoporous nitrogen-doped activated carbons with a well-developed surface area ranging from 617 to 1117 m2/g, showing an acidic or intermediate acid-base character of the surface and different contents of nitrogen functional groups, varying from 1.0 to 5.6 wt%. The results obtained in this study show that the introduction of nitrogen and chemical activation of brown coal lead to activated carbons with very good sorption capacity toward organic dyes as well as good electrochemical parameters. A specific capacitance of up to 86 F/g was achieved for the N-doped carbon obtained by microwave carbonization followed by chemical activation with potassium carbonate.
Brown coal Chemical activation Activated carbons Incorporation of nitrogen Adsorption from liquid phase Electrochemical capacitors
The original version of this article was revised: The incorrect copyright holder name has been corrected.
A correction to this article is available online at https://doi.org/10.1007/s10450-019-00079-5.
Nitrogen-doped carbonaceous materials can be obtained in different ways; the most commonly applied method is thermal treatment of a carbonaceous precursor in the presence of a nitrogen-supplying agent such as urea, melamine or ammonia (Nowicki and Pietrzak 2011; Liang et al. 2004; Vargas et al. 2013; Seredych et al. 2009; Grzybek et al. 2008; Shirahama et al. 2005). Another popular method is pyrolysis and/or activation of plastics containing nitrogen species in their structure, e.g. polyacrylonitrile, polyamides or polyurethane (Sullivan et al. 2012; Hayashi et al. 2005; Zaini et al. 2010). The third variant of preparing nitrogen-enriched materials is deposition of amines and imines of any order on the surface of chemically or physically activated carbons (Gholidoust et al. 2017). Depending on the applied variant of modification, the carbon materials obtained are characterized by different contents of nitrogen, different types of functional groups and different positions of these groups in the carbonaceous structure (Chen et al. 2003; Boudou et al. 2006; Nowicki et al. 2010).
Regardless of the variant of preparation, nitrogen-doped activated carbons are usually produced with conventional heating (Nowicki et al. 2008, 2009; Kazmierczak-Razna et al. 2017; Jurewicz et al. 2008), whose main disadvantages are non-uniform heating of the samples and, more importantly, the necessity of using high temperatures, which can destroy previously introduced nitrogen functional groups (Kazmierczak-Razna et al. 2015). A very attractive alternative is the use of microwave energy for heating, which offers uniform heating of the whole sample volume. Moreover, microwave heating is based on direct conversion of electromagnetic energy into heat, so thermal treatment can be carried out at a lower temperature and for a shorter time than with conventional heating based on convection and radiation (Jacob et al. 1995; Remya and Lin 2011; Jones et al. 2002).
Taking the above into consideration, the main purpose of the present study was to obtain a series of N-doped activated carbons via reaction with urea and chemical activation of Polish brown coal, as well as to compare the influence of conventional and microwave heating on the physicochemical parameters, sorption abilities and capacitance behaviour of the carbons prepared.
2 Experimental
2.1 Preparation of activated carbons
The precursor of the activated carbons was Polish brown coal (B) from the Konin colliery. The starting material was milled and sieved to a grain size of 0.5 mm, demineralised (D) with concentrated HCl and HF according to the Radmacher and Mohrhauer method (Radmacher and Mohrhauer 1956) and subjected to further treatment including enrichment in nitrogen (U), pyrolysis (P) and chemical activation (A) in different sequences: (1) reaction with urea followed by pyrolysis and activation (BUPA samples) and (2) pyrolysis followed by reaction with urea and activation (BPUA samples). Unmodified activated carbons (BPA samples) were used as references. The sample codes and preparation details are outlined in the scheme presented in Fig. 1.
Scheme of preparation of the activated carbon samples
Incorporation of nitrogen (U): 50 g of carbonaceous material was impregnated with urea at the weight ratio of 1:1, dried to constant mass at 110 °C and then subjected to thermal treatment for 1 h, at 350 °C, in nitrogen flow (100 mL/min).
During pyrolysis (P) the samples were heated in two variants: in a conventional resistance furnace (Pc) or in a microwave muffle furnace (Pm). The process was performed under nitrogen atmosphere (flow rate 170 mL/min). The samples were heated at a rate of 10 °C/min from room temperature to the final pyrolysis temperature of 600 °C (microwave furnace) or 700 °C (conventional furnace) and held for 30 or 60 min, respectively.
The chars obtained were next impregnated with potassium carbonate solution (at a weight ratio of 2:1), dried to constant mass at 110 °C and then subjected to thermal treatment in nitrogen atmosphere (flow rate 330 mL/min). The impregnated samples were heated (10 °C/min) from room temperature to the final activation temperature of 700 °C (microwave furnace, Am) or 800 °C (conventional furnace, Ac), kept at this temperature for 15 or 30 min, respectively, and then cooled down in nitrogen flow. The products of activation were subjected to a two-step washing procedure, first with 5% hydrochloric acid solution and then with demineralised water until free of chloride ions.
2.2 Sample characterisation
Elemental analysis (C, H, N, S) of the precursor, chars and activated carbons obtained was carried out using a CHNS Vario EL III analyser (Elementar Analysensysteme GmbH, Germany). The ash content of all materials under investigation was determined according to the DNS ISO 1171:2002 standard, according to which the dried, powdered sample was burned in a microwave furnace at 850 °C for 60 min.
Characterization of the pore structure of the activated carbons was based on nitrogen adsorption–desorption isotherms measured at −196 °C on an Autosorb iQ surface area analyser. Prior to the isotherm measurements, the samples were outgassed at 150 °C for 8 h. On the grounds of these measurements, the BET surface area, total pore volume and average pore diameter were determined. The total pore volume was calculated at a relative pressure of approximately p/p0 = 0.99, at which all pores are completely filled with nitrogen. Average pore sizes and pore size distributions (Figs. 2, 3) were calculated from the adsorption branches of the isotherms using the BJH method. Additionally, micropore volumes and areas were determined by the t-plot method.
Pore size distribution in activated carbons obtained via conventional activation
Pore size distribution in activated carbons obtained via microwave activation
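As a rough illustration of how the BET surface area mentioned above is extracted from an N2 isotherm (a sketch with synthetic data points, not the Autosorb software output), the standard linearized BET plot can be fitted over the usual 0.05–0.30 relative-pressure range:

```python
# Minimal BET sketch with synthetic isotherm points (cm3 STP/g), not measured data.
import numpy as np

p_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])    # p/p0
v_ads = np.array([133., 153., 167., 180., 194., 209.])     # adsorbed N2, toy values

y = p_rel / (v_ads * (1.0 - p_rel))             # BET transform
slope, intercept = np.polyfit(p_rel, y, 1)      # linear BET plot
v_m = 1.0 / (slope + intercept)                 # monolayer capacity, cm3 STP/g
c_bet = slope / intercept + 1.0                 # BET constant

N_A, sigma_N2, V_molar = 6.022e23, 0.162e-18, 22414.0   # 1/mol, m2 per N2 molecule, cm3 STP/mol
s_bet = v_m / V_molar * N_A * sigma_N2                   # specific surface area, m2/g
print(f"S_BET ~ {s_bet:.0f} m2/g, C = {c_bet:.0f}")      # ~650 m2/g for these toy points
```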
To evaluate the content of oxygen functional groups of acidic and basic character, the Boehm method was applied (Boehm et al. 1964; Boehm 1994). Volumetric standards of NaOH (0.1 M) and HCl (0.1 M) were used as titrants. The pH of the materials was measured with a pH meter manufactured by Metrohm Ion Analysis (Switzerland), equipped with a Unitrode Pt1000 combined glass pH electrode, using the following procedure: a 0.4 g portion of the dry powdered sample was added to 20 mL of demineralised water and the suspension was stirred overnight to reach equilibrium. After that time, the pH of the suspension was measured.
2.3 Adsorption studies
Methylene blue adsorption was determined according to the following procedure. Portions of 0.025 g of the prepared activated carbons, with particle size ≤ 0.09 mm, were added to 0.05 L of methylene blue solution with initial concentrations ranging from 0 to 200 mg/L, and the suspensions were stirred for 12 h at 25 °C to reach equilibrium. After the adsorption equilibrium had been reached, the solution was separated from the sorbent by centrifugation. The dye concentrations in the solution before and after adsorption were determined using a double-beam UV–Vis spectrophotometer (CaryBio100, Varian) at a wavelength of 665 nm. All experiments were made in triplicate. The equilibrium adsorption amounts (mg/g) were calculated according to the following formula:
$$q_e=\frac{(c_i - c_e) \cdot V}{m}$$
where ci and ce (mg/L) are the initial and equilibrium concentrations of the dye, V (L) is the volume of the dye solution, and m (g) is the mass of activated carbon used. The equilibrium data were analysed with the Langmuir and Freundlich models.
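A minimal sketch of how such equilibrium data can be reduced with the formula above and then fitted to the two models is given below; the concentrations are invented for illustration, and scipy's curve_fit is used here simply as one convenient fitting tool (the paper's own fits may well have relied on the linearized forms).

import numpy as np
from scipy.optimize import curve_fit

# Illustrative (not measured) initial and equilibrium MB concentrations, mg/L,
# for 0.025 g of carbon in 0.05 L of solution.
c_i = np.array([25.0, 50.0, 100.0, 150.0, 200.0])
c_e = np.array([0.5, 2.0, 12.0, 45.0, 90.0])
V, m = 0.05, 0.025
q_e = (c_i - c_e) * V / m          # equilibrium uptake, mg/g (formula above)

def langmuir(c, q_max, k_l):
    return q_max * k_l * c / (1.0 + k_l * c)

def freundlich(c, k_f, inv_n):
    return k_f * c ** inv_n

(q_max, k_l), _ = curve_fit(langmuir, c_e, q_e, p0=[q_e.max(), 0.1])
(k_f, inv_n), _ = curve_fit(freundlich, c_e, q_e, p0=[1.0, 0.5])
print(f"Langmuir: q_max = {q_max:.1f} mg/g, K_L = {k_l:.3f} L/mg")
print(f"Freundlich: K_F = {k_f:.2f}, 1/n = {inv_n:.3f}")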
2.4 Electrochemical study
The prepared carbons were mixed with a polymer binder (poly(vinylidene fluoride-co-hexafluoropropylene), Kynar-Flex®, Atofina) and acetylene black (C65, Timcal®) in a mass ratio of 80:15:5. Next, N-methylpyrrolidone (> 99%, VWR) was added slowly. The obtained slurry was mixed for 20 h. The homogeneous, viscous slurry was then cast onto an aluminum current collector using the doctor blade technique. The prepared electrodes were vacuum dried at 105 °C for 24 h. Symmetric two-electrode Swagelok®-type cells were assembled in an argon-filled glovebox (MBraun, H2O < 0.5 ppm, O2 < 0.5 ppm). 1 M tetraethylammonium tetrafluoroborate (> 99%, Aldrich) in acetonitrile (> 99.8%, Aldrich) was used as the electrolyte. Electrochemical experiments were performed with a multichannel potentiostat–galvanostat VMP-3 (Biologic). The following techniques were applied: cyclic voltammetry (CV) at scan rates from 1 to 100 mV/s; galvanostatic charge/discharge (GCD) at current densities ranging from 0.1 to 10 A per gram of the two-electrode mass (A/g); and electrochemical impedance spectroscopy (EIS) in the frequency range from 100 kHz to 1 mHz with an amplitude of 10 mV. The operating voltage of the supercapacitors was fixed at 2.7 V. All results were calculated as specific capacitance values and expressed in Farads per mass of active material per one electrode (F/g).
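For the galvanostatic measurements, one common convention for converting a discharge into the specific capacitance per electrode of a symmetric two-electrode cell is sketched below; the current, discharge time, voltage window and mass are hypothetical, and the factor of 4 follows from treating the two identical electrodes as capacitors in series (the authors' exact calculation is not spelled out in the text).

# Hypothetical example: specific capacitance (F per gram of active material in one
# electrode) from a galvanostatic discharge of a symmetric two-electrode cell.
def specific_capacitance_per_electrode(i_a, dt_s, dv_v, m_total_g):
    c_cell = i_a * dt_s / dv_v          # cell capacitance, F
    return 4.0 * c_cell / m_total_g     # series-capacitor convention, F/g per electrode

# e.g. 1 mA discharge over 50 s across a 2.5 V window, 4 mg total active mass
print(specific_capacitance_per_electrode(1e-3, 50.0, 2.5, 4e-3))   # 20 F/g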
3 Results and discussion
3.1 Characterization of the activated carbons
Analysis of the data presented in Table 1 shows that the precursor used for the study is characterized by a very high content of mineral matter (ash) as well as organic non-carbon impurities, mainly oxygen. Demineralization of the starting brown coal with concentrated hydrochloric and hydrofluoric acids removes almost 84% of the mineral substance from its structure and leads to small but notable changes in the elemental composition. The demineralized sample had a slightly higher content of carbon, nitrogen and sulphur than the initial coal. Demineralization also leads to a small decrease in the content of hydrogen and of oxygen calculated by difference.
Table 1. Proximate and elemental analysis of the starting and demineralised coal (wt%). Columns: Ndaf*, Cdaf, Hdaf, Sdaf, Odiff**. * Dry-ash-free basis; ** determined by difference.
As indicated by the data presented in Table 2, pyrolysis and activation of the demineralized brown coal (both in conventional and microwave furnace) cause significant changes in its structure. Thermochemical treatment brings a substantial increase in the content of Cdaf (24.8–29.9 wt%), accompanied by a considerable decrease in the content of the non-carbon elements, with the exception of nitrogen. These changes are mainly induced by high temperature of the processes, in particular that of the activation process. Upon heating the least stable fragments of the coal structure (e.g. methylene, oxygen, sulphur bridges) break, leading to formation of side products of pyrolysis and activation, rich in hydrogen, such as water, hydrogen sulphide or hydrocarbons. Relatively small changes in the content of Ndaf suggest that brown coal contains nitrogen in the form of thermally stable functional groups. However, it should be emphasized that all changes are much greater in the variant with conventional heating (PcAc sample).
Table 2. Elemental composition of activated carbons (wt%). Samples: PmAm, UPcAc, UPcAm, UPmAc, UPmAm, PcUAc, PcUAm, PmUAc, PmUAm.
As follows from further analysis of the data collected in Table 2, a similar situation takes place for the samples modified with nitrogen at the precursor stage (UPA samples) or at the char stage (PUA samples). All the samples obtained via activation at 800 °C in a conventional furnace (Ac) are characterized by a very high carbon content, ranging from 93.3 to 95.4 wt%. For the analogous samples activated in a microwave furnace (Am), the content of Cdaf is about 7–8 wt% lower. All the nitrogen-enriched activated carbons show a very low content of Hdaf, which is most probably a consequence of gasification of the hydrogen-rich, amorphous fragments of the coal structure as well as progressing aromatisation of the carbonaceous matrix during the pyrolysis and activation processes. Moreover, activation with K2CO3 in a conventional furnace brings almost total removal of sulphur from the carbonaceous structure. Depending on the variant of heating applied during the activation step, the N-doped activated carbons differ significantly in the content of nitrogen and oxygen. Samples UPcAc, UPmAc, PcUAc and PmUAc, obtained via activation in a conventional furnace, show an almost 3–5 times lower Ndaf content than the analogous samples UPcAm, UPmAm, PcUAm and PmUAm activated in a microwave furnace. This is most probably a consequence of the higher activation temperature (by 100 °C) in conventional heating. Under the effect of high temperature and potassium carbonate, a considerable part of the nitrogen groups incorporated into the carbonaceous structure during the reaction with urea (at the precursor or char stage) underwent decomposition or transformation to more thermally stable nitrogen species, e.g. pyridinic (N-6), pyrrolic (N-5) or quaternary nitrogen (N-Q). It should be noted that urea turned out to be a less effective N-reagent than the mixture of ammonia and air at a ratio of 1:3. According to our earlier study (Nowicki et al. 2008), activated carbons prepared via ammoxidation (simultaneous nitrogenation and oxidation of carbonaceous materials) followed by conventional physical activation with steam or chemical activation with KOH showed higher nitrogen contents (2.6–3.8 and 0.8–2.1 wt%, respectively), despite the higher temperature of heat treatment or more aggressive activation conditions (potassium hydroxide as the activating agent, weight ratio of reagents equal to 4:1). Therefore, in further studies the ammoxidation process should be used (instead of urea impregnation) for the preparation of nitrogen-enriched activated carbons activated with potassium carbonate under microwave heating.
The situation is quite similar as far as the oxygen contribution is concerned. The samples activated in a microwave furnace show an about 3 wt% higher Odiff content than the analogous materials obtained by conventional heating. This is most likely due to the lower activation temperature, as a result of which more oxygen functional groups, especially those of acidic nature, are preserved on the surface of the produced activated carbons. It should also be noted that, despite the high-temperature treatment during the pyrolysis and activation processes, all the activated carbons show a lower ash content (a ballast that deteriorates the physicochemical and electrochemical properties) than the demineralized brown coal used for their preparation. The low ash content of the activated carbon samples (ranging from 0.5 to 2.7 wt%) follows from the fact that a considerable amount of the inorganic matter present in the precursor was removed in the reaction with potassium carbonate during the activation stage, as well as upon washing the activated carbons with a 5% HCl solution in order to remove the excess of the activating agent and side products of the activation.
The textural parameters of the activated carbon samples were determined from low-temperature nitrogen adsorption/desorption isotherms measured on an Autosorb iQ surface area analyser. As follows from the results shown in Table 3, the majority of the samples have a well-developed surface area and a porous structure dominated by micropores. The surface area of the activated carbons prepared ranges from 617 to 1117 m2/g, whereas the total pore volume varies between 0.39 and 0.69 cm3/g. The data presented in Table 3 also imply that the textural parameters of the materials obtained depend significantly on the variant of activation as well as on the sequence of the nitrogenation, pyrolysis and activation processes. Of the samples not subjected to reaction with urea, the better developed surface area (696 m2/g) and the greater total pore volume (0.49 cm3/g) were found for sample PmAm, activated in a microwave furnace; however, sample PcAc, activated in a conventional furnace, showed a more microporous structure. As follows from further analysis of the data collected in Table 3, almost all nitrogen-enriched activated carbons (with the exception of sample PmUAm) are characterized by more favourable textural parameters than the unmodified carbons. The most developed porous structure was determined for samples UPcAc and UPmAc, subjected to modification with urea at the precursor stage, followed by pyrolysis (conventional and microwave, respectively) and activation at 800 °C in a conventional furnace. Only these samples exceeded a surface area of 1050 m2/g (Table 3). The probable reason for such a strong development of the porous structure of these samples is the increased reactivity of the modified precursor, caused by the presence of great amounts of oxygen as well as nitrogen groups introduced during the reaction with urea. As mentioned earlier, during pyrolysis a significant amount of these functional groups underwent transformation to more thermally stable species, built into deeper layers of the carbon matrix. Consequently, these functional groups could react with the potassium carbonate during the activation step (as indicated by the drastic decrease in the nitrogen content, Table 2), facilitating penetration of the activating agent into the deeper layers of the carbonaceous structure and thus leading to a greater development of the porous structure. Much less favourable textural parameters (SBET ≈ 850 m2/g, Vt ≈ 0.54 cm3/g) were obtained for the analogous samples UPcAm and UPmAm activated in a microwave muffle furnace. The lower efficiency of porous structure development in these samples probably results from the much milder conditions of thermal treatment during the activation process (temperature 700 °C, time 15 min). This supposition is confirmed by the fact that samples UPcAm and UPmAm are characterized by a much higher nitrogen and oxygen content than the corresponding samples activated at 800 °C for 30 min (see Table 2).
Table 3. Textural parameters of activated carbons. Columns: surface area (m2/g; total and micropore), pore volume (cm3/g; total and micropore), Vm/Vt and D (nm). Vm/Vt: micropore contribution in total pore volume; D: average pore diameter.
Interestingly, significantly worse textural parameters were determined for the activated carbons enriched in nitrogen at the char stage, especially sample PmUAm, whose surface area (617 m2/g) and total pore volume (0.42 cm3/g) were even lower than those of the unmodified activated carbons. This situation is probably a consequence of the fact that the majority of nitrogen and oxygen species introduced upon reaction with urea were built into the surface layers of the char grains, because of the more ordered structure of the carbonaceous matrix. As a result, during the activation process these groups could hinder the access of potassium carbonate to the deeper layers of the carbonaceous structure, which led to gasification of the surface layers of the chars and thus to a less effective development of the porous structure. This assumption is to some degree confirmed by the fact that the majority of the samples enriched in nitrogen at the char stage (PUA series) are characterized by a higher mesopore contribution in the total pore volume as well as a wider average pore diameter than the analogous carbons modified at the precursor stage (UPA series), in particular samples PcUAm and PmUAm. Unfortunately, the textural parameters of the samples described in this study are significantly less favourable than those of the analogous carbons obtained by ammoxidation and chemical activation of brown coal with KOH, for which the surface area varied from 2292 to 3181 m2/g (Nowicki et al. 2008). This is most probably a consequence of the lower reactivity of potassium carbonate in comparison to potassium hydroxide, as well as of the lower weight ratio of reagents (2:1) used in this study. The worse textural parameters of the discussed activated carbons may also be related to the larger grain size of the starting brown coal, which was 0.5 mm. Therefore, further studies are needed to optimise the activation parameters so as to obtain activated carbons with better textural properties, especially in the case of the samples activated in a microwave furnace.
In order to characterize the surface chemistry of the activated carbons obtained (unmodified and N-doped), the contents of surface functional groups of acidic and basic character as well as the pH were measured. The data presented in Figs. 4 and 5 imply that the acid-base character of the surface depends first of all on the variant of heating applied during the pyrolysis and activation processes. The second factor influencing the acid-base properties of the carbonaceous materials prepared (although to a lesser extent) is the sequence of the nitrogen introduction, pyrolysis and activation processes. As shown, the pH value of the activated carbons varies in a fairly wide range, from 4.70 to 7.37; however, much higher pH values are reached by the samples obtained by activation in a conventional furnace. The biggest difference in this respect (1.98) was noted for samples UPmAc and UPmAm, while the most similar pH values (6.11 and 5.45, respectively) were shown by the analogous carbons UPcAc and UPcAm.
pH value of activated carbons
The content of surface functional groups for activated carbons
The greatest amount of functional species (over 1.70 mmol per gram of activated carbon) was found on the surface of samples PcUAm and PmUAc, subjected to the reaction with urea after the pyrolysis step, while the lowest (about 1 mmol/g) was found on samples UPcAc and UPmAc, enriched in nitrogen at the precursor stage. According to the data in Fig. 5, the activated carbons prepared differ significantly in the content of acidic and basic groups. The samples activated in a conventional furnace (Ac) show an intermediate acid-base character of the surface. The exception is sample PmUAc, which, like all the carbons activated in a microwave furnace (Am), shows a distinct predominance of acidic groups. These differences most probably result from the different thermal conditions of the activation process. The higher temperature (800 °C) used during conventional activation favours formation of a greater amount of basic functional groups and simultaneously causes decomposition of the acidic groups. The milder conditions of thermal treatment applied during activation in a microwave furnace (700 °C, 15 min) allowed preservation of a much larger number of acidic groups, especially in the nitrogen-enriched activated carbons. The greatest dominance of acidic surface functional groups was noted for sample PmUAm, which has almost three times more acidic groups (1.11 mmol/g) than basic ones (0.41 mmol/g). In turn, the greatest prevalence of groups of basic character was observed for sample UPmAc, containing 0.43 mmol/g of acidic groups and 0.58 mmol/g of basic groups. Comparison of the obtained data with the results of our earlier studies (Nowicki et al. 2008) indicates that the samples activated with potassium carbonate show an acid-base nature of the surface intermediate between the materials chemically activated with KOH (characterised by a great prevalence of acidic groups) and the samples physically activated with steam, for which a dominance of basic functional groups is observed.
3.2 Methylene blue adsorption
The data presented in Table 4 as well as in Figs. 6 and 7 clearly illustrate a significant effect of the variant of activation and of nitrogen doping on the sorption abilities of the activated carbons toward methylene blue (MB). Of the samples not modified with nitrogen, the more effective adsorbent of this organic dye proved to be sample PmAm (obtained by microwave-assisted pyrolysis and activation), whose sorption capacity was 31.7 mg/g higher than that of the analogous sample PcAc, obtained via conventional heat treatment. Most probably this is related to its better developed surface area as well as the considerably higher mesopore contribution in its porous structure (see Table 3), which favours the sorption of compounds with large particle sizes.
Table 4. Adsorption isotherm constants for the adsorption of methylene blue onto the activated carbons at 25 ± 1 °C: qe (mg/g), KL (L/mg), KF (mg/g) and 1/n. The commercial activated carbon Norit® SX2 is included for comparison.
Adsorption isotherms of methylene blue onto activated carbons obtained via conventional activation
Adsorption isotherms of methylene blue onto activated carbons obtained via microwave assisted activation
As mentioned above, introduction of nitrogen functional groups into the activated carbon structure significantly improves the sorption properties toward organic pollutants. However, it should be emphasized that the more effective adsorbents proved to be the carbons enriched in nitrogen at the precursor stage, especially those activated at 800 °C in a conventional furnace. The highest efficiency in methylene blue removal from water solution was shown by samples UPmAc, UPcAc and PcUAc, whose maximum adsorption capacities were 348.4, 316.5 and 308.6 mg/g, respectively. This is most probably a consequence of their well-developed surface area (891–1117 m2/g) and total pore volume (0.546–0.687 cm3/g) as well as the prevalence of basic functional groups on the surface. For the other samples (characterized by less favourable textural parameters), the increase in sorption capacity in comparison to the corresponding unmodified carbons is much lower and ranges from 34.4 mg/g for sample PmUAm to 205.2 mg/g for sample PmUAc. However, it should be emphasized that the sorption capacities obtained for the majority of the nitrogen-enriched carbons significantly exceed the sorption capacity of the commercial micro/mesoporous activated carbon Norit® SX2 (161.3 mg/g), which is very often used for water purification and decolourisation.
According to the equilibrium adsorption isotherms of methylene blue presented in Figs. 6 and 7, the amount of adsorbed dye increases significantly with increasing initial MB concentration in the water solution, until saturation. This suggests that at low methylene blue concentrations its adsorption on the activated carbon surface is rather random. At higher MB concentrations the active centres present on the adsorbent surface can be completely occupied by dye molecules, and the surface as well as the porous structure of the activated carbons becomes fully saturated. As the shape of the majority of the isotherms is smooth and single-plateau, it may also be supposed that a monolayer coverage of the adsorbent surface with the dye molecules is observed.
To investigate the interaction of the adsorbate molecules with the adsorbent surface, two well-known models, the Freundlich and Langmuir isotherms, were applied. According to the data collected in Table 4, the Langmuir isotherm fits the experimental data more accurately than the Freundlich isotherm, as evidenced by the high values of the correlation coefficient, varying between 0.9976 and 0.9999. Thus, according to the linear regression method, the methylene blue uptake is most probably realised as monolayer coverage of the activated carbon surface with the dye particles. The character of the Langmuir isotherms can also be described by the separation factor (RL), which informs about the nature of the adsorption process. For all the activated carbons under investigation, the values of the separation factor (RL) lie in the range 0–1, which indicates that the adsorption of methylene blue is favourable. Moreover, the values of RL decrease with increasing initial MB concentration in the water solution, which indicates that the sorption of this organic dye is more favourable at higher concentrations. As follows from further analysis of the data presented in Table 4, for all the activated carbons prepared the value of the slope 1/n (a measure of surface heterogeneity and of adsorption intensity) ranges between 0.0113 and 0.0934, which suggests that the adsorption conditions are favourable (methylene blue molecules have free access to the porous structure of the activated carbons) and indicates a very high heterogeneity of the adsorbent surface.
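The separation factor mentioned above is commonly computed as RL = 1/(1 + KL·c0); a brief illustration with an assumed KL value (not one of the fitted constants from Table 4) is shown below.

# Separation factor R_L = 1 / (1 + K_L * c_0) for the Langmuir model.
k_l = 0.25                              # L/mg, assumed for illustration
for c_0 in (25, 50, 100, 150, 200):     # initial MB concentrations, mg/L
    print(c_0, round(1.0 / (1.0 + k_l * c_0), 4))
# 0 < R_L < 1 indicates favourable adsorption, and R_L falls as c_0 grows.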
The electrochemical performance of the activated carbons as electrode materials for electrochemical double layer capacitors has been studied. The cyclic voltammetry characteristics are presented in Figs. 8, 9 and 10a, b; only curves recorded at scan rates of 1 and 100 mV/s are included in this paper. At low scan rates the CV curves were close to rectangular in shape. The values of specific capacitance calculated from the CV technique are given in Table 5. The highest capacitance was recorded for UPmAc (86 F/g), whereas the lowest was obtained for the PcAc carbon (only 5 F/g). The EDLC systems based on the studied carbons show typical resistive behaviour, with much worse charge propagation at the highest scan rate (100 mV/s). Pseudocapacitance from faradaic processes was not observed; any additional distortions of the CV curves are related only to electrostatic charge storage on the carbon surface. In Figs. 8, 9 and 10c the frequency response graphs are shown, and the capacitances calculated at the lowest frequency (1 mHz) are given in Table 5. The EIS technique provides data similar to CV: the maximum capacitance was 86 F/g for UPmAc and only 3 F/g for the PcAc carbon. UPmAc also has the highest capacitance at 0.1 Hz (10 s discharging time), and its performance remained satisfactory in the high-frequency region, even at 1 Hz. The studies also showed that the charge storage ability decreased dramatically for frequencies higher than 10 Hz for all tested carbons. Figures 8, 9 and 10d show the dependence of the specific capacitance on current density obtained with the galvanostatic charge/discharge method. Tests were carried out at current loads from 0.1 to 10 A/g. The specific capacitance decreases with increasing current density; the same relationship was observed in other papers (Wasiński et al. 2014, 2017). The specific capacitance values of the EDLC systems based on the UPmAc and PcAc carbons at a current density of 0.1 A/g were 78.5 F/g and less than 2 F/g, respectively. The data for a current density of 1 A/g are collected in Table 5. As seen, the specific capacitances decrease to 15 F/g or less for most of the investigated carbons; however, the UPmAc, UPcAm and UPcAc carbons retained more than 12 F/g even at a current density of 5 A/g. At the highest current density (10 A/g) the capacitance decreases to 10 F/g for UPmAc and even lower for the other tested carbons.
Cyclic voltammetry curves (a, b), capacitance vs. frequency response plot (c) and capacitance values at different charge/discharge current rates (d) for PcAc (green) and PmAm (red) carbons. (Color figure online)
Cyclic voltammetry curves (a, b), capacitance vs. frequency response plot (c) and capacitance values at different charge/discharge current rates (d) for UPcAc (green); UPcAm (blue); UPmAc (red); UPmAm (grey) carbons. (Color figure online)
Cyclic voltammetry curves (a, b), capacitance vs. frequency response plot (c) and capacitance values at different charge/discharge current rates (d) for PcUAc (green); PcUAm (blue); PmUAc (red); PmUAm (grey) carbons. (Color figure online)
Table 5. Capacitances of the carbons in the EDLC system (F/g), determined by CV (1 mV/s), EIS (1 mHz) and GCD (1 A/g).
Only the UPmAc carbon shows good performance as an electrode of an EDLC system; for the other studied samples the charge propagation and rate capability were average or poor. This behaviour is attributed to the low specific surface area of the synthesized carbons. The carbon synthesis route needs optimization in terms of the time and temperature of the pyrolysis and activation processes and the amount of urea. Generally, the incorporation of nitrogen before the pyrolysis and activation processes resulted in carbons with higher specific surface area, and thus a greater specific capacitance in the EDLC system could be obtained. Microwave-assisted pyrolysis is also desirable and leads to carbon materials with better electrochemical performance, whereas the activation process should be conducted conventionally.
The results presented and discussed above have shown that the application of microwave and conventional heating in the preparation of nitrogen-doped activated carbons from brown coal allows a wide gamut of carbonaceous adsorbents with very different physicochemical properties to be produced. Depending on the activation procedure, the final products were micro/mesoporous activated carbons with a well-developed porous structure, showing an acidic (microwave heating) or intermediate (conventional heating) acid-base character of the surface, and a very diverse content of nitrogen functional groups varying from 1.0 to 5.6 wt%. The results obtained during the adsorption tests have proved that introduction of nitrogen functional groups into the carbonaceous structure leads to activated carbons with very good sorption capacity toward methylene blue, reaching 350 mg/g. Unfortunately, the electrochemical tests have shown that only sample UPmAc performs well as an electrode material of an EDLC system (maximum capacitance 86 F/g), so the carbon synthesis route needs optimization in terms of textural parameters and the amount of nitrogen introduced into the carbon matrix.
This research was financially supported by grant MINIATURA 1 DEC-2017/01/X/ST5/00421 funded by the National Science Centre, Poland.
Boehm, H.P.: Some aspects of the surface chemistry of carbon blacks and other carbons. Carbon 32(5), 759–769 (1994)
Boehm, H.P., Diehl, E., Heck, W., Sappok, R.: Surface oxides of carbon. Angew. Chem. Int. Edit. Engl. 3, 669–677 (1964)
Boudou, J.P., Parent, P., Suarez-Garcia, F., Vilar-Rodil, S., Martinez-Alonso, A., Tascon, J.M.D.: Nitrogen in aramid-based activated carbon fibers by TPD, XPS and XANES. Carbon 44, 2452–2462 (2006)
Chen, W.C., Wen, T.C., Teng, H.: Polyaniline-deposited porous carbon electrode for supercapacitor. Electrochim. Acta 48, 641–649 (2003)
Gholidoust, A., Atkinson, J.D., Hashisho, Z.: Enhancing CO2 adsorption via amine-impregnated activated carbon from oil sands coke. Energy Fuels 31(2), 1756–1763 (2017)
Grzybek, T., Klinik, J., Motak, M., Papp, H.: Nitrogen-promoted active carbons as catalytic supports 2. The influence of Mn promotion on the structure and catalytic properties in SCR. Catal. Today 137, 235–241 (2008)
Hayashi, J., Yamamoto, N., Horikawa, T., Muroyama, K., Gome, V.G.: Preparation and characterization of high-specific-surface-area activated carbons from K2CO3-treated waste polyurethane. J. Colloid Interf. Sci. 281, 437–443 (2005)
Jacob, J., Chia, L.H.L., Boey, F.Y.C.: Review thermal and non-thermal interaction of microwave radiation with materials. J. Mater. Sci. 30, 5321–5327 (1995)
Jones, D.A., Lelyveld, T.P., Mavrofidis, S.D., Kingman, S.W., Miles, N.J.: Microwave heating applications in environmental engineering—a review. Resour. Conserv. Recy. 34, 75–90 (2002)
Jurewicz, K., Pietrzak, R., Nowicki, P., Wachowska, H.: Capacitance behaviour of brown coal based active carbon modified through chemical reaction with urea. Electrochim. Acta 53, 5469–5475 (2008)
Kazmierczak-Razna, J., Nowicki, P., Pietrzak, R.: The use of microwave radiation for obtaining activated carbons enriched in nitrogen. Powder Technol. 273, 71–75 (2015)
Kazmierczak-Razna, J., Nowicki, P., Wiśniewska, M., Nosal-Wiercińska, Pietrzak, R.: Thermal and physicochemical properties of phosphorus-containing activated carbons obtained from biomass. J. Taiwan Inst. Chem. E 80, 1006–1013 (2017)
Liang, Ch., Wei, Z., Xin, Q., Li, C.: Ammonia-treated activated carbon as support of a Ru–Ba catalyst for ammonia synthesis. React. Kinet. Catal. Lett. 83, 39–45 (2004)
Nowicki, P., Pietrzak, R.: Effect of ammoxidation of activated carbons obtained from sub-bituminous coal on their NO2 sorption capacity under dry conditions. Chem. Eng. J. 166, 1039–1043 (2011)
Nowicki, P., Pietrzak, R., Wachowska, H.: Comparison of physicochemical properties of nitrogen-enriched activated carbons prepared by physical and chemical activation of brown coal. Energy Fuels 22, 4133–4138 (2008)
Nowicki, P., Pietrzak, R., Wachowska, H.: Influence of metamorphism degree of the precursor on preparation of nitrogen enriched activated carbons by ammoxidation and chemical activation of coals. Energy Fuels 23, 2205–2212 (2009)
Nowicki, P., Pietrzak, R., Wachowska, H.: X-ray photoelectron spectroscopy study of nitrogen-enriched active carbons obtained by ammoxidation and chemical activation of brown and bituminous coals. Energy Fuels 24, 1197–1206 (2010)
Radmacher, W., Mohrhauer, O.: Demineralizing of coal for analytical purposes. Brennstoff-Chemie 37, 353–358 (1956)
Remya, N., Lin, J.G.: Current status of microwave application in wastewater treatment—a review. Chem. Eng. J. 166, 797–813 (2011)
Seredych, M., Portet, C., Gogotsi, Y., Bandosz, T.J.: Nitrogen modified carbide-derived carbons as adsorbents of hydrogen sulfide. J. Colloid Interf. Sci. 330, 60–66 (2009)
Shirahama, N., Mochida, I., Korai, Y., Choi, K.H., Enjoji, T., Shimohara, T., Yasutake, A.: Reaction of NO with urea supported on activated carbons. Appl. Catal. B-Environ. 57, 237–245 (2005)
Sullivan, P., Moate, J., Stone, B., Atkinson, J.D., Hashisho, Z., Rood, M.J.: Physical and chemical properties of PAN-derived electrospun activated carbon nanofibers and their potential for use as an adsorbent for toxic industrial chemicals. Adsorption 18, 265–274 (2012)
Vargas, D.P., Giraldo, L., Erto, A., Moreno-Piraján, J.C.: Chemical modification of activated carbon monoliths for CO2 adsorption. J. Therm. Anal. Calorim. 114(3), 1039–1047 (2013)
Wasiński, K., Walkowiak, M., Lota, G.: Humic acids as pseudocapacitive electrolyte additive for electrochemical double layer capacitors. J. Power Sources 255, 230–234 (2014)
Wasiński, K., Nowicki, P., Półrolniczak, P., Walkowiak, M., Pietrzak, R.: Processing organic waste towards high performance carbon electrodes for electrochemical capacitors. Int. J. Electrochem. Sci. 12, 128–143 (2017)
Zaini, M.A., Amano, Y., Machida, M.: Adsorption of heavy metals onto activated carbons derived from polyacrylonitrile fiber. J. Hazard. Mater. 180(1–3), 552–560 (2010)
1. Adam Mickiewicz University in Poznań, Faculty of Chemistry, Laboratory of Applied Chemistry, Poznań, Poland
2. Institute of Non-Ferrous Metals, Division in Poznań, Central Laboratory of Batteries and Cells, Poznań, Poland
Kaźmierczak-Raźna, J., Półrolniczak, P., Wasiński, K. et al. Adsorption (2019) 25: 405. https://doi.org/10.1007/s10450-019-00012-w
Recombinant Sort: N-Dimensional Cartesian Spaced Algorithm Designed from Synergetic Combination of Hashing, Bucket, Counting and Radix Sort
Peeyush Kumar | Ayushe Gangal | Sunita Kumari* | Sunita Tiwari
Computer Science and Engineering, G. B. Pant Government Engineering College, Delhi 110020, India
[email protected]
Sorting is an essential operation which is widely used and is fundamental to some very basic day-to-day utilities like searches, databases, social networks and much more. Optimizing this basic operation in terms of complexity as well as efficiency is cardinal. Optimization is achieved with respect to the space and time complexities of the algorithm. In this paper, a novel left-field N-dimensional cartesian spaced sorting method is proposed by combining the best characteristics of bucket sort, counting sort and radix sort, in addition to employing hashing and dynamic programming for making the method more efficient. A comparison between the proposed sorting method and various existing sorting methods like bubble sort, insertion sort, selection sort, merge sort, heap sort, counting sort, bucket sort, etc., has also been performed. The time complexity of the proposed model is estimated to be linear, i.e. O(n), for the best, average and worst cases, which is better than every sorting algorithm introduced till date.
recombinant sort, bucket sort, counting sort, radix sort, hashing, sorting algorithm
Sorting is the process of arranging given data in an ascending or descending fashion on the basis of a linear relationship among the data elements [1]. Sorting may be performed on numbers, strings or records containing both numbers and strings, like names, IDs, departments, etc., in alphabetical order, or in increasing or decreasing manner [2]. The exponential rise in the quantity of data available and being used calls for more efficient and less time-consuming sorting methods. Sorting algorithms are of two major types, namely comparison and non-comparison sorting. Comparison sort involves sorting the data elements by doing repetitive comparisons and deciding which data element should come before or after which data element in the sorted array [3]. Comparison-based sorting methods are bubble sort, insertion sort, quick sort, merge sort, shell sort, etc. Non-comparison sort does not compare the data elements for sorting them into an order; non-comparison based sorting methods include counting sort, bucket sort, radix sort, etc. Sorting algorithms can also be stable or unstable, and in-place or out-of-place. In-place sorting algorithms are those which sort the given data without employing an additional data structure [4]. Out-of-place sorting algorithms require an additional or auxiliary data structure for sorting the given data elements [5]. Stable sort refers to a sorting technique in which two elements having equal values appear in the same order in the sorted array as they did before the sorting was applied [6]; in the case of unstable sort, this order is not necessarily retained. Bubble sort, merge sort, counting sort and insertion sort are examples of stable sorting algorithms, while quick sort, heap sort and selection sort are based on unstable sorting techniques.
Each of these sorting algorithms has unique properties that add value to the specific function it is used to perform. Sorting algorithms are majorly distinguished on the basis of four properties, which are adaptability, stability, in-place/not in-place, and online/not-online, in addition to their basic methodology. An algorithm is adaptive in nature if its time complexity becomes almost O(n) when the array is nearly sorted. An algorithm is said to display the online property if it can process the input element by element and doesn't require the whole array as input at the beginning. Bubble sort works by exchanging and is an in-place, stable sorting algorithm, which makes O(n²) comparisons and swaps; it is not online and is adaptive in nature. Insertion sort works by insertion and is also a stable, in-place sorting algorithm requiring O(n²) comparisons and swaps; it is adaptive and online, in addition to having little overhead. Heap sort works by selection and makes use of the heap data structure. Both heap sort and quick sort are unstable in nature and take O(n log n) for comparisons and swaps. Heap sort is not-online, not-adaptive and in-place, while quick sort is not-online, adaptive and in-place; quick sort also has less overhead and works by partitioning. Bucket sort is a type of non-comparison distribution sort, which is not-online, out-of-place, non-adaptive and stable in nature, with the overhead of the buckets. Radix sort is a non-comparison integer sort, which is stable, not-online, adaptive and in-place in nature. In this paper, a novel sorting method is proposed by combining the best characteristics of a few existing sorting algorithms. This novel method, called Recombinant Sort, combines counting sort, bucket sort and radix sort, along with hashing and dynamic programming to elevate efficiency. This selective combination exceeds the sum of the qualities of its parent algorithms, which brings out the essence of the idea behind this synergy. The proposed method has many unique and striking properties. It can work on numbers as well as strings, and can sort numbers containing decimals as well as non-decimal numbers together or apart. Due to the application of hashing and dynamic programming, the traversal for fetching values is decreased tremendously and thus the time complexity is also reduced. A comparison of various existing sorting algorithms is also conducted, on the basis of best, average and worst-case time complexity, the ability to process decimals and strings, stability, and in-place or out-of-place technique.
This paper is divided into seven sections. Section 2 elaborates all the concepts used as pre-requisites for the proposed method. Section 3 delineates the concept, algorithm and the working of the proposed methodology of the recombinant sort. Proper description of algorithms, along with labelled diagrams are used to enhance the readers' understanding, and highlight the proposed novel approach in a lucid manner. Section 4 provides the proof of correctness of the proposed algorithm using loop invariant method. Section 5 contains the complexity analysis of the proposed algorithm and section 6 discusses the results obtained in a graphical and neatly tabulated manner. Section 7 discusses the conclusions and the future prospects of this algorithm and the domain.
2. Concepts Used
2.1 Hashing
Hashing is a fast and efficient method for the insertion and retrieval of data. It works by employing a function called the hash function, which is used for generating new indices for the data elements. The hash function applies a uniform mathematical operation to all the data elements to allot them a place in the hash table. A hash table is a data structure that stores the values mapped by the hash function [7]. With hashing, a space-time trade-off comes into the picture: the speed can be examined by using a known amount of space for hashing, or the space used can be examined for a known speed of the process. Though the speed of searching, insertion and deletion in hash tables is usually fast if collisions do not occur, it still heavily depends on the selection of the hash function. As hashing works by inducing randomness in the hash table and not order, it can't be considered to do an admirable job of sorting the data alone. Hashing becomes extremely inefficient as the number of collisions increases, which causes the number of tuples in a bucket to increase and ultimately pushes the time complexity per operation towards linear, O(n), behaviour. Hashing is used for a variety of applications, like password verification, the Rabin-Karp algorithm, compiler applications, message digests and linking file names to paths.
2.2 Bucket sort
Bucket sort works by distributing the data elements to be sorted into different buckets, which are then individually sorted using some other sorting technique or by recursive application of bucket sort itself. The complexity of bucket sort depends on the number of buckets used, the algorithm used for sorting each bucket and the uniformity of the distribution of the data elements [8]. Once the elements are distributed into different buckets, sorting the elements of a bucket becomes an independent task and can therefore be carried out in parallel with the other buckets to enhance performance. It can't be applied to the string data type and requires a high degree of parallelism for achieving good performance [9]. Also, a bad distribution of elements among the buckets may very easily lead to extra work and degraded performance. The time complexity of bucket sort is O(n+k) for the best and average cases and O(n²) for the worst case. Bucket sort works best when the input data is of floating point type and is distributed uniformly over a range.
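A minimal, textbook bucket sort sketch for floats uniformly distributed in [0, 1) is shown below for reference; it is a generic illustration of the technique, not the routine used later in Recombinant Sort.

def bucket_sort(values, n_buckets=10):
    # Scatter into buckets, sort each bucket, then concatenate.
    buckets = [[] for _ in range(n_buckets)]
    for v in values:
        buckets[int(v * n_buckets)].append(v)   # assumes 0 <= v < 1
    for b in buckets:
        b.sort()                                # any stable sort may be used here
    return [v for b in buckets for v in b]

print(bucket_sort([0.45, 0.03, 0.23, 0.88, 0.70, 0.92]))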
2.3 Counting sort
Counting sort is a small-integer sorting technique which works by counting the data elements with distinct key values. Arithmetic is applied to these counts to determine the positions of the elements in the output. It is only suitable for data in which the variation in the values of the elements does not exceed the total number of elements to be sorted, as its running time is linear in the total number of elements and in the difference between the maximum and minimum key values [10]. It is a stable sort and does not work by doing comparisons, thus it is a non-comparison sort. Counting sort's time complexity is O(n+k), where n is the size of the sorted array and k is the size of the helper array, which is needed when sorting non-primitive elements. Counting sort uses the values of the keys as indices, thus it is only suitable for sorting small integers and can't be used to sort data with large key values. As it only works for discrete values, it can't be used to sort strings and decimal values, as the frequency array cannot be constructed. Counting sort has a linear time complexity of O(n+k) for elements within the range 1 to k, but turns to O(n²) for elements within the range 1 to n² [11]. Counting sort is used when linear time complexity is needed and there are multiple entries of integers of small magnitude.
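For comparison, a short counting sort sketch for small non-negative integers, the occurrence-counting idea that Recombinant Sort borrows, could look like this.

def counting_sort(values, k):
    # Count occurrences of each key in [0, k), then emit keys in order.
    counts = [0] * k
    for v in values:
        counts[v] += 1
    out = []
    for value, c in enumerate(counts):
        out.extend([value] * c)
    return out

print(counting_sort([4, 0, 2, 8, 7, 9, 4, 4, 8, 3], k=10))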
2.4 Radix sort
Radix sort is a non-comparison sorting algorithm that works by considering the radix (digits) of the elements to distribute them into different buckets. The bucketing process is repeated for each digit, with the previous ordering preserved, if the elements contain more than one significant digit [12]. It is therefore fast when the keys are short and the range of the array is small. Radix sort is known to be a close cousin of counting sort. Though radix sort can work for integers, words, or any other dataset which can be sorted lexicographically, its flexibility is curbed because it depends on digits or letters to perform the sorting, and separate code needs to be written for integers, floating point values and strings. It is slower in comparison to merge sort and quicksort when operations like insertion and deletion are not efficient enough, and it also has high space complexity [13]. Radix sort's constant k in O(kn) is greater in comparison to that of other sorting algorithms, and radix sort also consumes much more space than quick sort, which is an in-place sorting algorithm. Radix sort is mostly used for sorting strings, such as stably sorting fixed-length words over fixed alphabets.
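An LSD radix sort sketch for non-negative integers, illustrating the digit-by-digit bucketing described above, is given below as a generic reference.

def radix_sort(values, base=10):
    # Least-significant-digit radix sort using stable per-digit bucketing.
    if not values:
        return values
    exp = 1
    while max(values) // exp > 0:
        buckets = [[] for _ in range(base)]
        for v in values:
            buckets[(v // exp) % base].append(v)
        values = [v for b in buckets for v in b]   # stable concatenation
        exp *= base
    return values

print(radix_sort([45, 3, 23, 88, 70, 92, 45, 43, 80, 32]))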
3. Proposed Recombinant Sort Algorithm
Recombinant Sort is formulated from a recombination of cardinal concepts of various sorting algorithms. The capability of radix sort to deal with each digit of a number separately, the concept of counting the number of occurrences of the elements from counting sort, the concept of bucketing from bucket sort and the concept of hashing a number to a multidimensional space are combined together to form a single sorting algorithm which outperforms its parent algorithms. As radix sort is one of the parent algorithms, Recombinant Sort needs to be rewritten for every different type of data. Recombinant Sort consists of two parts, namely the Hashing cycle and the Extraction cycle. For the purpose of simplicity, an array consisting of numbers in the range 0 to 10, with only one digit after the decimal, is considered.
3.1 Hashing cycle
3.1.1 Mathematical rendition of hashing used in hashing cycle
For an n-digit decimal number $\begin{equation}
\Theta=n_{1} n_{2} n_{3} \ldots n_{\lambda-1} n_{\lambda} \cdot n_{\lambda+1} n_{\lambda+2} n_{\lambda+3} \ldots n_{n-1} n_{n}
\end{equation}$ , $\begin{equation}
\forall \cdot \lambda \in Z
\end{equation}$, the hash function $\begin{equation}
H\left(\Lambda_{\theta}\right)
\end{equation}$, where $\begin{equation}
\Lambda_{\Theta}
\end{equation}$ = set containing all digits of decimal number $\begin{equation}
\theta
\end{equation}$ in a systematic order from left to right $\begin{equation}
\left(n_{1}, n_{2}, n_{3}, \ldots, n_{\lambda-1}, n_{\lambda}, n_{\lambda+1}, n_{\lambda+2}, n_{\lambda+3}, \ldots, n_{n-1}, n_{n}\right)
\end{equation}$ , can be defined as:
$\begin{equation}
H\left(\Lambda_{\theta}\right) \cdot= \\
\left\{S\left[n_{1}\right]\left[n_{2}\right]\left[n_{3}\right] \ldots\left[n_{\lambda-1}\right]\left[n_{\lambda}\right]\left[n_{\lambda+1}\right]\left[n_{\lambda+2}\right]\left[n_{\lambda+3}\right] \cdots\left[n_{n-1}\right]\left[n_{n}\right]+\right. \\
+\}
\end{equation}$ (1)
where S is an n-dimensional cartesian space initialized by the hash function in the form of a hypercube to map an n-digit number $\Theta$. The '++' sign denotes an increment by 1. This increment by 1 is used in the hash function to tackle the problem of collision in hashing, so the need for a chaining list data structure is eliminated. Each axis of each dimension of S lies in [0, 9], and for an n-digit number $\Theta$, the shape of the space S initialized in the computer's memory is a hypercube (n-dimensional array) with each axis consisting of only 10 memory blocks, and can be expressed as:

$$\text{shape}(S) \equiv S[10][10][10] \ldots [10][10]$$ (2)

The hash function defined in Eq. (1) maps a number $\Theta$ to the n-dimensional array S defined in Eq. (2). The main goal in hashing is to minimize the time complexity [14] of the whole hashing operation. From Eq. (1), it can be stated that the hash function increments (or maps the number $\Theta$ at) the index $[n_{1}][n_{2}] \ldots [n_{n}]$ of the hypercube/array S. As updation, deletion or fetching in an array has a time complexity of O(1) per element [14], the time complexity of the hash function $H(\Lambda_{\Theta})$ for each element is also O(1). Thus, the hash function maintains the minimum time complexity that a hash function can attain and, due to the use of a hypercube/array data structure as the hash table, traversal through the table is fast and continuous; moreover, unlike counting sort, large numbers can be sorted using the space S.
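To make Eq. (1) concrete for the smallest non-trivial case, the sketch below maps two-digit keys (the form produced later in the example by multiplying one-decimal numbers by 10) into a 10×10 count array S; the function name is illustrative and follows the notation of the equations above.

import numpy as np

def hash_map_2digit(keys):
    # Map each two-digit key n1n2 to S[n1][n2] and increment, as in Eq. (1);
    # the '++' of Eq. (1) resolves collisions by counting duplicates.
    S = np.zeros((10, 10), dtype=int)    # shape(S) = S[10][10], Eq. (2) with n = 2
    for k in keys:
        n1, n2 = divmod(k, 10)
        S[n1][n2] += 1
    return S

S = hash_map_2digit([45, 3, 23, 88, 70, 92, 45, 43, 80, 32])
print(S[4][5])   # 2, because the key 45 (i.e. the value 4.5) occurs twice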
3.1.2 Assumed pre-conditions
Only a single main precondition is required to instantiate the hashing cycle for the entire dataset of N elements: each element should have the same number of digits. If the elements do not have the same number of digits, additional zeros are appended in a way that does not change the value of the number. For example, given the three numbers [1.01, 2.1, 1], zeros are added so that they have an equal number of digits: [1.01, 2.10, 1.00]. This step is extremely easy and does not affect the efficiency of the algorithm; it is also cardinal for keeping track of the decimal's position. After this step, the unsorted array shown in Figure 1, defined as
arr = [ 4.5, 0.3, 2.3, 8.8, 7, 9.2, 4.5, 4.3, 8, 3.2], can be written (after the preprocessing) as:
arr = [ 4.5, 0.3, 2.3, 8.8, 7.0, 9.2, 4.5, 4.3, 8.0, 3.2]
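A small sketch of this padding step, formatting every element to the same number of decimal places so that the digit positions line up, might look as follows; the single decimal place matches the running example.

def pad_decimals(arr, digits_after_decimal=1):
    # Fixed number of decimal places, e.g. 7 -> '7.0', so trailing zeros are
    # added without changing the value of any element.
    return [f"{x:.{digits_after_decimal}f}" for x in arr]

print(pad_decimals([4.5, 0.3, 2.3, 8.8, 7, 9.2, 4.5, 4.3, 8, 3.2]))
# ['4.5', '0.3', '2.3', '8.8', '7.0', '9.2', '4.5', '4.3', '8.0', '3.2']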
3.1.3 Dynamic programming used in $H(\Lambda_{\Theta})$
For an n-digit decimal number, the initialized n-dimensional cartesian space will have a lot of unused space left after all the numbers have been mapped. To increase the efficiency of the algorithm when retrieving the filled spaces (in a computer, in order to reach the filled memory locations of an array, one has to traverse it in a systematic pattern), a trick of maintaining two separate maps is employed. A more detailed definition of these two maps is given in the Hashing Cycle description below.
Figure 1. Hashing cycle
The steps of the hashing cycle are depicted in Figure 1 (the hashing function $H(\Lambda_{\Theta})$ defined above is applied to each element of the unsorted array arr). For sorting the type of data considered in the example, a 2D array of dimension 10×10 called the count array, where the values will be mapped, is used, together with a traverse map H_Max of dimension 10×1 and a traverse map H_Min of dimension 10×2. The two traverse maps are used to avoid unnecessary steps during the extraction phase. The algorithm for the hashing cycle designed for the example considered is as follows:
HASHING CYCLE ALGORITHM: The algorithm presented below uses two functions: first, the numeric-to-string converter, defined as $F_{String}()$, and second, the string-to-numeric converter, defined as $F_{Numeric}()$.
Recombinant-hashing(arr, size):              // arr is the unsorted array
    S[10][10]                                // count array, initialized to zero
    H_Max[10]                                // traverse map: maximum column reached per row
    H_Min[10][2]                             // traverse map: [visited flag, minimum column] per row
    set digit_count_after_decimal ← 1
    for i = 0 to size do
        t ← F_String(arr[i] × 10^digit_count_after_decimal)
        S[F_Numeric(t[0])][F_Numeric(t[1])] ← increment by 1
        if (H_Max[F_Numeric(t[0])] < F_Numeric(t[1])) then
            set H_Max[F_Numeric(t[0])] ← F_Numeric(t[1])
        if (H_Min[F_Numeric(t[0])][0] == 0) then
            set H_Min[F_Numeric(t[0])][1] ← F_Numeric(t[1])
            set H_Min[F_Numeric(t[0])][0] ← 1
        else if (H_Min[F_Numeric(t[0])][0] ≠ 0 and H_Min[F_Numeric(t[0])][1] > F_Numeric(t[1])) then
            set H_Min[F_Numeric(t[0])][1] ← F_Numeric(t[1])
    end for (i)
end func
As depicted in Figure 1, the array arr (defined above) is fed to the hashing cycle for sorting and the space S of 10x10 is initialized along with a vector H_Max of shape 10 and a space H_Min of shape 10x2. The further steps are as follows:
1. The first element of the array is '4.5', so:
a. First, it will be multiplied by 10^1 (as the digit count after the decimal is 1): 4.5 × 10 = 45.
b. Second, the number 45 will be converted to a string using F_String(45) = t = '45'.
c. Third, we will increment the value in the memory block at row t[0] = 4 and column t[1] = 5 (at array index (4, 5)).
d. Fourth, in the traverse map H_Max, as H_Max[t[0]] < t[1], H_Max[t[0]] will be set to t[1].
e. Fifth, in the traverse map H_Min, as H_Min[t[0]][0] == 0, H_Min[t[0]][1] will be set to t[1] and H_Min[F_Numeric(t[0])][0] will be set to 1.
f. Lastly, the same process continues for each array element until the end of the unsorted array is reached; the rest of the steps are given in example 1 in the supplementary section, and a Python sketch of the complete cycle is shown after this list.
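A compact Python rendering of the hashing cycle for this one-decimal example is given below; the names follow the pseudocode above, and the code is an illustrative sketch rather than the authors' reference implementation.

def recombinant_hashing(arr, digits_after_decimal=1):
    # Builds the 10x10 count array S and the traverse maps H_Min and H_Max
    # for numbers with one digit before and one digit after the decimal.
    S = [[0] * 10 for _ in range(10)]
    H_Max = [0] * 10
    H_Min = [[0, 0] for _ in range(10)]          # [visited flag, minimum column]
    for x in arr:
        t = f"{round(x * 10 ** digits_after_decimal):02d}"
        r, c = int(t[0]), int(t[1])
        S[r][c] += 1
        if H_Max[r] < c:
            H_Max[r] = c
        if H_Min[r][0] == 0:
            H_Min[r] = [1, c]
        elif H_Min[r][1] > c:
            H_Min[r][1] = c
    return S, H_Min, H_Max

S, H_Min, H_Max = recombinant_hashing([4.5, 0.3, 2.3, 8.8, 7.0, 9.2, 4.5, 4.3, 8.0, 3.2])
print(S[4], H_Min[4], H_Max[4])   # row 4 holds one 4.3 and two 4.5s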
3.2 Extraction cycle
The end result of the hashing cycle is depicted in Figure 2. For the extraction of the sorted array from the count array, the extraction cycle moves row by row, as in raster scanning; for example, the cycle will visit all indices of row 0, then all indices of row 1, and so on.
Figure 2. Extraction cycle
It is clear from Figure 2 that most of the memory spaces in the count array are not filled, and traversing these unused spaces would increase the time complexity of the algorithm. So, in order to minimize the time complexity and prevent wasteful traversal of these unused spaces, the traverse maps H_Min and H_Max are used. In the traverse map H_Min, each row stores the lowest column index reached for that particular row of the count array, and in the traverse map H_Max, each row stores the highest column index reached for that particular row of the count array. For example, for row 4 of the count array the minimum column reached is 3 and the maximum column reached is 5, so the traverse map H_Min stores the value 3 and H_Max stores the value 5 for row 4. Column 0 of the traverse map H_Min stores whether the map for that particular row has been updated before or not. The algorithm for the extraction cycle is as follows:
EXTRACTION CYCLE ALGORITHM: The algorithm presented below uses a function defined as Overwrite_arr(element, position, arr), which overwrites the element 'element' at position 'position' of array 'arr'. The importance of the preconditions defined above can be seen in the Overwrite_arr call, where the '+' sign represents string concatenation. The F_Float() function used below converts strings to floating point numbers, and the numeric-to-string converter F_String() is also used.
Recombinant-extraction(S, H_Min, H_Max, arr, size):
    set overwrite_pos_at ← 0
    for i = 0 to 9 do
        for j = H_Min[i][1] to H_Max[i]+1 do
            if (S[i][j] != Empty) then
                for z = 0 to S[i][j] do
                    Overwrite_arr(F_Float(F_String(i) + '.' + F_String(j)), overwrite_pos_at, arr)
                    overwrite_pos_at ← increment by 1
                    if (overwrite_pos_at == size) then
                        return arr
                end for (z)
        end for (j)
    end for (i)
end func
At the end of the extraction cycle, the parts of the count array traversed are shown in Figure 2 and the sorted array obtained is shown below. The time complexity of the example taken in the prior section is found to be O(n+17). Sorted array arr returned as:
[0.3, 2.3, 3.2, 4.3, 4.5, 4.5, 7.0, 8.0, 8.8, 9.2]
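Pairing the hashing sketch above with a direct transcription of the extraction cycle reproduces this result; again, this is an illustrative sketch (it assumes the recombinant_hashing function from the earlier snippet), not the authors' code.

def recombinant_extraction(S, H_Min, H_Max, size):
    # Raster-scan only the occupied column range of each row and rebuild the
    # sorted array from the row/column digits.
    out, written = [], 0
    for i in range(10):
        for j in range(H_Min[i][1], H_Max[i] + 1):
            for _ in range(S[i][j]):
                out.append(float(f"{i}.{j}"))
                written += 1
                if written == size:
                    return out
    return out

arr = [4.5, 0.3, 2.3, 8.8, 7.0, 9.2, 4.5, 4.3, 8.0, 3.2]
S, H_Min, H_Max = recombinant_hashing(arr)
print(recombinant_extraction(S, H_Min, H_Max, len(arr)) == sorted(arr))   # True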
Why Does Extraction Cycle Work?
It is known that, in a computer's memory, in order to travel through an n-dimensional space one has to travel in a systematic pattern, performed using for-loops. This systematic traversal has a unique property, which the extraction cycle takes advantage of. A similar pattern can be observed when traversing a binary search tree in an inorder fashion: by traversing inorderly, a sorted form of the unsorted data used to build the binary search tree is obtained. Due to the unique hash function $H(\Lambda_{\Theta})$ used to map an n-digit number $\Theta$ to an n-dimensional cartesian space, the sorted outcome observed when performing inorder traversal of a binary search tree can also be observed when using the Extraction Cycle (proposed above) to traverse through the n-dimensional cartesian space.
4. Proof of Correctness
The loop invariant induction method [14] has been used to prove the correctness of both the Hashing Cycle and the Extraction Cycle of the proposed algorithm, for the general case of n-digit decimal numbers (the pseudocode for which is given in the Supplementary Section).
Notations: The unsorted array is denoted by arr[] and S is used to denote the initialized n-dimensional space. The two arrays used to lower the extraction cost of the algorithm are denoted by H_Min and H_Max. It is assumed that arr[] contains N elements and that the precondition stated above has been satisfied, so each element of arr[] has n digits. It is also assumed that the decimal point is placed after the λth digit, $\forall \lambda \in Z$, of each element; thus λ digits lie before the decimal and (n−λ) digits lie after the decimal for each element.
The predominant objective of this cycle is to use $H(\Lambda_{\Theta})$ to map all N elements of arr[] to an n-dimensional cartesian space, which takes the form of an array in the computer's memory, so that for every element in arr[] there exists a place in the n-dimensional space. Therefore, the loop invariant $I_{Hash}$ for the ith iteration can be defined as:

$I_{Hash} \equiv$ At iteration i, the initialized n-dimensional empty cartesian space S should have $\leq i$ points mapped in it by using $H(\Lambda_{\Theta})$; also, the arrays H_Min and H_Max should have $\leq 2i$ and $\leq i$ spaces mapped in them, respectively. Equivalently, at iteration i the cycle should have successfully mapped all elements in arr[0:i], using $H(\Lambda_{\Theta})$, to the n-dimensional space S initialized prior to the start of the loop.
The three steps for Loop invariant proof are as follows:
(1) Initialization: Before the first iteration of the loop, at i=0, the invariant $I_{\text{Hash}}$ states that the initialized n-dimensional empty cartesian space should have ≤ 0 points mapped in it by the hash function, and that the arrays H_Min and H_Max should have ≤ 2 * 0 and ≤ 0 spaces mapped in them, respectively. As 0 points have been mapped in the space S at i=0, the space remains vacant; likewise, 0 spaces in the arrays H_Min and H_Max have been mapped, so they also remain unoccupied. As the space S and the arrays H_Min and H_Max were initialized vacant, the invariant holds.
(2) Maintenance: Assume that the loop invariant holds at the start of iteration i=j of the cycle. Then the initialized n-dimensional cartesian space S has ≤ j points mapped in it by the hash function, and the arrays H_Min and H_Max have ≤ 2 * j and ≤ j spaces mapped in them, respectively. In the body of the loop at iteration j, arr[j] is mapped to the cartesian space S, and if the defined condition holds true, the required values are mapped into the arrays H_Min and H_Max. Thus, at the start of iteration i = j+1, the space S has ≤ j+1 points mapped in it by the hash function, and the arrays H_Min and H_Max have ≤ 2 * (j+1) and ≤ j+1 spaces mapped in them, respectively, which is what needed to be proved.
(3) Termination: When the for-loop terminates at i = N, the initialized n-dimensional cartesian space S has ≤ N points mapped in it by the hash function, and the arrays H_Min and H_Max have ≤ 2 * N and ≤ N spaces mapped in them, respectively. As arr[] has N elements, all elements have been mapped to the space S, which is the desired output.
As all three steps of the loop invariant hold true, therefore the algorithm for the hash cycle is correct.
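The invariant can also be checked mechanically. The following sketch (an illustration of the proof for the two-digit case, not part of the algorithm itself; the function name and the trailing assertions are additions made here) re-runs the hashing cycle and asserts after every iteration that at most i+1 points are mapped in S and that H_Min and H_Max hold at most 2(i+1) and i+1 mapped spaces, respectively.

def hashing_cycle_with_invariant_check(arr):
    S = [[0] * 10 for _ in range(10)]
    H_Max = [0] * 10
    H_Min = [[0, 0] for _ in range(10)]   # [updated flag, smallest second digit]
    for idx, x in enumerate(arr):
        t = str(int(round(x * 10))).zfill(2)
        i, j = int(t[0]), int(t[1])
        S[i][j] += 1
        if H_Max[i] < j:
            H_Max[i] = j
        if H_Min[i][0] == 0:
            H_Min[i] = [1, j]
        elif H_Min[i][1] > j:
            H_Min[i][1] = j
        # I_Hash after processing element idx: at most idx+1 points are
        # mapped in S, at most idx+1 rows are touched in H_Max, and at
        # most 2*(idx+1) cells are filled in H_Min.
        mapped_points = sum(sum(row) for row in S)
        touched_rows = sum(flag for flag, _ in H_Min)
        assert mapped_points <= idx + 1
        assert touched_rows <= idx + 1
        assert 2 * touched_rows <= 2 * (idx + 1)
    return S, H_Min, H_Max

hashing_cycle_with_invariant_check([4.5, 0.3, 2.3, 8.0, 3.2])  # no assertion fires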
It is known that a human can traverse an n-dimensional space in either a linear or a nonlinear fashion, but a computer can only traverse such a space in a linear fashion. This linear traversal, as also mentioned before, yields an advantage. For instance, in order to traverse a 2-dimensional space of size 10×10, a for- or while-loop is needed, and on inspecting those loops closely one notices that they are intrinsically counting through the 100 cells in increasing order (and counting is, by definition, sorted). Thus, the extraction cycle traverses the n-dimensional space in such a fashion that it encounters the mapped elements in sorted order, due to this intrinsic nature of loop traversal. n+1 for-loops are required for traversing an n-dimensional array as well as for extracting the numbers mapped. In order to define n+1 for-loops, n+1 iterators are needed, defined as $(i_{1}, i_{2}, i_{3}, \ldots, i_{n-1}, i_{n}, i_{n+1})$. Another variable, overwrite_at_pos, is defined to keep track of how many occupied spaces in S have been detected, and it tells where to overwrite the original unsorted array. The loop invariant $I_{\text{Extract}}$ for every ith iteration can be defined as:
$I_{\text{Extract}} \equiv$ At iteration overwrite_at_pos = j, with the iterators $i_{1}, i_{2}, i_{3}, \ldots, i_{n-1}, i_{n}$ and $i_{n+1}$ having any values such that the mentioned if-condition is satisfied, the element $E_{j+1}$ detected at $S[i_{1}][i_{2}][i_{3}] \ldots [i_{n-1}][i_{n}]$ can be represented as:
$$E_{1}, E_{2}, E_{3}, \ldots, E_{j} \leq E_{j+1} \leq E_{j+2}, E_{j+3}, E_{j+4}, \ldots, E_{N}$$
and the overwritten sub-array arr[0:j] should be sorted while the sub-array arr[j:N] should remain unsorted or unchanged.
(1) Initialization: Before the first iteration of the loop, at overwrite_at_pos = j = 0 and with the iterators $i_{1}, i_{2}, i_{3}, \ldots, i_{n-1}, i_{n}, i_{n+1}$ having any values such that the mentioned if-condition is satisfied, the element $E_{1}$ detected at $S[i_{1}][i_{2}][i_{3}] \ldots [i_{n-1}][i_{n}]$ can be represented as:
$$E_{1} \leq E_{2}, E_{3}, E_{4}, \ldots, E_{N}$$
and the overwritten sub-array arr[0:0] should be sorted while the sub-array arr[0:N] should remain unsorted or unchanged. As the size of the overwritten sub-array arr[0:0] is zero, i.e., it is completely empty, it is trivially sorted. Also, as the sub-array arr[0:N] was already unsorted or unchanged, the invariant holds.
(2) Maintenance: Assume that the loop invariant holds at the start of iteration overwrite_at_pos = j = z, with the iterators $i_{1}, i_{2}, i_{3}, \ldots, i_{n-1}, i_{n}, i_{n+1}$ having any values such that the mentioned if-condition is satisfied in the cycle. Then the element $E_{z+1}$ detected at $S[i_{1}][i_{2}][i_{3}] \ldots [i_{n-1}][i_{n}]$ is represented as:
$$E_{1}, E_{2}, E_{3}, \ldots, E_{z} \leq E_{z+1} \leq E_{z+2}, E_{z+3}, E_{z+4}, \ldots, E_{N}$$
and the overwritten sub-array arr[0:z] is sorted while the sub-array arr[z:N] remains unsorted or unchanged. In the body of the loop at iteration overwrite_at_pos = j = z, the extracted element is overwritten at index z of the array arr[], leaving the sub-array arr[0:z+1] sorted and the sub-array arr[z+1:N] unsorted or unchanged. Thus, at the start of iteration overwrite_at_pos = j = z+1, with the iterators having any values such that the mentioned if-condition is satisfied in the cycle, the element $E_{z+2}$ detected at $S[i_{1}][i_{2}][i_{3}] \ldots [i_{n-1}][i_{n}]$ will be represented as:
$$E_{1}, E_{2}, E_{3}, \ldots, E_{z+1} \leq E_{z+2} \leq E_{z+3}, E_{z+4}, E_{z+5}, \ldots, E_{N}$$
and the overwritten sub-array arr[0:z+1] will be sorted while the sub-array arr[z+1:N] will remain unsorted or unchanged, which is what needed to be proved.
(3) Termination: When the for-loop terminates at overwrite_at_pos = j = N, with the iterators $i_{1}, i_{2}, i_{3}, \ldots, i_{n-1}, i_{n}, i_{n+1}$ having any values such that the mentioned if-condition is satisfied in the cycle, the element $E_{N+1}$ detected at $S[i_{1}][i_{2}][i_{3}] \ldots [i_{n-1}][i_{n}]$ would be represented as:
$$E_{1}, E_{2}, E_{3}, \ldots, E_{N} \leq E_{N+1}$$
As $E_{N+1}$ does not exist, all N elements have been detected and overwritten. Also, the overwritten sub-array arr[0:N] is sorted and the sub-array arr[N:N] has size zero, i.e., contains zero elements, which is the desired output.
As all three steps of the loop invariant hold true, therefore the algorithm for the extraction cycle is correct. Also, as both the hashing cycle and extraction cycle are correct, it renders the proposed Recombinant Sort algorithm correct. The correctness of the algorithm can also be verified from the example 1 described in the supplementary section.
5. Complexity Analysis
Best case: The best case occurs when the extraction cost k (the total number of memory blocks traversed in the hypercube S during the whole extraction cycle) satisfies k ≪ n, and thus the time complexity O(n+k) is O(n). The possible scenarios for the best case (where the cost k is minimal) are as follows:
i. If all elements of the unsorted array lie on the same horizontal axis (after mapping) of the hypercube space S.
ii. If all elements of the unsorted array lie on the same vertical axis (after mapping) of the hypercube space S.
iii. If all elements of the unsorted array lie inside the same memory block (after mapping) of the hypercube space S.
Average case: The average case occurs when the extraction cost k satisfies k ≤ n. The possible cases for the average time complexity are as follows:
i. If k < n, then the time complexity O(n+k) reduces to $O(n)$ after taking the upper bound n for k.
ii. If k = n, then the time complexity O(n+k) becomes $O(2n) \equiv O(n)$.
Thus, the average time complexity in both possible cases is O(n).
Worst case: The Worst case takes place when one of the two possible cases defined below happens:
i. First: when, during the extraction cycle, the whole count array needs to be traversed. This makes the extraction cost $k = 10^{b}$, where b is the maximum number of digits an element has in the dataset. But to reach this worst case the count array has to be filled completely; thus, at least $10^{b}$ elements (i.e., $n = 10^{b}$) have to be present in the dataset. Therefore, the time complexity O(n+k) will be:
$$O\left(n+10^{b}\right)$$
But $n = 10^{b}$, so
$$O(n+n)=O(2n)=O(n)$$
ii. Second: when the start and end memory blocks of each axis of the hypercube space S are occupied and the memory blocks in between them are empty. In this case the whole space S needs to be traversed, which makes the extraction cost $k = 10^{d}$, where d is the number of dimensions of the hypercube space S, i.e., the maximum number of digits an element has in the dataset. The time complexity will then be:
$$O\left(n+10^{d}\right) \quad (10)$$
For this case to be valid, the total number of elements to be sorted should be 10d. Thus, it can be stated that n = 10d, and Eq. (10) can be written as:
$$O\left(10d+10^{d}\right) \quad (11)$$
$$O\left(10d\left(1+\frac{10^{(d-1)}}{d}\right)\right) \quad (12)$$
As n = 10d,
$$O\left(n\left(1+\frac{10^{(d-1)}}{d}\right)\right) \quad (13)$$
which can be further simplified to
$$O(nC) \quad (14)$$
where $C = \left(1+\frac{10^{(d-1)}}{d}\right)$.
Figure 3. Relationship between constant C, n, n² and d
On the basis of an experiment whose results are shown in Figure 3 (obtained by substituting different values of d), it was observed that:
$$C < n^{2} \quad (15)$$
Thus, from Eq. (15), C is treated as a constant that does not make the time complexity nonlinear for the values of d considered, and the complexity given in Eq. (14) can be written as O(n).
Thus, the time complexity will always be linear.
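The comparison plotted in Figure 3 can be sketched numerically as follows. This is not the authors' original experiment; it simply substitutes the small digit counts d that the paper treats as typical (4-5 digit numbers) into the definition of C and compares the result against n^2 with n = 10d.

# Compare C = 1 + 10^(d-1)/d against n^2 with n = 10*d for small digit counts d.
for d in range(1, 6):
    n = 10 * d
    C = 1 + 10 ** (d - 1) / d
    print('d =', d, ' C =', round(C, 1), ' n^2 =', n * n)
# For d up to 5, C stays below n^2 (e.g. d = 5 gives C = 2001.0 and n^2 = 2500).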
Table 1 shows the time taken by the system to execute Recombinant Sort using Python on Mac OS. The sorting method was executed in the Python, C++ and Java languages on Mac OS, Windows OS and Linux OS. The system had a 3.1 GHz Intel Core i5 processor and 8 GB of 2133 MHz LPDDR3 RAM. The testing data were generated with a random generator function from Python's NumPy library. The number of elements taken for the execution of Recombinant Sort ranges from 10 to 10000, increasing in powers of 10. Time is calculated for five major cases, namely, for data in the range 1 to 10 having no digits after the decimal, a single digit after the decimal, and two digits after the decimal, and for data in the range 1 to 100 having no digits after the decimal and a single digit after the decimal. The times taken for a specific number of elements in all the cases are of comparable order, as can be seen from Table 1. The results obtained by running the algorithm in different languages on different operating systems are shown in tabular form (Tables 4-11), along with their graphical representation (Figure 5), in the supplementary section. These languages (Python, C++ and Java) and operating systems (Windows, Mac and Linux) were chosen specifically because they are very widely used.
Figure 4 depicts the results obtained in Table 1 in a graphical manner. The graph shows the time taken to execute recombinant sort (in milliseconds) for all the five enlisted cases. The graph depicts linear characteristics of the proposed sorting algorithm and it can also be concluded from the graph that the count_after_decimal variable hardly affects the time complexity. These same observations can also be made while observing the graphs (for Python, C++ and Java languages and Mac OS, Windows OS and Linux OS) given in the supplementary section.
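The timing set-up described above can be approximated with a small harness. This is a sketch only: the data range, the element counts, the number of repetitions, and the use of the built-in sorted as a stand-in for an actual Recombinant Sort implementation are all assumptions made here rather than the authors' benchmark code.

import time
import numpy as np

def time_sort(sort_fn, n_elements, low=1, high=10, decimals=1, repeats=5):
    # Generate random test data in [low, high) rounded to `decimals` places
    # and return the average wall-clock time of sort_fn in milliseconds.
    rng = np.random.default_rng(0)
    total = 0.0
    for _ in range(repeats):
        data = np.round(rng.uniform(low, high, n_elements), decimals).tolist()
        start = time.perf_counter()
        sort_fn(data)
        total += time.perf_counter() - start
    return 1000 * total / repeats

for n in (10, 100, 1000, 10000):
    print(n, time_sort(sorted, n))   # swap `sorted` for a Recombinant Sort implementation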
Table 2 gives a complete comparison between various existing and known sorting algorithms and the proposed Recombinant sort technique on the basis of various cardinal factors like best case time complexity, average case time complexity, worst case time complexity, stability, in-place or out-of-place sorting and the ability to process strings and floating point numbers.
Figure 4. Relationship between the number of elements and the time taken by Recombinant Sort to sort elements using Python on Mac OS
Table 1. The time taken (in sec) by the system to execute recombinant sort using python on Mac OS
No. of elements
TFD (1,10) & cd=0
TFD (1,100) & cd=0
TFD (1,100) & cd=1
Note: The expression TFD(a,b) & cd = c stands for: Time For sorting Data ranging between a to b, and count after decimal = c respectively.
Table 2. Comparison with other sorting algorithms
Sorting algorithm
Best TC
Average TC
Worst TC
stable sort
Bubble sort [15]
O(n) (waas)
O(n²)
Selection Sort [16]
Insertion Sort [17]
O(n²) (waas)
Merge Sort [18]
O(nlogn)
Quick Sort [19]
Bucket Sort [9]
O(n+c)
Radix Sort [12]
O(kn) (k ∈ Z)
O(kn) (k ∈ Z)
Heap Sort [20]
Tim Sort [21]
Shell Sort [22]
Counting Sort [11]
Recombinant Sort
Note: waas stands for: When array is already sorted; TC stands for Time Complexity; PD: Can Sort or Process Decimals; PS: Can Sort or Process Strings; IP: Inplace Sort; Z: integer.
Table 3. Dimensions of elements required for sorting different types of data
Dimensions of Count Array
Dimensions of Traverse Map H_Min
Dimensions of Traverse Map H_Max
D(1,10) & cd = 0
D(1,100) & cd = 0
Note: The expression D(a,b) & cd = c stands for: Data ranging between a to b, and count after decimal = c respectively.
From Table 2, it is observed that merge sort and heap sort have a consistent time complexity of O(nlogn) for the best, average and worst case scenarios, but neither of these sorting methods can be used to sort elements of string data type. Quick sort also has O(nlogn) time complexity for the best and average cases, but degrades to O(n²) in the worst case, i.e., when the array is already sorted in any order or when the array contains all identical elements. Tim sort has a time complexity of O(nlogn) for the worst and average cases and O(n) for the best case (given that the array is already sorted). Unlike the sorting algorithms listed above, the proposed Recombinant Sort has a consistent performance of O(n) for the best, average and worst case scenarios. In addition, Recombinant Sort can also be used to sort elements of string and floating-point data types. Therefore, it can be observed that the proposed Recombinant Sort performs best among all the listed sorting algorithms.
Table 3 specifies the dimensions of the elements constituting the recombinant sort, that is, the count array, and the H_Min and H_Max traverse maps, for sorting data elements that belong to the data specified in the five cases enlisted previously.
This table depicts a pattern that can be followed to deal with different types of data (not mentioned in the table) using Recombinant Sort.
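This pattern can be sketched as a small helper (an assumption-laden illustration, not taken from the paper's code): the count array needs one axis of size 10 per digit, i.e., the digits required before the decimal for the data range plus the count of digits kept after the decimal.

def count_array_shape(upper_bound, count_after_decimal):
    # One axis of size 10 per digit: digits needed before the decimal for
    # values below upper_bound, plus count_after_decimal digits after it.
    digits_before = len(str(int(upper_bound) - 1))
    return (10,) * (digits_before + count_after_decimal)

print(count_array_shape(10, 0))    # (10,)         for D(1,10)  & cd = 0
print(count_array_shape(10, 1))    # (10, 10)      for D(1,10)  & cd = 1 (the worked example)
print(count_array_shape(100, 1))   # (10, 10, 10)  for D(1,100) & cd = 1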
7. Conclusion and Future Work
The proposed Recombinant Sort is a dynamic sorting technique which can be modified as per the needs of the user and is designed to achieve utmost efficiency for sorting data of varied types and ranges. The time complexity of the proposed Recombinant Sort is estimated to be O(n+k) for the best, average and worst cases. The k in O(n+k) becomes n in the worst-case scenario, but in no circumstance does the order of n approach two, i.e., k never approaches n², so the complexity never becomes O(n²). The extraction cost k will always be much less than, or at most equal to, n; thus, the final time complexity will always be O(n). Also, the extraction cost k of the proposed Recombinant Sort came out to be much smaller than the extraction cost of the other linear sorting algorithms considered. The graph plotted between the number of elements and the time taken by Recombinant Sort to sort those elements depicts a linear characteristic.
All major highlighted demerits of the parent algorithms of Recombinant Sort, i.e., counting sort, radix sort and bucket sort, are surmounted by Recombinant Sort. Recombinant Sort can process strings as well as numbers, and can process both floating-point and integer numbers together. Though the dimensions of the count array grow with the number of digits in the elements to be sorted, and the working of the algorithm becomes more complex, it is worth noting that in the physical world we rarely deal with numbers containing more than 10 digits, be it marks obtained or net salaries. By testing the algorithm on all the considered types of data, it has been empirically shown that the proposed algorithm is correct, complete and terminates. Thus, Recombinant Sort is a viable option from the user's perspective. To encourage fair comparison, an open-source library named Recombinant Sort has been released on GitHub.
In the future, the proposed Recombinant Sort can be enhanced to sort integer, string and floating-point elements without rewriting the entire program for each of these specific needs. Another noteworthy addition to the proposed algorithm can be made once more advanced literature on N-dimensional spaces or hypercubes becomes available.
Supplementary Section
HASHING CYCLE ALGORITHM FOR n DIGIT NUMBER: The algorithm presented below uses two functions: first, the numeric-to-string converter, defined as $F_{\text {string}}()$, and second, the string-to-numeric converter, defined as $F_{\text {Numeric}}()$.
NOTE: In day-to-day life we usually deal with 4-5 digit numbers.
Recombinant-hashing(arr, size, $\lambda$): // the unsorted array arr
S[10][10][10]..[10][10]; // n-dimensional count array S is initialized
H_Max[10][10][10]..[10][10]; // (n-1)-dimensional traverse map H_Max is initialized
H_Min[10][222...222]; // traverse map H_Min is initialized; (n-1) 2's are there
set count_after_decimal $\leftarrow \lambda$
for i = 0 to size-1 do
$t=F_{\text {string}}\left(\operatorname{arr}[i] \times\left(10^{\text {count_after_decimal}}\right)\right)$ //converts number to string
$S\left[F_{\text {Numeric}}(t[0])\right]\left[F_{\text {Numeric}}(t[1])\right] \ldots\left[F_{\text {Numeric}}(t[n-1])\right] \leftarrow \text { increment by } 1$
$\text { if }\left(H_{-} \operatorname{Max}\left[F_{\text {Numeric}}(t[0])\right][0]<F_{\text {Numeric}}(t[1])\right) \text { then }$ // checking H_Max
$\operatorname{set} H_{-} \operatorname{Max}\left[F_{\text {Numeric}}(t[0])\right][0] \leftarrow F_{\text {Numeric}}(t[1])$
$\text { if }\left(H_{-} \operatorname{Min}\left[F_{\text {Numeric}}(t[0])\right][0]==0\right. \text { ) then }$ // if the H_Min traverse map has not been updated before
$\text { set } H_{-} \operatorname{Min}\left[F_{\text {Numeric}}(t[0])\right][1] \leftarrow F_{\text {Numeric}}(t[1])$
$\text { set } H_{-} \operatorname{Min}\left[F_{\text {Numeric}}(t[0])\right][0] \leftarrow 1$ // marking that the H_Min is updated
$\text { else if }\left(H_{-} \operatorname{Min}\left[F_{\text {Numeric}}(t[0])\right][0] \neq 0\right. \text { and }$
$\left.H_{-} \operatorname{Min}\left[F_{\text {Numeric}}(t[0])\right][1]>F_{\text {Numeric}}(t[1])\right) \text { then }$
$\text { set } H_{-} \operatorname{Min}\left[F_{\text {Numeric}}(t[0])\right][1] \leftarrow F_{\text {Numeric}}(t[1])$ // update the row minimum
$\text { if }\left(H_{-} M a x\left[F_{\text {Numeric}}(t[0])\right][222 . .221]<F_{\text {Numeric}}(t[n-1])\right) \text { then }$ // checking H_Max
set $H_{-} \operatorname{Max}\left[F_{\text {Numeric}}(t[0])\right][10][10] \ldots[10][10] \leftarrow F_{\text {Numeric}}(t[n-1])$
if $\left(H_{-} \operatorname{Min}\left[F_{\text {Numeric}}(t[n-1])\right][222 . .220]==0\right)$ then // if the H_Min traverse map has not been updated before
set $H_{-} \operatorname{Min}\left[F_{\text {Numeric}}(t[n-1])\right][222 . .221] \leftarrow F_{\text {Numeric}}(t[n-1])$
set $H_{-} \operatorname{Min}\left[F_{\text {Numeric}}(t[n-1])\right][222 . .220] \leftarrow 1$ // marking that H_Min is updated
else if $\left(H_{-} \operatorname{Min}\left[F_{\text {Numeric}}(t[n-1])\right][222 . .221] \neq 0\right.$ and
$\left.H_{-} \operatorname{Min}\left[F_{\text {Numeric}}(t[n-2])\right][222 \ldots 221]>F_{\text {Numeric}}(t[n-1])\right)$ then
EXTRACTION CYCLE ALGORITHM FOR n DIGIT NUMBER: The algorithm presented below uses a function defined as Overwrite_arr(element, position, arr), which overwrites the element 'element' at position 'position' of array 'arr'. The $F_{\text {Float}}()$ function used below converts strings to floating-point numbers, and the numeric-to-string converter $F_{\text {string}}()$ is also used.
Recombinant-extraction(S, H_Min, H_Max,arr, size)
set overwrite_pos_at $\leftarrow$ 0
for $i_{1}$ = 0 to 9 do
for $i_{2}$ = H_Min[ $i_{1}$ ][1] to H_Max[ $i_{1}$ ][0]..[0]+1 do // maps H_Min and H_Max are tallied
for $i_{3}$ = H_Min[ $i_{1}$ ][2] to H_Max[ $i_{1}$ ][0]..[1]+1 do // maps H_Min and H_Max are tallied
for $i_{n}$ = H_Min[ $i_{1}$ ][22..21] to H_Max[ $i_{1}$ ][10]..[10]+1 do // maps H_Min and H_Max are tallied
for z = 0 to S[ $i_{1}$ ][ $i_{2}$ ]..[ $i_{n}$ ] do // generates the hashed data
Overwrite_arr(extracted_element, overwrite_pos_at, arr)
overwrite_pos_at $\leftarrow$ increment by 1
end for( z )
end for( $i_{n}$ )
end for( $i_{n-1}$ )
...
end for( $i_{1}$ )
Note: The 'extracted_element' defined above in line 11 can be represented as:
$\text { extracted_element }=i_{1} i_{2} i_{3} \ldots i_{\lambda-1} i_{\lambda} \cdot i_{\lambda+1} i_{\lambda+2} i_{\lambda+3} \ldots i_{n-1} i_{n}$ (16)
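Equation (16) simply re-assembles the iterator digits into a number, with the decimal point placed after the first λ digits. A one-function Python sketch of this step (the function name is an assumption made here) is:

def assemble_element(digits, lam):
    # Rebuild the value from iterator digits i1..in, placing the decimal
    # point after the first `lam` digits, as in Eq. (16).
    s = ''.join(str(d) for d in digits)
    return float(s[:lam] + '.' + s[lam:]) if lam < len(digits) else float(s)

print(assemble_element([4, 5], 1))        # 4.5
print(assemble_element([0, 3], 1))        # 0.3
print(assemble_element([1, 2, 3, 4], 2))  # 12.34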
RESULTS OF EXECUTION OF RECOMBINANT SORT USING DIFFERENT LANGUAGES ON DIFFERENT OPERATING SYSTEMS SHOWN USING TABULAR AS WELL AS GRAPHICAL METHOD
Table 4. The time taken (in sec) by the system to execute recombinant sort written in Python on Windows OS
TFD(1,10) & cd=0
TFD(1,100) & cd=0
TFD(1,100) & cd=1
Note: The expression TFD(a,b) & cd = c stands for: Time For sorting Data ranging between a to b, and count after decimal = c.
Table 5. The time taken (in sec) by the system to execute recombinant sort written in Python on Linux OS
Table 6. The time taken (in sec) by the system to execute recombinant sort written in Java on Windows OS
Table 7. The time taken (in sec) by the system to execute recombinant sort written in Java on Mac OS
Table 8. The time taken (in sec) by the system to execute recombinant sort written in Java on Linux OS
Table 9. The time taken (in sec) by the system to execute recombinant sort written in C++ on Windows OS
Table 10. The time taken (in sec) by the system to execute recombinant sort written in C++ on Mac OS
Table 11. The time taken (in sec) by the system to execute recombinant sort written in C++ on Linux OS
Figure 5. Graphs A-H represent the linear characteristics depicted by tables 4-11 respectively
As depicted in Figure 1, the array arr (defined above) is fed to the hashing cycle for sorting, and the space S of 10×10 is initialized along with a vector H_Max of shape 10 and a space H_Min of shape 10×2. The further steps are as follows:
The first element of the array is '4.5', so:
First, it will be multiplied by $10^{1}$ (as the count after decimal is 1): 4.5 × 10 = 45
Second, the number 45 will be converted to string using FString(45) = t = '45'.
Third, we will increment the value in the memory block at row t[0] = 4 and column t[1] = 5 (at array index ( 4 , 5 ) ).
Fourth, in the traverse map H_Max, as H_Max [ FNumeric (t[0])] < FNumeric (t[1]), then H_Max[ FNumeric (t[0])] will be set as FNumeric (t[1]).
Fifth, in traverse map H_Min, as H_Min [FNumeric (t[0])][0] = = 0, then H_Min[ FNumeric (t[0])][1] will be set as FNumeric (t[1]) and H_Min[ FNumeric ( t[0] )][0] will be set to 1.
Second, the number 03 will be converted to a string using FString(03) = t = '03'.
Fourth, in the traverse map H_Max, as H_Max[ FNumeric (t[0])] < FNumeric (t[1]), then H_Max[ FNumeric (t[0])] will be set as FNumeric (t[1]).
Fifth, in traverse map H_Min, as H_Min [ FNumeric (t[0])][0] = = 0, then H_Min[ FNumeric (t[0])][1] will be set as FNumeric (t[1]) and H_Min[ FNumeric ( t[0] )][0] will be set to 1.
Second, the number 23 will be converted to string using FString (23) = t = '23'.
Third, we will increment the value in the memory block at row t[0] = 2 and column t[1] = 3 (at array index ( 2, 3 ) ).
Fourth, in the traverse map H_Max, as H_Max[ FNumeric (t[0])] >= FNumeric (t[1]), H_Max[ FNumeric (t[0])] will not be set as FNumeric (t[1]).
This step will be skipped.
Fifth, in the traverse map H_Min, as H_Min [ FNumeric (t[0])][0] != 0 and H_Min [ FNumeric (t[0])][1] > FNumeric (t[1]) then H_Min[ FNumeric (t[0])][1] will be set as FNumeric (t[1]).
Third, we will increment the value in the memory block at row t[0] = 8 and column t[1] = 0 (at array index (8, 0) ).
10. The next element of the array is '3.2', so:
The final result of this algorithm (Hashing Cycle) is given in Figure 2.
[1] Sorting. Definition of Sorting. En.wikipedia.org. https://en.wikipedia.org/wiki/Sorting, accessed on May 20, 2020.
[2] Aung, H.H. (2019). Analysis and comparative of sorting algorithms. International Journal of Trend in Scientific Research and Development (IJTSRD), 3(5): 1049-1053. https://doi.org/10.31142/ijtsrd26575
[3] Comparison Sort Definition. En.wikipedia.org. https://en.wikipedia.org/wiki/Comparison_sort, accessed on May 20, 2020.
[4] In-place Sorting Technique. 2020. En.wikipedia.org. https://en.wikipedia.org/wiki/In-place_algorithm, accessed on May 20, 2020.
[5] Verma, A.K., Kumar, P. (2013). List sort: A new approach for sorting list to reduce execution time. arXiv preprint arXiv:1310.7890.
[6] Stable Sorts Definition. En.wikipedia.org. https://en.wikipedia.org/wiki/Category:Stable_sorts, accessed on May 22, 2020.
[7] Singh, M., Garg, D. (2009). Choosing best hashing strategies and hash functions. In proceedings of 2009 IEEE International Advance Computing Conference, Patiala, pp. 50-55. https://doi.org/10.1109/IADCC.2009.4808979
[8] Mohammad, S., Kumar, A.R. (2019). SA sorting: A novel sorting technique for large-scale data. Journal of Computer Networks and Communications, 2019: 3027578. https://doi.org/10.1155/2019/3027578
[9] Bucket Sort Definition. En.wikipedia.org. https://en.wikipedia.org/wiki/Bucket_sort, accessed on May 22, 2020.
[10] Abdulla, P.A. (2009). Counting Sort.
[11] Counting Sort Definition. En.wikipedia.org. https://en.wikipedia.org/wiki/Counting_sort, accessed on May 23, 2020.
[12] Radix Sort Definition. En.wikipedia.org. https://en.wikipedia.org/wiki/Radix_sort, accessed on May 23, 2020.
[13] Abdulla, P.A. (2011). Radix Sort. Encyclopedia of Parallel Computing.
[14] Cormen, T.H., Leiserson, C.E., Rivest, R.L., Stein, C. (2009). Introduction to Algorithms. MIT press.
[15] Astrachan, O.L. (2003). Bubble sort: An archaeological algorithmic analysis. SIGCSE, 35(1). https://doi.org/10.1145/792548.611918
[16] Chand, S., Chaudhary, T., Parveen, R. (2011). Upgraded selection sort. International Journal on Computer Science and Engineering, 3(4): 1633-1637.
[17] Kowalk, W.P. (2011). Insertion Sort. In: Vöcking B. et al. (eds) Algorithms Unplugged. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15328-0_2
[18] Wang, B. (2008). Merge Sort.
[19] Hennequin, P. (1989). Combinatorial analysis of quicksort algorithm. RAIRO-Theoretical Informatics and Applications, 23(3): 317-333. https://doi.org/10.1051/ita/1989230303171
[20] Schaffer, R., Sedgewick, R. (1993). The analysis of heapsort. Journal of Algorithms, 15(1): 76-100. https://doi.org/10.1006/jagm.1993.1031
[21] Auger, N., Jugé, V., Nicaud, C., Pivoteau, C. Analysis of TimSort Algorithm. http://igm.univ-mlv.fr/~juge/slides/poster/ligm-2019.pdf, accessed on Jun. 2, 2020.
[22] Sedgewick, R. (1996). Analysis of Shellsort and related algorithms. In: Diaz J., Serna M. (eds) Algorithms - ESA '96. ESA 1996. Lecture Notes in Computer Science, vol 1136. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-61680-2_42
Research article | Open | Open Peer Review | Published: 10 November 2016
Prevalence and socio-economic burden of heart failure in an aging society of South Korea
Hankil Lee1,
Sung-Hee Oh1,
Hyeonseok Cho1,
Hyun-Jai Cho2 &
Hye-Young Kang1
Heart failure (HF) is one of the leading causes of morbidity and mortality in South Korea. With the rapidly aging population in the country, the prevalence of HF and its associated costs are expected to rise continuously. This study was carried out to estimate the prevalence and economic burden of HF in order to understand its impact on our society.
A prevalence-based, cost-of-illness study was conducted using the 2014 Health Insurance Review and Assessment Service-National Patients Sample (HIRA-NPS) data. Adult HF patients were defined as those aged ≥19 years who had at least one insurance claim record with a primary or secondary diagnosis of HF (ICD-10 codes of I11.0, I13.0, I13.2, and I50.x). The costs consist of direct costs (i.e., medical and non-medical costs) and indirect costs (i.e., productivity loss cost due to morbidity and premature death). Subgroup analyses were conducted by age group, history of HF hospitalization, and type of universal health security program enrolled in.
A total of 475,019 adults were identified to have HF in 2014. The estimated prevalence rate of HF was 12.4 persons per 1,000 adults. According to the base-case and extended definitions of cases, the annual economic burden of HF from a societal perspective ranges from USD 1,414.0 to 1,560.5 for individual patients, and from USD 752.8 million to 1,085.6 million for the country. A high percentage (68.5 %) of this socioeconomic burden consists of medical costs, followed by caregiver's costs (13.2 %), productivity loss costs due to premature death (10.8 %) and morbidity (4.2 %), and transportation costs (3.4 %). The HF patients with prior hospitalization due to HF annually spent 9.7 times more for National-Health-Insurance-covered medical costs compared to HF patients who were not previously hospitalized.
In the present study, HF patients who were older and had a history of prior hospitalization for HF as well as an indigent status were shown to be at high risk of spending more on healthcare to treat their HF. An effective disease management protocol should be employed to target this patient group.
According to the American College of Cardiology Foundation and American Heart Association [1], heart failure (HF) is defined as "a complex clinical syndrome that results from any structural or functional impairment of the ventricular filling or ejection of blood". More than two-thirds of HF patients have underlying diseases such as ischemic heart disease, chronic obstructive pulmonary disease, hypertensive heart disease, and rheumatic heart disease [2]. The early symptoms of HF include chronic fatigue, indigestion, insomnia, and headache. Based on the progression of the disease, swelling, ascites, and dyspnea due to lung congestion are commonly developed.
HF is a progressive disease with repeated recurrences and improvements, resulting in frequent hospitalization [3]. It has been observed in Minnesota, U.S. that about 16.5 % of HF patients experience at least one hospitalization associated with HF during their lifetime, and that 83.1 % are hospitalized for all types of reasons [4]. Roughly one in four HF patients among the Medicaid beneficiaries in the U.S. is readmitted within a month after discharge with HF [5]. In particular, among those 65 years or older, HF is the most common reason for hospitalization [6]. Whellan et al. reported that 66 % of the elderly in their study who were hospitalized due to HF were readmitted for HF in the following year [7]. The numerous symptoms of and repeated hospitalizations for HF negatively affect the patient's quality of life and increase the patient's economic burden [8]. In Sweden, the annual economic burden attributed to HF was SEK 2.0–2.6 billion, accounting for 2 % of Sweden's public healthcare budget [9]. In England, the direct medical costs for HF treatment accounted for 1.9 % of the entire National Health Service budget [10]. Since the onset of HF is strongly correlated with aging [2], the prevalence of HF is expected to grow worldwide with the aging population trend.
In South Korea, the number of patients with HF has been increasing in recent years, with an average annual increase rate of 4.5 % from 2009 to 2013. In those aged 80 years or above, the reported annual increase rate is 9 %, which is about twice that of the adult population [11]. In addition, HF is one of the leading causes of death in South Korea [12]. The aging population rate in Korea is on a very steep rise, and the percentage of the population aged 60 years and above is predicted to increase from 13.7 % in 2015 to 28.6 % by 2050 [13]. With such a rapidly aging population in South Korea, it is expected that the prevalence of HF and its associated costs will continue to grow. One useful approach to supporting the rationale of allocating healthcare resources to a specific condition would be to provide information on the extent to which the patients themselves and our society suffer from the burden of that condition. Thus, in the present study, the prevalence of HF was estimated, and the impact of HF on the South Korean society was determined by estimating the economic burden of HF from the perspectives of the National Health Insurance (NHI) and the society. Understanding the burden of HF and identifying the patient subgroups with a higher economic burden will aid in the prioritization of healthcare resource allocation.
Study design and data source
The economic burden of HF was estimated based on a "prevalence-based approach," which measured the costs associated with treating HF among both new and pre-existing cases of HF patients in a year [14]. A macro-costing method was used to investigate the costs of HF using the 2014 Health Insurance Review and Assessment Service-National Patients Sample (HIRA-NPS) claims data (HIRA-NPS-2014-0067). These secondary data include the claims records for the insurance-covered costs for the inpatient, outpatient, and emergency department services, and the prescription drugs of the 3 % random sample (about 1,400,000 persons) of the entire South Korean population, which consist of the NHI and Medical Aid (MA) program enrollees. This set of data was generated by using a stratified sampling with a stratification into two sets of subgroups, namely, sex (2 strata) and age (16 strata), having a total of 32 strata [15]. In South Korea, there are two tiers for the universal health security system. The NHI program is a wage-based, contributory insurance program covering about 96 % of the population, while the MA program is a government-subsidized public assistance program for poor and medically indigent individuals [16].
Adult patients with HF were defined as those aged ≥19 years who had at least one NHI or MA claim record of outpatient or inpatient services with a primary or secondary diagnosis of HF from the HIRA-NPS claims database in 2014. Due to the fact that HF in patients below 19 years old is largely attributed to congenital defects, different pathological etiologies, and lower prevalence rate than in adults, HF in patients below the age of 19 was excluded from this study [17]. Based on the literature review [7, 18–23], the diagnosis codes for HF were identified as I11.0 (hypertensive heart disease with [congestive] heart failure), I13.0 (hypertensive heart and renal disease with [congestive] heart failure), I13.2 (hypertensive heart and renal disease with both [congestive] heart failure and renal failure), I50.x (heart failure) as listed in the International Statistical Classification of Disease and Related Health Problems 10th Revision (ICD-10 codes) for a base-case group. In an effort to reflect the patient characteristics and clinical practice in South Korea, a panel consisting of three clinicians specializing in cardiology and working in tertiary-care hospitals in South Korea was consulted. They were asked about whether the diagnosis codes of HF identified from the literature are valid for Korean patients with HF. The clinician panel suggested additional diagnosis codes (I25.5 [ischemic cardiomyopathy], I42.0 [dilated cardiomyopathy], and I42.5 [other restrictive cardiomyopathy]) to comprehensively capture patients with HF from the claims records. Thus, to minimize the over- or under-specification of patients with HF, the patient group was defined in three different ways: base-case group, narrow-definition group, and extended-definition group. The patients in the narrow-definition group have the same diagnosis codes of HF as patients in the base case group, with the former having HF as the primary diagnosis. The patients in the extended-definition group included base-case patients and those who had at least one claim record of outpatient or inpatient services with a primary or secondary diagnosis of I25.5, I42.0, and I42.5 ICD-10 codes. For each patient group, healthcare utilization associated with HF was defined as claims records for outpatient, inpatient, and emergency department services, and prescriptions with the same diagnosis codes used to define the patient group.
Estimating the economic burden of HF
The economic burden of HF was estimated both from the perspective of a payer and of the society. The costs of HF from a payer's perspective consisted only of the NHI- or MA-covered medical costs (hereafter referred to as "NHI-covered medical costs"). Societal costs included direct medical and non-medical costs, and indirect costs defined as cost of productivity loss due to morbidity and premature death. Medical costs were divided into the NHI-covered and non-NHI-covered costs. The NHI-covered costs were estimated from the HIRA-NPS claims data. Using data from the Medical Expenditure Survey provided by NHI Service, the non-NHI-covered medical costs were estimated using the ratio of the medical costs for NHI-covered services to non-NHI-covered services for patients with heart disease. [11]. Among the non-medical direct costs, the annual per-capita transportation costs were calculated as the product of the average annual number of outpatient visits and hospital admissions due to HF per patient and the average round-trip transportation costs to visit healthcare institutions, which were obtained from the 2006 Korean National Health and Nutrition Examination Survey (KNHANES). With the assumption that at least one family member takes care of the patient during the latter's hospitalization or a helper is hired to do such, the caregiver's cost was calculated as the product of the average annual inpatient days per patient due to HF and the average market price for the daily charge of a helper.
The indirect costs consisted of the productivity loss costs due to morbidity and mortality and were assigned for ages under 65 years old, when people are assumed to be in the labor market. The productivity loss cost due to morbidity means the opportunity costs of time lost because of hospitalization or outpatient visits, and was calculated as shown in Eq. (1) based on the human capital approach [14, 24].
$$ \mathrm{Productivity}\ \mathrm{loss}\ \mathrm{costs}\ \mathrm{due}\ \mathrm{t}\mathrm{o}\ \mathrm{morbidity} = {\displaystyle \sum_i}{\displaystyle \sum_j}\left\{\left({I}_{ij} \times {D}_{ij} \times {P}_{ij}\right)+\left({O}_{ij}\times V \times {H}_{ij}\times {P}_{ij}\right)\right\} $$
i = age
j = gender
Iij = average annual inpatient days with a diagnosis of heart failure (HF) per patient with HF by age and gender
Dij = average daily income by age and gender
Pij = employment rate by age and gender
Oij = average annual number of outpatient visits with a diagnosis of HF per patient with HF by age and gender
V = average hours per outpatient visit
Hij = average hourly wage by age and gender
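For illustration only, substituting hypothetical values for a single age and gender stratum (these figures are not drawn from the study data) shows how Eq. (1) is applied: with I = 4 inpatient days, D = USD 100 of average daily income, O = 5 outpatient visits, V = 3 hours per visit, H = USD 12 of average hourly wage, and an employment rate P = 0.7,
$$ (4 \times 100 \times 0.7)+(5 \times 3 \times 12 \times 0.7)=280+126=\mathrm{USD}\ 406 $$
The stratum-level amounts obtained in this way are then summed over all ages i and genders j.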
The productivity loss cost due to mortality was measured based on the expected future income foregone as a result of premature death caused by HF, and was calculated as shown in Eq. (2). The age- and gender-specific number of deaths attributable to HF (ICD-10 codes of I11.0 and I50.x as a primary diagnosis) was obtained from the Annual Statistical Report of the Cause of Death by the Korean National Statistical Office (KNSO). The average annual income was derived from the age- and gender-specific average monthly income from KNSO [14].
$$ \mathrm{Productivity}\ \mathrm{loss}\ \mathrm{costs}\ \mathrm{due}\ \mathrm{t}\mathrm{o}\ \mathrm{mortality} = {\displaystyle {\sum}_i{\displaystyle {\sum}_j{\displaystyle {\sum}_{k=1}^n\left({N}_{ij}\times \frac{Y_{ij\left(t+k\right)}\times {P}_{ij\left(t+k\right)}}{{\left(1+r\right)}^k}\right)}}} $$
k = 1, 2, …, n (difference between the life expectancy and age at the time of death)
t = age at the time of death
r = annual discount rate
Nij = number of deaths caused by HF by age and gender
Yij(t+k) = average annual income at the time of (t + k) by age and gender
Pij(t+k) = employment rate at the time of (t + k) by age and gender
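A hypothetical example (values not taken from the study) likewise illustrates the discounting in Eq. (2). For a stratum with N = 10 deaths, two years between the age at death and the life expectancy, an average annual income of USD 30,000, an employment rate of 0.7, and a discount rate r = 0.05,
$$ 10\times\left(\frac{30{,}000\times 0.7}{1.05}+\frac{30{,}000\times 0.7}{1.05^{2}}\right)\approx 10\times(20{,}000+19{,}048)=\mathrm{USD}\ 390{,}480 $$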
The estimated economic burden is presented as the annual cost per patient with HF and the total national costs of patients with HF. As the productivity loss cost due to premature death was estimated based on the total number of HF deaths in the country instead of applying the individual risk of death for each patient, it was incorporated only in estimating the total national costs. All costs were expressed in 2016 monetary value.
Sensitivity analyses were performed for the different approaches to define the patients with HF. The economic burden of HF estimated from the base-case patient group was compared with that estimated from the narrow- and extended-definition patient groups. Sensitivity analysis was also conducted for the mortality rate of HF using different data sources. The case definition of HF death used by the Annual Statistical Report of the Cause of Death by KNSO was based on the ICD-10 codes of I11.0 (hypertensive heart disease with [congestive] heart failure) and I50.x (heart failure) as a primary diagnosis, which under-identify HF cases compared to the case definition of the base-case patient group (ICD-10 codes of I11.0, I13.0, I13.2, and I50.x as a primary or secondary diagnosis) in this study. For the sensitivity analysis, the results from Roger's cohort study of 4,537 HF patients older than 60 years were used [25, 26]. The one-year mortality rate (male, 20 %; female, 15 %) and the five-year mortality rate (male, 50 %; female, 46 %) were extracted from the study for the sensitivity analysis. The mortality estimates in Roger's study were calculated through a review of the medical charts by a panel of physicians. Compared to the claims data, the mortality estimates in Roger's study are more relevant for reflecting the precise and real clinical mortality rate of HF. For the mortality rate of the HF patients under 60 years, the same values from the Annual Report on the Cause of Death Statistics in South Korea were used in the base-case analysis.
Subgroup analysis
To determine the impact of the disease severity of HF on the patient's economic burden, the estimated costs across the patient subgroups were compared with the different disease severity levels. The medical costs of the HF patients younger than 65 and those older than or equal to 65 were compared. In earlier studies, it was observed that the per-capita utilization and cost of medical services to treat the same condition was significantly higher among those enrolled in the MA program in South Korea than among those enrolled in the NHI program [27, 28]. Therefore, the type of national health security program was considered a risk factor for HF patients in terms of the disease severity level and the extent of healthcare utilization. Finally, hospital admission would be a signal for a severe condition. Thus, the HF patients were divided into two groups according to experience of hospitalization, and the medical costs of the two groups were compared.
Prevalence and healthcare utilization characteristics of the HF patients
Table 1 presents the epidemiologic and healthcare utilization characteristics of patients with HF in Korea. According to the base-case analysis, a total of 475,019 adults (≥19 years old) in South Korea in 2014 were identified to have HF. The estimated prevalence rate of HF was 12.4 persons per 1,000 adults. The prevalence of HF was 9.2 times higher (47.8 vs. 5.2 per 1,000 population) in the elderly population (≥65 years old) than in the non-elderly population (19–64 years old). About two-thirds of the adult HF patients (65.1 %) in the country were 65 years or older. The highest proportion of HF patients was observed among those in the 70s (32.7 %), followed by those in the 60s (21.7 %), 80s or above (20.8 %), and 50s (16.0 %). Patients under 50 years old account for only 9.1 % of the adult HF patients. Overall, more women than men had HF throughout the age groups (57.7 % vs. 42.3 %). The prevalence of HF across genders differs, however, depending on the age. Up to the 60s, men have a higher prevalence of HF than women, while women have a higher prevalence of HF than men starting from the 70s (Fig. 1).
Table 1 Characteristics of patients with heart failure in South Korea in 2014
Gender- and age-specific prevalence of heart failure per 1,000 population
On average, the patients with HF had 5.01 outpatient visits, 0.24 hospital admissions, and 3.83 inpatient days for HF treatment in a year. About 12 out of 100 HF patients had hospital admissions due to HF, with 15.76 inpatient days per admission. As age increased, the probability of hospitalization increased: 14.5 % of the elderly patients aged 65 or older had hospitalization due to HF while only 7.1 % of the non-elderly patients had hospitalization due to HF. For the patients with at least one episode of HF-associated hospitalization, the average number of annual hospitalizations due to HF, including the initial hospitalization, was 2.01, and the total number of inpatient days was 31.70 days.
Socioeconomic burden of HF patients
The average annual medical expenditure for NHI-covered services spent by individual patients to treat HF was USD 868.2 in 2014 (Table 2). From the societal perspective, the average spending of each patient in 2014 was USD 1,414.0. The total national NHI-covered medical expenditure for the treatment of HF across South Korea amounted to USD 412.4 million. While only 12 % of the patients with HF were hospitalized (Table 1), the medical expenses for inpatient services accounted for 53.4 % of the NHI-covered medical expenditure. From the societal perspective, the economic burden of HF in the country was estimated to be USD 752.8 million. Medical costs accounted for the biggest portion of the national burden (68.5 % = 54.8 + 13.7 %), followed by the caregiver's cost (13.2 %), the productivity loss costs due to premature death (10.8 %) and morbidity (4.2 %), and transportation costs (3.4 %).
Table 2 Economic burden of heart failure in South Korea in 2014
Sensitivity analyses were performed with varying approaches to define patients with HF (Table 3). When HF patients were defined in the most conservative way (using the narrow definition of patients), the total number of patients was reduced by 46.1 %, from 475,019 to 256,241 patients. Also, the total national cost was reduced by 58.7 %, from USD 752.8 to 311.1 million. On the other hand, the impact of using the extended definition of HF patients was marginal, increasing the total number of patients by only 1.1 %, from 475,019 to 498,783 patients, and increasing the total national cost by only 9.1 %, from USD 752.8 to 822.0 million. The impact of using different data sources for the mortality rate of HF was also examined through sensitivity analysis (Table 3). As both the one- and five-year mortality rates were higher than the mortality rate reported by the Annual Statistical Report on the Cause of Death, the total national costs were increased to USD 878.4 and 1,085.6 million, respectively.
Table 3 Sensitivity analysis results by varying the definition of patient group and mortality rate
The estimated medical costs substantially varied across the subgroups that were examined in this study (Table 4). The elderly patients with HF aged 65 or above spent about 1.6 times more for NHI-covered medical services than the non-elderly patients aged 19–64 years old. The annual per-capita NHI-covered medical cost for treating HF was 1.6 times higher for the patients enrolled in the MA program than for those enrolled in the NHI program. Finally, compared to the HF patients who had not experienced hospitalization associated with HF, those with at least one episode of hospitalization due to HF showed 9.7-fold higher NHI-covered medical costs for treating HF.
Table 4 Economic burden of heart failure by subgroup
In this study, the prevalence and economic burden of the adult HF patients in South Korea were estimated using the nationally representative HIRA-NPS data, which cover the insurance claims records of the 3 % random sample of the entire population in the country. Based on the most recent HIRA-NPS data (2014), the analysis in this study revealed that the estimated prevalence rate of HF was 1.24 %. This figure is similar to the prevalence rate in other countries, such as USA, UK, Italy, and Denmark, which was reported at approximately 1–2 % [2, 9]. As observed in other countries [29, 30], it was confirmed that the prevalence of HF also increases with age in South Korea. The elderly aged 65 or older showed a 9.2-fold higher prevalence of HF than the non-elderly population aged 19–64.
Due to the fact that HF is usually accompanied by many underlying diseases, it is often difficult to identify the correct cases of HF based on the diagnosis codes from the insurance claims data. To improve the validity of the case definition, different approaches to defining HF cases were carried out in this study: the base-case, narrow-definition, and extended-definition approaches. While the estimated number of HF patients identified according to the extended definition of HF increased only by 1.1 % from the base-case patients, it decreased by 46.1 % when the narrow definition of HF patient was used. The difference in the method of identifying HF patients between the narrow and base-case definitions comes from whether only the primary diagnosis was used or both the primary and secondary diagnoses were used to define the cases using the same ICD-10 codes (I11.0, I13.0, I13.2, and I50.x). As a result of the use only of the primary diagnosis in the narrow-definition group, the total number of HF patients in the country was reduced to about half of the base-case patient groups, which used both the primary and secondary diagnoses to define HF patients. To determine which approach is more valid for capturing HF cases, we examined the primary diagnoses of those identified as HF patients based on their secondary diagnosis. About 75 % of the primary diagnoses of the patients were conditions related to HF, such as hypertension (33.6 %), angina pectoris (10.3 %), non-insulin-dependent diabetes mellitus (9.4 %), atrial fibrillation and flutter (6.6 %), chronic kidney disease (6.2 %), and chronic ischemic heart disease (2.7 %) (Table 5). As a result, it appears that the narrow definition would cause under-specification for HF cases. Thus, based on the base case and the extended definition of cases, it is reported that the annual economic burden of HF from the societal perspective ranges from USD 1,414.0 to 1,560.5 for individual patients, and from USD 752.8 to 1,085.6 million for the entire country.
Table 5 Distribution of primary diagnoses of base-case patients identified as having heart failure based on the secondary diagnosis
As mentioned earlier (in the Background section), patients with HF have a high risk of hospitalization. This is reflected in the cost estimation results of this study. The inpatient services accounted for 53.4 % of the NHI-covered medical costs for the individual patients with HF (Table 2). Compared to other chronic diseases that are common among the elderly such as hypertension (18.3 %), diabetes (36.4 %), rheumatic arthritis (18.9 %), and respiratory disorder (49.5 %) including asthma, COPD, and emphysema, HF (53.4 %) incurred higher spending for inpatient services in 2014 [24]. Also, patients who had experienced hospitalization with HF incurred about ten times higher NHI-covered medical costs (Table 4). These findings suggest that effective intervention to prevent hospital admission would be a critical component of the disease management strategy for patients with HF to minimize the economic and clinical burden of HF.
Higher costs occurred among the MA patients than among the NHI patients. This was explained by the differences in hospitalization rate and in the proportion of the elderly between the two groups. The proportion of HF patients experiencing hospitalization is approximately 1.7 times higher among the MA patients (19.2 %) than among the NHI patients (11.3 %). In addition, a higher proportion of the elderly aged 70 years and above is observed among the MA patients (64.2 %) than among the NHI patients (52.4 %).
As in most other illnesses, the elderly patients seemed to require more healthcare resources to treat HF compared to the younger patients. Those aged 65 or above in this study spent about 1.6 times more NHI-covered medical costs than those aged 19–64. The higher medical spending among the elderly group may be partly attributable to the higher risk of hospitalization that comes with aging. The hospitalization rate of those aged 65 or above among the study subjects was about twice that of those aged 19–64 (14.8 % vs. 7.1 %).
With the NHI claims data recently made available to the general public in South Korea for research purposes, there are ongoing researches on the cost of the illness, but most of the researches on HF are focused on primary diagnoses or with a narrowly defined scope of HF, posing restrictions on the accurate estimation of the cost of HF. In contrast, some studies have been conducted in hospital settings. According to a previous study conducted using electronic medical chart review for HF patients from tertiary-care hospitals, the average cost per hospitalization with HF was estimated to be approximately USD 7,000 [31], which is 1.7 times higher than what was estimated in this study (USD 4,140). The difference in the estimated hospitalization cost is attributable to the fact that the previous study limited the study subjects to the patients with acute HF in the tertiary hospitals.
This study has the potential risk of underestimating the cost of HF for the following reasons. First, in calculating the transportation costs incurred for an outpatient visit or for hospital admission, only the patient's transportation costs were included. It was possible, however, that at least one caregiver accompanied a patient when he/she visited the hospital on an outpatient basis or was admitted to a hospital because about 64 % of HF patients are aged 65 years or above who require assistance from others. Second, based on the human capital approach, the productivity loss costs due to morbidity and premature death were accounted for only for those under 65 years old, with the assumption that those above 65 years old are no longer productive and no longer contribute to the society. This leads to underestimation of costs as well as the ethical problem of not ascribing any value to the latter years of life. Third, to overcome the tendency to underestimate premature death costs caused by under-specified HF death from the national death statistics, sensitivity analysis was conducted using the one- and five-year mortality rates of HF patients based on existing relevant literature. However, the use of the literature value was limited to only those who were ≥60 years old. Thus, there is still a possibility of underestimating premature death costs for cases of patients under 60 years old.
This study also has a potential to overestimate the economic burden of HF. In calculating the productivity loss costs due to morbidity and mortality, a 100 % employment rate was assumed for those under 65 years old. This does not imply that all the HF patients are employed. The rationale for this assumption is that the time of the HF patients who are not part of the labor market should not be undervalued compared to that of patients who are part of the labor market. If it is insisted that productivity loss can occur only for those who are part of the labor market, the proposed approach may overestimate the costs.
This study presented the extent of the economic burden attributable to heart failure (HF) in South Korean society. The prevalence of HF in 2014 was 12.4 per 1,000 adults, and the annual socioeconomic cost of HF was estimated at USD 752.8 million. The onset of HF is positively correlated with aging. Owing to extended life expectancy, the prevalence of HF, accumulated from long-term survivors, is expected to grow continuously; as such, HF has drawn special attention in today's aging society. In this study, HF patients who were older, had a history of prior hospitalization with HF, or had indigent status (i.e., were enrolled in the Medical Aid [MA] program) were shown to be at high risk of spending more on healthcare to treat their HF. An effective disease management protocol should be employed to target such patient groups.
HIRA-NPS:
Health Insurance Review and Assessment Service-National Patient Sample
ICD:
International Statistical Classification of Diseases
KNHANES:
Korean National Health and Nutrition Examination Survey
KNSO:
Korean National Statistical Office
NHI:
National Health Insurance
Yancy CW, Jessup M, Bozkurt B, Butler J, Casey DE, Drazner MH, et al. 2013 ACCF/AHA guideline for the management of heart failure: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines. J Am Coll Cardiol. 2013;62:e147–239.
Ziaeian B, Fonarow GC. Epidemiology and aetiology of heart failure. Nat Rev Cardiol. 2016;13:368–78.
Bui AL, Horwich TB, Fonarow GC. Epidemiology and risk profile of heart failure. Nat Rev Cardiol. 2011;8:30–41.
Dunlay SM, Redfield MM, Weston SA, Therneau TM, Long KH, Shah ND, et al. Hospitalizations after heart failure diagnosis a community perspective. J Am Coll Cardiol. 2009;54:1695–702.
Chen J, Ross JS, Carlson MDA, Lin Z, Normand SLT, Bernheim SM, et al. Skilled nursing facility referral and hospital readmission rates after heart failure or myocardial infarction. Am J Med. 2012;125:100.e101–9.
Roger VL, Go AS, Lloyd-Jones DM, Adams RJ, Berry JD, Brown TM, et al. Heart disease and stroke statistics--2011 update: a report from the American Heart Association. Circulation. 2011;123:e18–e209.
Whellan DJ, Greiner MA, Schulman KA, Curtis LH. Costs of inpatient care among Medicare beneficiaries with heart failure, 2001 to 2004. Circ Cardiovasc Qual Outcomes. 2010;3:33–40.
Yusuf S, Rangarajan S, Teo K, Islam S, Li W, Liu L, et al. Cardiovascular risk and events in 17 low-, middle-, and high-income countries. N Engl J Med. 2014;371:818–27.
Ryden-Bergsten T, Andersson F. The health care costs of heart failure in Sweden. J Intern Med. 1999;246:275–84.
Stewart S, Jenkins A, Buchan S, McGuire A, Capewell S, McMurray JJ. The current cost of heart failure to the National Health Service in the UK. Eur J Heart Fail. 2002;4:361–71.
Korea National Health Insurance Service. Survey on medical expenditure of patients insured by National Health Insurance. 2012. http://stat.kosis.kr/gen_etl/fileStat/fileStatView.jsp?org_id=350&tbl_id=DT_35005_FILE2012&tab_yn=N&conn_path=E1. Accessed 2 Dec 2015.
Korean National Statistical Office. The Annual Statistical Report of the Cause of Death 2014. 2014. http://www.index.go.kr/potal/main/EachDtlPageDetail.do?idx_cd=1012. Accessed 1 Mar 2016.
United Nations. World Population Prospects 2015 Revision. https://esa.un.org/unpd/wpp/publications/files/key_findings_wpp_2015.pdf. Accessed 28 Sep 2016.
Kim Y, Shin S, Park J, Jung Y, Kim J, Lee TJ, et al. Costing methods in Healthcare. Seoul: National Evidence-based Healthcare Collaborating Agency. 2013. http://neca.re.kr/center/researcher/book_view.jsp?boardNo=CA&seq=6095&q=626f6172644e6f3d4341. Accessed 30 Mar 2016.
Kim L, Kim JA, Kim S. A guide for the utilization of Health Insurance Review and Assessment Service National Patient Samples. Epidemiol Health. 2014;36:e2014008.
Song YJ. The South Korean health care system. Jpn Med Assoc J. 2009;52:206–9.
Nandi D, Rossano JW. Epidemiology and cost of heart failure in children. Cardiol Young. 2015;25:1460–8.
Ahluwalia SC, Gross CP, Chaudhry SI, Leo-Summers L, Van Ness PH, Fried TR. Change in comorbidity prevalence with advancing age among persons with heart failure. J Gen Intern Med. 2011;26:1145–1151.
Joynt KE, Orav EJ, Jha AK. The association between hospital volume and processes, outcomes, and costs of care for congestive heart failure. Ann Intern Med. 2011;154:94–102.
Soran OZ, Feldman AM, Pina IL, Lamas GA, Kelsey SF, Selzer F, et al. Cost of medical services in older patients with heart failure: those receiving enhanced monitoring using a computer-based telephonic monitoring system compared with those in usual care: the Heart Failure Home Care trial. J Card Fail. 2010;16:859–66.
Stafford RS, Davidson SM, Davidson H, Miracle-McMahill H, Crawford SL, Blumenthal D. Chronic disease medication use in managed care and indemnity insurance plans. Health Serv Res. 2003;38:595–612.
Stewart S, MacIntyre K, Hole DJ, Capewell S, McMurray JJ. More 'malignant' than cancer? Five-year survival following a first admission for heart failure. Eur J Heart Fail. 2001;3:315–22.
Lee DS, Donovan L, Austin PC, Gong Y, Liu PP, Rouleau JL, et al. Comparison of coding of heart failure and comorbidities in administrative and clinical data for use in outcomes research. Med Care. 2005;43:182–8.
Fautrel B, Clarke AE, Guillemin F, Adam V, St-Pierre Y, Panaritis T, et al. Costs of rheumatoid arthritis: new estimates from the human capital method and comparison to the willingness-to-pay method. Med Decis Making. 2007;27:138–50.
Roger VL, Weston SA, Redfield MM, Hellermann-Homan JP, Killian J, Yawn BP, et al. Trends in Heart Failure Incidence and Survival in a Community-Based Population. JAMA. 2004;292:344–50.
Roger VL. Epidemiology of Heart Failure. Circ Res. 2013;113:646–59.
Suh HS, Kang HY, Kim JK, Shin EC. Effect of health insurance type on health care utilization in patients with hypertension: a National Health Insurance database study in Korea. BMC Health Serv Res. 2014;14:570–82.
National Health Insurance Corporation/Health Insurance Review & Assessment Service. National Health Insurance Statistical Yearbook. 2014. http://opendata.hira.or.kr/op/opc/selectStcPblc.do?sno=10700&odPblcTpCd=002&searchCnd=&searchWrd=&pageIndex=1. Accessed 20 Mar 2016.
Centers for disease control and prevention. Heart Failure Fact Sheet. 2015. http://www.cdc.gov/dhdsp/data_statistics/fact_sheets/fs_heart_failure.htm. Accessed 12 Dec 2016.
Guha K, McDonagh T. Heart Failure Epidemiology: European Perspective. Curr Cardiol Rev. 2013;9:123–7.
Lee SE, Cho HJ, Lee HY, Yang HM, Choi JO, Jeon ES, et al. A multicentre cohort study of acute heart failure syndromes in Korea: rationale, design, and interim observations of the Korean Acute Heart Failure (KorAHF) registry. Eur J Heart Fail. 2014;16:700–8.
The authors would like to express their special appreciation to Jae-Joong Kim, M.D., Ph.D. and Byung-Su Yoo, M.D., Ph.D. Their clinical advice helped the authors define the study population and provided them with inspiration for the sensitivity analysis.
This study was funded by Novartis Korea Ltd.
Data can be shared upon request.
HL performed statistical analysis on the HIRA-NPS data, interpreted the study results, and prepared the manuscript. SHO estimated the costs using published data sources, and calculated the indirect costs. HC assisted in the statistical analysis and sought published data sources that were used for the cost estimation. HJC helped design the subgroup analysis and identified the clinical implications of the study findings. HYK developed the study design, interpreted the study results, and prepared the manuscript. All the authors read and approved the final manuscript.
HL is a Ph.D. candidate in the College of Pharmacy, Yonsei University. She received her Master of Science degree in Pharmacy from the Ewha Womans University.
SHO is a Ph.D. candidate in the College of Pharmacy, Yonsei University. She received her Master of Public Health degree in Health Economics from the School of Public Health at Seoul National University.
HC is a Ph.D. student in the College of Pharmacy, Yonsei University. He received his Bachelor degree in Pharmacy from the Yonsei University.
HJC is an associate professor in the Division of Cardiology, Department of Internal Medicine at Seoul National University Hospital. He received his M.D. degree and Ph.D. degree in Cardiovascular Medicine from Seoul National University College of Medicine.
Kang HY is a professor in the College of Pharmacy, Yonsei University. She received her doctoral degree in Health Policy and Administration from the University of North Carolina at Chapel Hill.
This study was approved by the Yonsei University Institutional Review Board (IRB No. 201601-SB-598-02).
College of Pharmacy, Yonsei Institute of Pharmaceutical Sciences, Yonsei University, Incheon, South Korea
Hankil Lee, Sung-Hee Oh, Hyeonseok Cho & Hye-Young Kang
Department of Internal Medicine, Division of Cardiology, Seoul National University Hospital, Seoul, South Korea
Hyun-Jai Cho
Correspondence to Hye-Young Kang.
Economic burden
Cost of heart failure
Nayak, A. et al. Controlling the Synaptic Plasticity of a Cu2S Gap-Type Atomic Switch. Advanced Functional Materials 22, 3606–3613 (2012).
Gimzewski, J. K., Modesti, S. & Schlittler, R. R. Cooperative self-assembly of Au atoms and C60 on Au (110) surfaces. Physical review letters 72, 1036 (1994).
Haak, H. W., Sawatzky, G. A., Ungier, L., Gimzewski, J. K. & Thomas, T. D. Core-level electron–electron coincidence spectroscopy. Review of scientific instruments 55, 696–711 (1984).
Sharma, S. et al. Correlative nanomechanical profiling with super-resolution F-actin imaging reveals novel insights into mechanisms of cisplatin resistance in ovarian cancer cells. Nanomedicine: Nanotechnology, Biology and Medicine 8, 757–766 (2012).
Sharma, S., Zhu, H., Grintsevich, E. E., Reisler, E. & Gimzewski, J. K. Correlative nanoscale imaging of actin filaments and their complexes. Nanoscale 5, 5692–5702 (2013).
Feynman, R. P. et al. CSIS-181 Section 1346. (Submitted).
Schlittler, R. R. & Gimzewski, J. K. Design and performance analysis of a three-dimensional sample translation device used in ultrahigh vacuum scanned probe microscopy. Journal of Vacuum Science & Technology B 14, 827–831 (1996).
Loppacher, C. et al. Direct determination of the energy required to operate a single molecule switch. Physical review letters 90, 066107 (2003).
Dumas, P. et al. Direct observation of individual nanometer-sized light-emitting structures on porous silicon surfaces. EPL (Europhysics Letters) 23, 197 (1993).
Pelling, A. E. et al. Distinct contributions of microtubule subtypes to cell membrane shape and stability. Nanomedicine: Nanotechnology, Biology and Medicine 3, 43–52 (2007).
Hu, W. et al. DNA builds and strengthens the extracellular matrix in Myxococcus xanthus biofilms by interacting with exopolysaccharides. PloS one 7, e51905 (2012).
Pelling, A. E., Wilkinson, P. R., Stringer, R. & Gimzewski, J. K. Dynamic mechanical oscillations during metamorphosis of the monarch butterfly. Journal of The Royal Society Interface 6, 29–37 (2009).
Battiston, F. et al. E. MEYER, M. GUGGISBERG, CH. LOPPACHER. Impact of Electron and Scanning Probe Microscopy on Materials Research 339 (1999).
Gimzewski, J. K., Brewer, R. J., VepYek, S. & Stuessi, H. THE EFFECT OF A HYDROGEN PLASMA ON THE HYDRIDING OF TITANIUM: KINETICS AND EQUILIBRIUM CONCENTRATION. (Submitted).
Brewer, R. J., Gimzewski, J. K., Veprek, S. & Stuessi, H. Effect of surface contamination and pretreatment on the hydrogen diffusion into and out of titanium under plasma conditions. Journal of Nuclear Materials 103, 465–469 (1981).
Berndt, R., Gimzewski, J. K. & Johansson, P. Electromagnetic interactions of metallic objects in nanometer proximity. Physical review letters 71, 3493 (1993).
Joachim, C. & Gimzewski, J. K. An electromechanical amplifier using a single molecule. Chemical Physics Letters 265, 353–357 (1997).
Fornaro, P. et al. AN ELECTRONIC NOSE BASED ON A MICROMECHANICAL CANTILEVER ARRAY. Micro Total Analysis Systems' 98: Proceedings of the Utas' 98 Workshop, Held in Banff, Canada, 13-16 October 1998 57 (1998).
Joachim, C., Gimzewski, J. K., Schlittler, R. R. & Chavy, C. Electronic Transparence of a Single C60 Molecule. Phys. Rev. Lett. 74, 2102–2105 (1995).
Joachim, C., Gimzewski, J. K. & Aviram, A. Electronics using hybrid-molecular and mono-molecular devices. Nature 408, 541–548 (2000).
Gimzewski, J. K., Sass, J. K., Schlitter, R. R. & Schott, J. Enhanced photon emission in scanning tunnelling microscopy. EPL (Europhysics Letters) 8, 435 (1989).
David, T., Gimzewski, J. K., Purdie, D., Reihl, B. & Schlittler, R. R. Epitaxial growth of C 60 on Ag (110) studied by scanning tunneling microscopy and tunneling spectroscopy. Physical Review B 50, 5810 (1994).
, et al. Erratum: A femtojoule calorimeter using micromechanical sensors [Rev. Sci. Instrum. 65, 3793 (1994)]. Review of Scientific Instruments 66, 3083–3083 (1995).
Han, T. H. & Liao, J. C. Erythrocyte nitric oxide transport reduced by a submembrane cytoskeletal barrier. Biochimica et Biophysica Acta (BBA)-General Subjects 1723, 135–142 (2005).
Fabian, D. J., Gimzewski, J. K., Barrie, A. & Dev, B. Excitation of Fe 1s core-level photoelectrons with synchrotron radiation. Journal of Physics F: Metal Physics 7, L345 (1977).
Dürig, U., Gimzewski, J. K. & Pohl, D. W. Experimental observation of forces acting during scanning tunneling microscopy. Physical review letters 57, 2403 (1986).
, et al. A femtojoule calorimeter using micromechanical sensors. Review of Scientific Instruments 65, 3793–3798 (1994).
Reihl, B. & Gimzewski, J. K. Field emission scanning Auger microscope (FESAM). Surface Science 189, 36–43 (1987).
Coombs, J. H. & Gimzewski, J. K. Fine structure in field emission resonances at surfaces. Journal of Microscopy 152, 841–851 (1988).
Stieg, A. Z., Rasool, H. I. & Gimzewski, J. K. A flexible, highly stable electrochemical scanning probe microscope for nanoscale studies at the solid-liquid interface. Review of Scientific Instruments 79, 103701 (2008).
Steiner, W. et al. The following patents were recently issued by the countries in which the inventions were made. For US patents, titles and names supplied to us by the US Patent Office are reproduced exactly as they appear on the original published patent. (Submitted).
Dürig, U., Gimzewski, J. K., Pohl, D. W. & Schlittler, R. Force Sensing in Scanning Tunneling Microscopy. IBM, Rüschlikon 1 (1986).
Loppacher, C. et al. Forces with submolecular resolution between the probing tip and Cu-TBPP molecules on Cu (100) observed with a combined AFM/STM. Applied Physics A 72, S105–S108 (2001).
Stoll, E. P. & Gimzewski, J. K. Fundamental and practical aspects of differential scanning tunneling microscopy. Journal of Vacuum Science & Technology B 9, 643–647 (1991).
Tang, H., Cuberes, M. T., Joachim, C. & Gimzewski, J. K. Fundamental considerations in the manipulation of a single C60 molecule on a surface with an STM. Surface science 386, 115–123 (1997).
Yimin Xiao
Department of Statistics and Probability
E-mail: [email protected]
Math Review
Mathematics Web
Mathematics ArXiv
Probability Web
Probability Abstracts
Electronic J. Probab. & ECP
Fractals & Stochastics
Fractal Geometry at Yale
MaPhySto
Chinese Mathematics and Systems Science
Stochastic processes and random fields: Gaussian and stable random fields, fractional Lévy fields, self-similar processes, Lévy processes, additive Lévy processes.
Fractal geometry: Measure theory of random fractals, Hausdorff and packing dimensions.
Extreme value theory: Gaussian and related random fields.
Nonparametric regression: Long memory (long range dependence), wavelet methods.
Colloquia of the Department of Statistics and Probability. Tuesday, 10:20--11:10 AM
Probability at Michigan State. Thursday, 3:00--3:50PM
Editorial Service
Co-Editor in Chief, Statistics & Probability Letters, July, 2011 -- present.
Managing Editor, Journal of Fractal Geometry, July, 2013 -- present.
Associate Editor, Science China: Mathematics, January, 2015 -- present.
[4]. Recent developments on fractal properties of Gaussian random fields. In: Further Developments in Fractals and Related Fields. (Julien Barral and Stephane Seuret, editors) pp. 255--288, Springer, New York, 2013.
[3]. Properties of local nondeterminism of Gaussian and stable random fields and their applications. Ann. Fac. Sci. Toulouse Math. XV (2006), 157--193.
[2]. Additive Levy processes: capacity and Hausdorff dimension. (with Davar Khoshnevisan) Proc. of Inter. Conf. on Fractal Geometry and Stochastics III., Progress in Probability, 57, pp. 151--170, Birkhauser, 2004.
[1]. Random fractals and Markov processes. In: Fractal Geometry and Applications: A Jubilee of Benoit Mandelbrot, (Michel L. Lapidus and Machiel van Frankenhuijsen, editors), pp. 261--338, American Mathematical Society, 2004.
Preprints and Recent Publications
[60]. Polarity of points for Gaussian random fields. (with R. Dalang and C. Mueller), Submitted, 2015.
[59]. Tail asymptotics of extremes for bivariate Gaussian random fields. (with Y. Zhou), Submitted, 2015.
[58]. Intermittency and multifractality: A case study via parabolic stochastic PDEs. (with D. Khoshnevisan and K. Kim), Submitted, 2015.
[57]. Sample paths of the solution to the fractional-colored stochastic heat equation. (with C. Tudor), Submitted, 2015.
[56]. On the double points of operator stable Lévy processes. (with T. Luks), J. Theoret. Probab., to appear.
[55]. Weak existence of a solution to a differential equation driven by a very rough fBm. (with D. Khoshnevisan, J. Swanson and L. Zhang), Submitted. 2014.
[54]. Harmonizable fractional stable fields: local nondeterminism and joint continuity of the local times. (with A. Ayache), Stoch. Process. Appl., to appear.
[53]. The mean Euler characteristic and excursion probability of Gaussian random fields. (with D. Cheng), Ann. Appl. Probab., to appear.
[52]. Excursion probability of Gaussian random fields on sphere. (with D. Cheng), Bernoulli, to appear.
[51]. Smoothness of local times and self-intersection local times of Gaussian random fields. (with Z. Chen and D. Wu), Frontiers Math. China 10 (2015), 777--805.
[50]. Exact moduli of continuity for operator scaling Gaussian random fields. (with Yuqiang Li and Wensheng Wang), Bernoulli 21 (2015), 930--956.
[49]. Brownian motion and thermal capacity. (with D. Khoshnevisan), Ann. Probab. 45 (2015), 405--434.
[48]. Generalized dimensions of images of measures under Gaussian processes. (with K.J. Falconer), Adv. Math. 252 (2014), 492--517.
[47]. Discrete fractal dimensions of the ranges of random walks in ℤ^d associate with random conductances. (with Xinghua Zheng), Probab. Th. Rel. Fields 156 (2013), 1--26.
[46]. A class of fractional Brownian fields from branching systems and their regularity properties. (with Yuqiang Li), Infi. Dim. Anal. Quan. Probab. Rel. Topics 16, (2013).
[45]. Tail estimation of the spectral density under fixed-domain asymptotics. (with Chae-Young Lim and Wei-Ying Wu), J. Multivar. Anal. 116 (2013), 74--91.
[44]. Fractal dimension for continuous time random walk limits. (with M. M. Meerschaert and E. Nane), Statist. Probab. Letters 83 (2013), 1083--1093.
[43]. Hitting probability and packing dimensions of the random covering sets. (with Bing Li and Narn-Rueih Shieh). In: Applications of Fractals and Dynamical Systems in Science and Economics, (David Carfi, Michel L. Lapidus, Erin P. J. Pearse, and Machiel van Frankenhuijsen, editors), American Mathematical Society, 2013.
[42]. Fernique-type inequalities and moduli of continuity of anisotropic Gaussian random fields. (with M.M. Meerschaert and Wensheng Wang), Trans. Amer. Math. Soc. 365 (2013), 1081--1107.
[41]. On intersections of independent anisotropic Gaussian random fields. (with Zhenlong Chen), Sci. China Math. 55 (2012), 2217--2232.
[40]. Packing dimension profiles and Levy processes. (with D. Khoshnevisan and R. Schiling), Bull. London Math. Soc. 44 (2012), 931--943.
[39]. Occupation time fluctuations of weakly degenerated branching systems. (with Yuqiang Li), J. Theoret. Probab, 25 (2012), 1119--1152.
[38]. α-time fractional Brownian motion: PDE connections and local times. (with E. Nane and D. Wu), Esaim: Probab. & Stat. 16 (2012), 1--24.
[37]. Critical Brownian sheet does not have double points. (with R. C. Dalang, D. Khoshnevisan, E. Nualart and D. Wu), Ann. Probab. 40 (2012), 1829--1859
[36]. Spectral conditions for strong local nondeterminism and exact Hausdorff measure of ranges of Gaussian random fields. (with N. Luan), J. Fourier Anal. Appl. 18 (2012), 118--145.
[35]. Multiparameter multifractional Brownian motion: local nondeterminism and joint continuity of the local times. (with A. Ayache and N.-R. Shieh), Ann. Inst. H. Poincare Probab. Statist. 47 (2011), 1029--1054.
[34]. Multivariate operator-self-similar random fields. (with Yuqiang Li), Stoch. Process. Appl. 121 (2011), 1178--1200.
[33]. Fractal and smoothness properties of anisotropic Gaussian models. (with Yun Xue), Frontiers Math. 6 (2011), 1217--1246.
[32]. Packing dimension results for anisotropic Gaussian random fields. (with A. Estrade and D. Wu), Comm. Stoch. Anal. 5 (2011), 41--64.
[31]. On local times of anisotropic Gaussian random fields. (with D. Wu), Comm. Stoch. Anal. 5 (2011), 15--39.
[30]. Properties of strong local nondeterminism and local times of stable random fields. Seminar on Stochastic Analysis, Random Fields and Applications VI. pp. 279--310. Progr. Probab., 63, Birkhäuser, Basel, 2011.
[29]. Hausdorff and packing dimensions of the images of random fields. (with N.-R. Shieh), Bernoulli 16 (2010), 926--952.
[28]. Regularity of intersection local times of fractional Brownian motions. (with D. Wu). J. Theoret. Probab. 23 (2010), 972--1001.
[27]. On uniform modulus of continuity of random fields. Monatsh. Math. 159 (2010), 163--184.
[26]. Correlated continuous time random walks. (with M. M. Meerschaert and E. Nane). Statist. Probab. Letters 79 (2009), 1194--1202.
[25]. Continuity with respect to the Hurst index of the local times of anisotropic Gaussian random fields. (with D. Wu). Stochastic Process. Appl. 119 (2009), 1823--1844.
[24]. Hitting probabilities and the Hausdorff dimension of the inverse images of anisotropic Gaussian random fields. (with H. Bierme and C. Lacaux ). Bull. London Math. Soc. 41 (2009), 253--273
[23]. A packing dimension theorem for Gaussian random fields. Statist. Probab. Letters 79 (2009), 88--97.
[22]. Harmonic analysis of additive Levy processes. (with D. Khoshnevisan). Probab. Theory Rel. Fields, 145 (2009), 459--515. [The original publication is available at http://www.springerlink.com]
[21]. Sample path properties of anisotropic Gaussian random fields. In: A Minicourse on Stochastic Partial Differential Equations, (D. Khoshnevisan and F. Rassoul-Agha, editors), Lecture Notes in Math. 1962, pp. 145--212, Springer, New York , 2009.
[20]. Linear fractional stable sheets: wavelet expansion and sample path properties. (with A. Ayache and F. Roueff). Stochastic Process. Appl. 119 (2009), 1168--1197.
[19]. Packing dimension profiles and fractional Brownian motion. (with D. Khoshnevisan). Math. Proc. Cambridge Philo. Soc. 145 (2008), 205--213.
[18]. Local times of multifractional Brownian sheets. (with M. Meerschaert and D. Wu). Bernoulli 14(3) (2008), 865--898.
[17]. Joint continuity of the local times of fractional Brownian sheets. (with Antoine Ayache and D. Wu). Ann. Inst. H. Poincare Probab. Statist. 44 (2008), 727--748.
[16]. Large deviations for local time fractional Brownian motion and applications. (with M. M. Meerschaert and E. Nane). J. Math. Anal. Appl. 346 (2008), 432--445. [The original publication is available at www.elsevier.com/locate/jmaa]
[15]. Packing dimension of the range of a Levy process. (with D. Khoshnevisan). Proc. Amer. Math. Soc. 136 (2008), 2597--2607.
[14]. Hausdorff dimension of the contours of symmetric additive Levy processes. (with D. Khoshnevisan and N.-R. Shieh). Probab. Th. Rel. Fields. 140 (2008), 169--193. [The original publication is available at www.springerlink.com.]
[13]. Sample path properties of bifractional Brownian motion. (with Ciprian A. Tudor). Bernoulli 13 (2007), 1023--1052.
[12]. Joint continuity of the local times of linear fractional stable sheets. (with A. Ayache and F. Roueff). C. R. Acad. Sci.Paris, Ser. A. 344 (2007), 635--640.
[11]. Local and asymptotic properties of linear fractional stable sheets. (with A. Ayache and F. Roueff). C. R. Acad. Sci.Paris, Ser. A. 344 (2007), 389--394.
[10]. On the minimax optimality of block thresholded wavelet estimators with long memory data. (with Linyuan Li) J. Statist. Plann. Inference 137 (2007), 2850--2869.
[9]. Mean integrated squared error of nonlinear wavelet-based estimators with long memory data. (with Linyuan Li) Ann. Inst. Statist. Math. 59 (2007), 299--324.
[8]. Geometric properties of the images fractional Brownian sheets. (with D. Wu). J. Fourier Anal. Appl. 13 (2007), 1--37. [The original publication is available at www.springerlink.com ]
[7]. Images of the Brownian sheet. (with Davar Khoshnevisan) Trans. Amer. Math. Soc. 359 (2007), 3125--3151.
[6]. Sectorial local non-determinism and the geometry of the Brownian sheet. (with D. Khoshnevisan and D. Wu). Electron. J. Probab. 11 (2006), 817--843.
[5]. Asymptotic properties and Hausdorff dimensions of fractional Brownian sheets. (with Antoine Ayache) J. Fourier Anal. Appl. 11 (2005), 407--439. [The original publication is available at www.springerlink.com.]
[4]. Levy processes: capacity and Hausdorff dimension. (with Davar Khoshnevisan) Ann. Probab. 33 (2005), 841--878.
[3]. Packing measure of the trajectories of multiparameter fractional Brownian motion. Math. Proc. Cambridge Philo. Soc. 135 (2003), 349--375
[2]. Measuring the range of an additive Levy process. (with Davar Khoshnevisan and Yuquan Zhong) Ann. Probab. 31 (2003), 1097--1141.
[1]. Level sets of additive Levy process. (with Davar Khoshnevisan) Ann. Probab. 30 (2002), 62--100.
More Publications
Created by Yimin Xiao
Finally, it's not clear that caffeine results in performance gains after long-term use; homeostasis/tolerance is a concern for all stimulants, but especially for caffeine. It is plausible that all caffeine consumption does for the long-term chronic user is restore performance to baseline. (Imagine someone waking up and drinking coffee, and their performance improves - well, so would the performance of a non-addict who is also slowly waking up!) See for example, James & Rogers 2005, Sigmon et al 2009, and Rogers et al 2010. A cross-section of thousands of participants in the Cambridge brain-training study found caffeine intake showed negligible effect sizes for mean and component scores (participants were not told to use caffeine, but the training was recreational & difficult, so one expects some difference).
Took full pill at 10:21 PM when I started feeling a bit tired. Around 11:30, I noticed my head feeling fuzzy but my reading seemed to still be up to snuff. I would eventually finish the science book around 9 AM the next day, taking some very long breaks to walk the dog, write some poems, write a program, do Mnemosyne review (memory performance: subjectively below average, but not as bad as I would have expected from staying up all night), and some other things. Around 4 AM, I reflected that I felt much as I had during my nightwatch job at the same hour of the day - except I had switched sleep schedules for the job. The tiredness continued to build and my willpower weakened so the morning wasn't as productive as it could have been - but my actual performance when I could be bothered was still pretty normal. That struck me as kind of interesting that I can feel very tired and not act tired, in line with the anecdotes.
Even party drugs are going to work: Biohackers are taking recreational drugs like LSD, psilocybin mushrooms, and mescaline in microdoses—about a tenth of what constitutes a typical dose—with the goal of becoming more focused and creative. Many who've tried it report positive results, but real research on the practice—and its safety—is a long way off. "Whether microdosing with LSD improves creativity and cognition remains to be determined in an objective experiment using double-blind, placebo-controlled methodology," Sahakian says.
The evidence? In small studies, healthy people taking modafinil showed improved planning and working memory, and better reaction time, spatial planning, and visual pattern recognition. A 2015 meta-analysis claimed that "when more complex assessments are used, modafinil appears to consistently engender enhancement of attention, executive functions, and learning" without affecting a user's mood. In a study from earlier this year involving 39 male chess players, subjects taking modafinil were found to perform better in chess games played against a computer.
After my rudimentary stacking efforts flamed out in unspectacular fashion, I tried a few ready-made stacks—brand-name nootropic cocktails that offer to eliminate the guesswork for newbies. They were just as useful. And a lot more expensive. Goop's Braindust turned water into tea-flavored chalk. But it did make my face feel hot for 45 minutes. Then there were the two pills of Brain Force Plus, a supplement hawked relentlessly by Alex Jones of InfoWars infamy. The only result of those was the lingering guilt of knowing that I had willingly put $19.95 in the jorts pocket of a dipshit conspiracy theorist.
A fundamental aspect of human evolution has been the drive to augment our capabilities. The neocortex is the neural seat of abstract and higher order cognitive processes. As it grew, so did our ability to create. The invention of tools and weapons, writing, the steam engine, and the computer have exponentially increased our capacity to influence and understand the world around us. These advances are being driven by improved higher-order cognitive processing.[1] Fascinatingly, the practice of modulating our biology through naturally occurring flora predated all of the above discoveries. Indeed, Sumerian clay slabs as old as 5000 BC detail medicinal recipes which include over 250 plants.[2] The enhancement of human cognition through natural compounds followed, as people discovered plants containing caffeine, theanine, and other cognition-enhancing, or nootropic, agents.
A LessWronger found that it worked well for him as far as motivation and getting things done went, as did another LessWronger who sells it online (terming it a reasonable productivity enhancer) as did one of his customers, a pickup artist oddly enough. The former was curious whether it would work for me too and sent me Speciosa Pro's Starter Pack: Test Drive (a sampler of 14 packets of powder and a cute little wooden spoon). In SE Asia, kratom's apparently chewed, but the powders are brewed as a tea.
Gibson and Green (2002), talking about a possible link between glucose and cognition, wrote that research in the area …is based on the assumption that, since glucose is the major source of fuel for the brain, alterations in plasma levels of glucose will result in alterations in brain levels of glucose, and thus neuronal function. However, the strength of this notion lies in its common-sense plausibility, not in scientific evidence… (p. 185).
For obvious reasons, it's difficult for researchers to know just how common the "smart drug" or "neuro-enhancing" lifestyle is. However, a few recent studies suggest cognition hacking is appealing to a growing number of people. A survey conducted in 2016 found that 15% of University of Oxford students were popping pills to stay competitive, a rate that mirrored findings from other national surveys of UK university students. In the US, a 2014 study found that 18% of sophomores, juniors, and seniors at Ivy League colleges had knowingly used a stimulant at least once during their academic career, and among those who had ever used uppers, 24% said they had popped a little helper on eight or more occasions. Anecdotal evidence suggests that pharmacological enhancement is also on the rise within the workplace, where modafinil, which treats sleep disorders, has become particularly popular.
Evidence in support of the neuroprotective effects of flavonoids has increased significantly in recent years, although to date much of this evidence has emerged from animal rather than human studies. Nonetheless, with a view to making recommendations for future good practice, we review 15 existing human dietary intervention studies that have examined the effects of particular types of flavonoid on cognitive performance. The studies employed a total of 55 different cognitive tests covering a broad range of cognitive domains. Most studies incorporated at least one measure of executive function/working memory, with nine reporting significant improvements in performance as a function of flavonoid supplementation compared to a control group. However, some domains were overlooked completely (e.g. implicit memory, prospective memory), and for the most part there was little consistency in terms of the particular cognitive tests used making across study comparisons difficult. Furthermore, there was some confusion concerning what aspects of cognitive function particular tests were actually measuring. Overall, while initial results are encouraging, future studies need to pay careful attention when selecting cognitive measures, especially in terms of ensuring that tasks are actually sensitive enough to detect treatment effects.
"My husband and I (Ryan Cedermark) are so impressed with the research Cavin did when writing this book. If you, a family member or friend has suffered a TBI, concussion or are just looking to be nicer to your brain, then we highly recommend this book! Your brain is only as good as the body's internal environment and Cavin has done an amazing job on providing the information needed to obtain such!"
According to clinical psychiatrist and Harvard Medical School Professor, Emily Deans, "there's probably nothing dangerous about the occasional course of nootropics...beyond that, it's possible to build up a tolerance if you use them often enough." Her recommendation is to seek pharmaceutical-grade products which she says are more accurate regarding dosage and less likely to be contaminated.
While the mechanism is largely unknown, one commonly proposed possibility is that light of the relevant wavelengths is preferentially absorbed by the protein cytochrome c oxidase, which is a key protein in mitochondrial metabolism and production of ATP, substantially increasing output; this extra output presumably can be useful for cellular activities like healing or higher performance.
If you could take a pill that would help you study and get better grades, would you? Off-label use of "smart drugs" – pharmaceuticals meant to treat disorders like ADHD, narcolepsy, and Alzheimer's – are becoming increasingly popular among college students hoping to get ahead, by helping them to stay focused and alert for longer periods of time. But is this cheating? Should their use as cognitive enhancers be approved by the FDA, the medical community, and society at large? Do the benefits outweigh the risks?
Noopept shows a much greater affinity for certain receptor sites in the brain than racetams, allowing doses as small as 10-30mg to provide increased focus, improved logical thinking function, enhanced short and long-term memory functions, and increased learning ability including improved recall. In addition, users have reported a subtle psychostimulatory effect.
Despite some positive findings, a lot of studies find no effects of enhancers in healthy subjects. For instance, although some studies suggest moderate enhancing effects in well-rested subjects, modafinil mostly shows enhancing effects in cases of sleep deprivation. A recent study by Martha Farah and colleagues found that Adderall (mixed amphetamine salts) had only small effects on cognition but users believed that their performance was enhanced when compared to placebo.
If you want to try a nootropic in supplement form, check the label to weed out products you may be allergic to and vet the company as best you can by scouring its website and research basis, and talking to other customers, Kerl recommends. "Find one that isn't just giving you some temporary mental boost or some quick fix – that's not what a nootropic is intended to do," Cyr says.
By the end of 2009, at least 25 studies reported surveys of college students' rates of nonmedical stimulant use. Of the studies using relatively smaller samples, prevalence was, in chronological order, 16.6% (lifetime; Babcock & Byrne, 2000), 35.3% (past year; Low & Gendaszek, 2002), 13.7% (lifetime; Hall, Irwin, Bowman, Frankenberger, & Jewett, 2005), 9.2% (lifetime; Carroll, McLaughlin, & Blake, 2006), and 55% (lifetime, fraternity students only; DeSantis, Noar, & Web, 2009). Of the studies using samples of more than a thousand students, somewhat lower rates of nonmedical stimulant use were found, although the range extends into the same high rates as the small studies: 2.5% (past year, Ritalin only; Teter, McCabe, Boyd, & Guthrie, 2003), 5.4% (past year; McCabe & Boyd, 2005), 4.1% (past year; McCabe, Knight, Teter, & Wechsler, 2005), 11.2% (past year; Shillington, Reed, Lange, Clapp, & Henry, 2006), 5.9% (past year; Teter, McCabe, LaGrange, Cranford, & Boyd, 2006), 16.2% (lifetime; White, Becker-Blease, & Grace-Bishop, 2006), 1.7% (past month; Kaloyanides, McCabe, Cranford, & Teter, 2007), 10.8% (past year; Arria, O'Grady, Caldeira, Vincent, & Wish, 2008); 5.3% (MPH only, lifetime; Du-Pont, Coleman, Bucher, & Wilford, 2008); 34% (lifetime; DeSantis, Webb, & Noar, 2008), 8.9% (lifetime; Rabiner et al., 2009), and 7.5% (past month; Weyandt et al., 2009).
Oxiracetam is one of the 3 most popular -racetams; less popular than piracetam but seems to be more popular than aniracetam. Prices have come down substantially since the early 2000s, and stand at around 1.2g/$ or roughly 50 cents a dose, which was low enough to experiment with; key question, does it stack with piracetam or is it redundant for me? (Oxiracetam can't compete on price with my piracetam stockpile: the latter is now a sunk cost and hence free.)
70 pairs is 140 blocks; we can drop to 36 pairs or 72 blocks if we accept a power of 0.5/50% chance of reaching significance. (Or we could economize by hoping that the effect size is not 3.5 but maybe twice the pessimistic guess; a d=0.5 at 50% power requires only 12 pairs of 24 blocks.) 70 pairs of blocks of 2 weeks, with 2 pills a day, requires (70 × 2) × (2 × 7) × 2 = 3,920 pills. I don't even have that many empty pills! I have <500; 500 would supply 250 days, which would yield 18 2-week blocks which could give 9 pairs. 9 pairs would give me a power of:
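(A power calculation of this kind can be reproduced in Python with statsmodels; the sketch below assumes a paired-difference t-test at α = 0.05 and the d ≈ 0.5 scenario mentioned above — the specific test and parameters are illustrative assumptions, not taken from the original analysis.)

```python
from statsmodels.stats.power import TTestPower

# Power of a paired (one-sample on block-pair differences) t-test,
# two-sided, alpha = 0.05; effect size d = 0.5 is the illustrative value above.
analysis = TTestPower()
for pairs in (9, 12, 36, 70):
    power = analysis.power(effect_size=0.5, nobs=pairs, alpha=0.05,
                           alternative="two-sided")
    print(f"{pairs} pairs -> power {power:.2f}")
```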
Many of the positive effects of cognitive enhancers have been seen in experiments using rats. For example, scientists can train rats on a specific test, such as maze running, and then see if the "smart drug" can improve the rats' performance. It is difficult to see how many of these data can be applied to human learning and memory. For example, what if the "smart drug" made the rat hungry? Wouldn't a hungry rat run faster in the maze to receive a food reward than a non-hungry rat? Maybe the rat did not get any "smarter" and did not have any improved memory. Perhaps the rat ran faster simply because it was hungrier. Therefore, it was the rat's motivation to run the maze, not its increased cognitive ability that affected the performance. Thus, it is important to be very careful when interpreting changes observed in these types of animal learning and memory experiments.
The placebos can be the usual pills filled with olive oil. The Nature's Answer fish oil is lemon-flavored; it may be worth mixing in some lemon juice. In Kiecolt-Glaser et al 2011, anxiety was measured via the Beck Anxiety scale; the placebo mean was 1.2 on a standard deviation of 0.075, and the experimental mean was 0.93 on a standard deviation of 0.076. (These are all log-transformed covariates or something; I don't know what that means, but if I naively plug those numbers into Cohen's d, I get a very large effect: (1.2 − 0.93)/0.076 = 3.55.)
Some nootropics are more commonly used than others. These include nutrients like Alpha GPC, huperzine A, L-Theanine, bacopa monnieri, and vinpocetine. Other types of nootropics ware still gaining traction. With all that in mind, to claim there is a "best" nootropic for everyone would be the wrong approach since every person is unique and looking for different benefits.
You'll find several supplements that can enhance focus, energy, creativity, and mood. These brain enhancers can work very well, and their benefits often increase over time. Again, nootropics won't dress you in a suit and carry you to Wall Street. That is a decision you'll have to make on your own. But, smart drugs can provide the motivation boost you need to make positive life changes.
Cognition is a suite of mental phenomena that includes memory, attention and executive functions, and any drug would have to enhance executive functions to be considered truly 'smart'. Executive functions occupy the higher levels of thought: reasoning, planning, directing attention to information that is relevant (and away from stimuli that aren't), and thinking about what to do rather than acting on impulse or instinct. You activate executive functions when you tell yourself to count to 10 instead of saying something you may regret. They are what we use to make our actions moral and what we think of when we think about what makes us human.
Two increasingly popular options are amphetamines and methylphenidate, which are prescription drugs sold under the brand names Adderall and Ritalin. In the United States, both are approved as treatments for people with ADHD, a behavioural disorder which makes it hard to sit still or concentrate. Now they're also widely abused by people in highly competitive environments, looking for a way to remain focused on specific tasks.
Harrisburg, NC -- (SBWIRE) -- 02/18/2019 -- Global Smart Pills Technology Market - Segmented by Technology, Disease Indication, and Geography - Growth, Trends, and Forecast (2019 - 2023) The smart pill is a wireless capsule that can be swallowed; with the help of a receiver (worn by the patient) and software that analyzes the pictures captured by the smart pill, the physician is effectively able to examine the gastrointestinal tract. Gastrointestinal disorders have become very common, and recently there has been an increasing incidence of colorectal cancer, inflammatory bowel disease, and Crohn's disease as well.
A provisional conclusion about the effects of stimulants on learning is that they do help with the consolidation of declarative learning, with effect sizes varying widely from small to large depending on the task and individual study. Indeed, as a practical matter, stimulants may be more helpful than many of the laboratory tasks indicate, given the apparent dependence of enhancement on length of delay before testing. Although, as a matter of convenience, experimenters tend to test memory for learned material soon after the learning, this method has not generally demonstrated stimulant-enhanced learning. However, when longer periods intervene between learning and test, a more robust enhancement effect can be seen. Note that the persistence of the enhancement effect well past the time of drug action implies that state-dependent learning is not responsible. In general, long-term effects on learning are of greater practical value to people. Even students cramming for exams need to retain information for more than an hour or two. We therefore conclude that stimulant medication does enhance learning in ways that may be useful in the real world.
As shown in Table 6, two of these are fluency tasks, which require the generation of as large a set of unique responses as possible that meet the criteria given in the instructions. Fluency tasks are often considered tests of executive function because they require flexibility and the avoidance of perseveration and because they are often impaired along with other executive functions after prefrontal damage. In verbal fluency, subjects are asked to generate as many words that begin with a specific letter as possible. Neither Fleming et al. (1995), who administered d-AMP, nor Elliott et al. (1997), who administered MPH, found enhancement of verbal fluency. However, Elliott et al. found enhancement on a more complex nonverbal fluency task, the sequence generation task. Subjects were able to touch four squares in more unique orders with MPH than with placebo.
This continued up to 1 AM, at which point I decided not to take a second armodafinil (why spend a second pill to gain what would likely be an unproductive set of 8 hours?) and finish up the experiment with some n-backing. My 5 rounds: 60/38/62/44/50. This was surprising. Compare those scores with scores from several previous days: 39/42/44/40/20/28/36. I had estimated before the n-backing that my scores would be in the low-end of my usual performance (20-30%) since I had not slept for the past 41 hours, and instead, the lowest score was 38%. If one did not know the context, one might think I had discovered a good nootropic! Interesting evidence that armodafinil preserves at least one kind of mental performance.
Although piracetam has a history of "relatively few side effects," it has fallen far short of its initial promise for treating any of the illnesses associated with cognitive decline, according to Lon Schneider, a professor of psychiatry and behavioral sciences at the Keck School of Medicine at the University of Southern California. "We don't use it at all and never have."
The soft gels are very small; one needs to be a bit careful - Vitamin D is fat-soluble and overdose starts in the range of 70,000 IU, so it would take at least 14 pills, and it's unclear where problems start with chronic use. Vitamin D, like many supplements, follows a U-shaped response curve (see also Melamed et al 2008 and Durup et al 2012) - too much can be quite as bad as too little. Too little, though, is likely very bad. The previously cited studies with high acute doses worked out to <1,000 IU a day, so they may reassure us about the risks of a large acute dose but not tell us much about smaller chronic doses; the mortality increases due to too-high blood levels begin at ~140 nmol/l, and reading anecdotes online suggests that 5k IU daily doses tend to put people well below that (around 70-100 nmol/l). I probably should get a blood test to be sure, but I have something of a needle phobia.
The evidence? A 2012 study in Greece found it can boost cognitive function in adults with mild cognitive impairment (MCI), a type of disorder marked by forgetfulness and problems with language, judgement, or planning that are more severe than average "senior moments," but are not serious enough to be diagnosed as dementia. In some people, MCI will progress into dementia.
Vinpocetine walks a line between herbal and pharmaceutical product. It's a synthetic derivative of a chemical from the periwinkle plant, and due to its synthetic nature we feel it's more appropriate as a 'smart drug'. Plus, it's illegal in the UK. Vinpocetine is purported to improve cognitive function by improving blood flow to the brain, which is why it's used in some 'study drugs' or 'smart pills'.
OptiMind - It is one of the best Nootropic supplements available and brought to you by AlternaScript. It contains six natural Nootropic ingredients derived from plants that help in overall brain development. All the ingredients have been clinically tested for their effects and benefits, which has made OptiMind one of the best brain pills that you can find in the US today. It is worth adding to your Nootropic Stack.
Maj. Jamie Schwandt, USAR, is a logistics officer and has served as an operations officer, planner and commander. He is certified as a Department of the Army Lean Six Sigma Master Black Belt, certified Red Team Member, and holds a doctorate from Kansas State University. This article represents his own personal views, which are not necessarily those of the Department of the Army.
As with any thesis, there are exceptions to this general practice. For example, theanine for dogs sold under the brand Anxitane goes for almost a dollar a pill, and apparently a month's supply costs $50+ vs $13 for human-branded theanine; on the other hand, this thesis predicts downgrading if the market priced pet versions higher than human versions, and that Reddit poster appears to be doing just that with her dog.
Machine learning to predict rapid progression of carotid atherosclerosis in patients with impaired glucose tolerance
Xia Hu1,2,
Peter D. Reaven1,3,4,
Aramesh Saremi3,
Ninghao Liu2,
Mohammad Ali Abbasi1,
Huan Liu1,
Raymond Q. Migrino3,4 &
the ACT NOW Study Investigators
Prediabetes is a major epidemic and is associated with adverse cardio-cerebrovascular outcomes. Early identification of patients who will develop rapid progression of atherosclerosis could be beneficial for improved risk stratification. In this paper, we investigate, using several machine learning methods, important factors impacting the prediction of rapid progression of carotid intima-media thickness in participants with impaired glucose tolerance (IGT).
In the Actos Now for Prevention of Diabetes (ACT NOW) study, 382 participants with IGT underwent carotid intima-media thickness (CIMT) ultrasound evaluation at baseline and at 15–18 months, and were divided into rapid progressors (RP, n = 39, 58 ± 17.5 μm change) and non-rapid progressors (NRP, n = 343, 5.8 ± 20 μm change, p < 0.001 versus RP). To deal with complex multi-modal data consisting of demographic, clinical, and laboratory variables, we propose a general data-driven framework to investigate the ACT NOW dataset. In particular, we first employed a Fisher Score-based feature selection method to identify the most effective variables and then proposed a probabilistic Bayes-based learning method for the prediction. Comparison of the methods and factors was conducted using area under the receiver operating characteristic curve (AUC) analyses and Brier score.
The experimental results show that the proposed learning methods performed well in identifying or predicting RP. Among the methods, the performance of Naïve Bayes was the best (AUC 0.797, Brier score 0.085) compared to multilayer perceptron (0.729, 0.086) and random forest (0.642, 0.10). The results also show that feature selection has a significant positive impact on the data prediction performance.
By dealing with multi-modal data, the proposed learning methods show effectiveness in predicting prediabetics at risk for rapid atherosclerosis progression. The proposed framework demonstrated utility in outcome prediction in a typical multidimensional clinical dataset with a relatively small number of subjects, extending the potential utility of machine learning approaches beyond extremely large-scale datasets.
Impaired glucose tolerance (IGT) is a risk factor for the development of type 2 diabetes mellitus (T2DM) [1], and both IGT and T2DM are associated with increase in cardio-cerebrovascular related mortality [2, 3]. The Diabetes Epidemiology: Collaborative Analysis of Diagnostic Criteria in Europe (DECODE) [4] study showed a tight correlation between IGT and cardiovascular mortality, and IGT is a known risk factor for early-stage atherosclerosis [5]. In the Actos Now for Prevention of Diabetes (ACT NOW) study, it was shown that pharmacotherapy with pioglitazone in IGT subjects resulted in reduced development of T2DM [6] as well as reduced progression of atherosclerosis [7]. Therefore, identification of IGT subjects who are at risk for rapid atherosclerosis progression, and understanding the important characteristics that affect the identification process, may be beneficial in risk stratification and early intervention. Machine learning (ML) methods have been widely used to learn complex relationships or patterns from data to make accurate predictions [8] and are usually applied in the setting of massive datasets ("big data"). Although encompassing traditional biostatistical approaches such as linear regression modeling, ML approaches, in general, have advantages over traditional frequentist statistical approaches because they can predict patterns without any assumption that simple/complex equations underlie relationships among variables and are able to handle the high-dimensionality nature of medical data [9, 10]. The use of ML approaches in clinical trial data to predict clinical response remains in its infancy. Recently, researchers used data from clinical trials of major depressive disorders (STAR*D and COMED) to predict whether a patient will reach clinical remission from a major depressive episode following treatment with citalopram using stochastic gradient boosting ML approach [11]. Using the data from 768 patients in the Neo-tAnGo chemotherapy clinical trial for breast cancer, ML methods were used to classify cells as cancerous or not [12]. The ACT NOW clinical trial has contributed to novel discoveries on reducing the onset of type 2 diabetes mellitus in at-risk participants using pioglitazone [6] as well as providing insights as to underlying metabolic mechanisms involved with development of diabetes [13–15], but the analytic approaches used involved traditional frequentist biostatistical methods. This study aims to investigate the effectiveness of different ML based methods in predicting IGT patients who will develop rapid carotid atherosclerosis plaque progression in a limited dataset typical of clinical trials.
Study design and subjects
The ACT NOW study design including the exclusion and inclusion criteria have been previously published (Clinicaltrials.gov NCT00220961) [6, 13]. In brief, the ACT NOW study was a multicenter, prospective, randomized, double-blind, placebo-controlled trial to test whether pioglitazone prevents T2DM and progression of carotid intima-media thickness (CIMT) in adults ≥18 years old with IGT (defined by a 2-h plasma glucose concentration of 140–199 mg/dL during a 75 g, 2-h oral glucose tolerance test). Of the 602 total participants, 382 subjects had serial carotid atherosclerosis measurements and comprise the study population of the current study. All research subjects gave informed consent and the study was approved by the Institutional Review Boards at each site.
Carotid atherosclerosis measurement and progression classification
The method for measurement of carotid atherosclerosis has previously been reported [7]. In brief, all 382 subjects underwent high-resolution B-mode carotid artery ultrasound (Logiq, General Electric, Waukesha, WI) to image the far wall of the right distal common carotid region at baseline and mid-study (15–18 months after baseline). Carotid intima-media thickness (CIMT) was measured, and the absolute difference in CIMT between the two time points was taken as the measure of plaque progression (or regression). Subjects with CIMT change in the top decile (n = 39, 58.1 ± 17.5 μm change from baseline) were arbitrarily classified as rapid progressors (RP), and the rest (n = 343, 5.8 ± 2.0 μm change from baseline, p < 0.001 versus RP) were considered non-rapid progressors (NRP). Note that despite the arbitrary nature of the cutoff selection, the CIMT change observed in the RP group (58.1 ± 17.5 μm) represents more than 2 standard deviations of the annual CIMT change (11.8 ± 12.8 μm) reported in the Multi-Ethnic Study of Atherosclerosis (MESA) study involving 3441 subjects with multiple cardiovascular disease risk factors [16], providing support for the categorization of this group as rapid progressors.
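A minimal sketch of this top-decile labeling step is shown below; the array of per-subject CIMT changes and the function name are illustrative assumptions, not code from the study.

```python
import numpy as np

def label_rapid_progressors(cimt_change, top_decile=0.90):
    """Return 1 for subjects whose CIMT change (follow-up minus baseline) is in
    the top decile (rapid progressors) and 0 otherwise (non-rapid progressors)."""
    cimt_change = np.asarray(cimt_change, dtype=float)
    cutoff = np.quantile(cimt_change, top_decile)
    return (cimt_change >= cutoff).astype(int)
```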
Demographic, clinical, and laboratory information was collected as previously reported [6, 13] and used as variables for model building.
Data analytics framework
Data analyses settings
Boldface uppercase letters (e.g., A) are used to denote matrices, uppercase letters (e.g., A) to denote vectors, and lowercase letters (e.g., a) to denote scalars. The entry at the ith row and jth column of a matrix A is denoted as \(A_{ij}\); \(A_{i*}\) and \(A_{*j}\) denote the ith row and jth column of the matrix A, respectively.
Given a set of patients \(\mathrm{X} \in \mathbb{R}^{n \times d}\), n is the number of patients and d is the number of features. The feature (attribute, variable) vector is denoted as \(\{\mathrm{X}_1, \mathrm{X}_2, \dots , \mathrm{X}_d\}\). Let \(Y \in \mathbb{R}^{n}\) be a vector denoting the classes of the patients. In this study, there are two classes for each patient (rapid progressor or non-rapid progressor), so each \(Y_i\) takes one of two values, following the notation used in the Hu et al. study [17].
With the notations above, the problem is formally defined as follows: given a set of patients X with their class information Y, the aim is to learn a classifier h to automatically assign class labels for unseen patients (i.e., test data).
Data preprocessing was performed to make the input data more consistent and suitable for the machine learning algorithms. First, data imputation was performed to deal with missing values: each missing value was crudely imputed as the smallest observed value of that variable in the dataset. Second, in order to handle variables of heterogeneous nature, a widely used method [18] was employed to create dummy variables that substitute for all possible categories of a categorical variable. A value of 0 or 1 was used to indicate the absence or presence of a category, thus creating multiple dummy variables for each categorical variable. The number of dummy variables is equal to the number of distinct categories in the original variable.
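A minimal sketch of this preprocessing step is given below, assuming the trial data sit in a pandas DataFrame; the column names in the usage comment are hypothetical stand-ins for the trial's categorical variables, and the exact code used in the study is not reported.

```python
import pandas as pd

def preprocess(df, categorical_cols):
    """Impute missing numeric values with each column's minimum and one-hot encode categoricals."""
    df = df.copy()
    # Impute every numeric column's missing entries with the smallest observed value.
    for col in df.select_dtypes(include="number").columns:
        df[col] = df[col].fillna(df[col].min())
    # Replace each categorical variable with 0/1 dummy columns, one per distinct category.
    return pd.get_dummies(df, columns=categorical_cols)

# Hypothetical usage: 'SITE' and 'HISPANIC' stand in for the trial's categorical variables.
# X = preprocess(raw_df, categorical_cols=["SITE", "HISPANIC"])
```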
The variables used in the model are as follows: age; sex; race; Hispanic race; site; family income; randomization to placebo versus pioglitazone; waist circumference; height; systolic/diastolic/mean blood pressure; body mass index; plasma creatinine; urine microalbumin; insulin level; interleukin-6; leptin; plasminogen activator inhibitor-1; C-reactive protein; monocyte chemoattractant protein-1; tumor necrosis factor-1; total cholesterol; triglyceride; low density lipoprotein; alkaline phosphatase; alanine transaminase; aspartate transaminase; hemoglobin; hematocrit; platelet; white blood cell count; and history of hypertension, smoking, the use of alcohol, the use of lipid lowering therapy, the use of nonsteroidal anti-inflammatory medication, the use of angiotensin converting enzyme inhibitor, gestational diabetes, myocardial infarction, stroke, and peripheral vascular disease.
Feature selection with Fisher Score
To deal with the multi-modal data consisting of heterogeneous variables, we propose to employ feature selection to first obtain an effective feature space. By introducing feature selection into the learning framework, we exploit its advantages, including increased learning performance and computational efficiency, better generalization of the learned model, and interpretability for specific applications. In particular, we employed a supervised feature selection algorithm called the Fisher Score. The Fisher Score [19], one of the most widely used methods, has shown effectiveness in many data mining applications. The basic idea is to select the features that are most efficient for discrimination, i.e., features for which the distances between samples within a class are small while the distances between classes are large. The top k features can be obtained with a greedy search by selecting the features with the largest Fisher Scores. Human (clinician) input mainly involved deciding which redundant/repetitive features to discard (e.g., the presence-of-hypertension variable and the use-of-antihypertensive-medications variable) and which features are irrelevant to the predictive task (e.g., clinical trial variables that were measured after the 18-month outcome had occurred). The investigators were careful to minimize feature de-selection so as to minimize bias and prevent exclusion from the model of previously unknown features that could affect the outcome of interest.
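The sketch below shows one common form of the Fisher Score (per-feature ratio of between-class to within-class variance); the exact normalization used in [19] may differ slightly, and the numerical guard is an implementation detail of this sketch rather than part of the paper's method.

```python
import numpy as np

def fisher_scores(X, y):
    """Per-feature Fisher Score: between-class variance over within-class variance."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += len(Xc) * Xc.var(axis=0)
    return between / np.maximum(within, 1e-12)   # guard against zero within-class variance

def top_k_features(X, y, k=10):
    """Indices of the k features with the largest Fisher Scores (greedy selection)."""
    return np.argsort(fisher_scores(X, y))[::-1][:k]
```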
A probabilistic Bayes model
We employ a probabilistic Bayes model to tackle the classification problem. Bayesian classifiers have been intensively studied to assign the most likely class to a given data instance represented by its feature vector. The classifiers are built upon the Bayes theorem shown as below:
$$ P\left(Y\Big|{\mathrm{X}}_1{\mathrm{X}}_2\dots {\mathrm{X}}_d\right)=\frac{P\left({\mathrm{X}}_1{\mathrm{X}}_2\dots {\mathrm{X}}_d\Big|Y\right)P(Y)}{P\left({\mathrm{X}}_1{\mathrm{X}}_2\dots {\mathrm{X}}_d\right)}, $$
where \( P\left(Y|{\mathrm{X}}_1{\mathrm{X}}_2\dots {\mathrm{X}}_d\right) \) represents the probability of having class label Y given the data instance \( \mathrm{X}=\left\{{\mathrm{X}}_1{\mathrm{X}}_2\dots {\mathrm{X}}_d\right\} \), \( P\left({\mathrm{X}}_1{\mathrm{X}}_2\dots {\mathrm{X}}_d\Big|Y\right) \) represents the probability of observing \( \mathrm{X}=\left\{{\mathrm{X}}_1{\mathrm{X}}_2\dots {\mathrm{X}}_d\right\} \) in the class Y, \( P(Y) \) represents the probability that instances belong to the class Y, and \( P\left({\mathrm{X}}_1{\mathrm{X}}_2\dots {\mathrm{X}}_d\right) \) is the probability of the instance \( \mathrm{X} \). To use the Bayes theorem for classification, the goal is to find, given an instance \( \mathrm{X}=\left\{{\mathrm{X}}_1{\mathrm{X}}_2\dots {\mathrm{X}}_d\right\} \), the class that maximizes the posterior probability, shown below:
$$ {h}^{*}\left(\boldsymbol{x}\right)= \arg { \max}_cP\left(Y=c\Big|\boldsymbol{x}={\mathrm{X}}_1{\mathrm{X}}_2\dots {\mathrm{X}}_d\right). $$
Since the denominator \( P\left({\mathrm{X}}_1{\mathrm{X}}_2\dots {\mathrm{X}}_d\right) \) in Eq. 1 does not depend on the class, substituting Eq. 2 into Eq. 1 shows that \( P\left(Y|{\mathrm{X}}_1{\mathrm{X}}_2\dots {\mathrm{X}}_d\right) \propto P\left({\mathrm{X}}_1{\mathrm{X}}_2\dots {\mathrm{X}}_d|Y\right)P(Y) \), indicating that the posterior probability is proportional to the likelihood times the prior. Therefore, given a data instance \( \boldsymbol{x} \), its class label can be determined according to the following Bayes classifier:
$$ {h}^{*}\left(\boldsymbol{x}\right)= \arg { \max}_cP\left(\boldsymbol{x} = {\mathrm{X}}_1{\mathrm{X}}_2\dots {\mathrm{X}}_d\Big|Y=c\right)P\left(Y = c\right), $$
which maximizes the product of the likelihood and the prior discussed above. However, the calculation of the likelihood \( P\left(\boldsymbol{x}={\mathrm{X}}_1{\mathrm{X}}_2\dots {\mathrm{X}}_d\Big|Y=c\right) \) may be difficult, especially when the number of data instances is small. To make the computation effective and efficient, a widely used assumption for Bayesian classifiers is that the features are independent of each other given the class, as follows:
$$ P\left(\boldsymbol{x}={\mathrm{X}}_1{\mathrm{X}}_2\dots {\mathrm{X}}_d\Big|Y=c\right)=P\left({\mathrm{X}}_1\Big|Y=c\right)P\left({\mathrm{X}}_2\Big|Y=c\right)\dots P\left({\mathrm{X}}_d\Big|Y=c\right). $$
The classifier built upon this assumption is the Naïve Bayes classifier; while the assumption is simple, Naïve Bayes has shown effectiveness in many real-world applications such as text classification [20] and information retrieval [21]. The Naïve Bayes classifier used in the current study is obtained by substituting Eq. 4 into Eq. 3, as shown below:
$$ {h}^{*}\left(\boldsymbol{x}\right)= \arg { \max}_c{\displaystyle \prod_i^d}P\left({\mathrm{X}}_i\Big|Y=c\right)P\left(Y=c\right). $$
The proposed method is efficient in terms of training and testing time. Although the real-world dataset used here contains a limited number of subjects, the proposed method has the potential to be applied to large-scale datasets, based on the following time complexity analysis: the training time of the proposed method is \(O(|D| L_d + |C||V|)\), where \(|D|\) is the number of instances in the training data, \(L_d\) is the average number of variables per subject in the training data, \(|C|\) is the number of classes, and \(|V|\) is the number of variables. The testing time of the proposed method is \(O(|C| L_t)\), where \(L_t\) is the average number of variables per subject in the testing data.
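As an illustration of the classification rule above, the following sketch implements a minimal Naïve Bayes classifier with Gaussian per-feature likelihoods; the paper does not state which likelihood model it used for the continuous variables, so the Gaussian choice (and the small variance floor) is an assumption of this sketch.

```python
import numpy as np

class GaussianNaiveBayes:
    """Minimal Naive Bayes classifier, assuming Gaussian per-feature likelihoods
    (one plausible choice; the likelihood model is not stated in the paper)."""

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.classes_ = np.unique(y)
        self.prior_ = np.array([(y == c).mean() for c in self.classes_])             # P(Y = c)
        self.mean_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])  # variance floor
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)[:, None, :]                                   # shape (n, 1, d)
        # Per-feature log-likelihoods, summed over features under the independence assumption.
        log_like = -0.5 * (np.log(2 * np.pi * self.var_) + (X - self.mean_) ** 2 / self.var_)
        log_post = log_like.sum(axis=2) + np.log(self.prior_)                        # + log P(Y = c)
        return self.classes_[np.argmax(log_post, axis=1)]                            # argmax over classes
```

For example, `GaussianNaiveBayes().fit(X_train, y_train).predict(X_test)` assigns each held-out patient to the class that maximizes the product of the prior and the per-feature likelihoods.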
In addition to Bayesian classifiers, in the pilot study we also employed two other representative machine learning methods, the multilayer perceptron (MLP) and random forest (RF), for classification. MLP is a supervised learning model that uses backpropagation to train an artificial neural network. The learned model consists of multiple layers of nodes, and each layer is fully connected to the next one. The key idea is that, by stacking multiple layers, MLP aims to better map sets of input data onto a set of appropriate outputs. RF is a representative ensemble learning method that constructs a multitude of decision trees for classification. Compared to a single decision tree, RF is more robust to overfitting and more effective because it combines multiple models, while retaining the nice properties of decision-tree-based models such as interpretability and a fast learning rate.
Assessment of model performance
The performances of the three ML models were assessed using the area under the receiver operating characteristic curve (AUC) and the Brier score. The Brier score is a proper scoring function that measures the accuracy of probabilistic predictions, with a score of 0 being perfect prediction and a score of 1 being the worst score achievable [22]. The AUC was calculated from the probability of RP classification for each subject under each of the learning methods. The Brier score was computed as the mean squared difference between the predicted classification probability for each subject and the ground-truth classification [22].
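A self-contained sketch of this evaluation step is given below, using scikit-learn implementations of the three classifiers and randomly generated placeholder data in place of the ACT NOW features; the hyperparameters shown are illustrative assumptions, not the settings reported in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

# Placeholder data standing in for the Fisher-selected trial features (n = 382, k = 10).
rng = np.random.default_rng(0)
X = rng.normal(size=(382, 10))
y = (rng.random(382) < 0.1).astype(int)          # roughly 10% rapid progressors
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, stratify=y, random_state=0)

models = {
    "Naive Bayes": GaussianNB(),
    "MLP": MLPClassifier(max_iter=2000, random_state=0),        # illustrative settings only
    "Random forest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    p = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]         # predicted P(RP | x)
    print(f"{name}: AUC = {roc_auc_score(y_te, p):.3f}, "
          f"Brier = {brier_score_loss(y_te, p):.3f}")
```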
Clinical and demographic characteristics
Clinical, demographic, and CIMT data are presented in Table 1. There was no significant difference in age, gender, cardiovascular risk factor co-morbidities, and proportion assigned to pioglitazone between the RP and NRP groups. There were significant differences in enrollment site, proportion with Hispanic race, urine microalbumin, plasma creatinine and serum plasminogen activator inhibitor-1 level between RP and NRP.
Table 1 Demographic and clinical and laboratory results
Feature selection results
Based on the Fisher Scores, the following variables were selected by the feature selection process: hemoglobin (HGB), mean plasma creatinine (MEAN_PCREAT), PCREAT, gestational diabetes (GDM)_Y_dummy, arterial procedures (OpArtery)_N_dummy, OpArtery_Y_dummy, medical center (CURRENTCENTER), SITE, GDM_N_dummy, Ethnicity_Hispanic (H)_dummy, HISPANIC, and Ethnicity_Non-Hispanic (N)_dummy. However, since CURRENTCENTER and SITE are redundant features, we eliminated SITE from the feature set fed into the learning phase. We also removed PCREAT because it is redundant with MEAN_PCREAT. This also demonstrates the importance of incorporating domain knowledge into the proposed data-driven framework. More sophisticated domain knowledge, such as group structure among the features or pair-wise partial orders between some features, could be further incorporated into the framework to improve the learning performance. Since this is beyond the scope of the present work, we leave it for future study.
Learning performance of the baseline methods
We evaluated the performance of several representative learning methods with threefold cross validation. In particular, the data were randomly divided into a training set (67 % of subjects) whose data were used to build the model, and a test set (33 % of subjects) whose data were used to validate the built model. While each of the methods had good performance overall, Naïve Bayes with feature selection achieved the best performance, which resulted in correct classification in 340 of 382 subjects (89.23 %), AUC of 0.797 and Brier score of 0.086 (Table 2).
Table 2 Performance of baseline models
We also investigated the effectiveness of introducing the feature selection method into the data analytics framework. The experimental results showed that all three methods achieved significantly better results when feature selection was used; for example, the Naïve Bayes method achieved an AUC of 0.797 with feature selection versus 0.745 without.
The novel finding of our study is that machine learning methods can be applied to a limited dataset typical of a clinical trial to predict, with overall good performance, which impaired glucose tolerance subjects will develop rapid carotid plaque progression. Our results demonstrate the potential utility of sophisticated Bayesian approaches in predicting clinical events from limited clinical datasets.
In 2010, approximately 1 in 3 adults in the USA or about 79 million people had prediabetes [23], which includes IGT and impaired fasting glucose. Aside from the risk for developing diabetes, prediabetes by itself is also independently associated with future risk of stroke [24]. It is therefore critical that we develop tools for early identification of at-risk patients who might benefit from targeted early intervention, both non-pharmacologic and pharmacologic.
The medical field remains almost universally reliant on traditional frequentist, low-dimensional statistical approaches for building risk prediction models [9], such as linear and logistic regression models. These approaches are disadvantaged by their reliance on the assumption that simple or complex equations underlie the relationships among variables, and by the limitations imposed by the high dimensionality of the hundreds of features/variables typical of clinical trials or human studies. Machine learning approaches have the potential to overcome these disadvantages. Machine learning is the study of computer algorithms and optimization techniques that can learn complex relationships or patterns from data, which in turn can be used to make accurate predictions or decisions [10]. Pattern-recognition ML algorithms can be useful for prediction even if no explicit mathematical relationship exists among variables, and ML approaches can be applied in infinite-dimensional spaces. Additionally, testing the performance of a model derived from a training set on a separate held-out validation set enhances the generalizability of the prediction model and allows a dynamic ability to learn from new data to optimize the prediction model.

Although its current use is predominantly on massive datasets in social media, finance, and information technology business applications, ML may also be useful in the high-dimensional but limited datasets (in terms of number of subjects) typical of human studies and clinical trials. In addition, the widespread use of electronic medical records, from large health care systems to small independent clinical practices, points to an ever-increasing need for novel methods to analyze complex big data. Our results support the use and application of ML approaches to predict outcomes in a limited dataset with a large number of demographic, clinical, and laboratory variables. It is important to note that even though we used our ML methods in a clinical dataset with a limited number of subjects, we expect the approaches to perform as well, if not better, with a larger number of subjects. A larger number of subjects (a bigger dataset) allows more robust cross validation of model performance on a held-out dataset, which would enhance the generalizability of the model. The major problem of clinical trials, however, lies not in too large a sample size but often the opposite: the small number of subjects enrolled. This is due to the cost of performing clinical trials plus the ethical mandate of enrolling only the number of subjects predicted to show statistically significant differences among treatment options and no more, to ensure research subject safety. Traditional frequentist biostatistical approaches currently used by the medical community are limited by the dual conditions of small sample size and hyperdimensional datasets typically present in real-world clinical trials, conditions that may be ideally addressed by ML approaches, as we have shown in this study.
Among the several learning methods used, we found that Naïve Bayes with feature selection performed the best. This is likely because probabilistic Bayesian models perform well with multi-modal data, since they assume independence when inferring the probability contribution of each feature. A strong assumption of the Naïve Bayes model is that the features are conditionally independent given the label. This assumption may not always hold true for clinical data, but we believe it is reasonable for this study for the following reasons. First, conditional independence is a relaxation that enables the calculation of the conditional probability, but it is not strictly required for using the model. The Naïve Bayes model has been widely used in many real-world problems in which the assumption may not hold, and it achieved better performance in our study compared with the other methods. Second, some features in our data, although not all of them, are conditionally independent of each other given the label, e.g., age and gender. A potentially interesting extension of this work, and a promising future direction, is to investigate how conditional dependencies can be learned and modeled in Bayes-based models. These findings motivate us to explore even more sophisticated probabilistic Bayesian models in future work to improve the proposed framework.
We gained several benefits by employing feature selection in the data analytics framework. First, we achieved improved performance by introducing feature selection and demonstrated that feature selection has the potential to improve this type of clinical investigation by finding the most effective set of variables. Second, by reducing the number of variables (ten variables in our study), the approach allowed clinicians to manually examine the selected variables and thus improved the interpretability of the learned model. Third, with a limited number of selected variables, the proposed framework can now be applied to larger-scale datasets that were previously difficult to process because of their high dimensionality.
The proposed framework, including preprocessing, feature selection and prediction, is general and can be easily extended to many other data-driven problems in clinical research under some specific conditions. First, a strong assumption in Naïve Bayes based methods is that the features are conditionally independent given the label. To extend the proposed model, we need to have a good understanding of the nature of features. Second, clinician input was incorporated in feature selection for use in the model. Different problems/datasets may require very different domain knowledge to select more informative or useful features. Applying the proposed framework from this initial study to other problems/datasets is potentially important and is one of our future goals.
An important limitation of the study is the inability to determine the generalizability of the ML models derived from the ACT NOW dataset to other prediabetic groups or populations, which should be the focus of future studies examining the real-world performance of ML approaches. This limitation, however, is intrinsic to the nature of clinical trials, whose findings or conclusions need to be validated in the general clinical population. Also, given the large number of variables before feature selection, it is difficult to model and incorporate domain knowledge from physicians into the framework.
In conclusion, ML methods were applied to a clinical trial dataset and showed good performance in identifying/predicting impaired glucose tolerance participants who developed rapid carotid plaque progression. Naïve Bayes method showed superior performance over multilayer perceptron and random forest methods and feature selection improved predictive performance. Our findings point to the utility of ML methods in data analytics for clinical applications.
JE Shaw, PZ Zimmet, M de Courten et al., Impaired fasting glucose or impaired glucose tolerance. What best predicts future diabetes in Mauritius? Diabetes Care 22(3), 399–402 (1999)
IM Stratton, AI Adler, HA Neil et al., Association of glycaemia with macrovascular and microvascular complications of type 2 diabetes (UKPDS 35): prospective observational study. BMJ 321(7258), 405–12 (2000)
M Tominaga, H Eguchi, H Manaka, K Igarashi, T Kato, A Sekikawa, Impaired glucose tolerance is a risk factor for cardiovascular disease, but not impaired fasting glucose. The Funagata Diabetes Study. Diabetes Care 22(6), 920–4 (1999)
Glucose tolerance and mortality: comparison of WHO and American Diabetes Association diagnostic criteria. The DECODE study group. European Diabetes Epidemiology Group. Diabetes Epidemiology: Collaborative analysis Of Diagnostic criteria in Europe. Lancet. 354(9179), 617–621 (1999)
T Ando, S Okada, Y Niijima et al., Impaired glucose tolerance, but not impaired fasting glucose, is a risk factor for early-stage atherosclerosis. Diabet. Med. 27(12), 1430–5 (2010)
RA DeFronzo, D Tripathy, DC Schwenke et al., Pioglitazone for diabetes prevention in impaired glucose tolerance. N. Engl. J. Med. 364(12), 1104–15 (2011)
A Saremi, DC Schwenke, TA Buchanan et al., Pioglitazone slows progression of atherosclerosis in prediabetes independent of changes in cardiovascular risk factors. Arterioscler. Thromb. Vasc. Biol. 33(2), 393–9 (2013)
S Wang, RM Summers, Machine learning and radiology. Med. Image Anal. 16(5), 933–51 (2012)
JM Bland, DG Altman, Bayesians and frequentists. BMJ 317(7166), 1151–60 (1998)
KP Murphy, Machine learning: a probabilistic perspective (MIT Press, Cambridge, 2012)
AM Chekroud, RJ Zotti, Z Shehzad et al., Cross-trial prediction of treatment outcome in depression: a machine learning approach. Lancet Psychiatry 3(3), 243–50 (2016)
HR Ali, A Dariush, E Provenzano et al., Computational pathology of pre-treatment biopsies identifies lymphocyte density as a predictor of response to neoadjuvant chemotherapy in breast cancer. Breast Cancer Res. 18(1), 21 (2016)
RA Defronzo, M Banerji, GA Bray et al., Actos Now for the prevention of diabetes (ACT NOW) study. BMC Endocr. Disord. 9, 17 (2009)
RA Defronzo, D Tripathy, DC Schwenke et al., Prevention of diabetes with pioglitazone in ACT NOW: physiologic correlates. Diabetes 62(11), 3920–6 (2013)
D Tripathy, DC Schwenke, M Banerji et al., Diabetes incidence and glucose tolerance after termination of pioglitazone therapy: results from ACT NOW. J. Clin. Endocrinol. Metab. 101(5), 2056–62 (2016)
MC Tattersall, A Gassett, CE Korcarz et al., Predictors of carotid thickness and plaque progression during a decade: the multi-ethnic study of atherosclerosis. Stroke 45(11), 3257–62 (2014)
X Hu, L Tang, J Tang, H Liu, Exploiting social relations for sentiment analysis in microblogging, in Proceedings of the Sixth ACM International Conference on Web Search and Data Mining (2013), pp. 537–546
DB Suits, Use of dummy variables in regression equations. J. Am. Stat. Assoc. 52(280), 548–51 (1957)
X. He, D. Cai, P. Niyogi, Laplacian score for feature selection. Advances in Neural Information Processing Systems. (Electronic Proceeding of the Neural Information Processing Systems Conference in 2005, Canada, 2005). pp. 507–514.
A. McCallum, K. Nigam, A comparison of event models for Naive Bayes text classification. AAAI-98 workshop on learning for text categorization (1998)
D. Lewis, in Machine Learning: ECML-98, ed. by. C. Nédellec, C. Rouveirol. Naive (Bayes) at forty: the independence assumption in information retrieval (Springer Berlin Heidelberg, Heidelberg, 1998). p. 4–15.
GW Brier, Verification of forecasts expressed in terms of probability. Mon. Weather Rev. 78, 1–3 (1950)
Centers for Disease Control and Prevention, Awareness of prediabetes—United States, 2005–2010. MMWR Morb. Mortal. Wkly. Rep. 62(11), 209–212 (2013)
M Lee, JL Saver, KS Hong, S Song, KH Chang, B Ovbiagele, Effect of pre-diabetes on future risk of stroke: meta-analysis. BMJ 344, e3564 (2012)
The ACT NOW study was originally an investigator-initiated study funded by Takeda Pharmaceuticals. We would like to thank the Office of Research of the Phoenix Veterans Affairs Health Care System and the Phoenix VA Center for Healthcare Data Analytics Research for their support. The study does not represent the views of the United States government or the Department of Veterans Affairs.
XH, NL, MA, and RM designed the concepts, developed the algorithm, and conducted experiments. PR, AS, and HL contributed to the analyses and interpretation of the results and provided critical revisions. XH and RM wrote the paper. The ACT NOW Study Investigators contributed in the original study that provided the data.
Arizona State University, Tempe, AZ, USA
Xia Hu, Peter D. Reaven, Mohammad Ali Abbasi & Huan Liu
Texas A&M University, College Station, TX, USA
Xia Hu & Ninghao Liu
Phoenix Veterans Affairs Health Care System, Phoenix, AZ, USA
Peter D. Reaven, Aramesh Saremi & Raymond Q. Migrino
University of Arizona College of Medicine-Phoenix, Phoenix, AZ, USA
Peter D. Reaven & Raymond Q. Migrino
Xia Hu
Peter D. Reaven
Aramesh Saremi
Ninghao Liu
Mohammad Ali Abbasi
Huan Liu
Raymond Q. Migrino
Correspondence to Raymond Q. Migrino.
Hu, X., Reaven, P.D., Saremi, A. et al. Machine learning to predict rapid progression of carotid atherosclerosis in patients with impaired glucose tolerance. J Bioinform Sys Biology 2016, 14 (2016). https://doi.org/10.1186/s13637-016-0049-6
Time crystals from minimum time uncertainty
Regular Article - Theoretical Physics
Mir Faizal1,
Mohammed M. Khalil ORCID: orcid.org/0000-0002-6398-44282 &
Saurya Das3
The European Physical Journal C volume 76, Article number: 30 (2016)
Motivated by the Generalized Uncertainty Principle, covariance, and a minimum measurable time, we propose a deformation of the Heisenberg algebra and show that this leads to corrections to all quantum mechanical systems. We also demonstrate that such a deformation implies a discrete spectrum for time. In other words, time behaves like a crystal. As an application of our formalism, we analyze the effect of such a deformation on the rate of spontaneous emission in a hydrogen atom.
A preprint version of the article is available at ArXiv.
The Heisenberg uncertainty principle predicts that the position of a particle can, in principle, be measured as accurately as one wants if its momentum is allowed to remain completely uncertain. However, most approaches to quantum gravity predict the existence of a minimum measurable length scale, usually the Planck length. There are also strong indications from black hole physics and other sources for the existence of a minimum measurable length [1–3]. This is because the energy needed to probe spacetime below the Planck length scale exceeds the energy needed to produce a black hole in that region of spacetime. Similarly, string theory also predicts a minimum length, as strings are the smallest probes [4–8]. Also in loop quantum gravity there exists a minimum measurable length scale, which turns the big bang into a big bounce [9].
The existence of a minimum measurable length scale in turn requires the modification of the Heisenberg uncertainty principle into a Generalized Uncertainty Principle (GUP) [4–7]; there is a corresponding deformation of the Heisenberg algebra to include momentum-dependent terms and a modified coordinate representation of the momentum operators [8, 10–15]. It may be noted that a different kind of deformation of the Heisenberg algebra occurs due to Doubly Special Relativity (DSR) theories, which postulate the existence of a universal energy scale (the Planck scale) [16–18]. These are also related to the idea of discrete spacetime [19], spontaneous symmetry breaking of Lorentz invariance in string field theory [20], spacetime foam models [21], spin-network in loop quantum gravity [22], non-commutative geometry [23–25], ghost condensation in perturbative quantum gravity [26], and Horava–Lifshitz gravity [27]. It may be noted that DSR has been generalized to curved spacetime and the resultant theory is called gravity's rainbow [28–33]. It is interesting to note that the deformation from DSR and the deformation from GUP can be combined into a single consistent deformation of the Heisenberg algebra [34].
A number of interesting quantum systems have been studied using this deformed algebra, such as the transition rate of ultra-cold neutrons in a gravitational field [35], and the Lamb shift and Landau levels [36]. There has been another interesting result derived from this deformed algebra, which shows that space needs to be a discrete lattice, and only multiples of a fundamental length scale (normally taken as the Planck length) can be measured [37]. Note that a minimum length does not automatically imply discrete lengths, or vice versa. Motivated by this result, in this paper we analyze the deformation of the algebra and the subsequent Schrödinger equation consistent with the existence of a minimum time, and demonstrate that it leads to a discretization of time as well. It may be noted that a discretization of time had also been predicted from a deformed version of the Wheeler–DeWitt equation [38]. The discretization of time, and the related breakdown of time reparametrization invariance of a system, resembles a crystal lattice in time. Time crystals have been studied recently with a very different physical motivation, e.g. analyzing superconducting rings and the spontaneous breakdown of time-translation symmetry in classical and quantum systems [39–43].
Observable time
In this section, we review the work done on viewing time as a quantum mechanical observable. It is well known that time cannot be represented as a self-adjoint operator [44]. This is because the Hamiltonian with a semi-bounded spectrum does not admit a group of shifts which can be generated from canonically conjugate self-adjoint operators. However, von Neumann had suggested that restricting quantum mechanics to self-adjoint operators could be quite limiting [45]. In fact, it was demonstrated by von Neumann that the momentum operator for a free particle bounded by a rigid wall at \(x = 0\) is not a self-adjoint operator but only a maximal Hermitian operator. This situation is similar to the time being defined as an observable.
It has been demonstrated that under certain conditions time can be viewed as a quantum mechanical observable [46–50]. This is because it is possible to use symmetric non-self-adjoint operators that satisfy the commutation relation [51, 52],
$$\begin{aligned}{}[t, H] = - i \hbar . \end{aligned}$$
In this formalism, observables are viewed as positive operator valued measures. Now for a system with Hamiltonian H the map \(b \rightarrow e^{iHb}\) constitutes a unitary representation of the time translation group. Thus, the positive operator valued measure B, with \(\theta \rightarrow B(\theta ) \), represents a time observable of the system, and it satisfies \( e^{iHb} B(\theta ) e^{-iHb} = B(\theta - b)\). So for a time observable B, it is possible to define a symmetric time operator \(t = \int t dB(t)\). This operator will not be self-adjoint. However, self-adjointness is not essential for calculating probabilities associated with the system. So, for any experiment the probability measure \(\theta \rightarrow p(\theta )\) can be associated with the states \(\rho \) by defining \(p(\theta ) = tr [\rho B(\theta )]\), where \(\theta \rightarrow B(\theta )\) is a positive operator valued measure [46]. Thus, it is possible to formally define time as an observable by using a maximal Hermitian (but non-self-adjoint) operator for time.
It is this definition of time that we will use when formally deforming the commutation relation. What we intend to do in this paper is to deform this formal definition of time so that it is consistent with the existence of a minimum measurable time interval. Mathematically, this situation will be similar to the GUP deformation of the usual Heisenberg algebra. Physically, observable time can be introduced by defining an observable with reference to the evolution of some non-stationary quantity, if events are characterized by specific values of this quantity [46]. Such a non-stationary quantity could be the tunneling time for particles. Then the existence of a minimum measurable time interval will constitute a lower bound on such measurements. The existence of a lower bound on such measurements will affect the measurement of tunneling times for particles. In fact, such systems have been analyzed by considering time as an observable [47–50]. Even though such an analysis is important, we will concentrate on another problem in this paper. We will analyze the deformation of the commutator between the Hamiltonian and time, and demonstrate that such a deformation can lead to the existence of a discrete spectrum for time.
Minimum time
We start with the modified Heisenberg algebra, the modified expression of the momentum operator in position space, and the GUP consistent with all theoretical models, correct to \({\mathscr {O}}(\alpha ^2)\). In this paper, we use units in which \(c=1\). We have
$$\begin{aligned}{}[x^i, p_j]= & {} i \hbar \left[ \delta _{j}^i - \alpha |p^k p_k|^{1/2} \delta _{j}^i + \alpha |p^k p_k|^{-1/2} p^i p_j \right. \nonumber \\&\left. +\, \alpha ^2 p^k p_k \delta _{j}^i + 3 \alpha ^2 p^i p_j\right] , \end{aligned}$$
$$\begin{aligned} p_i= & {} -i \hbar \left( 1 - \hbar \alpha \sqrt{- \partial ^j \partial _j} - 2\hbar ^2 \alpha ^2 \partial ^j \partial _j\right) \partial _i, \end{aligned}$$
where \(\alpha = {\alpha _0 \ell _{Pl}}/{\hbar }\), and \(\ell _{Pl}\) is the Planck length. It has been suggested that the parameter \(\alpha _0\) could be situated at an intermediate scale between the electroweak scale and the Planck scale, and this could have measurable consequences in the near future [36]. However, if such a deformation parameter exists, then it would be universal for all processes. This is because it would be the parameter controlling low-energy phenomena occurring because of quantum gravitational effects, and as gravity affects all systems universally, we expect this parameter also to universally deform all quantum mechanical systems. Also, the apparent non-local nature of the operators in Eq. (3) above poses no problem in one dimension (space or time). In more than one dimension, the issue was tackled by using the Dirac equation [34]. It is also possible to deal with these non-local derivatives in more than one dimension using the theory of harmonic extension of functions [56, 57]. The modified Heisenberg algebra is consistent with the following GUP, in one dimension [36]:
$$\begin{aligned}&\Delta x \Delta p \ge \frac{\hbar }{2} \left[ 1 - 2\alpha \langle p \rangle + 4\alpha ^2 \langle p^2 \rangle \right] \nonumber \\&\quad \ge \frac{\hbar }{2} \left[ 1 + \left( \frac{\alpha }{\sqrt{\langle p^2\rangle }}+ 4 \alpha ^2 \right) \Delta p^2 \right. \nonumber \\&\qquad \left. +\, 4\alpha ^2 \langle p\rangle ^2 -2\alpha \sqrt{\langle p^2 \rangle } \right] . \end{aligned}$$
One way to arrive at the temporal deformation of the commutator is to use the principle of covariance and propose the following deformation of the spacetime commutators:
$$\begin{aligned}&[x^\mu , p_\nu ] = i \hbar \left[ \delta _{\nu }^\mu - \alpha |p^\rho p_\rho |^{1/2} \delta _{\nu }^\mu + \alpha |p^\rho p_\rho |^{-1/2} p^\mu p_\nu \right. \nonumber \\&\qquad \qquad \,\,\, \left. +\, \alpha ^2 p^\rho p_\rho \delta _{\nu }^\mu + 3 \alpha ^2 p^\mu p_\nu \right] , \end{aligned}$$
$$\begin{aligned}&p_\mu = -i \hbar \left( 1 - \hbar \alpha \sqrt{- \partial ^\nu \partial _\nu } - 2 \hbar ^2 \alpha ^2 \partial ^\nu \partial _\nu \right) \partial _\mu . \end{aligned}$$
Even though we could study a temporally deformed system by using the temporal part of this covariant algebra, we will only deform the commutation relation between energy and time. This is because the deformation of the spatial part of the Heisenberg algebra has been thoroughly analyzed [34–37], and here we would like to analyze the effect of temporal deformation alone on a system. We will also simplify our analysis by only deforming the relation between time and Hamiltonian of a system. This deformation will be different from the temporal part of the deformed covariant algebra. It may be noted that such a deformation only makes sense if we view time as a quantum mechanical observable. Therefore we first define the original commutator of this observable time with Hamiltonian as \([t, H] = - i \hbar \) [51, 52]. Then we deform this commutator of the observable time with Hamiltonian to
$$\begin{aligned}{}[t, H]= & {} - i \hbar \left[ 1 + f(H)\right] , \end{aligned}$$
where f(H) is a suitable function of the Hamiltonian of the system. Thus, the temporal part of Eq. (6) yields the modified Schrödinger equation
$$\begin{aligned} H \psi = i\hbar \partial _t\psi +\hbar ^2\alpha \partial _t^2\psi . \end{aligned}$$
As can be seen from the above, this deformation of the quantum Hamiltonian will produce corrections to all quantum mechanical systems. The temporal part also implies the following time-energy uncertainty relation:
$$\begin{aligned} \Delta t \Delta E\ge & {} \frac{\hbar }{2} \left[ 1 - 2\alpha \langle E \rangle + 4\alpha ^2 \langle E^2 \rangle \right] \nonumber \\\ge & {} \frac{\hbar }{2} \left[ 1 + \left( \frac{\alpha }{\sqrt{\langle E^2\rangle }} + 4 \alpha ^2 \right) \Delta E^2\right. \nonumber \\&\left. +\, 4\alpha ^2 \langle E\rangle ^2 -2\alpha \sqrt{\langle E^2 \rangle } \right] . \end{aligned}$$
Time crystals
The spatially deformed Heisenberg algebra has been used for analyzing a free particle in a box [37]. The boundary conditions which were used for analyzing this system were \(\psi (0) =0\) and \(\psi (L) =0\), where L was the length of the box. It was demonstrated that the length of the box was quantized because of the spatial deformation of the Heisenberg algebra. As this particle was used as a test particle to measure the length of the box, this implied that space itself was quantized. The same argument can now be used for the temporal deformation. This can be done by taking the temporal analog of the particle in a box. The boundary conditions for this system can be written as \(\psi (0) =0\) and \(\psi (T) =0\), where T is a fixed interval of time. This is the temporal analog of a particle in a box, and the particle in this case is a test particle which measures the interval of time. Now we will demonstrate that in this case the interval of time has to be quantized. As this particle is a test particle used to measure this interval of time, we can argue that time itself is quantized.
The temporal part of the deformed Schrödinger equation to first order in \(\alpha \) is given by
$$\begin{aligned} i\hbar \partial _t\psi +\hbar ^2\alpha \partial _t^2\psi = E\psi , \end{aligned}$$
and it has the solution
$$\begin{aligned} \psi (t)=Ae^{\frac{-it\left( 1+\sqrt{1-4E\alpha }\right) }{2\alpha \hbar }}+ Be^{\frac{-it\left( 1-\sqrt{1-4E\alpha }\right) }{2\alpha \hbar }}. \end{aligned}$$
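Before imposing the boundary conditions, the quoted solution can be checked symbolically. The following sympy sketch is a verification added here (not part of the original analysis) that the two exponentials satisfy the first-order deformed equation above:

```python
import sympy as sp

t = sp.symbols("t", real=True)
hbar, alpha, E = sp.symbols("hbar alpha E", positive=True)
A, B = sp.symbols("A B")

s = sp.sqrt(1 - 4*E*alpha)
psi = (A*sp.exp(-sp.I*t*(1 + s)/(2*alpha*hbar))
       + B*sp.exp(-sp.I*t*(1 - s)/(2*alpha*hbar)))

# Deformed Schrodinger equation to first order in alpha: i*hbar*psi' + hbar^2*alpha*psi'' = E*psi
residual = sp.I*hbar*sp.diff(psi, t) + hbar**2*alpha*sp.diff(psi, t, 2) - E*psi
print(sp.simplify(residual))   # prints 0, confirming that the quoted solution satisfies the equation
```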
Applying the boundary condition \(\psi (0)=0\) leads to \(B=-A\), and the second boundary condition \(\psi (T)=0\) leads to
$$\begin{aligned} Ae^{\frac{-iT\left( 1+\sqrt{1-4E\alpha }\right) }{2\alpha \hbar }} \left( 1-e^{\frac{iT\sqrt{1-4E\alpha }}{\alpha \hbar }}\right) =0, \end{aligned}$$
which means that either \(A=B=0\) or both the real and the imaginary parts of the above equation are zero. The real part is
$$\begin{aligned} -2\sin \left( \frac{T}{2\alpha \hbar }\right) \sin \left( \frac{T\sqrt{1-4E\alpha }}{2\alpha \hbar }\right) =0. \end{aligned}$$
The imaginary part is
$$\begin{aligned} -2\cos \left( \frac{T}{2\alpha \hbar }\right) \sin \left( \frac{T\sqrt{1-4E\alpha }}{2\alpha \hbar }\right) =0. \end{aligned}$$
If both are zero, then
$$\begin{aligned} \sin \left( \frac{T\sqrt{1-4E\alpha }}{2\alpha \hbar }\right) =0, \end{aligned}$$
leading to
$$\begin{aligned} \frac{T\sqrt{1-4E\alpha }}{2\alpha \hbar }=n\pi , \end{aligned}$$
where \(n\in {Z}\). This means that
$$\begin{aligned} T=n\pi \frac{2\alpha \hbar }{\sqrt{1-4E\alpha }}, \end{aligned}$$
or expanding in terms of \(\alpha \)
$$\begin{aligned} T=2n\pi \hbar \left( \alpha +2E\alpha ^2+6E^2\alpha ^3+{\mathscr {O}}(\alpha ^4)\right) , \end{aligned}$$
i.e. we can only measure time in discrete steps. It is interesting to note that this discrete interval depends on the energy of the system: the larger the energy, the larger this discrete interval of time. However, since the energy dependence enters only at second and higher orders, it does not change the time interval by much, except near Planckian energy scales. It may also be noted that this time interval is of the same order as the minimum time expected directly from the time-energy uncertainty in Eq. (9). Further, it appears from Eq. (17) that the minimum time interval diverges as the energy approaches the Planck scale (\(E\sim 1/4\alpha \)). However, this divergence could be unphysical, since the Schrödinger equation (10) is deformed to first order in \(\alpha \) only. Finally, as expected, a continuous time is recovered in the limit in which \(\alpha \rightarrow 0\). In short, any physical system with finite energy can only evolve by taking discrete jumps in time rather than continuously.
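As a rough numerical illustration of the leading term of the expansion above (a back-of-the-envelope estimate added here, not taken from the paper), one can restore SI units via \(\hbar \alpha = \alpha _0 \ell _{Pl}\), i.e. \(\alpha _0 t_{Pl}\) when expressed as a time:

```python
import math

l_Pl = 1.616e-35          # Planck length [m]
c = 2.998e8               # speed of light [m/s]
t_Pl = l_Pl / c           # Planck time, ~5.4e-44 s

def min_time_step(alpha0, n=1):
    """Leading-order discrete step T ~ 2*n*pi*hbar*alpha, with hbar*alpha = alpha0*t_Pl
    once the factors of c are restored (energy-dependent corrections neglected)."""
    return 2 * n * math.pi * alpha0 * t_Pl

print(min_time_step(1.0))       # ~3.4e-43 s for a Planck-scale deformation (alpha_0 ~ 1)
print(min_time_step(7.2e23))    # ~2.4e-19 s at the bound on alpha_0 obtained in the next section
```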
Rate of spontaneous emission
We now apply the above to a concrete quantum mechanical system. The rate of spontaneous emission in a two-level system is well understood [53]. Here we shall repeat this analysis for a deformed quantum mechanical system. Now for a two-level system with eigenstates \(\psi _a\) and \(\psi _b\), the eigenvalues of the unperturbed Hamiltonian \(H^0\) can be written as
$$\begin{aligned} H^0\psi _a=E_a\psi _a, \qquad H^0\psi _b=E_b\psi _b. \end{aligned}$$
Any state can be written as a superposition of those eigenstates with the time dependence found in Eq. (11)
$$\begin{aligned} \Psi (t)=c_a\psi _a e^{\frac{-it}{2\alpha \hbar }\left( 1-\sqrt{1-4\alpha E_a}\right) }+ c_b\psi _b e^{\frac{-it}{2\alpha \hbar }\left( 1-\sqrt{1-4\alpha E_b}\right) }. \end{aligned}$$
If a time-dependent perturbation \(H'(t)\) was turned on, the wave function \(\Psi (t)\) can still be expressed as the previous equation but with a time-dependent \(c_a(t)\) and \(c_b(t)\), and the goal is to solve for \(c_a(t)\) and \(c_b(t)\). This will also hold if the time evolution of the system is given by a deformed Schrödinger equation. So, let us assume that this system actually evolves according to the deformed time-dependent Schrödinger equation,
$$\begin{aligned} H \psi= & {} H^0\psi +H'(t)\psi \nonumber \\= & {} i\hbar \partial _t\psi +\hbar ^2\alpha \partial _t^2\psi . \end{aligned}$$
Now neglecting terms of order \(\hbar \alpha \) and \(\hbar ^2\alpha \) for a two-level system, we obtain
$$\begin{aligned}&c_a H^0\psi _a e^{-i\epsilon _a t/\hbar } +c_bH^0\psi _b e^{-i\epsilon _b t/\hbar } +c_aH'\psi _a e^{-i\epsilon _at/\hbar } \nonumber \\&\quad \quad + c_bH'\psi _b e^{-i\epsilon _bt/\hbar } \nonumber \\&\quad = i\hbar \left( \dot{c}_a\psi _ae^{-i\epsilon _a t/\hbar } +\dot{c}_b\psi _be^{-i\epsilon _b t/\hbar }\right) +c_aE_a\psi _ae^{-i\epsilon _at/\hbar } \nonumber \\&\quad \quad +c_bE_b\psi _be^{-i\epsilon _bt/\hbar }. \end{aligned}$$
To simplify that last expression, we defined
$$\begin{aligned} \epsilon _a=\frac{1}{2\alpha }\left( 1-\sqrt{1-4\alpha E_a}\right) , \nonumber \\ \epsilon _b=\frac{1}{2\alpha }\left( 1-\sqrt{1-4\alpha E_b}\right) . \end{aligned}$$
It may be noted that in the limit \(\alpha \rightarrow 0\), we obtain \(\epsilon _a \rightarrow E_a\) and \(\epsilon _b \rightarrow E_b\). The first two terms cancel the last two terms. Now taking the inner product with \(\psi _a\) and solving for \(\dot{c}_a\), we obtain
$$\begin{aligned} \dot{c}_a=-\frac{i}{\hbar }\left( c_aH'_{aa}+c_bH'_{ab}e^{-i\omega _0 t}\right) . \end{aligned}$$
Here we have defined
$$\begin{aligned} H'_{ij}= & {} \langle \psi _i|H'|\psi _j\rangle ,\nonumber \\ \omega _0= & {} \frac{\epsilon _b-\epsilon _a}{\hbar }\nonumber \\= & {} \frac{\sqrt{1-4\alpha E_a}-\sqrt{1-4\alpha E_b}}{2\alpha \hbar }. \end{aligned}$$
Similarly, the inner product with \(\psi _b\) picks out \(\dot{c}_b\),
$$\begin{aligned} \dot{c}_b=-\frac{i}{\hbar }\left( c_bH'_{bb}+c_aH'_{ba}e^{i\omega _0 t}\right) . \end{aligned}$$
Since in most applications the diagonal elements of \(H'\) vanish, we get the simplified equations
$$\begin{aligned} \dot{c}_a=-\frac{i}{\hbar }H'_{ab}e^{-i\omega _0 t}c_b, \qquad \dot{c}_b=-\frac{i}{\hbar }H'_{ba}e^{i\omega _0 t}c_a. \end{aligned}$$
These equations have the same form as the un-deformed two-level system, except that in these equations \(\omega _0\) is modified. Thus, the standard analysis for the un-deformed two-level system also holds for a deformed two-level system. So if an atom is exposed to a sinusoidally oscillating electric field \(\mathbf{E}= E_0\cos (\omega t)\hat{k}\), then the perturbation Hamiltonian can be written as
$$\begin{aligned} H'(t)=-qE_0 \mathbf{r}\cos (\omega t) \end{aligned}$$
$$\begin{aligned} H'_{ba}=-\mathbf{p} E_0\cos (\omega t), \end{aligned}$$
where \(\mathbf{p}=q\langle \psi _b|\mathbf{r}|\psi _a\rangle \) is the electric dipole transition matrix element. Repeating the analysis for the un-deformed two-level system [53], we can write the rate of spontaneous emission \(\mathscr {A}\) for the deformed system as
$$\begin{aligned} \mathscr {A}=\frac{\omega _0^3|\mathbf{p}|^2}{3\pi \epsilon _0\hbar }. \end{aligned}$$
Expanding to first order in \(\alpha \), we obtain
$$\begin{aligned} \mathscr {A}=\frac{(E_b-E_a)^3 |\mathbf{p}|^2}{3\pi \epsilon _0\hbar ^4}+\frac{(E_b-E_a)^3(E_a+E_b) |\mathbf{p}|^2}{\pi \epsilon _0\hbar ^4}\alpha .\nonumber \\ \end{aligned}$$
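This expansion can be verified symbolically. The sympy sketch below (a check added here, not part of the original analysis) expands \(\mathscr {A}=\omega _0^3|\mathbf{p}|^2/(3\pi \epsilon _0\hbar )\) with the deformed \(\omega _0\) to first order in \(\alpha \) and compares it with the expression above:

```python
import sympy as sp

alpha, hbar, Ea, Eb, p, eps0 = sp.symbols("alpha hbar E_a E_b p epsilon_0", positive=True)

# Deformed transition frequency and emission rate (c = 1 units, as in the text).
omega0 = (sp.sqrt(1 - 4*alpha*Ea) - sp.sqrt(1 - 4*alpha*Eb)) / (2*alpha*hbar)
A_rate = omega0**3 * p**2 / (3*sp.pi*eps0*hbar)

A_first_order = sp.series(A_rate, alpha, 0, 2).removeO()
A_quoted = ((Eb - Ea)**3*p**2/(3*sp.pi*eps0*hbar**4)
            + alpha*(Eb - Ea)**3*(Ea + Eb)*p**2/(sp.pi*eps0*hbar**4))
print(sp.simplify(A_first_order - A_quoted))   # prints 0, reproducing the expansion above
```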
To get an order of magnitude estimate of the effect of the extra term in Eq. (31), we consider the spontaneous emission from a transition between the first and second energy levels in the hydrogen atom. Now for these levels, we have \(E_1=13.6\) eV, \(E_2=E_1/4\), and \(|\mathbf{p}|\sim 0.7qa_0\), where \(a_0\) is the Bohr radius. Thus, we obtain
$$\begin{aligned} \mathscr {A}&\approx 2.1 + 1.7\times 10^{-17} \alpha \text {~[m}^{-1}]\\&\approx 6.2 \times 10^8 + 5.1 \times 10^{-9} \alpha \text {~[s}^{-1}].\nonumber \end{aligned}$$
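As a numerical cross-check of the leading (\(\alpha \rightarrow 0\)) term (an estimate added here, with the factors of c restored for SI units): using \(|\mathbf{p}| \approx 0.7\,q a_0\) as above gives roughly \(5.5\times 10^{8}~\mathrm{s}^{-1}\), the same order of magnitude as the quoted \(6.2\times 10^{8}~\mathrm{s}^{-1}\); the quoted value corresponds more closely to the exact 2p–1s matrix element, \(\approx 0.745\,q a_0\).

```python
import math

# SI constants
hbar = 1.0546e-34; eps0 = 8.854e-12; c = 2.998e8
q = 1.602e-19; a0 = 5.292e-11; eV = 1.602e-19

dE = (13.6 - 13.6/4) * eV            # E_2 - E_1 = 10.2 eV for hydrogen
omega0 = dE / hbar                   # transition frequency in the alpha -> 0 limit
p = 0.7 * q * a0                     # dipole matrix element used in the text

A = omega0**3 * p**2 / (3 * math.pi * eps0 * hbar * c**3)
print(f"A ~ {A:.2e} s^-1")           # ~5.5e8 s^-1
```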
The uncertainty in measuring the rate of spontaneous emission for the hydrogen atom is \(\pm 0.3\,\%\) [54]. So, the bound on \(\alpha _0\) from the rate of spontaneous emission in a hydrogen atom is given by
$$\begin{aligned} \alpha _0 < 7.2 \times 10^{23}. \end{aligned}$$
Hence, the rate of spontaneous emission in hydrogen can be affected by the temporal deformation proposed in this paper at this scale. If such a deformation scale exists in nature, future measurements might be able to detect it.
It may be noted that we can also use the lifetime of particles to set bounds on \(\alpha _0\) for the modified Schrödinger equation. For example, the tau has a lifetime of \((290.3\pm 0.5)\times 10^{-15}\) s [55], and since the minimum time from Eq. (18) must be less than the uncertainty in measuring the tau's lifetime, we require \( 2\pi \hbar \alpha < 0.5 \times 10^{-15}\) s. This means that \(\alpha _0<1.5\times 10^{27}\). However, the bound on \(\alpha _0\) from the hydrogen atom is more stringent than the bound on \(\alpha _0\) from the lifetime of particles. So, in the case that a minimum measurable time exists in nature, we are more likely to first observe its effects on the rate of spontaneous emission in hydrogen atoms.
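The tau-lifetime bound is easy to reproduce numerically; the short check below (simple arithmetic added here, restoring SI units with \(\hbar \alpha = \alpha _0 t_{Pl}\)) recovers \(\alpha _0 \lesssim 1.5\times 10^{27}\):

```python
import math

l_Pl, c = 1.616e-35, 2.998e8
t_Pl = l_Pl / c                          # Planck time, ~5.4e-44 s

dt_tau = 0.5e-15                         # uncertainty in the tau lifetime [s]
alpha0_max = dt_tau / (2 * math.pi * t_Pl)
print(f"alpha_0 < {alpha0_max:.1e}")     # ~1.5e+27, matching the bound quoted above
```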
We have shown here that the existence of a minimum measurable time scale in a quantum theory naturally leads to the discretization of time. This is similar to the existence of a minimum measurable length scale leading to a discretization of space. Thus, a crystal in time gets naturally formed by the existence of a minimum measurable time scale in the universe. Time crystals have been studied recently for systems in which time reparametrization invariance is broken, just as spatial translation invariance is broken in regular crystals. Time crystals have also been studied earlier for analyzing superconducting rings [39–43]. We also analyzed the effect of such a deformation on the rate of spontaneous emission in a hydrogen atom. It would be interesting to analyze a combination of minimum length and minimum time deformations of quantum mechanics to demonstrate a discretization of space and time in four dimensions. We expect to obtain non-local fractional derivative terms in that case, which may possibly be dealt with using a theory of harmonic extension of functions [56, 57], or via the Dirac equation approach [34]. It may be noted that it is conceptually useful to view the minimum measurable time as a component of a minimum Euclidean four-volume with complex time, and then analytically continue the results to a Lorentzian manifold. However, as we analyzed a system with Galilean symmetry, we did not need to go through this procedure.
It is expected that the deformation of the Hamiltonian studied here will affect all physical systems. Thus, for example, one can study the decay rates of particles and unstable nuclei using this deformed time evolution, which are expected to change as well. In fact, by fixing the value of this deformation parameter just below the experimentally measured limit, it might be possible to devise tests for detecting such a deformation of the time evolution in quantum mechanics. The deformed Hamiltonian should affect time-dependent perturbation theory as well. For example, the out-of-equilibrium Anderson model has been studied using time-dependent density functional theory [58]. This has important applications for time-dependent processes in an open system where different scattering processes take place. This behavior will get modified due to this deformation of quantum mechanics. Similarly, quantum mechanical systems for which the strict adiabatic approximation fails, but which do not escape too far from the adiabatic limit, can be analyzed using a time-dependent adiabatic deformation of the theory [59]. It would be interesting to analyze the effect of having a minimum measurable time on such a time-dependent adiabatic deformation of the theory.
M. Maggiore, Phys. Lett. B 304, 65 (1993)
M.I. Park, Phys. Lett. B 659, 698 (2008)
S. Hossenfelder, Living Rev. Relativ. 16, 2 (2013)
D. Amati, M. Ciafaloni, G. Veneziano, Phys. Lett. B 216, 41 (1989)
A. Kempf, G. Mangano, R.B. Mann, Phys. Rev. D 52, 1108 (1995)
L.N. Chang, D. Minic, N. Okamura, T. Takeuchi, Phys. Rev. D 65, 125027 (2002)
S. Benczik, L.N. Chang, D. Minic, N. Okamura, S. Rayyan, T. Takeuchi, Phys. Rev. D 66, 026003 (2002)
P. Dzierzak, J. Jezierski, P. Malkiewicz, W. Piechocki, Acta Phys. Polon. B 41, 717 (2010)
L.J. Garay, Int. J. Mod. Phys. A 10, 145 (1995)
C. Bambi, F.R. Urban, Class. Quantum Grav. 25, 095006 (2008)
K. Nozari, Phys. Lett. B. 629, 41 (2005)
A. Kempf, J. Phys. A 30, 2093 (1997)
S. Das, E.C. Vagenas, Phys. Rev. Lett. 101, 221301 (2008)
J. Magueijo, L. Smolin, Phys. Rev. Lett. 88, 190403 (2002)
J. Magueijo, L. Smolin, Phys. Rev. D 71, 026010 (2005)
J.L. Cortes, J. Gamboa, Phys. Rev. D 71, 065015 (2005)
G. 't Hooft, Class. Quantum Grav. 13, 1023 (1996)
V.A. Kostelecky, S. Samuel, Phys. Rev. D 39, 683 (1989)
G. Amelino-Camelia, J.R. Ellis, N.E. Mavromatos, D.V. Nanopoulos, S. Sarkar, Nature 393, 763 (1998)
R. Gambini, J. Pullin, Phys. Rev. D 59, 124021 (1999)
S.M. Carroll, J.A. Harvey, V.A. Kostelecky, C.D. Lane, T. Okamoto, Phys. Rev. Lett. 87, 141601 (2001)
M. Faizal, Phys. Lett. B 705, 120 (2011)
M. Faizal, Mod. Phys. Lett. A 27, 1250075 (2012)
M. Faizal, J. Phys. A 44, 402001 (2011)
P. Horava, Phys. Rev. D 79, 084008 (2009)
J. Magueijo, L. Smolin, Class. Quantum Grav. 21, 1725 (2004)
J.J. Peng, S.Q. Wu, Gen. Relativ. Gravit. 40, 2619 (2008)
A.F. Ali, M. Faizal, M.M. Khalil, Nucl. Phys. B 894, 341 (2015)
A.F. Ali, M. Faizal, M.M. Khalil, Phys. Lett. B 743, 295 (2015)
A.F. Ali, M. Faizal, M.M. Khalil, JHEP 1412, 159 (2014)
A.F. Ali, Phys. Rev. D 89, 104040 (2014)
S. Das, E.C. Vagenas, A.F. Ali, Phys. Lett. B 690, 407 (2010)
P. Pedram, K. Nozari, S.H. Taheri, JHEP. 1103, 093 (2011)
A.F. Ali, S. Das, E.C. Vagenas, Phys. Rev. D 84, 044013 (2011)
A.F. Ali, S. Das, E.C. Vagenas, Phys. Lett. B 678, 497 (2009)
M. Faizal, A.F. Ali, S. Das, arXiv:1411.5675 (2014)
P. Bruno, Phys. Rev. Lett. 111, 070402 (2013)
F. Wilczek, Phys. Rev. Lett. 109, 160401 (2012)
A. Shapere, F. Wilczek, Phys. Rev. Lett. 109, 160402 (2012)
E. Castillo, B. Koch, G. Palma, arXiv:1410.2261 (2014)
H. Watanabe, M. Oshikawa, Phys. Rev. Lett. 114, 251603 (2015)
W. Pauli, General Principles of Quantum Mechanics (Springer, Berlin, 1980)
J. von Neumann, Mathematische Grundlagen der Quantenmechanik (Springer, Berlin, 1932)
P. Busch, M. Grabowski, P.J. Lahti, Phys. Lett. A 191, 357 (1994)
V.S. Olkhovsky, Adv. Math. Phys. 2009, 859710 (2009)
V.S. Olkhovsky, Int. J. Mod. Phys. A 22, 5063 (2007)
V.S. Olkhovsky, E. Recami, Int. J. Mod. Phys. B 22, 1877 (2008)
R. Brunetti, K. Fredenhagen, M. Hoge, Found. Phys. 40, 1368 (2010)
G. Ludwig, Foundations of Quantum Mechanics, vol. 1 (Springer, Berlin, 1983)
A.S. Holevo, Probabilistic and Statistical Aspects of Quantum Theory (North-Holland, Amsterdam, 1982)
D.J. Griffiths, Introduction to Quantum Mechanics, 2nd edn. (Prentice Hall, NJ, 2004)
W. Wiese, J. Phys. Chem. Ref. Data 38, 565 (2009)
K.A. Olive et al. (Particle Data Group), Chin. Phys. C 38, 090001 (2014)
M. Faizal, arXiv:1406.2653 (2014)
M. Faizal, Int. J. Geom. Methods Mod. Phys. 12, 1550022 (2015)
A.M. Uimonen, E. Khosravi, A. Stan, G. Stefanucci, S. Kurth, R. van Leeuwen, E.K.U. Gross, Phys. Rev. B 84, 115103 (2011)
D. Viennot, J. Phys. A 47, 065302 (2014)
The work of SD is supported by the Natural Sciences and Engineering Research Council of Canada.
Department of Physics and Astronomy, University of Waterloo, Waterloo, ON, N2L 3G1, Canada
Mir Faizal
Department of Electrical Engineering, Alexandria University, Alexandria, 12544, Egypt
Mohammed M. Khalil
Department of Physics and Astronomy, University of Lethbridge, 4401 University Drive, Lethbridge, AB, T1K 3M4, Canada
Saurya Das
Correspondence to Mohammed M. Khalil.
Funded by SCOAP3.
Faizal, M., Khalil, M.M. & Das, S. Time crystals from minimum time uncertainty. Eur. Phys. J. C 76, 30 (2016). https://doi.org/10.1140/epjc/s10052-016-3884-4
Temporal Part
Loop Quantum Gravity
Generalized Uncertainty Principle
5.3 Entropy As a Macroscopic Quantity
[ "article:topic", "authorname:crowellb", "license:ccbysa", "showtoc:no" ]
Contributed by Benjamin Crowell
Professor (Physics) at Fullerton College
5.3.1 Efficiency and grades of energy
5.3.2 Heat engines
5.3.3 Entropy
Some forms of energy are more convenient than others in certain situations. You can't run a spring-powered mechanical clock on a battery, and you can't run a battery-powered clock with mechanical energy. However, there is no fundamental physical principle that prevents you from converting 100% of the electrical energy in a battery into mechanical energy or vice-versa. More efficient motors and generators are being designed every year. In general, the laws of physics permit perfectly efficient conversion within a broad class of forms of energy.
Heat is different. Friction tends to convert other forms of energy into heat even in the best lubricated machines. When we slide a book on a table, friction brings it to a stop and converts all its kinetic energy into heat, but we never observe the opposite process, in which a book spontaneously converts heat energy into mechanical energy and starts moving! Roughly speaking, heat is different because it is disorganized. Scrambling an egg is easy. Unscrambling it is harder.
We summarize these observations by saying that heat is a lower grade of energy than other forms such as mechanical energy.
Of course it is possible to convert heat into other forms of energy such as mechanical energy, and that is what a car engine does with the heat created by exploding the air-gasoline mixture. But a car engine is a tremendously inefficient device, and a great deal of the heat is simply wasted through the radiator and the exhaust. Engineers have never succeeded in creating a perfectly efficient device for converting heat energy into mechanical energy, and we now know that this is because of a deeper physical principle that is far more basic than the design of an engine.
a / 1. The temperature difference between the hot and cold parts of the air can be used to extract mechanical energy, for example with a fan blade that spins because of the rising hot air currents. 2. If the temperature of the air is first allowed to become uniform, then no mechanical energy can be extracted. The same amount of heat energy is present, but it is no longer accessible for doing mechanical work.
Heat may be more useful in some forms than in others, i.e., there are different grades of heat energy. In figure a/1, the difference in temperature can be used to extract mechanical work with a fan blade. This principle is used in power plants, where steam is heated by burning oil or by nuclear reactions, and then allowed to expand through a turbine which has cooler steam on the other side. On a smaller scale, there is a Christmas toy, b, that consists of a small propeller spun by the hot air rising from a set of candles, very much like the setup shown in figure a.
In figure a/2, however, no mechanical work can be extracted because there is no difference in temperature. Although the air in a/2 has the same total amount of energy as the air in a/1, the heat in a/2 is a lower grade of energy, since none of it is accessible for doing mechanical work.
b / A heat engine. Hot air from the candles rises through the fan blades, and makes the angels spin.
In general, we define a heat engine as any device that takes heat from a reservoir of hot matter, extracts some of the heat energy to do mechanical work, and expels a lesser amount of heat into a reservoir of cold matter. The efficiency of a heat engine equals the amount of useful work extracted, \(W\), divided by the amount of energy we had to pay for in order to heat the hot reservoir. This latter amount of heat is the same as the amount of heat the engine extracts from the high-temperature reservoir, \(Q_H\). (The letter \(Q\) is the standard notation for a transfer of heat.) By conservation of energy, we have \(Q_H=W+Q_L\), where \(Q_L\) is the amount of heat expelled into the low-temperature reservoir, so the efficiency of a heat engine, \(W/Q_H\), can be rewritten as
\[\begin{equation*} \text{efficiency} = 1-\frac{Q_L}{Q_H} . \text{[efficiency of any heat engine]} \end{equation*}\]
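As a quick numerical illustration (the heat values below are hypothetical, chosen only to show the bookkeeping), the efficiency follows directly from the two heat transfers, here in Python:

# Hypothetical numbers illustrating the efficiency formula above; not a real engine.
Q_H = 1000.0   # heat absorbed from the hot reservoir, in joules (assumed)
Q_L = 750.0    # heat expelled into the cold reservoir, in joules (assumed)
W = Q_H - Q_L                 # work output, by conservation of energy
efficiency = 1 - Q_L / Q_H    # identical to W / Q_H
print(W, efficiency)          # -> 250.0 0.25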
It turns out that there is a particular type of heat engine, the Carnot engine, which, although not 100% efficient, is more efficient than any other. The grade of heat energy in a system can thus be unambiguously defined in terms of the amount of heat energy in it that cannot be extracted even by a Carnot engine.
c / Sadi Carnot (1796-1832).
How can we build the most efficient possible engine? Let's start with an unnecessarily inefficient engine like a car engine and see how it could be improved. The radiator and exhaust expel hot gases, which is a waste of heat energy. These gases are cooler than the exploded air-gas mixture inside the cylinder, but hotter than the air that surrounds the car. We could thus improve the engine's efficiency by adding an auxiliary heat engine to it, which would operate with the first engine's exhaust as its hot reservoir and the air as its cold reservoir. In general, any heat engine that expels heat at an intermediate temperature can be made more efficient by changing it so that it expels heat only at the temperature of the cold reservoir.
Similarly, any heat engine that absorbs some energy at an intermediate temperature can be made more efficient by adding an auxiliary heat engine to it which will operate between the hot reservoir and this intermediate temperature.
Based on these arguments, we define a Carnot engine as a heat engine that absorbs heat only from the hot reservoir and expels it only into the cold reservoir. Figures d-g show a realization of a Carnot engine using a piston in a cylinder filled with a monoatomic ideal gas. This gas, known as the working fluid, is separate from, but exchanges energy with, the hot and cold reservoirs. As proved on page 325, this particular Carnot engine has an efficiency given by
\[\begin{equation*} \text{efficiency} = 1 - \frac{T_L}{T_H} , \text{[efficiency of a Carnot engine]} \end{equation*}\]
where \(T_L\) is the temperature of the cold reservoir and \(T_H\) is the temperature of the hot reservoir.
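A minimal sketch of this formula with assumed reservoir temperatures follows; note that the temperatures must be absolute (kelvin) for the ratio to make sense:

# Hypothetical reservoir temperatures; a sketch of the Carnot limit, not a design calculation.
T_H = 500.0    # hot reservoir, in kelvin (assumed)
T_L = 300.0    # cold reservoir, in kelvin (assumed)
carnot_efficiency = 1 - T_L / T_H
print(carnot_efficiency)      # -> 0.4, i.e., at most 40% of Q_H can become work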
Even if you do not wish to dig into the details of the proof, the basic reason for the temperature dependence is not so hard to understand. Useful mechanical work is done on strokes d and e, in which the gas expands. The motion of the piston is in the same direction as the gas's force on the piston, so positive work is done on the piston.
d / The beginning of the first expansion stroke, in which the working gas is kept in thermal equilibrium with the hot reservoir.
e / The beginning of the second expansion stroke, in which the working gas is thermally insulated. The working gas cools because it is doing work on the piston and thus losing energy.
In strokes f and g, however, the gas does negative work on the piston. We would like to avoid this negative work, but we must design the engine to perform a complete cycle. Luckily the pressures during the compression strokes are lower than the ones during the expansion strokes, so the engine doesn't undo all its work with every cycle. The ratios of the pressures are in proportion to the ratios of the temperatures, so if \(T_L\) is 20% of \(T_H\), the engine is 80% efficient.
f / The beginning of the first compression stroke. The working gas begins the stroke at the same temperature as the cold reservoir, and remains in thermal contact with it the whole time. The engine does negative work.
g / The beginning of the second compression stroke, in which mechanical work is absorbed, heating the working gas back up to \(T_H\).
We have already proved that any engine that is not a Carnot engine is less than optimally efficient, and it is also true that all Carnot engines operating between a given pair of temperatures \(T_H\) and \(T_L\) have the same efficiency. (This can be proved by the methods of section 5.4.) Thus a Carnot engine is the most efficient possible heat engine.
h / Entropy can be understood using the metaphor of a water wheel. Letting the water levels equalize is like letting the entropy maximize. Taking water from the high side and putting it into the low side increases the entropy. Water levels in this metaphor correspond to temperatures in the actual definition of entropy.
We would like to have some numerical way of measuring the grade of energy in a system. We want this quantity, called entropy, to have the following two properties:
(1) Entropy is additive. When we combine two systems and consider them as one, the entropy of the combined system equals the sum of the entropies of the two original systems. (Quantities like mass and energy also have this property.)
(2) The entropy of a system is not changed by operating a Carnot engine within it.
It turns out to be simpler and more useful to define changes in entropy than absolute entropies. Suppose as an example that a system contains some hot matter and some cold matter. It has a relatively high grade of energy because a heat engine could be used to extract mechanical work from it. But if we allow the hot and cold parts to equilibrate at some lukewarm temperature, the grade of energy has gotten worse. Thus putting heat into a hotter area is more useful than putting it into a cold area. Motivated by these considerations, we define a change in entropy as follows:
\[\begin{equation*} \Delta S = \frac{Q}{T} \ \text{[change in entropy when adding heat $Q$ to matter at temperature $T$; $\Delta S$ is negative if heat is taken out]} \end{equation*}\]
A system with a higher grade of energy has a lower entropy.
Example 10: Entropy is additive.
Since changes in entropy are defined by an additive quantity (heat) divided by a non-additive one (temperature), entropy is additive.
Example 11: Entropy isn't changed by a Carnot engine.
The efficiency of a heat engine is defined by
\[\begin{equation*} \text{efficiency} = 1 - Q_L/ Q_H , \end{equation*}\]
and the efficiency of a Carnot engine is
\[\begin{equation*} \text{efficiency} = 1 - T_L/ T_H , \end{equation*}\]
so for a Carnot engine we have \(Q_L/ Q_H = T_L/ T_H\), which can be rewritten as \(Q_L/ T_{L} = Q_{H}/ T_H\). The entropy lost by the hot reservoir is therefore the same as the entropy gained by the cold one.
Example 12: Entropy increases in heat conduction.
When a hot object gives up energy to a cold one, conservation of energy tells us that the amount of heat lost by the hot object is the same as the amount of heat gained by the cold one. The change in entropy is \(- Q/ T_{H}+ Q/ T_L\), which is positive because \( T_L\lt T_H\).
Example 13: Entropy is increased by a non-Carnot engine.
The efficiency of a non-Carnot engine is less than 1 - \( T_L/ T_H\), so \(Q_L/ Q_{H} > T_{L}/ T_H\) and \(Q_L/ T_{L} > Q_{H}/ T_H\). This means that the entropy increase in the cold reservoir is greater than the entropy decrease in the hot reservoir.
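The bookkeeping in examples 11-13 can be verified with a few lines of arithmetic. The numbers below are made up for illustration; only the signs of the results matter.

# Entropy bookkeeping for examples 11-13, with hypothetical numbers.
T_H, T_L = 500.0, 300.0       # reservoir temperatures in kelvin (assumed)
Q_H = 1000.0                  # heat drawn from the hot reservoir, in joules (assumed)

# Example 11: a Carnot engine has Q_L/T_L = Q_H/T_H, so the total entropy change is zero.
Q_L_carnot = Q_H * T_L / T_H
print(-Q_H / T_H + Q_L_carnot / T_L)   # -> 0.0

# Example 12: plain heat conduction, where all of Q_H ends up in the cold reservoir.
print(-Q_H / T_H + Q_H / T_L)          # -> about +1.33 J/K, positive

# Example 13: a non-Carnot engine expels more heat than the Carnot value,
# so the cold reservoir gains more entropy than the hot reservoir loses.
Q_L_real = 1.2 * Q_L_carnot            # 20% extra waste heat (assumed)
print(-Q_H / T_H + Q_L_real / T_L)     # -> +0.4 J/K, positive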
Example 14: A book sliding to a stop
A book slides across a table and comes to a stop. Once it stops, all its kinetic energy has been transformed into heat. As the book and table heat up, their entropies both increase, so the total entropy increases as well.
All of these examples involved closed systems, and in all of them the total entropy either increased or stayed the same. It never decreased. Here are two examples of schemes for decreasing the entropy of a closed system, with explanations of why they don't work.
Example 15: Using a refrigerator to decrease entropy?
\(\triangleright\) A refrigerator takes heat from a cold area and dumps it into a hot area. (1) Does this lead to a net decrease in the entropy of a closed system? (2) Could you make a Carnot engine more efficient by running a refrigerator to cool its low-temperature reservoir and eject heat into its high-temperature reservoir?
\(\triangleright\) (1) No. The heat that comes off of the radiator coils is a great deal more than the heat the fridge removes from inside; the difference is what it costs to run your fridge. The heat radiated from the coils is so much more than the heat removed from the inside that the increase in the entropy of the air in the room is greater than the decrease of the entropy inside the fridge. The most efficient refrigerator is actually a Carnot engine running in reverse, which leads to neither an increase nor a decrease in entropy.
(2) No. The most efficient refrigerator is a reversed Carnot engine. You will not achieve anything by running one Carnot engine in reverse and another forward. They will just cancel each other out.
Example 16: Maxwell's demon
\(\triangleright\) Maxwell imagined a pair of rooms, their air being initially in thermal equilibrium, having a partition across the middle with a tiny door. A minuscule demon is posted at the door with a little ping-pong paddle, and his duty is to try to build up faster-moving air molecules in room B and slower-moving ones in room A. For instance, when a fast molecule is headed through the door, going from A to B, he lets it by, but when a slower than average molecule tries the same thing, he hits it back into room A. Would this decrease the total entropy of the pair of rooms?
\(\triangleright\) No. The demon needs to eat, and we can think of his body as a little heat engine, and his metabolism is less efficient than a Carnot engine, so he ends up increasing the entropy rather than decreasing it.
Observations such as these lead to the following hypothesis, known as the second law of thermodynamics:
The entropy of a closed system always increases, or at best stays the same: \(\Delta S\ge0\).
At present our arguments to support this statement may seem less than convincing, since they have so much to do with obscure facts about heat engines. In the following section we will find a more satisfying and fundamental explanation for the continual increase in entropy. To emphasize the fundamental and universal nature of the second law, here are a few exotic examples.
Example 17: Entropy and evolution
A favorite argument of many creationists who don't believe in evolution is that evolution would violate the second law of thermodynamics: the death and decay of a living thing releases heat (as when a compost heap gets hot) and lessens the amount of energy available for doing useful work, while the reverse process, the emergence of life from nonliving matter, would require a decrease in entropy. Their argument is faulty, since the second law only applies to closed systems, and the earth is not a closed system. The earth is continuously receiving energy from the sun.
Example 18: The heat death of the universe
Living things have low entropy: to demonstrate this fact, observe how a compost pile releases heat, which then equilibrates with the cooler environment. We never observe dead things to leap back to life after sucking some heat energy out of their environments! The only reason life was able to evolve on earth was that the earth was not a closed system: it got energy from the sun, which presumably gained more entropy than the earth lost.
Victorian philosophers spent a lot of time worrying about the heat death of the universe: eventually the universe would have to become a high-entropy, lukewarm soup, with no life or organized motion of any kind. Fortunately (?), we now know a great many other things that will make the universe inhospitable to life long before its entropy is maximized. Life on earth, for instance, will end when the sun evolves into a giant star and vaporizes our planet.
Example 19: Hawking radiation
Any process that could destroy heat (or convert it into nothing but mechanical work) would lead to a reduction in entropy. Black holes are objects whose gravity is so strong that nothing, not even light, can escape from them once it gets within a boundary known as the event horizon. Black holes are commonly observed to suck hot gas into them. Does this lead to a reduction in the entropy of the universe? Of course one could argue that the entropy is still there inside the black hole, but being able to "hide" entropy there amounts to the same thing as being able to destroy entropy.
The physicist Stephen Hawking was bothered by this question, and finally realized that although the actual stuff that enters a black hole is lost forever, the black hole will gradually lose energy in the form of light emitted from just outside the event horizon. This light ends up reintroducing the original entropy back into the universe at large.
◊ In this discussion question, you'll think about a car engine in terms of thermodynamics. Note that an internal combustion engine doesn't fit very well into the theoretical straightjacket of a heat engine. For instance, a heat engine has a high-temperature heat reservoir at a single well-defined temperature, \(T_H\). In a typical car engine, however, there are several very different temperatures you could imagine using for \(T_H\): the temperature of the engine block (\(\sim100°\text{C}\)), the walls of the cylinder (\(\sim250°\text{C}\)), or the temperature of the exploding air-gas mixture (\(\sim1000°\text{C}\), with significant changes over a four-stroke cycle). Let's use \(T_H\sim1000°\text{C}\).
Burning gas supplies heat energy \(Q_H\) to your car's engine. The engine does mechanical work \(W\), but also expels heat \(Q_L\) into the environment through the radiator and the exhaust. Conservation of energy gives
\[\begin{equation*} Q_H = Q_L+W , \end{equation*}\]
and the relative proportions of \(Q_L\) and \(W\) are usually about 90% to 10%. (Actually it depends quite a bit on the type of car, the driving conditions, etc.)
(1) \(Q_L\) is obviously undesirable: you pay for it, but all it does is heat the neighborhood. Suppose that engineers do a really good job of getting rid of the effects that create \(Q_L\), such as friction. Could \(Q_L\) ever be reduced to zero, at least theoretically?
(2) A gallon of gas releases about 140 MJ of heat \(Q_H\) when burned. Estimate the change in entropy of the universe due to running a typical car engine and burning one gallon of gas. (You'll have to estimate how hot the environment is. For the sake of argument, assume that the work done by the engine, \(W\), remains in the form of mechanical energy, although in reality it probably ends up being changed into heat when you step on the brakes.) Is your result consistent with the second law of thermodynamics?
(3) What would happen if you redid the calculation in #2, but assumed \(Q_L=0\)? Is this consistent with your answer to #1?
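One possible way to set up the estimate in part 2 is sketched below; the temperatures and the 90/10 split are rough assumptions, and your own estimates may reasonably differ.

# A rough setup for part (2); the values are assumptions, not the unique answer.
Q_H = 140e6                   # heat from one gallon of gas, in joules (given above)
T_H = 1000 + 273.0            # exploding air-gas mixture, converted to kelvin
T_env = 300.0                 # temperature of the environment, in kelvin (assumed)
Q_L = 0.9 * Q_H               # roughly 90% of the heat is expelled (given above)
delta_S = -Q_H / T_H + Q_L / T_env
print(delta_S)                # -> roughly +3e5 J/K, a net increase, consistent with the second law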
◊ When we run the Carnot engine in figures d-g, there are four parts of the universe that undergo changes in their physical states: the hot reservoir, the cold reservoir, the working gas, and the outside world to which the shaft is connected in order to do physical work. Over one full cycle, discuss which of these parts gain entropy, which ones lose entropy, and which ones keep the same entropy. During which of the four strokes do these changes occur?
Benjamin Crowell (Fullerton College). Conceptual Physics is copyrighted with a CC-BY-SA license.
Archetype A
$\square$ Summary: Linear system of three equations, three unknowns. Singular coefficient matrix with dimension 1 null space. Integer eigenvalues and a degenerate eigenspace for coefficient matrix.
$\square$ A system of linear equations (Definition SLE).\begin{align*} x_1 -x_2 +2x_3 & =1\\ 2x_1+ x_2 + x_3 & =8\\ x_1 + x_2 & =5 \end{align*}
$\square$ Some solutions to the system of linear equations, not necessarily exhaustive (Definition SSLE):
$x_1 = 2,\quad x_2 = 3,\quad x_3 = 1$
$\square$ Augmented matrix of the linear system of equations (Definition AM):\begin{bmatrix} 1 & -1 & 2 & 1\\ 2 & 1 & 1 & 8\\ 1 & 1 & 0 & 5 \end{bmatrix}
$\square$ Matrix in reduced row-echelon form, row-equivalent to the augmented matrix. (Definition RREF)\begin{bmatrix} \leading{1} & 0 & 1 & 3\\ 0 & \leading{1} & -1 & 2\\ 0 & 0 & 0 & 0 \end{bmatrix}
$\square$ Analysis of the augmented matrix (Definition RREF).\begin{align*}r&=2&D&=\set{1,\,2}&F&=\set{3,\,4}\end{align*}
$\square$ Vector form of the solution set to the system of equations (Theorem VFSLS). Notice the relationship between the free variables and the set $F$ above. Also, notice the pattern of 0's and 1's in the entries of the vectors corresponding to elements of the set $F$ in the larger examples.
$\colvector{x_1\\x_2\\x_3}=\colvector{3\\2\\0} + x_3\colvector{-1\\1\\1}$
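If you would like to check the row reduction and the vector form of the solution set by machine, the short SymPy sketch below does so (it is illustrative only, not part of the archetype).

# SymPy check of the reduced row-echelon form and the vector form of the solution set.
from sympy import Matrix, symbols

aug = Matrix([[1, -1, 2, 1],
              [2,  1, 1, 8],
              [1,  1, 0, 5]])
print(aug.rref())              # pivots in columns 0 and 1; column 2 (x_3) is free

x3 = symbols('x3')
x = Matrix([3, 2, 0]) + x3 * Matrix([-1, 1, 1])   # the vector form above
A, b = aug[:, :3], aug[:, 3]
print((A * x - b).applyfunc(lambda e: e.simplify()))   # -> the zero vector for every x3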
$\square$ Given a system of equations we can always build a new, related, homogeneous system (Definition HS) by converting the constant terms to zeros and retaining the coefficients of the variables. Properties of this new system will have precise relationships with various properties of the original system.\begin{align*} x_1 -x_2 +2x_3 & = 0\\ 2x_1+ x_2 + x_3 & = 0\\ x_1 + x_2\quad\quad & = 0 \end{align*}
$\square$ Some solutions to the associated homogeneous system of linear equations, not necessarily exhaustive (Definition SSLE). Review Theorem HSC as you consider these solutions.
$x_1 = -1,\quad x_2 = 1,\quad x_3 = 1$
$\square$ Form the augmented matrix of the homogeneous linear system, and use row operations to convert to reduced row-echelon form. Notice how the entries of the final column remain zeros.\begin{bmatrix} \leading{1} & 0 & 1 & 0 \\ 0 & \leading{1} & -1 & 0\\ 0 & 0 & 0 & 0 \end{bmatrix}
$\square$ Analysis of the augmented matrix for the homogeneous system (Definition RREF). Compare this with the same analysis of the original system, especially in the case where the original system is inconsistent (Theorem RCLS).\begin{align*}r&=2&D&=\set{1,\,2}&F&=\set{3,\,4}\end{align*}
$\square$ For any system of equations we can isolate the coefficient matrix, which will be identical to the coefficient matrix of the associated homogeneous system. For the remainder of the discussion of this system of equations, we will analyze just the coefficient matrix.\begin{bmatrix} 1 & -1 & 2\\ 2 & 1 & 1\\ 1 & 1 & 0 \end{bmatrix}
$\square$ Row-equivalent matrix in reduced row-echelon form (Definition RREF).\begin{bmatrix} \leading{1} & 0 & 1\\ 0 & \leading{1} & -1\\ 0 & 0 & 0 \end{bmatrix}
$\square$ Analysis of the reduced row-echelon form of the matrix (Definition RREF). For archetypes that begin as systems of equations, compare this analysis with the analysis of the augmented matrices for the original system and for the associated homogeneous system.\begin{align*}r&=2&D&=\set{1,\,2}&F&=\set{3}\end{align*}
$\square$ Is the matrix nonsingular or singular? (Consider Theorem NMRRI. At the same time, examine the sizes of the sets $D$ and $F$ for the analysis of the reduced row-echelon version of the matrix.)
Singular.
$\square$ The null space of the matrix. The set of vectors used in the span construction is a linearly independent set of column vectors that spans the null space of the matrix (Theorem SSNS, Theorem BNS). Solve a homogeneous system with this matrix as the coefficient matrix and write the solutions in vector form (Theorem VFSLS) to see these vectors arise. Compare the entries of these vectors for indices in $D$ versus entries for indices in $F$.\begin{align*}\spn{\set{\colvector{-1\\1\\1}} }\end{align*}
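The same null space basis can be recovered by machine; a brief SymPy sketch (illustrative only):

# SymPy check of the null space basis given above.
from sympy import Matrix

A = Matrix([[1, -1, 2],
            [2,  1, 1],
            [1,  1, 0]])
print(A.nullspace())            # -> [Matrix([[-1], [1], [1]])], matching the span above
print(A * Matrix([-1, 1, 1]))   # -> the zero vector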
$\square$ The column space of the matrix, expressed as the span of a set of linearly independent vectors that are also columns of the matrix. These columns have indices that form the set $D$ above (Theorem BCS).\begin{align*}\spn{\set{\colvector{1\\2\\1},\,\colvector{-1\\1\\1}} }\end{align*}
$\square$ The column space of the matrix, as it arises from the extended echelon form of the matrix. The matrix $L$ is computed as described in Definition EEF. This is followed by the column space described as the span of a set of linearly independent vectors that equals the null space of $L$, computed as according to Theorem FS and Theorem BNS. When $r=m$, the matrix $L$ has no rows and the column space is all of $\complex{m}$.\begin{align*}L&=\begin{bmatrix}1&-2&3\end{bmatrix}\end{align*}\begin{align*}\spn{\set{\colvector{-3\\0\\1},\,\colvector{2\\1\\0}} }\end{align*}
$\square$ The column space of the matrix, expressed as the span of a set of linearly independent vectors. These vectors are computed by bringing the transpose of the matrix into reduced row-echelon form, tossing out the zero rows, and writing the remaining nonzero rows as column vectors. By Theorem CSRST and Theorem BRS, and in the style of Example CSROI, this yields a linearly independent set of vectors that span the column space.\begin{align*}\spn{\set{\colvector{1\\0\\-\frac{1}{3}},\,\colvector{0\\1\\{\frac{2}{3}}}} }\end{align*}
$\square$ Row space of the matrix, expressed as a span of a set of linearly independent vectors, obtained from the nonzero rows of the row-equivalent matrix in reduced row-echelon form. (Theorem BRS)\begin{align*}\spn{\set{\colvector{1\\0\\1},\,\colvector{0\\1\\-1}} }\end{align*}
$\square$ Inverse of the matrix, if it exists (Definition MI). By Theorem NI an inverse exists only if the matrix is nonsingular.
$\square$ Subspace dimensions associated with the matrix (Definition ROM, Definition NOM). Verify Theorem RPNC.\begin{align*}\text{Rank: }2&&\text{Nullity: }1&&\text{Matrix columns: }3&\end{align*}
$\square$ Determinant of the matrix. The matrix is nonsingular if and only if the determinant is nonzero (Theorem SMZD).\begin{align*}\text{Determinant: }0\end{align*}
$\square$ Eigenvalues, and bases for eigenspaces (Definition EEM, Definition EM). Compute a matrix-vector product (Definition MVP) for each eigenvector as an interesting check.\begin{align*}\eigensystem{A}{0}{\colvector{-1\\1\\1}}\\ \eigensystem{A}{2}{\colvector{1\\5\\3}} \end{align*}
$\square$ Geometric and algebraic multiplicities (Definition GME, Definition AME).\begin{align*}\geomult{A}{0}&=1&\algmult{A}{0}&=2\\ \geomult{A}{2}&=1&\algmult{A}{2}&=1 \end{align*}
$\square$ Diagonalizable (Definition DZM)? No, $\geomult{A}{0}\neq\algmult{A}{0}$, Theorem DMFE.
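The eigenvalues, their multiplicities, and the failure of diagonalizability are easy to confirm by machine; a brief SymPy sketch (illustrative only):

# SymPy check of eigenvalues, multiplicities, and diagonalizability.
from sympy import Matrix

A = Matrix([[1, -1, 2],
            [2,  1, 1],
            [1,  1, 0]])
print(A.eigenvals())             # -> {0: 2, 2: 1}, i.e., eigenvalue: algebraic multiplicity
for val, alg_mult, basis in A.eigenvects():
    print(val, alg_mult, len(basis))   # geometric multiplicity = number of basis vectors
print(A.is_diagonalizable())     # -> False, since for eigenvalue 0 geometric < algebraic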
BMC Women's Health
Child marriage in Ghana: evidence from a multi-method study
Babatunde Ahonsi1,
Kamil Fuseini (ORCID: orcid.org/0000-0002-2417-1926)2,
Dela Nai2,
Erika Goldson1,
Selina Owusu1,
Ismail Ndifuna1,
Icilda Humes3 &
Placide L. Tapsoba2
BMC Women's Health volume 19, Article number: 126 (2019)
Child marriage remains a challenge in Ghana. Over the years, government and development partners have made various commitments and efforts to curb the phenomenon of child marriage. However, there is little empirical evidence on the predictors, norms and practices surrounding the practice to support their efforts, a gap this study sought to fill.
The study employed a multiple-method approach to achieve the set objectives. Data from the women's file of the 2014 Ghana Demographic and Health Survey (GDHS) was used to examine the predictors of child marriage using frequencies and logistic regression methods. Data from Key Informant Interviews (KIIs) and Focus Group Discussions (FGDs) collected in Central and Northern regions of Ghana were used to examine norms and practices surrounding child marriage using thematic analysis.
Two in ten (20.68%) girls in the quantitative sample married as children. The results revealed that girls who had never attended school compared to those who had ever attended school were more likely to marry as children (OR, 3.01). Compared with girls in the lowest wealth quintile, girls in the middle (OR, 0.59), fourth (OR, 0.37) and highest (OR, 0.32) wealth quintiles were less likely to marry as children. From the qualitative data, the study identified poverty, teenage pregnancy, and cultural norms such as betrothal marriage, exchange of girls for marriage and pressure from significant others as the drivers of child marriage.
The findings show that various socio-economic and cultural factors such as education, teenage pregnancy and poverty influence child marriage. Hence, efforts to curb child marriage should be geared towards retention of girls in school, curbing teenage pregnancy, empowering girls economically, enforcing laws on child marriage in Ghana, as well as designing tailored advocacy programs to educate key stakeholders and adolescent girls on the consequences of child marriage. Additionally, there is the need to address socio-cultural norms/practices to help end child marriage.
Child marriage (or early marriage) can be defined as "both formal marriages and informal unions in which a girl lives with a partner as if married before the age of 18" [1]. Child marriage, despite recent declines is still widely practiced in many parts of the developing world [2, 3]. In developing countries (excluding China), every third young woman continues to marry as a child [4]. While age at first marriage is generally increasing around the world, in many parts of sub-Saharan Africa, a significant proportion of girls still marry before their 18th birthday [5,6,7].
In developing countries, it is estimated that one in seven girls marry before age 15 and 38% marry before age 18 [8]. In Ghana, 4.4 and 5.8% of women aged 15–49 married by exact age 15 in 2006 and 2011 respectively. In addition, among women aged 20–24, the proportion who married before exact age 18 was 22% in 2006 and 21% in 2011 [9, 10]. The rest of the introductory section of the paper discusses reasons and incentives for child marriage, negative effects of child marriage and legal norms in relation to child marriage.
Reasons and incentives for child marriage
Child marriage is used as a mechanism to protect chastity as premarital sex and child bearing bring shame to the family [11]. In traditional Ghanaian societies premarital sex and child bearing is frowned upon, hence early marriage is encouraged. For instance, betrothal (in some cases, exchange of girls) is often early, sometimes before birth to ensure sex and child bearing occur within marriage [12]. The need to reinforce social ties or build alliances is another traditional factor that influences child marriage [13, 14].
The major religious traditions (Christianity and Islam) in Ghana encourage early marriage because premarital sex and child bearing are considered "immoral". These behaviours were, and often still are, strongly prohibited and sometimes punished. Both Christianity and Islam seek to ensure that sex and child bearing occur within marriage. Hence, they tend to encourage early marriage, mostly indirectly [15]. Some Muslim groups try to ensure that births occur within marriage by compressing the gap between age at menarche and marriage [16]. While traditional and religious practices try to protect girls from pre-marital sex and child bearing, girls who fall pregnant are sometimes married off to men who impregnated them to ensure they take care of them [12].
In Ghanaian societies, marriage is very important for women's status. Recognition and respect go hand in hand with marriage. Evidence suggest that early marriage brings some child brides respect and honor as both peers and adults in the community show them respect because they have "settled down" (married) and are seen to be responsible. Parents who have married daughters also enjoy some prestige and respect from community members [12].
Another factor that contributes to child marriage is poverty [7, 11, 17,18,19]. Its influence on child marriage is multi-dimensional that stems from parents' socio-economic status and children's demand for material goods that their parents cannot afford (in some cases attributable to parental neglect and supervision). Some parents and girls are motivated by financial gains and security to the family and they tend to agree to child marriage. In some cases, it provides financial stability to girls coming from economically disadvantaged homes as some child brides married to escape poverty. Child brides do not only get financial support from their husbands, but also from their in-laws to ensure they lack little or nothing. Some child brides are also able to amass some wealth from their husbands to take care of their own family [12]. Hence, parents who marry their children off early "are not necessarily heartless parents but, rather, parents who are surviving under heartless conditions", as some parents use child marriage as a strategy to break out of poverty [19].
Negative effects of child marriage
Child marriage is perpetuated for many reasons, some of which are perceived as beneficial. However, empirical evidence suggests that, on balance, the same reasons that make child marriage appear beneficial also make it problematic, with various negative socio-economic and health effects for girls, their children, their families and their communities.
Evidence shows elevated rates of suicidal thoughts or attempts among girls promised or requested in marriage and among married girls compared to those not yet in the marriage process, suggesting that child marriage is a problem at the very onset, even before sex and child bearing [20]. Child marriage is a form of violence against young girls as it increases their vulnerability to sexual, physical and psychological violence due to the unbalanced power dynamics within marriage [21, 22].
While child marriage is usually used to ensure that sex and child bearing occur within marriage, it effectively brings a girl's childhood and adolescence to a premature end and imposes adult roles and responsibilities on young girls before they are physically, psychologically and emotionally prepared to handle them [23]. Sexual intercourse and child bearing among girls can lead to various health complications, and the practice of child marriage worsens these health challenges. For instance, early sexual debut goes hand in hand with child marriage, which increases a girl's health risks because an adolescent's vaginal mucosa is not yet fully matured, exposing her to increased risk of sexually transmitted infections, including HIV [24]. In 29 countries including Ghana, it was found that female adolescents were more vulnerable to HIV infection than older women. Women who marry young tend to have much older husbands, often in polygamous unions in which they are junior wives, which increases their probability of HIV infection [25].
Child marriage typically results in early child bearing, which has serious health implications. The mean age at first birth of girls who marry early is approximately 2 years lower than that of women who marry as adults [21, 24]. Further, early pregnancy loss among girls age 15–19 has been found to be twice as high as that of other age groups in Ghana [26]. For instance, the 2014 GDHS reported that neonatal (42 deaths per 1000 live births), infant (62 deaths per 1000 live births), and under-5 mortality (84 per 1000 live births) were highest among children born to mothers less than 20 years compared to those aged 20 years and above [27]. In another study in Ghana, it was found that first-born children of women who married before age 18 had increased odds of mortality compared to first-borns of women who married after 18 years [21]. Thus, child marriage exposes girls to exacerbated intergenerational health risks: they face various reproductive health challenges, and children born to them have higher mortality rates and are more likely to be born prematurely [21]. Aside from reproductive health challenges, child marriage has also been found to be associated with increased likelihood of difficulties with activities of daily living (including carrying a 10 kg load for 500 m; bending, squatting or kneeling; and walking a distance of 2 km) [21].
A common belief is that child marriage is a coping strategy for poverty and that it accords girls and parents status and honour. However, evidence also shows that child marriage is a catalyst for poverty that undermines status and honour in societies. In sub-Saharan Africa, including Ghana, it was found that early marriage negatively influences education as it reduces the probability of literacy and of completing secondary school [28]. In Ghana, early marriage among girls has been found to be one of the important challenges facing effective enrolment and school attendance, which leads to school dropout [29]. In essence, it ends a girl's opportunity to continue her education and acquire employable skills, which results in persistent poverty among girls and effectively undermines their status and honour as they are unable to meet their daily needs [12, 19, 30].
Legal norms in relation to child marriage
Child marriage undermines the fundamental human rights of children and violates Article 16(2) of the Universal Declaration of Human Rights, which states that "Marriage shall be entered into only with the free and full consent of the intending spouses". It also violates Article 16 of the Convention on the Elimination of all Forms of Discrimination Against Women (CEDAW) that women should have the same right as men to "freely choose a spouse and to enter into marriage only with their free and full consent".
The 1998 Children's Act of Ghana and the 1992 Constitution of Ghana define a child as a person below the age of 18. By age 18, young persons are expected to have developed enough intellectual, emotional and physical skills, and resources to fend for themselves as well as to successfully transition into adulthood. Until then they require care from adults, support, guidance and protection [31]. The 1998 Children's Act of Ghana (Act 560), indicates that no person shall force a child: (1) (a) to be betrothed; (b) to be the subject of a dowry transaction; or (c) to be married; and (2) the minimum age of marriage of whatever kind shall be eighteen years (18 years).
In Ghana, there is commitment towards curbing child marriage. The Ministry of Gender, Children and Social Protection established a Child Marriage Unit in 2014 to promote and coordinate national initiatives aimed at ending child marriage in Ghana. In 2016, the unit in partnership with the United Nations Children's Fund (UNICEF) and other key stakeholders developed a National Strategic Framework on Ending Child Marriage in Ghana. The framework is to ensure effective, well-structured and well-guided collaboration between state and non-state institutions [32].
Despite signing on to international resolutions, national laws, and efforts by various national and international organizations, child marriage in Ghana remains a phenomenon of concern with very limited empirical evidence to support program interventions to deal with the practice. The present study seeks to (a) identify the predictors of child marriage in the broader Ghanaian society and (b) explore in-depth the norms and practices surrounding child marriage as well as how the phenomenon could be addressed.
The study employed a multiple-method approach to achieve its objectives. The study utilised quantitative data from the women's file of the 2014 Ghana Demographic and Health Survey (GDHS) to examine the predictors of child marriage. This was complemented with qualitative data collected in purposively selected districts and communities in United Nations Population Fund (UNFPA) country program support regions (Central, Northern and Greater Accra) in 2016 to examine norms and practices surrounding child marriage. These were regions with high prevalence of teenage pregnancy (Central, 21.3%) and child marriage (Northern, 35.8%) [27]. The Central region is in the southern part of Ghana along the coast. The people in the region are generally of the Akan ethnic group and matrilineal. It is bordered by Ashanti and Eastern regions to the north, Western region to the west, Greater Accra region to the east and the Gulf of Guinea to the south. The Northern region is in the northern part of the country and the people are predominantly of the Mole-Dagbani ethnic group and patrilineal. The region is bordered on the north by the Upper West and Upper East region, on the east by Togo, on the south by Brong Ahafo and Volta regions, and on the west by Côte d'Ivoire.
Quantitative procedure
The GDHS data is a nationally representative survey that was first conducted in 1988 and has since been conducted roughly every 5 years. The GDHS collects data from women aged 15–49 and men 15–59 years on various topics including socio-demographic characteristics and age at first marriage [27].
The dependent variable for this study is child marriage. The child marriage variable is dichotomous, where 1 indicates an individual woman first married/cohabited before age 18 and 0 otherwise. In this paper, the analysis of child marriage is restricted to women aged 20–24 years to ensure that no respondent was still at risk for marriage during adolescence [22, 23]. This resulted in a sample size of 1571 (weighted sample size = 1613).
Independent variables
The independent variables considered in this study were ever attended school (yes, no), religion (Christian, Muslim, Traditional/Spiritualist, No religion), ethnicity (Akan, Ga/Dangme, Ewe, Mole-Dagbani, Gurma, Other), region (Greater Accra, Western, Central, Volta, Eastern, Ashanti, Brong Ahafo, Northern, Upper East and Upper West), residence (urban, rural) and wealth quintile (lowest, second, middle, fourth, highest).
Frequencies were used to describe the characteristics of respondents in the sample. The logit model was used at two levels: first, to examine the bivariate relationships between each of the independent variables and the dependent variable without accounting for other factors; and second, to examine the net effects of each of the independent variables on the dependent variable while controlling for the other variables. Logit regression finds the best-fitting model to describe the relationship between the dichotomous variable of interest and a set of independent variables [33]. The logit coefficients do not have an intuitive interpretation because they represent effects on the log of the odds. For easier interpretation, the log odds are converted to odds ratios by exponentiation [33]. Only the odds ratios are presented for the logit regression models in this study. The basic logit regression model takes the form:
$$ \ln\left(\frac{p_i}{1-p_i}\right)=b_0+b_iX_i $$
where $p_i$ is the estimated probability of a particular event occurring to an individual with a given set of characteristics, $b_0$ is the intercept, and $b_i$ represents the slope coefficients for a set of explanatory variables $X_i$.
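As an illustration of the exponentiation step, the short Python sketch below converts logit coefficients to odds ratios. The coefficients are illustrative values, not the fitted GDHS coefficients; they are chosen only so that their exponentials land near the odds ratios reported in this study.

# Converting hypothetical logit coefficients to odds ratios (illustrative only).
import numpy as np

coefficients = {
    "never attended school": 1.10,      # assumed log-odds coefficients, not fitted values
    "middle wealth quintile": -0.53,
    "highest wealth quintile": -1.14,
}
odds_ratios = {name: float(np.exp(b)) for name, b in coefficients.items()}
print(odds_ratios)   # e.g., exp(1.10) ~ 3.0: odds of child marriage about three times higher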
The quantitative analysis was conducted using STATA (version 13). To correct for non-response and ensure representativeness across the country, the data was weighted taking into account the Demographic and Health Survey (DHS) complex survey design using the 'svyset' commands [34]. The svy prefix command subpop option was used to restrict the sample to women aged 20–24 years [34].
Qualitative procedure
The qualitative component of this study involved focus group discussions (FGDs) and Key Informant Interviews (KIIs) with stakeholders (adolescent girls, young women, parents, community leaders and those working directly or indirectly on issues affecting young people aged 10–24 years) on child marriage. The discussions covered norms and practices surrounding child marriage as well as how the phenomenon can be addressed. The KIIs and FGDs were conducted from June to August 2016. Participants for the FGDs and KIIs were recruited through key contacts in various organizations, Microfin, World Education Ghana and Ghana Health Service in the Central region and NORSAAC, Ghana Health Service and ActionAid in Northern region. The purpose of the study, the target population, as well as period of the study, and other details including mobilization of participants, logistics, transportation and community entry were discussed with the key contacts. Once feasibility was established, the key contacts identified community volunteers to mobilize eligible participants. The volunteers and key contacts sought audience with the traditional and local authorities approximately 10 days prior to data collection to inform them of the purpose of the study, target groups and key persons as well as seek their permission to conduct the research in their respective communities.
The qualitative data was used to build on statistical results by adding meaning, context and depth. Semi-structured interview guides were used for the FGDs (Additional file 1: Appendix A) and KIIs (Additional file 2: Appendix B), with a set of questions, however, questions that were not included in the guide were also asked as the interviewers probed further on things said by participants.
The focus group discussions (FGDs) targeted the northern and southern sectors of Ghana, the regions noted above for their high prevalence of teenage pregnancy (Central) and child marriage (Northern) [27].
Focus group discussions (FGDs)
Focus group discussions were conducted in three communities in the Central region (Asubo-Awutu, Obidan and Dosii-Central) and in four communities in the Northern region (Zabzugu, Sabare, Tasundo and Kukpaligu). In each region, 10 focus group discussions were conducted (20 FGDs in total). Each focus group discussion had a maximum of 10 participants. The FGDs were conducted among the following subgroups: 12–17-year olds who were married, 18–24-year olds (who got married before the age of 18), unmarried 12–17-year olds (at risk of child marriage) and unmarried 18–24-year olds. Married 12–24-year olds were asked questions specifically about their lived experiences within marriage, while unmarried 12–24-year olds were asked about their motivations to delay marriage. FGDs were also held among parents/guardians, grandparents, and other adult community members. Separate male and female parent FGDs were conducted in Kukpaligu in the Zabzugu-Tatale district at the request of community members, who indicated that women would be reluctant to talk in a mixed-gender setting.
Key informant interviews (KIIs)
KIIs were conducted with focal persons/key informants/key stakeholders in government institutions (Ghana Education Service, Social Welfare, Ghana Health Service, Ghana Police Service with special attention on the Domestic Violence and Victims Support Unit, Parliament) and non-governmental organizations (World Vision, Hope for Future Generation, Compassion International), as well as at the community level (Christian and Muslim leaders, chiefs, other community leaders and representatives, head teachers). Thirty (30) KIIs were conducted in the Central and Northern regions to get regional perspectives on child marriage, and in Greater Accra region with national representatives to get a national view on issues surrounding child marriage.
Research assistants transcribed (some with the help of a translator) the audio-recorded interviews and discussions verbatim into English. Codebooks modelled initially around topics of the interview guides were developed. Through the iterative process of coding and analysis, codes were added to the codebook. The transcripts were coded manually, guided by open and axial coding. To ensure inter-coder reliability, transcripts were analysed by a team of 5 persons (research assistants and principal investigator). The initial codes generated were then grouped into preliminary categories of themes. Through reading, re-reading and constant comparison, the preliminary categories of themes were categorized into themes and sub-themes.
Descriptive results in Table 1 show that about one in five (20.68%) of the women in the sample first married before age 18 (mean age at first marriage among girls aged 20–24 years = 17.7 years; std. dev. = 2.6; minimum age at first marriage = 10 years (0.18%, 3 women) and maximum age at first marriage = 24 years). A little more than one out of ten (11.92%) had never attended school. Four-fifths (80.33%) of the women were Christians, 15% were Muslim, about 1% were Traditionalists and 3% had no religious affiliation. Half (49.66%) of the respondents belonged to the Akan ethnic group. The highest proportion of the respondents resided in Greater Accra region (20.74%) and the least in Upper West region (2.38%). A higher percentage of the respondents were in urban areas (53.28%), and 26% of the women in the sample were in the fourth wealth quintile category.
Table 1 Characteristics of the sample, women age 20–24
Table 2 shows bivariate logistic regression results for the relationship between child marriage and each of the background variables. The results reveal a significant relationship between education and child marriage, with women who had never attended school being more likely to marry as children. Compared with Christian women, Muslim women and women with no religion were significantly more likely to marry as children, whereas Traditionalist/Spiritualist women were not significantly different. With respect to ethnicity, women belonging to the Mole-Dagbani, Gurma and Other ethnic groups were significantly more likely to marry as children compared to Akan women. Women in the Eastern, Northern, Upper East and Upper West regions were more likely to marry early compared to their counterparts in Greater Accra. Women in rural areas were significantly more likely to marry as children than women in urban areas. In addition, women in the middle, fourth and highest wealth quintiles were significantly less likely to marry early compared to those in the lowest quintile.
Table 2 Bivariate logistic regression predicting child marriage, women age 20–24
Drivers of child marriage
Table 3 shows results of binary logistic regression model predicting the net effects of each of the independent variables on child marriage controlling for other variables. The results show that the odds of a woman marrying as a child was 3 times more likely for those who had never attended school compared to their counterparts who had ever attended school.
Table 3 Results of multivariate logistic regression model predicting child marriage, women age 20–24
In the qualitative component of this study, it was also found that education was an important reason for delaying marriage. Across the two study areas, when adolescent girls and young women were asked about their plans and reasons for delaying marriage, they often mentioned their educational goals as the reason for not marrying early. Adolescent girls indicated that education was the foundation of their life aspirations, recognizing that early marriage truncates educational achievements:
Yes, I planned for that. When you get married before 18 years you can't further your education again. – FGD 18-24 Unmarried, Zabzugu
My family influenced my delay in marriage because I was always advised to further my education and be a better person before getting married. – FGD 12-17 Unmarried, Sabare
Parents also acknowledged how early marriage could derail educational achievements:
If she wants to further her education, she will say she will not marry. Unless she finishes her school before she will marry. – FGD Male Parents, Kukpaligu
When other variables are controlled for, the relationship between religious affiliation and child marriage becomes very weak. Only women with no religious affiliation were more likely to marry as children (OR = 2.12, significant at p < 0.1) compared to Christian women. Compared to Akan women, Ga/Dangme and Ewe women were less likely to marry as children, with odds of marrying as children of 0.50 (significant at p < 0.1) and 0.46 times those of Akan women, respectively.
Contrary to the bivariate results, the odds of marrying as children were 50% lower for women in Ashanti region, 48% lower in Brong Ahafo, 51% lower in Northern (significant at p < 0.1) and 50% lower in Upper East (significant at p < 0.1) compared to their counterparts in Greater Accra region. There was no significant variation between women in rural and urban areas with respect to child marriage once other variables were accounted for. While women in the second wealth quintile were not significantly different from those in the lowest wealth quintile with respect to child marriage, women in the middle, fourth and highest wealth quintiles were significantly less likely to marry as children compared to women in the lowest wealth quintile. The odds of young women in the middle, fourth and highest wealth quintiles marrying as children were 0.59 (significant at p < 0.1), 0.37 and 0.32 times those of women in the lowest wealth quintile, respectively (Table 3).
Consistent with the quantitative finding that women in the highest wealth quintile were less likely to marry as children than those in the lowest wealth quintile, the qualitative data identified poverty as a driver of child marriage in both regional settings. Adolescent girls and young women indicated that poverty was one of the main drivers of child marriage:
When I ask my father for money, he says he doesn't have, so that is why I got married early. – FGD 12-17 Married, Sabare
Sometimes, because of poverty some people give their children out. Your father may be in need and may ask for help from a rich man. After the man renders help to your father, your father will say let me pay this person back for the kind gesture he has shown me by giving you in marriage to that man. – FGD 18-24 Married, Kukpaligu
The reason why our females marry early is because some of our parents do not have, so if the man will be able to cater for you then it means that you should understand him. If my mother doesn't have and I have somebody who can cater for me, I will understand him, for the pressure on my mother to be relieved. So that is why we marry so early. – FGD 18-24 Married, Obidan
Some parents don't have, so they are unable to meet the needs of their children. And the children "by force" engage in relationships and it will result in pregnancy and she will end up entering marriage. – FGD 18-24 Unmarried, Dosii
The key informant interviews also revealed that poverty was one of the important drivers of child marriage. Some of the key informants indicated that some parents allow their girls to marry early to get something in return from the man:
I will say it is poverty, because parents always give the excuse that they are poor because of lack of employment in the system. They will say they do not have money to take care of the child so by the time they think you are of age they should just give you out for marriage; they will get something in return from that man. So, at the end of the day it is poverty. – KII, Social Welfare, Cape Coast
Well, I will say maybe poverty. One, like I said religious beliefs and traditional setup is also the cause of it because when there is poverty at home, some parents do not look at the consequences. Some even lure the children into it so that they get monies from their in-laws. Thus, poverty is one of the main issues that drive child marriage. – KII, Domestic Violence and Victim Support Unit (DOVVSU), Accra
Adolescent girls and young women described how some parents were either aware of or encouraged their relationships borne out of a lack of money/wealth at the family level:
Some of the girls' parents don't have money, so when she meets a man who promises to help her in school, she will go and tell her mother that this man says he will help her in school. Then the mum then agrees to it and from there she will be courting with the guy and suddenly she gets pregnant. – FGD 12-17 Married, Awutu Asubo
On the other hand, particularly in the Central region, some parents acknowledged that their children engaged in transactional relationships because of family hardships, which leads to marriage:
Some are experiencing hardship, so when the girl goes to meet someone who is wealthy, the mother forces her to marry that person. It is not the time for her to marry, but because of hardship and the wealth of the man, she will be forced to marry him so that he can take care of her. – FGD Parents, Assin Dosii
Despite the family's economic circumstances, not all adolescents held their parents responsible for entering early marriage. Some girls from both regions felt that since they did not have money to go to school the best alternative was to marry early:
As I am schooling, I don't have anybody taking care of me, my parents are poor, but they did not force me to the man. Because my parents are poor, I have nothing to offer myself. That is why I got married early. – FGD 12-17 Married, Sabare
I went into marriage because there was no money. If I look back, there is no one, that is why I had to force <<do what it takes>> to get married. – FGD 18-24 Married, Obidan
It is hardship. In my case, my father did not pay my fees when I was about to complete, and the man promised to pay but the registration was over. But he helped me learn a trade and I got pregnant, so he brought me here that is the reason why I got married so early. But it was not as if somebody forced me. – FGD 18-24 Married, Awutu Asubo
Establishing a causal relationship, that is, whether pregnancies occurred before marriage or whether pregnancy led to early marriage, was beyond the scope of this study. However, the focus group discussions in the two qualitative study settings revealed that teenage pregnancy was one of the main drivers of child marriage:
Yes, teenage pregnancy can lead to early marriage because we Muslims when you get pregnant you cannot live in your parents' house; you should move to your husband's house. We do not wish to go into early marriage, but immediately we get pregnant, our parents say that as far as they are concerned, we should move in with the men. – FGD 12-17 Married, Sabare
I went in for a boyfriend and whatever I asked him, he give to me. I got pregnant, I stopped school, and I am now living with him. So, it is pregnancy that led me into marriage. – FGD 12-17 Married, Obidan
What I also know is that some of the girls are in primary or junior secondary school (JSS) and before you realize the person is pregnant. So, this can make the person marry early. – FGD 18-24 Married, Kukpaligu
Parents offered their perspectives on how and why teenage pregnancy usually leads to child marriage. Some parents indicated that when a child falls pregnant, they will let the man responsible for the pregnancy marry the girl even if she does not want to enter early marriage:
When she is underage and conceives, they will give her to marry that boy. You the father will be thinking that she is not ready, but you will see that she is pregnant, so you should give her out for marriage. – FGD Male Parents, Kukpaligu
For us the mothers, we think that if you have a child, the child should live with you until she is ready for marriage. But before you realize the girls will bring pregnancy to you. Hence, the child would be forced to go to <<marry>> whoever impregnated her. – FGD Female Parents, Kukpaligu
In the key informant interviews, it also came out that the practice of parents forcing girls to marry men responsible for their pregnancy was a common phenomenon in their communities:
Yes, for some people, if someone impregnates your daughter, they will just give her to that boy to marry. Yes, to marry. – KII, Opinion Leader-Kukpaligu
Yeah, some of them their parents look out for those who put them in the family way <<pregnant>> and then they see their parents and they marry them. – KII, Opinion Leader-Zabzugu
Yes, I'll say yes. Especially in this district, what the people in the district are doing now is that; when you get a teenager pregnant, they ask you to come out and pay the bride price to legalize whatever you have done before you can even name the child. So, that is really causing more child marriages than before. – KII, Ghana Health Service-Zabzugu
[Pauses briefly to think about response to the factors that influence child marriage] Teenage pregnancy too can be one of them. – KII, Teacher at Assin Dosii
Some adolescents and young women recognized their own role in getting married or being in a union because of teenage pregnancy. They indicated that in some cases, parents could not be blamed for early marriage, as it is the girls who fall pregnant. Indeed, some of the girls insisted on marrying the men who made them pregnant:
What I know is that sometimes it is not the will of the parents that the children marry early. It is the fault of the children themselves. The person will be in school and before you realize she is pregnant. And when she is pregnant, she has to go into marriage. It is not the making of the father and the mother. – FGD 18-24 Married, Kukpaligu
When I was in school, a guy proposed to me and I accepted. After some time in the relationship, I got pregnant. When that happened, my parents were unhappy about it and did not agree for me to marry the man. I refused to listen to my parents and married the man. – FGD, 12-17 Married, Awutu Asubo
Socio-cultural drivers of child marriage
Reasons for child marriage vary from one society to the other. The data revealed that socio-cultural factors such as betrothal and the exchange of girls for marriage were common in less developed settings in the Northern region. The betrothal of young girls was also mentioned as a cultural practice that drives child marriage within some communities (Zabzugu-Tatale):
For example, just like I'm having this baby, my husband's mother will call his son and tell him when his child is grown, she would come for her as a wife for a particular man for marriage. – FGD 18-24 Married, Kukpaligu
What I also know is that while the children are young their parent will show them their husband. So, because of that, the person will be eager to enter it because she has a husband already. – FGD 18-24 Married, Kukpaligu
Aside from the betrothal of young girls, the focus group discussions among the Konkombas of the Northern region revealed the cultural practice of families exchanging girls for marriage, which was a main driver of child marriage in that area:
Most of us are exchanged, the person will go and bring her sister to your brother and your brother too will give you to that man. Because of that, we are marrying early. You are small, and your brother doesn't have a wife, he will use you to exchange like that. – FGD 12-24 Married, Tasundo
Your uncles will use you for exchange, so they will like to send you quick so that they get theirs quick. – FGD Female Parents, Kukpaligu
The key informants in the Zabzugu-Tatale district also mentioned the culture of exchange of girls among the Konkombas as one of the drivers of child marriage in the area:
The main cause is their culture and they don't want to leave that practice. They still exchange girls and when you marry, and you don't have a girl to give the other family, they will take your wife. So, they are compelled to give them. When you give them and she is a small girl, they will take her but if you don't have, they will take your wife. So sometimes they will take them out of school and exchange. – KII, Religious Leader-Kukpaligu
In descriptions of these de facto practices, the girls' lack of consent and the forced nature of these marriages were very apparent. Girls had no option but to marry the man their family members betrothed them to. In some cases, even the mothers of the girls do not have a say when the girls are being exchanged for marriage. Some of the participants indicated:
When you are young, your father will give you out for marriage so whether you like it or not, you'll have to go. And when it happens like that, you can't do anything than to agree. – FGD 18-24 Married, Kukpaligu
You the mother will be sitting there, and the uncle of the child will come and just tell you that they are taking your daughter to this community for a wife. If you say no, they will beat you and the girl and force the girl to the place. – FGD Female Parents, Kukpaligu
Bride wealth, a cultural phenomenon in most Ghanaian societies, was another dimension related to the persistence of child marriage. In the focus group discussions, girls believed that because bride wealth is cheap, men find it easy to pay and ask for the hand of young girls in marriage. Some participants therefore felt that an increase in bride wealth could serve as a deterrent and delay the age at which girls get married:
They should make the wedding things expensive. If it is expensive, it is like if the man goes and he has not got money, he can wait. Maybe when the girl is 17, he will wait till the girl is 20 before, he will have money then, to buy the things. – FGD 12-17 Unmarried, Dosii
I think if the bride price is increased, it will make the men not able to afford it, so they will not be able to pay, and this will make us wait till we get to the right age of marriage. Because if the bride price is low, the moment the man pays it, he insists you get married as early as possible, therefore increasing the bride price will make early marriage stop. – FGD 12-17 Unmarried, Sabare
Pressure to get married at an early age can come from various significant others, namely family, society, peers and self. Some parents/family encourage or pressure their daughters to get married early by always comparing them to their peers who are already married:
From parents. They see your colleagues marry, then they tell you that, 'you have seen your colleagues marrying and you are there, so you too hurry up and marry'. Thus, the pressure is from the parents. – FGD 18-24 Unmarried, Zabzugu
Sometimes the family. You know, you live with your parents, you live with your family members and most of the elders in the family will put pressure on you to marry. They will say look at this person, maybe she is your cousin or family member, she has married and maybe you are older than her and she is married, and you are still there. Through that you can even force someone to marry you. – FGD 18-24 Unmarried, Zabzugu
In the focus group discussions, it was found that marriage is cherished as it is in most African societies and unmarried young girls are usually teased or mocked because they are not married. Hence, some girls would want to get married early just to conform to the status quo:
Some want early marriage because of mockery. Sometimes, those who get married tend to make mockery of those who are not married, they ask them to accord them the respect simply because they married early. – FGD 12-17 Unmarried, Sabare
Some young girls enter early marriage because their colleagues are married or when they see their friends doing very well in marriage, they also want to get married. This was also noted in the KIIs. In other cases, young girls might go into relationships early for economic gains:
"When you see that your friends you walk with are getting married, you also want to get married. That is why we get married early." – FGD 18–24 Married, Kukpaligu
It is not because of anything that we girls are in a hurry to get married, it is because of peer pressure. When we see others like us being treated very well in their marriage, we get attracted and try to marry to be treated well <<laughs>> – FGD 12-17 Unmarried, Sabare
I also think it is bad influence that causes it, we listen to what our friends say. Sometimes a friend may have fancy clothes and you may be envious so that friend will tell you I slept with a man to get them, so you can also get a man who will look after you so that you can also get the clothes I have. She will also listen to her friend and go in for a man as her parents cannot provide her with those clothes. – FGD 18-24 Unmarried, Obidan
Sometimes it is from the peer group. Peer group influences. – KII, Opinion Leader-Kukpaligu
In the FGDs as well as from the KIIs, it was revealed that some girls decide to marry early, indicating that it was their own will or out of curiosity and in some cases, out of stubbornness (not listening to their parents' advice):
Our parents cannot force us to go and marry the men, but we did ourselves because of our own curiosity. Even though they advised us against it, we refused. That is why we are suffering like this. – FGD 12-17 Unmarried, Sabare
Nobody forced me to get married early, I forced myself to marry because I was schooling, and nobody was taking care of me that was why I got married. – FGD 12-17 Unmarried, Sabare
It was because of my stubbornness; my parents did whatever they could to cater for me. It was as result of my stubbornness and peer pressure that has landed me in such marriage. – FGD 18-24 Married, Obidan
Yes, stubbornness because regardless of what you say the children do not listen or take it when mothers talk, they just don't listen. Some also develop early, for instance at thirteen years, they menstruate so by fourteen when they go for a man, they get pregnant. When they get pregnant too, they will have a baby . . . they don't listen to their parents oh! If we try to correct them, we are not able to do so at all. – KII, Queen Mother, Central Region
Ending child marriage
To help curb the practice of child marriage in Ghana, participants in the FGDs highlighted the role of the police, adding that instead of giving a girl out for marriage because of pregnancy, the man responsible should be arrested:
Taking the case to the police station will make it stop. – FGD 12-17 Married, Sabare
To stop early marriage, when a girl gets pregnant whilst in school, the man responsible should be arrested and this will make it stop. – FGD, 12-17 Unmarried, Sabare
Participants also spoke fervently about the authority and role of the chiefs in their communities in ending the practice. Participants indicated that the chiefs should be more vocal against child marriage and that women should make it a point to report their husbands to chiefs when they are going to give their girls out for marriage:
If your husband is forcing your children to marry early, you should report your husband to the chief. – FGD Female Parents, Kukpaligu
The chief in this village can make this child marriage stop because if he can open his mouth and talk about it, it will stop. But if an elder says it, they will not believe him unless the chief himself says it. If the chief decrees it himself, they will fear him and stop. – FGD 18-24 Unmarried, Obidan
Participants went further to explain the need for community-based laws and policies, established specifically by the chief of the community and the elders, which they believed would help curb child marriage:
The chief of this community and his elders can impose their laws and it will stop. – FGD 12-17 Married, Zabzugu
The chief can pass the law and it will work because everybody wants it to stop. – FGD Female Parents, Kukpaligu
Madam, they [elders] should bring a rule that tells parents to make sure their children sleep early so if they pay attention to the children and take care of them it can make all the child marriages and teenage pregnancies stop. – FGD 12-17 Unmarried, Obidan
All that can be done is that elders and opinion leaders must make laws that any man who impregnates a girl who is in school or under age should be arrested and this will put some fear in them. – FGD 12-17 Unmarried, Sabare
Key informants also indicated that chiefs have a key role to play in curbing child marriage. They suggested that chiefs should establish laws on child marriage and ensure offenders are punished:
For the chiefs, they can even establish some laws within the community that whenever you do this these are the punishments that you are going to face, and I think no one is ready for a punishment. Through that I think we can reduce the child marriage. – KII, Teacher, Central Region
Yeah, the police we have partnered with the UNFPA sensitizing the villagers on such issues or activities. They should not involve themselves in it. – KII, Police Officer-Zabzugu
The education of girls was regarded as both a protective factor against early marriage and a means to curbing or ending the practice. Participants in the FGDs pointed out that when a girl is enrolled in school, she cannot be given out for marriage easily. This view was shared in both the parents' and the girls' focus group discussions:
When you don't want to marry early, you go to school direct. When you enter school, they won't give you like that. – FGD 12-24 Married, Tasundo
No matter how high the bride price, so far, the husband is willing to marry the child they will still pay. The only thing is we will bring our heads together and educate the girl on certain things and advise the girl that if she goes to school it will be better for her in future than if she gets into marriage. – FGD Female Parents, Kukpaligu
I think the only way to stop early marriage is through education. If the level of educating the girl child is intensified, early marriage will stop. – FGD 12-17 Unmarried, Sabare
Teenage pregnancy and marriage won't happen if you are in school because you want to do something good. Since our friends have done it and it looks nice, we should also learn so that all these things can help us. – FGD 12-17 Married, Obidan
Key informants also echoed education as a way of ending child marriage among girls, indicating that education will not only help delay marriage but also empower the girls:
We must make sure that girls' education is enforced at all levels and the welfare system. – KII, Head of NGO, Greater Accra
We are trying to look at it from every angle, it is education. That is why there are a lot of sayings about girl child education. In our religion, it is said that when you educate a girl child, you have much more blessings and that also go with the philosophical sayings of the great men in our society like Aggrey: 'If you educate a girl, you educate a whole nation.' – KII, Muslim leader, Central region
In addition, some of the key informants indicated that one of the ways to end child marriage was through awareness creation and advocacy on the consequences of child marriage:
Absolutely yes, we have been embarking on sensitizations in the communities, because it is the parents who should support the children to go school. A child cannot take herself to school even if she can take herself to school there should be support, financially, morally, everything. So, we sensitize the parents and we sensitize the girls. The girls' education unit especially sends me to go out to the schools with my colleagues and then we talk to the girls, we sensitize and inspire them to go high in education and we tell them the prospect or the benefit of education. So, we've been doing a lot of activities, we have Ahomeka, a local radio station, we've been going there to educate the public on the importance of education. – KII, Girl-child Education Officer, Central Region
Some of the key informants also indicated that there are laws in Ghana to curb child marriage, but the challenge was with the enforcement of the laws:
The policies are there, because our law is clear on child marriage. Hence, if people marry a child at the age of fifteen, it is against our laws, but there is no punishment, you understand. – KII, Head of NGO, Greater Accra
In my opinion, the laws on child marriage are weak. When it comes to child marriage issues, it is like there are no sanctions. I have never witnessed a parent being sanctioned for giving out his or her child for early marriage. I am yet to see that. So, I feel stiffer punishment should be there for people who do that. So that it would scare the others from doing it. – KII, Staff of Ghana Health Service, Zabzugu
This study sought to identify the predictors of child marriage in Ghana and explored norms and practices surrounding child marriage as well as how the phenomenon can be addressed. It is worth noting that sexually active, married or mothering girls aged 12–15 years raise different issues from those aged 16–17 years; however, this discussion is beyond the scope of the present study. From the quantitative data, one in five (20.68%) young girls aged 20–24 years married as children, a reduction from 24.58% in the 2008 GDHS data. At this rate of decline, Ghana will most likely not meet Sustainable Development Goal 5, Target 5.3, which seeks to eliminate all harmful practices, such as child, early and forced marriage and female genital mutilation, by 2030 [35].
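To see why the 2030 target appears out of reach at this pace, a crude extrapolation of the two survey estimates is instructive. The short Python sketch below is our illustration only, not part of the study's analysis, and it assumes that the 2008–2014 trend simply continues, either in percentage-point or in proportional terms.

```python
# Back-of-envelope projection of child marriage prevalence in Ghana, assuming the
# 2008-2014 trend continues unchanged (illustrative only; not the study's analysis).
p_2008, p_2014 = 24.58, 20.68   # % of women aged 20-24 married before 18 (GDHS)
years_between = 2014 - 2008
horizon = 2030 - 2014

# Linear extrapolation: constant percentage-point decline per year (~0.65 pp/year)
linear_decline = (p_2008 - p_2014) / years_between
p_2030_linear = p_2014 - linear_decline * horizon

# Proportional extrapolation: constant relative decline per year (~2.9% per year)
annual_ratio = (p_2014 / p_2008) ** (1 / years_between)
p_2030_proportional = p_2014 * annual_ratio ** horizon

print(f"Linear projection for 2030: {p_2030_linear:.1f}%")              # ~10.3%
print(f"Proportional projection for 2030: {p_2030_proportional:.1f}%")  # ~13.0%
```

Under either crude assumption, prevalence in 2030 remains above 10%, far from the elimination envisaged in Target 5.3.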
This is considerably lower than findings from other developing country contexts. For instance, in five Indian states, 63% of women aged 20–24 years were found to have been married before age 18 [36]. The lower rate in Ghana can plausibly be explained by some of the key activities implemented by the Child Marriage Unit, which include the establishment of an Advisory Committee composed of influential individuals to tackle child marriage; formation of a network of stakeholders for experience sharing on best practices, lessons learnt and guidance on what works and what strategies do not work; launch of the Ending Child Marriage Campaign in Ghana in 2016; public sensitization through the use of popular Ghanaian personalities and the mass media; engagement with the youth to get their ideas on how to end child marriage; and engagement with the African Union and other actors at the continental level to share and learn from other African countries' efforts to end child marriage [32]. This could also be the result of a significant increase in girl-child education in Ghana, which has contributed to the decline in the incidence of early marriage (also see [7]). Young girls who had never been to school were more likely to marry as children compared to their counterparts who had ever been to school. This finding is in tandem with similar studies in Ghana and other sub-Saharan African countries [22, 37]. This underscores the importance of education as a preventive measure against child marriage.
Contrary to other studies in Ghana and other developing countries [6, 37], the results of the present study revealed that while young girls with no religious affiliation were marginally more likely to marry as children compared to their Christian counterparts, young Muslim girls were not different from young Christian girls. The differences in the findings could be attributed to the age range of women in the samples considered in the respective studies or context. Plausibly, the influence of religion is waning in modern Ghanaian societies as demonstrated in other studies (e.g. [38]). There were elements of cultural influence on child marriage as ethnicity and low bride wealth were found to be related to child marriage. This can be attributed to the high value placed on girl-child virginity as a source of honour to the family and higher bride wealth [13].
The results indicated that young girls in the Ashanti, Brong Ahafo, Northern and Upper East regions were less likely to marry as children compared to their counterparts in the Greater Accra region. The reversal of these results in the relationship between region and child marriage between the bivariate and multivariate analyses can be explained by accounting for other factors in the multivariate model. The results, especially in the Brong Ahafo, Northern and Upper East regions, can plausibly be explained by the continuous efforts by development partners such as UNFPA in those regions to curb child marriage. In addition, perhaps the traditional marriage process is being bypassed in the Greater Accra region, attributable to the level of modernization or development in the region. In such modernized or developed settings, parents and families are less involved in individual marital preferences or strategies than they were in previous generations or in less developed or modernised contexts. Hence young girls can go into marriage early or live with men (cohabit) as if they are married without repercussions (also see [7]).
The influence of modernization or development was also manifested in the qualitative findings through the existence of socio-cultural practices such as betrothal and the exchange of girls for marriage as marital strategies, which were common in less developed settings (such as the Zabzugu-Tatale district in the Northern region) in contrast to modernised settings. In some communities in the district, young girls were betrothed as early as when they were born, and once they grew up they had no choice but to marry the men they were betrothed to. The results showed that these were cultural practices the communities were reluctant to let go of despite their negative consequences.
Household economic status appeared to be significantly related to child marriage. The results revealed that young girls in the middle, fourth and highest wealth quintiles were significantly less likely to marry as children compared to their counterparts in the lowest wealth quintile. This finding was corroborated by the qualitative data, which suggested that poverty was also a crucial factor influencing child marriage. Parents who could not take care of the needs of their young girls either encouraged or forced them into early marriage [5, 12, 39]. In other cases, it was the desires and wants of young girls that led them into early marriage. The data showed that what usually led to teenage pregnancy was poverty, where the families of young girls were unable to take care of their needs in school and in their social lives. Once a girl got pregnant, she was forced to marry the man responsible for the pregnancy. These issues suggest that child marriage is used as an economic strategy for upward social mobility by girls and their parents in some instances. However, some girls acknowledged that it was not only poverty that led them into child marriage but asserted that it was sometimes their own stubbornness, inquisitiveness or materialistic desires that made them marry early.
Pressure from significant others also appeared to influence child marriage. Parents and other members of the society usually pressurized young girls into child marriage by comparing them to their peers who were married and sometimes appear to be doing well in their marital homes. The results further showed that some young girls enter child marriage simply because their peers were married, not knowing the negative effects of child marriage as some of the young girls lamented in the FGDs. From the results, there are strong similarities in terms of the drivers of child marriage in the two regional settings, even though they differ in several aspects such as language, lineage system and socio-economic development. Perhaps, this reiterates the widespread nature of the phenomenon though the context may differ.
Participants were asked how child marriage could be ended and they proffered various solutions. Participants in the qualitative interviews indicated that advocacy would be useful in curbing the practice. Hence tailored advocacy programs for adolescent girls should be developed. These programs should focus on educating communities and raising awareness on the consequences of child marriage. Additionally, the programs should be designed to equip community members to challenge insidious socio-cultural practices such as betrothal and exchange of girls for marriage. This could be done through opinion leaders such as chiefs to influence public opinion. Conscious efforts should be made by duty bearers to initiate a discourse for a specific policy on child marriage since the Children's Act of Ghana does not comprehensively address the issue of child marriage.
The education of girls was regarded as a protective factor against early marriage. Participants in the focus group discussions indicated that when a girl is enrolled and kept in school, it can delay her age at first marriage. Interventions to stop child marriage should therefore include a component that aims at improving the retention of adolescent girls in school. For out-of-school adolescent girls, conscious efforts should be made to empower them through vocational skills building so that they are able to earn a living.
Participants in the interviews also noted that the police and other law enforcement institutions should step up efforts to curb child marriage. Hence, law enforcement agencies should put major focus on implementing and enforcing existing laws governing child marriage in Ghana. Some participants also indicated that chiefs should put in more efforts by speaking publicly against child marriage and make local decrees prohibiting child marriage, as well as penalise offenders.
A potential limitation of this study is that the qualitative study was conducted in selected districts in the Northern and Central regions; hence, the results cannot be generalized to the whole country. However, these findings give some indications of and reflect the issues surrounding child marriage in these areas. These findings might not be very different from what is experienced in other parts of the country.
The findings reveal that various socio-economic and cultural factors such as education, teenage pregnancy, poverty and the exchange of girls for marriage influence child marriage. Hence, efforts to curb child marriage should be geared towards retention of girls in school, empowering girls economically through vocational training, enforcing laws on child marriage in Ghana, as well as designing tailored advocacy programs to educate key stakeholders and adolescent girls on the consequences of child marriage. Further, there is the need to address socio-cultural norms/practices to help end child marriage. Additionally, efforts should be directed towards curbing teenage pregnancy, which will lead to reducing child marriage. This could be done through working with reproductive health partners, both local and international, to improve adolescent girls' access to and utilization of reproductive health services including family planning.
The survey and dataset used for this study were from the 2014 Ghana Demographic and Health Survey. The survey is available in the report [27]. The quantitative dataset generated and/or analysed during the current study is also available on the Demographic and Health Surveys Program repository, https://dhsprogram.com/data/new-user-registration.cfm. The qualitative interview guides were developed specifically for this study. Four FGD guides and one KII guide were utilized to elicit qualitative data as described in the Methods section (see page 10). The qualitative dataset generated and/or analysed during the current study is not publicly available due to funding agreements but is available from Population Council and UNFPA ([email protected]) upon written request and approval.
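For readers who obtain the survey data from the DHS Program repository, the following sketch indicates how the headline prevalence figure reported above could, in principle, be recomputed. It is a minimal illustration rather than the authors' code: the file name is hypothetical, and the variable names (v012 for current age, v511 for age at first cohabitation/marriage, v005 for the sample weight) follow the standard DHS recode convention and should be verified against the 2014 GDHS codebook.

```python
# Illustrative sketch only (not the authors' code) of how the headline estimate might be
# recomputed from the 2014 GDHS women's (individual recode) file downloaded from
# dhsprogram.com. The file name is hypothetical; check variable names against the codebook.
import pandas as pd

ir = pd.read_stata("GHIR72FL.DTA", convert_categoricals=False)

women_20_24 = ir[(ir["v012"] >= 20) & (ir["v012"] <= 24)].copy()
# v511 is missing for women who have never married/cohabited; the comparison below
# evaluates to False for those cases, which is the intended classification.
women_20_24["married_before_18"] = women_20_24["v511"] < 18

weights = women_20_24["v005"] / 1_000_000  # DHS stores weights multiplied by one million
prevalence = (women_20_24["married_before_18"] * weights).sum() / weights.sum()
print(f"Weighted share of women aged 20-24 first married before 18: {prevalence:.2%}")
```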
CEDAW:
Convention on the Elimination of all Forms of Discrimination Against Women
DHS:
Demographic and Health Survey
FGD:
Focus Group Discussion
GDHS:
Ghana Demographic Health Survey
KII:
Key Informant Interviews
United Nations Children's Fund. Early marriage: a harmful traditional practice. New York: UNICEF; 2005. https://www.unicef.org/publications/index_26024.html. Accessed 10 May 2016
Singh S, Samara R. Early marriage among women in developing countries. Int Fam Plan Perspect. 1996;22:148–75.
United Nations Children's Fund. Ending child marriage: progress and prospects. New York: UNICEF; 2014. https://www.unicef.org/media/files/Child_Marriage_Report_7_17_LR..pdf. Accessed 10 May 2016
Santhya KG. Early marriage and sexual and reproductive health vulnerabilities of young women: a synthesis of recent evidence from developing countries. Curr Opin Obstet Gynecol. 2011;23:334–9.
Hossain M, Mahumud R, Saw A. Prevalence of child marriage among Bangladeshi women and trend of change over time. J Biosoc Sci. 2016;48:530–8.
Kamal S, Hassan C, Alam G, Ying Y. Child marriage in Bangladesh: trends and determinants. J Biosoc Sci. 2015;47:120–39.
Mensch BS, Bagah D, Clark WH, Binka F. The changing nature of adolescence in the Kassena-Nankana district of northern Ghana. Stud Fam Plan. 1999;30:95–111.
UNFPA, UNICEF. Fact sheet: girls and young women. 2011. https://social.un.org/youthyear/docs/fact-sheet-girl-youngwomen.pdf. Accessed 10 May 2018.
Ghana Statistical Service. Ghana multiple Indicator cluster survey 2006. Accra, Ghana: Ghana Statistical Service; 2006.
Ghana Statistical Service. Ghana multiple indicator cluster survey with an enhanced malaria module and biomarker, 2011, final report. Accra: Ghana Statistical Service; 2011.
Malhotra A. The causes, consequences and solutions to forced child marriage in the developing world: testimony submitted to U.S. house of representatives human rights commissions. Washington, DC: International Center for Research on Women; 2010. https://www.icrw.org/files/images/Causes-Consequences-and%20Solutions-to-Forced-Child-Marriage-Anju-Malhotra-7-15-2010.pdf. Accessed 26 May 2018
University of Ghana Centre for Social Policy Studies, World Vision Ghana. A study on child marriage in selected World Vision Ghana operational areas. Ghana: WVG & UG-CSPS; 2017. http://csps.ug.edu.gh/content/study-child-marriage-selected-world-vision-ghana-operational-areas. Accessed 8 Mar 2018
Bulley M. Early childhood marriage and female circumcision in Ghana. Seminar on traditional practices affecting the health of women and children in Africa. Dakar: Senegal Ministry of Public Health and the NGO Working Group on Traditional Practices Affecting the Health of Women and Children; 1984. https://www.popline.org/node/401036. Accessed 10 Aug 2016
Nour NM. Child marriage: a silent health and human rights issue. Rev Obstet Gynecol. 2009;2:51–6.
Addai I. Religious affiliation and sexual initiation among Ghanaian women. Rev Relig Res. 2000;41:328–43.
Kirk D. Factors affecting Muslim natality. Belgrade: United Nations, New York; 1967. https://www.popline.org/node/516796. Accessed 10 Aug 2018
Chowdhury FD. The socio-cultural context of child marriage in a Bangladeshi village. Int J Soc Welf. 2004;13:244–53.
Mathur S, Greene M, Malhotra A. Too young to wed: the lives, rights and health of young married girls. Washington, DC: International Center for Research on Women (ICRW); 2003. https://www.issuelab.org/resources/11421/11421.pdf. Accessed 25 May 2018
Nour NM. Health consequences of child marriage in Africa. Emerg Infect Dis. 2006;12:1644–9.
Gage AJ. Association of child marriage with suicidal thoughts and attempts among adolescent girls in Ethiopia. J Adolesc Health. 2013;52:654–6.
de Groot R, Kuunyem MY, Palermo T. Child marriage and associated outcomes in northern Ghana: a cross-sectional study. BMC Public Health. 2018;18.
Erulkar A. Early marriage, marital relations and intimate partner violence in Ethiopia. Int Perspect Sex Reprod Health. 2013;39:6–13.
United Nations Population Fund. Marrying too young: end child marriage. New York: United Nations Population Fund; 2012. https://www.unfpa.org/sites/default/files/pub-pdf/MarryingTooYoung.pdf. Accessed 10 May 2018
Hessburg L, Awusabo-Asare K, Kumi-Kyereme A, Nerquaye-Tetteh JO, Yankey F, Biddlecom A, et al. Protecting the next generation in Ghana: new evidence on adolescent sexual and reproductive health needs. New York: Guttmacher Institute; 2007. https://www.guttmacher.org/sites/default/files/report_pdf/png_ghana.pdf. Accessed 10 Aug 2016
Clark S, Bruce J, Dude A. Protecting young women from HIV/AIDS: the case against child and adolescent marriage. Int Fam Plan Perspect. 2006;32:79–88.
Henry R, Fayorsey C. Coping with pregnancy: experiences of adolescents in Ga Mashi Accra. Calverton: ORC Macro; 2002. https://dhsprogram.com/pubs/pdf/QRS5/copingwithpregnancy.pdf. Accessed 10 Aug 2016
Ghana Statistical Service, Ghana Health Service, ICF International. Ghana demographic and health survey 2014. Rockville: GSS, GHS, and ICF International; 2015.
Nguyen MC, Wodon Q. Impact of child marriage on literacy and education attainment in Africa. Background paper for fixing the broken promise of education for all. 2014. http://allinschool.org/wp-content/uploads/2015/02/OOSC-2014-QW-Child-Marriage-final.pdf. Accessed 24 Oct 2016.
Ampiah JG, Adu-Yeboah C. Mapping the incidence of school dropouts: a case study of communities in northern Ghana. Comp Educ. 2009;45:219–32.
Karei EM, Erulkar AS. Building programs to address child marriage: the Berhane Hewan experience in Ethiopia. New York: Population Council; 2010. https://www.popcouncil.org/uploads/pdfs/2010PGY_BerhaneHewanReport.pdf. Accessed 10 Jun 2016
Republic of Ghana. The children's act, 1998. 1998. http://www.unesco.org/education/edurights/media/docs/f7a7a002205e07fbf119bc00c8bd3208a438b37f.pdf. Accessed 10 Sep 2016.
Ministry of Gender, Children and Social Protection. National strategic framework on ending child marriage in Ghana 2017-2026. Accra: Ministry of Gender, Children and Social Protection; 2016. https://www.girlsnotbrides.org/wp-content/uploads/2017/05/2017-2026-National-Strategic-Framework-on-ECM-in-Ghana.pdf. Accessed 1 Apr 2019
DeMaris A. A tutorial in logistic regression. J Marriage Fam. 1995;57:956–68.
StataCorp. Stata 13 base reference manual. College Station: StataCorp LP; 2013. https://www.surveydesign.com.au/docs/manuals/stata13/r.pdf. Accessed 5 Oct 2015
United Nations. Sustainable development goals. 2015. https://sustainabledevelopment.un.org/sdg5. Accessed 25 Jul 2019.
Santhya KG, Ram U, Acharya R, et al. Associations between early marriage and young women's marital and reproductive health outcomes: evidence from India. Int Perspect Sex Reprod Health. 2010;36:132–9.
Amoo EO. Trends and determinants of female age at first marriage in Sub-Saharan Africa (1990–2014): What has changed? Afr Popul Stud. 2017;31.
Fuseini K, Kalule-Sabiti I. Women's autonomy in Ghana: does religion matter? Afr Popul Stud. 2015;29:1831–42.
Muthengi E, Gitau T, Austrian K. Is working risky or protective for married adolescent girls in urban slums in Kenya? Understanding the association between working status, Savings and Intimate-Partner Violence. PLoS One. 2016;11:e0155988.
The authors would like to thank Ann Blanc (Population Council) and Martin Bawa Amadu (UNFPA, Ghana) for their guidance and comments on this study. The authors wish to express their gratitude to the Demographic and Health Survey Program for allowing them to use their data. The authors are also grateful to all the consultants and research assistants who supported the study.
UNFPA Ghana provided funding and guidance on the design for this study. Population Council implemented the study including finalising the study design and conducting data collection, analysis and interpretation of the data as well as developing the manuscript. Co-authors from the funding body reviewed and made substantive revisions to the manuscript at various stages. However, the content is solely the responsibility of the authors and does not necessarily represent the official views of the authors' employers or funders. Any opinion, finding, and conclusion or recommendation expressed in this material is that of the authors.
UNFPA Ghana, P. O. Box GP, 1423, Accra, Ghana
Babatunde Ahonsi, Erika Goldson, Selina Owusu & Ismail Ndifuna
Population Council, P. O. Box CT 4906, Cantonment, Accra, Ghana
Kamil Fuseini, Dela Nai & Placide L. Tapsoba
The Global Cottage, Inc., Florida, USA
Icilda Humes
All authors brainstormed to conceptualise this study. KF and DN carried out the quantitative and qualitative analyses respectively. KF drafted the manuscript. BA, EG, SO, IN, IH and PLT reviewed and made substantive revisions to the manuscript at various stages. All authors read and approved the final manuscript for submission.
Correspondence to Kamil Fuseini.
The Ghana Demographic and Health Survey protocol was reviewed and approved by the Ghana Health Service Ethical Review Committee and the Institutional Review Board of ICF International. Ethical clearance was acquired from the Ghana Health Service Ethical Review Committee and the Population Council Institutional Review Board for the qualitative component of this study. Permission was also obtained from community leaders in the study areas. For both datasets, written consent of each participant was obtained and where necessary written parental/guardian consent was obtained before the participant assented to be part of the study. Respondents gave their consent to participate in the studies voluntarily.
Additional file 1: Appendix A. Focus Group Discussion guides.
Additional file 2: Appendix B. Key Informant Interview guide.
Ahonsi, B., Fuseini, K., Nai, D. et al. Child marriage in Ghana: evidence from a multi-method study. BMC Women's Health 19, 126 (2019) doi:10.1186/s12905-019-0823-1
Women's public health issues
Reflections on the Dasgupta Review on the Economics of Biodiversity
Ben Groom (ORCID: orcid.org/0000-0003-0729-143X) & Zachary Turk
Environmental and Resource Economics volume 79, pages 1–23 (2021)
The Dasgupta Review provides a rich overview of the economics of biodiversity, paints a bleak picture of the current state of biodiversity, and is a call to arms for action in anticipation of the CBD COP 15. The Review takes a global perspective aimed at the high level of international and national policy on biodiversity, while elucidating the very local nature of biodiversity threats and values. The approach is orthodox in its diagnosis via the language of externalities, natural capital, shadow pricing, asset returns, and the suite of remedial policies that follow. Yet, at its centre is an 'unorthodox' perspective: the economy is embedded in the environment and growth is limited. We offer reflections on this framing in light of its objectives for biodiversity. The limits to growth message will be criticised and applauded in equal measure by different economists. The central place of valuation and the aggregated concept of biodiversity will draw criticism from outside the discipline. Yet the Review provides a foundation for biodiversity economics, and its largely orthodox framing may invoke the intended step change in the mainstream approach to economic growth.
The Dasgupta Review on the Economics of Biodiversity is a comprehensive and high-level report on the economic explanation behind the current state of global biodiversity and the failings of the economic system that are responsible. It provides the policy prescriptions that economists ought to be making to reverse the impending, some would say current, disaster that the unfettered global economy is inflicting on the biosphere (Bradshaw et al. 2021). The statistics are eyewatering. Over the past 11,000 years, since the start of agriculture, terrestrial vegetation biomass has halved. Moreover, in the last 500 years alone more than 20% of biodiversity has been lost, including the extinction of over 700 plant and around 600 vertebrate species (Bradshaw et al. 2021). The current rate of extinction is 15 times the background rate, meaning that we are on course for a mass extinction event (75% loss of species in a 'geologically short period' of less than 3 million years; Ceballos et al. 2015). Populations are down too, by nearly 70%. Furthermore, although it has been known for some time that biodiversity loss, habitat destruction and the trade in wild animals are linked to zoonoses (e.g. Daily and Ehrlich, 1996; Pepin, 2013), we now find ourselves in the middle of a global pandemic that appears to be linked to the sale of exotic species and the destruction of their habitats (Wu et al., 2020). The costs of environmental degradation are now very tangible.
The Review, housed at the UK Treasury, is "high-level" in that its fundamental aim was not to preach to the choir of environmental economists, but to speak to the ministries at the core of governments worldwide that make decisions on economic policy, the measurement of long-term well-being and the economic incentives that affect biodiversity and nature. It is aimed at finance ministries, central banks and, particularly, the mainstream economists who work in and advise these institutions, which in turn determine the allocation of public and private capital. In the Q&A of the Review's launch event, the first question that Professor Sir Partha Dasgupta was asked was "what is the first thing that you would change?". His response was telling: "my colleagues in the economics profession". This response encapsulates the view that the mainstream economic theory of growth has been hegemonic, influential and stubbornly persistent as the theoretical platform for defining economic goals worldwide. If one discards the decades of work in environmental economics, at least since Dasgupta and Heal (1974) and Dasgupta and Heal (1980), from the category of 'mainstream' growth economics, mainstream theories of economic growth have essentially ignored the demands economic activity places on the biosphere and the constraints that the biosphere places on economic activity. The Dasgupta Review is comprehensive, detailed and thorough on the role of biodiversity in determining well-being from the micro to the macro level. Nevertheless, it is this high-level macroeconomic point that is the essential message. The economy is embedded in the environment, and the biosphere and the biodiversity contained within it are not something that can be easily substituted. Neither can technological change solve the essential imbalance between the demands of economic activities and the renewable output of the environment. There are, the Review argues, limits to growth due to embeddedness in the biosphere and binding planetary boundaries.
Realising this aim and communicating this message requires the right language, one that can be understood at a high level and by the mainstream. The idea that biodiversity is valuable at the macroeconomic level requires the language of natural capital, assets and asset returns, stocks and flows and national accounts, when its audience is the mainstream economists in ministries and in finance. This language, while probably anathema to many who are concerned about biodiversity loss and who see economics as the problem, is also necessary within the theoretical framework that Dasgupta uses: a hybrid of mainstream economic growth theories which presents biodiversity as a macro aggregate stock and flow. From the perspective of environmental and resource economics (ERE), this framing is familiar ground. Less so within the mainstream. Yet, as we discuss, the idea of limits to growth and the embeddedness of the economy in the biosphere is an important departure both from the mainstream and from typical discussions of sustainable economic development in ERE (e.g. Hamilton and Hepburn 2017). Embeddedness is more in the realm of ecological economics (e.g. Boulding 1968). Irrespective of these discussions, limits to growth is one thing; imposing limits on growth to rebalance economic demands with the environment is quite another.
With these high-level aims and the audience in mind, in what follows the way in which the Dasgupta Review's message is conveyed is explored, and some of the key ideas are discussed. Of course, the Review is more than these aims and the language of ERE. The attention to detail is impressive given the broad coverage of topics. Among the numerous things that we learn from the Review are the manifold values that biodiversity can provide. The abridged version tells us that 60% of cancer drugs in the 90s came from soil fungi and bacteria (see Box 2). We also learn that subsidies to activities that harm biodiversity run to about US$4-6 trillion annually (Box 8.1 of the Review), and that in a stress test of the Dutch financial sector, €1.4 trillion of investments were highly dependent on ecosystem services, and that the footprint of Dutch financial institutions' investments is the equivalent of 58,000 \(\mathrm{km}^{2}\) of pristine natural habitat, 25% more than the area of the Netherlands itself (see Box 17.8 of the Review). Furthermore, were everyone to consume the diets seen in rich countries today, it would require an area of land greater than the entire surface of the globe. Such is the evidence of overshooting sustainable use of the biosphere.
The Review will be a key resource for economists and policy makers from here on. The aggregated approach to biodiversity, which ostensibly glosses over the complexities of biodiversity and the role it plays in the embedded economy (see Pascual et al. (2021) for a perspective), can be understood in terms of the audience and the high level aims of the Review. While light on specific policies and only hinting at practical steps, the Review provides a much needed foundation upon which debates on limits to growth can be centred, and policy responses subsequently built. The following sections hope to contribute to both.
Measuring Biodiversity
In order to understand the high-level arguments that the Dasgupta Review makes concerning the role of biodiversity in the economy, and the economy within the biosphere, it is important to understand the way in which biodiversity is conceived of in the Review. The measures of biodiversity and Natural Capital that are used to make the main arguments in the Review are the equivalent of the macroeconomic aggregates of flows like output or income (Y) on the one hand, and the stocks of 'labour' (L) and 'capital' (K) on the other, that appear in typical neoclassical growth models. Net Primary Product (NPP) is the flow of biomass regenerated by a stock of primary producers (plants, algae, bacteria) that are the building blocks for species, ecosystems and communities, which fall under the umbrella term of natural capital. Primary producers convert primary energy sources into useful biomass and other inputs which flow into these higher ecological structures. There is biodiversity both within ecosystems, in the genetic diversity within and between species, and between ecosystems, communities and biomes. Aggregate NPP has the same level of abstraction as the typical macroeconomic flows, containing a wide variety of biomass, and the stocks that produce NPP are also diverse. Just as output is a collection of products measured in a common metric, so is NPP. Just as K is an abstraction of many different forms of productive man-made capital, so is the stock of primary producers and ultimately the natural capital of which they are part. The need for such stock and flow measures of biodiversity reflects both the theory of sustainable economic development and its focus on wealth as the determinant of long-run well-being, as seen in Arrow et al. (2012), and also the aims and audience of the Review as discussed above. The framing fits the arguments that need to be made and the audience of macroeconomists who are being targeted. NPP is also potentially more easily understood as being relevant to economic growth than some other measures of biodiversity that could have been used, such as species richness, genetic diversity, 'intactness' or even habitat. NPP reflects an emphasis on functionality in the measurement of biodiversity: in supporting ecosystems, generating ecosystem services and engendering resilience. Using NPP also allows a fairly standard Natural Resource Management framing for conveying ideas about biodiversity formally.
Conceptually then, NPP has clear practical appeal with regard to the audience, the context of economic growth and the central message of the Review. Fears that this definition of biodiversity is too limited are allayed in Ch 2, which is a useful reference in its own right to help navigate this complex territory. There is a firm ecological basis, rooted in the ecological literature, for framing biodiversity via NPP. Nevertheless, NPP does not necessarily satisfy all of the characteristics that are typically associated with biodiversity. For instance, the correspondence between NPP, the biomass produced each year, and biodiversity is not always positive. There are situations where NPP is large in non-diverse systems, such as mono-crop plantations or agriculture, grassland ecosystems, or some fisheries. Neither does NPP necessarily reflect the nuances of anthropocentric or intrinsic values of biodiversity. Such values view biodiversity as a direct determinant of welfare (e.g. aesthetic values) or as a store of information that affects or, via solving future problems of pathogen resistance, has the potential to affect welfare indirectly (genetic diversity in itself), or as an aspect of natural capital / wealth with associated ecosystem services (Mace 2014). Yet the Review convincingly positions NPP as the key lens through which to understand the many roles of biodiversity: biodiversity contributes to NPP, it is an outcome of primary production, and it facilitates natural capital and its resilience in a way analogous to the way trust facilitates economic activity. Care is naturally taken to separate out the valuation side from the product/biomass side. For instance, the shadow prices for invasive species are likely to be negative, despite successful biomass production. Some aspects of biodiversity do not have a positive value for humans, and are effectively pollutants (see Section 2.6).
The focus on the functionality of biodiversity, with NPP as the central concept, means that some measures of biodiversity are virtually ignored. Fans of Weitzman's work on measuring biodiversity will be disappointed to see no mention of what some would argue are seminal works. From a functionality perspective, such measures, which focus on genetic distinctiveness, typically ignore quantity/population size, the distribution of species on the ground, and complex interactions within ecosystems. While practical examples of Weitzman's measure exist for primates (Jean-Louis et al. 1998), cattle (Reist-Marti et al. 2003) and cacao (Samuel 2013), and extensions to include more complex ecological relationships have been attempted (van der Heide et al. 2005; Courtois et al. 2014; Simianer and Simianer 2008), such measures are perhaps more appropriate for more specific informational problems, like organising gene banks or specific conservation interventions. It is difficult to see how such measures could be central to the objectives of the Review. Yet, despite the thorough justification for the framing of biodiversity around NPP in the Review, perhaps one criticism might be that it skirts around some of the more practical aspects of measuring biodiversity, particularly which measures of biodiversity are useful for which purpose. Nevertheless, one thing the Review's conception of biodiversity has in common with Weitzman's informational-type measure is that the focus is taken away from matters relating to particular (e.g. flagship) species that humans find important, more subjective ideas of 'pristine-ness' and normative ideas of what conservation ought to do. Of course, the aggregated and functional NPP approach does not solve all disputes about the meaning and definition of biodiversity that beset pluralistic approaches. Pascual et al. (2021) provide a fascinating excursion into these difficulties.
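For readers unfamiliar with the measure referred to above, the sketch below is a minimal implementation of Weitzman's defining recursion, in which the diversity of a set of species is built up from pairwise (for example genetic) distances. The species labels and distances are invented for illustration, and setting the diversity of a single species to zero is just one possible normalisation.

```python
# Minimal sketch of Weitzman's recursive diversity measure over a set of species,
# using invented pairwise distances. The diversity of a single species is set to zero.
from functools import lru_cache

species = ("A", "B", "C", "D")
dist = {frozenset(pair): d for pair, d in [
    (("A", "B"), 2.0), (("A", "C"), 5.0), (("A", "D"), 6.0),
    (("B", "C"), 4.0), (("B", "D"), 6.0), (("C", "D"), 3.0),
]}

def d(i, group):
    """Distance from species i to its closest relative in `group`."""
    return min(dist[frozenset((i, j))] for j in group)

@lru_cache(maxsize=None)
def diversity(group):
    """Weitzman diversity: V(S) = max over i of [ V(S without i) + d(i, S without i) ]."""
    if len(group) <= 1:
        return 0.0
    return max(diversity(group - {i}) + d(i, group - {i}) for i in group)

full_set = frozenset(species)
print("Aggregate diversity of {A,B,C,D}:", diversity(full_set))
print("Marginal diversity value of species C:",
      diversity(full_set) - diversity(full_set - {"C"}))
```

The last line illustrates why such measures suit specific informational problems: the recursion delivers a marginal diversity value for each species, which is useful for gene banks or targeted conservation choices but hard to connect to an aggregate flow such as NPP.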
The special characteristics of biodiversity that require specific attention in economics are discussed at length, often making the point that standard economic analysis is sometimes inappropriate. A good example in Ch 13 concerns the discussion of non-linearities in the environment and biosphere, which essentially invalidate marginal analysis if the objective is to find the best sustainable equilibrium. Chapters 3 and 4 discuss the lumpiness and complementarity of investments in ecosystem services, and how a fragmented approach would not necessarily pass a CBA test if it ignored such complementarities. Furthermore, Weitzman's dismal theorem is used as an example of how conventional CBA may fail in the presence of fat-tailed uncertainty (Weitzman 2009). Just as in climate change policy where there is a serious rift between the Social Cost of Carbon and the carbon price people face, the Review argues the same is an 'important truth' (p. 168) with regard to biodiversity. The next section explains how the Review formalises this point (in general terms) in a model of growth embedded in the environment.
Limits to Growth?
The Review's central message is that the economy is embedded in nature, and hence dependent on it. This is short-hand for saying that there are limits within which economic activity should remain. The formal analysis in the supplement to Ch 4 (Ch 4*) models embeddedness via a materials balance constraint, which reflects the demands of economic activity on a generic renewable natural capital asset. As discussed in Ch 4, the basis of this framing is closely related to, and motivated by, the concept of planetary boundaries first discussed by Rockstrom et al. (2009) and Steffen et al. (2015), which point out the quantitative constraints on various natural processes and the current demands placed upon them by economic activity (climate, nitrates, forests, biodiversity, etc.). The materials balance ideas also harken back to Boulding and Georgescu-Roegen, whose steady state economy ideas remain influential, if nowhere else than in ecological economics (Boulding 1968; Georgescu-Roegen 1971).
The model is recognisable as a more or less standard natural resource modelling approach, applied to the economy as a whole. The limits presented here are not exactly the 'limits to growth' in the sense of the Club of Rome report of the early 70s, which emphasised non-renewable resources and their imminent exhaustion (while forgetting relative price changes and substitution), but rather focus on the renewable flow of NPP, and the erosion of the stocks (the primary producers and other aspects of natural capital) that underpin NPP. The chief concern of the Dasgupta Review is that the demands placed on the biosphere outstrip the renewable supply of NPP from natural capital, from which flow the benefits of biodiversity and nature (fish, soil, water quality, forests, climate, etc.). Ch 4 provides two arguments to suggest that biodiversity has been run down too much, and is in danger of collapse. First, the Review argues that these natural assets are currently below the level that can maximise well-being by looking at the disparity in returns compared to man-made capital: a disparity of 14 percentage points in favour of Nature's primary producers (see Box 2.3 and Bar-On et al. (2018)). Second, there is a considerable imbalance or 'impact inequality' since, according to Wackernagel et al. (2019b), estimates of the global ecological footprint suggest that the demands on NPP are approximately 170% of the supply. This erosion of natural capital is unsustainable since these stocks are for the most part irreplaceable in aggregate, and not generally substitutable for physical or other capitals.
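In the notation of the resource dynamics equation introduced in the next section, and taking the ecological footprint estimate at face value, the impact inequality can be restated as follows (our illustrative rearrangement, not an equation from the Review):

$$\begin{aligned} \frac{Y/\alpha _{X}+Y/\alpha _{z}}{G\left( S\right) }\approx 1.7\quad \Rightarrow \quad \frac{dS}{dt}=G\left( S\right) -\frac{Y}{\alpha _{X}}-\frac{Y}{\alpha _{z}}\approx -0.7\,G\left( S\right) <0, \end{aligned}$$

that is, the stock of natural capital is currently being drawn down each year by an amount of the order of 70% of what the biosphere regenerates.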
It should not be underestimated how much of a departure the ideas of embeddedness and planetary boundaries are from the mainstream of economic growth theory. Neither should it be forgotten that issues of sustainability and its relationship to renewable and non-renewable natural resources are absolutely familiar to environmental and resource economists, since Dasgupta and Heal (1974), Stiglitz (1974), Hartwick (1977), and Dasgupta and Heal (1980), via Hamilton et al. (1999), Arrow et al. (2004), Pezzey (2004) and Dasgupta (2010), until more recent policy pieces like Arrow et al. (2012), collections like Hamilton and Hepburn (2017) and textbook analyses such as Fleurbaey and Blanchet (2013). Furthermore, the embeddedness idea is pretty much the essential and definitional departure point for ecological economics. Yet, while the idea of limiting the demands on environmental and natural resources seems intuitive, and is well-trodden ground in the field (environmental and resource, and ecological economics), it is not the focus of mainstream growth economics, which emphasises the role of technical change and institutions in driving perpetual growth (Acemoglu et al. 2001; Romer 1990; Aghion et al. 2004). Even recent proposals for better measures of macro performance have avoided the environment (Jones and Klenow 2016). There are some high-profile exceptions that address the environment, such as Nordhaus' work on climate change, Brander, Copeland and Taylor's work on trade and the environment (e.g. Copeland and Taylor 2004), and Acemoglu et al. (2012), but these still remain firmly 'unembedded' in the terminology of the Review. In each case the environment is just tacked on, seen more as a sink for pollutants than the basis of well-being, and infinite growth is always possible. An exception to this is Brander and Taylor (1998), which focused on the boom-bust environment-population travails of Easter Island but is otherwise intended to be canonical. With these exceptions aside, even within environmental and resource economics, when the field focuses on sustainability there has been a strong emphasis on the substitutability of natural capital for other forms: weak sustainability (e.g. Hartwick 1977; Hamilton et al. 1999), with some notable exceptions such as Neumayer (2012). Physical laws and materials balances are more the realm of ecological economics.
As the Review makes clear in Ch 4, recent contributions, such as Arrow et al. (2012), do not make weak sustainability assumptions per se when making the positive connection between comprehensive wealth measures and long-term, sustainable well-being. Rather, they emphasise the important role of shadow prices in reflecting approaching non-substitutability, tipping points or subsistence constraints. Nevertheless, whether weak or strong, sustainability has largely been the reserve of environmental, resource and ecological economics. Coupled with embeddedness and limits to growth, the Review is a deliberate departure from mainstream growth theory.
Irrespective of its provenance, from the perspective of long-run growth, there are three immediate questions that arise from the Review's framing of biodiversity economics in the aggregate. First, are there really inherent limits to growth as economists understand it? Second, if we are beyond the limits of sustainable demands on NPP, how far, and how pressing is the action required? Third, how can we correct the imbalance? Is a period of degrowth required, or can we rely on technological change? The next section looks at the formal representation of the embedded economy from Ch 4* of the Review as a means of discussing some of these points.
Formal Model of the Embedded Economy
The formal analysis of the economy embedded in the environment/nature adjusts the Solow framework to include a materials balance equation (See Ch 4*). Embeddedness appears first in a resource dynamics equation, which is an 'impact inequality' when natural regeneration, \(G\left( S\right)\), is outweighed by the direct harvesting, \(R= \frac{Y}{\alpha _{X}}\), and the indirect demands of the economy as a whole, \(\frac{Y}{\alpha _{Z}}\):
$$\begin{aligned} \frac{dS}{dt}=G\left( S\right) -\frac{Y}{\alpha _{X}}-\frac{Y}{\alpha _{Z}} \end{aligned}$$
where \(\alpha _{X}>0\) and \(\alpha ^{*}\ge \alpha _{Z}>0\). Natural regeneration is subject to constraints of its own, with \(G\left( S\right) =rS\left[ 1-S/{\underline{S}}\right] \left[ \left( S-L\right) / {\underline{S}}\right]\), where r is the intrinsic growth rate of S, L is the minimum viable level of natural capital (above this is the 'safe zone'), and \({\underline{S}}\) is the global carrying capacity. The second and third terms on the RHS reflect, respectively, the direct extraction of S, \(R=\frac{Y}{\alpha _{X}}\), and the indirect resource intensiveness of output (coupling), \(\frac{Y}{\alpha _{Z}}\). With output Y also dependent on S and the flow of renewable resources, R, \(Y=Af\left( S,K,H,R\right)\), the embeddedness of the economy in nature is completed by the constraint \(\alpha ^{*}\ge \alpha _{Z}>0\). This conception of the economy contains limits to growth. Exogenous growth in Y via A (TFP) eventually leads to more demands on S via the materials balance equation until an impact inequality occurs, \(\frac{dS}{dt}<0\), and eventually \(\frac{dY}{dt}<0\). How quickly this happens depends on how close the economy is to exiting the safe zone defined by L. Figures 1, 2 and 3 show the optimal paths of growth and development associated with this model. These optimal paths are not necessarily sustainable, and typically lead to negative growth in the future. Figure 1 shows the sensitivity of the optimal path to the planetary boundary, represented by L. If the economy is operating below the safe zone, so \(S<L\) (represented by numbers lower than 1 in Fig. 1), a collapse of GDP and the biosphere happens sooner than if the economy is well within the safe zone. The perception of positive growth over the next few years belies the negative growth that is to come. Still, the time horizons are important, as are changes in the other parameters of the model. Figure 2 shows similar results for GDP per capita (GDPpc), in which the downturn due to the impact imbalance is starker with higher population growth. Furthermore, in this set-up, technological change also 'buys time', to use Dasgupta's phrase, but it cannot separate the economy from the environment. The key to this point is that \(\alpha _{Z}\) is bounded above by \(\alpha ^{*}\), meaning that technological change can never completely decouple the economy from the environment. Figure 2 shows that larger values of \(\alpha _{Z}\) merely postpone the inevitable decline of an optimal path.
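These dynamics are straightforward to reproduce numerically. The sketch below is not the Review's own calibration: the Cobb-Douglas output function, the fixed savings rule and every parameter value are illustrative assumptions, chosen only to show how exogenous TFP growth eventually pushes demands on the biosphere past its regenerative capacity, dragging down S and then Y.

```python
import math

# Illustrative parameters -- assumptions for exposition, not the Review's calibration
r, S_bar, L = 0.1, 10.0, 2.0        # intrinsic regeneration, carrying capacity, safe boundary
alpha_X, alpha_Z = 20.0, 30.0       # extraction and coupling efficiencies (alpha_Z <= alpha*)
s, delta, g_A = 0.25, 0.05, 0.02    # savings rate, depreciation rate, exogenous TFP growth
dt, T = 0.25, 2000                  # Euler step and horizon

def G(S):
    """Regeneration with a minimum viable stock L and carrying capacity S_bar."""
    return r * S * (1.0 - S / S_bar) * ((S - L) / S_bar)

A, K, S = 1.0, 1.0, 6.0             # initial TFP, produced capital, biosphere stock
for step in range(1, T + 1):
    Y = A * K**0.3 * S**0.2                                # stylised output, dependent on S
    S = max(S + dt * (G(S) - Y/alpha_X - Y/alpha_Z), 0.0)  # materials balance (impact inequality)
    K += dt * (s * Y - delta * K)                          # standard Solow accumulation
    A *= math.exp(g_A * dt)                                # exogenous technological change
    if step % 400 == 0:
        print(f"t = {step*dt:6.1f}   Y = {Y:7.3f}   S = {S:7.3f}")
```

With these (arbitrary) numbers, output rises for a time on the back of capital accumulation and TFP growth while S is steadily mined; once S falls below L the decline accelerates and Y eventually follows, mirroring the qualitative shape of the paths in Figs. 1, 2 and 3 (though the paths here follow a fixed savings rule rather than an optimal plan).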
These dynamics capture the point that there are relevant policy interventions to be had in relation to technological change and population growth, but the long-run implications are inescapable: growth ad infinitum is not possible in an embedded economy. The prognosis looks particularly bad if the analysis of ecological footprints is correct, which suggests that global demands on NPP now outstrip global supply by a factor of 1.7, so that \(S<L\); on this evidence global consumption left the safe zone around 1970 (see Box 4.6) (Wackernagel et al. 2019a).
The decline in GDP per capita arises because as the level of GDP rises so do the demands on the biosphere, and optimality trades off growth today against overshoot and degrowth tomorrow. Of course, the paths illustrated are optimal paths in an augmented Solow model. An environmentally sustainable path could in principle be chosen, one that stays within the planetary boundaries, such that \(\frac{dS}{dt}=0\). Is positive growth possible forever in this case? The answer is found in the materials balance condition with the limiting constraint on \(\alpha _{Z}\). Working this through, consider the materials balance equation and how it changes over time:
$$\begin{aligned}&\frac{d}{dt}\frac{dS}{dt} =\frac{d}{dt}\left[ G\left( S\right) -\frac{Y}{ \alpha _{X}}-\frac{Y}{\alpha _{Z}}\right] \nonumber \\&\quad =G_{S}\left( S\right) {\dot{S}}-\left[ \left( \frac{1}{\alpha _{X}}+\frac{1}{ \alpha _{Z}}\right) g_{Y}-\frac{g_{\alpha _{X}}}{\alpha _{X}}-\frac{ g_{\alpha _{Z}}}{\alpha _{Z}}\right] Y \end{aligned}$$
With \({\dot{S}}=0\) and with \(\alpha _{Z}\) reaching the limit \(\alpha ^{*}\) in the long run, for the steady state to be maintained with positive growth in GDP, the condition on growth and technological change is:
$$\begin{aligned} g_{\alpha _{X}}\left( \frac{\alpha ^{*}}{\alpha ^{*}+\alpha _{X}} \right) =g_{Y} \end{aligned}$$
which means that for a time technological change in \(\alpha _{X}\) can be relied upon to maintain positive growth, but as \(\alpha _{X}\) gets larger, the only sustainable rate of growth in GDP is 0. Worse, this analysis does not contain any rebound-type effects on demand. The formal connection would be that technological change in extraction (increased \(g_{\alpha _{X}}\)) leads to a higher impact of income on the biosphere (lower \(g_{\alpha _{Z}}\) or \(\alpha ^{*}\)). The Review provides agricultural intensification as an example of this process, which reduces the demands on land and soils, but raises pesticide and herbicide use. Indeed, there are many such endogenous processes that this stylized model does not reflect.
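For completeness, the intermediate step between the materials balance expression and the growth condition above (implicit in Ch 4* rather than spelled out there) is short: imposing \({\dot{S}}=0\) and requiring it to persist, and letting \(\alpha _{Z}\) sit at its bound \(\alpha ^{*}\) so that \(g_{\alpha _{Z}}=0\), the bracketed term must vanish, giving
$$\begin{aligned} \left( \frac{1}{\alpha _{X}}+\frac{1}{\alpha ^{*}}\right) g_{Y}=\frac{g_{\alpha _{X}}}{\alpha _{X}}\quad \Rightarrow \quad g_{Y}=\frac{g_{\alpha _{X}}/\alpha _{X}}{1/\alpha _{X}+1/\alpha ^{*}}=g_{\alpha _{X}}\left( \frac{\alpha ^{*}}{\alpha ^{*}+\alpha _{X}}\right) , \end{aligned}$$
so that, with \(g_{\alpha _{X}}\) bounded, the feasible \(g_{Y}\) shrinks towards zero as \(\alpha _{X}\) grows.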
Fig. 1 GDP and Biosphere paths of the embedded economy: The top graph plots GDP per capita for various levels of the 'safe-space' or planetary boundary L. A value less than 1 means that the current stock S is below the planetary boundary. In all cases, demands on NPP eventually outstrip supply: the 'impact inequality'. The bottom graph plots the associated mining of the biosphere, S, with different growth paths according to the level of the planetary boundary, L
Fig. 2 Growth paths of the embedded economy: Optimal GDP paths with different levels of population growth and technological change. In the lower graph, \(\alpha _{Z}\) grows logistically to the limit \(\alpha ^{*}\), with the starting value multiplied by the numbers 0.1 to 10
Optimal Growth Paths
The Review ultimately has a limits to growth message, with the best long-term outcome being a steady state economy. The constraints on decoupling define the point at which optimal growth returns to zero, and the level of GDP at which this occurs. Sustainable growth is only possible over the longer term within this framework if saving remains very low and increases slowly. Indeed, high levels of saving increase capital and output, and hence the demands on the biosphere. Figure 3 shows the optimal paths of growth for different savings rates and time horizons, showing that embeddedness brings a very different logic to growth. For instance, higher savings rates, the driver of growth in the neoclassical frame, bring a collapse sooner due to their effect on Y and then S (see the lower panel of Fig. 3).
Of course, it is well understood since Dasgupta and Heal (1980) that the optimal path is problematic when it comes to environmental resources and sustainability, leading to similar collapses in consumption if we discount future utilities. Yet, in the Dasgupta-Heal-Solow-Stiglitz (DHSS) model, sustainable, Rawlsian constant consumption paths are possible with sufficient substitutability of the non-renewable resource flow for physical capital. Even rising consumption paths are possible in the simpler non-renewables framework, provided genuine savings are positive and not growing too fast (Hamilton and Clemens 1999; Arrow et al. 2012; Hamilton and Hartwick 2014). Yet in the embedded economy, where S and R are essential, such rising paths are not possible because rising incomes eventually imply a path of biosphere exhaustion that cannot be compensated for by increases in human or physical capital.
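To make the contrast concrete, recall the textbook DHSS benchmark, a sketch of the standard results from the growth-with-exhaustible-resources literature (Hartwick 1977) rather than part of the Review's own model, written here with capital share \(a\) and resource share \(b\) to avoid clashing with the \(\alpha\)'s above: with Cobb-Douglas output over produced capital and a non-renewable resource flow, a constant positive consumption path is feasible when the capital share exceeds the resource share and the Hotelling rents from the resource are reinvested in produced capital (the Hartwick rule),
$$\begin{aligned} Y=K^{a}R^{b},\quad a>b,\qquad {\dot{K}}=F_{R}R\;\;\Rightarrow \;\;C=Y-{\dot{K}}\ \text{ constant over time.} \end{aligned}$$
It is precisely this kind of substitution of produced capital for the resource that the essentialness of S and the bound on \(\alpha _{Z}\) rule out in the embedded economy.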
While new to mainstream economists, this message will not be new to ecological economists, and certainly has echoes of past works on steady state growth. As Boulding (1968) writes:
When we have developed the economy of spaceship earth, in which man will persist in equilibrium with his environment, the notion of GDP will disintegrate. We will be less concerned with income-flow concepts and more with capital-stock concepts. Then technological changes that result in the maintenance of the total stock with less throughput (less consumption and production) will be a clear gain.
This is not far from the Review's position, and similarly leaves open the role for technological change to ease the path to a steady state. The limits to growth idea will certainly be debated. Some who focus on planetary boundaries and ecological footprints could well argue that provided those physical boundaries are adhered to, the economy can get on with the business of decoupled growth. Others will seize upon embeddedness as an affirmation that degrowth is necessary. Others will remain agnostic about GDP growth altogether (van den Bergh 2011).
Fig. 3 Growth paths of the embedded economy: Top: Optimal GDP paths with typical savings rates show an earlier collapse of GDPpc with high savings rates. Bottom: long-term growth sustained at a low level with low savings rates
On the question of whether we have yet exceeded the planetary boundaries, and whether there is an impact inequality, the Review ultimately places a great deal of stock in the global aggregated perspectives provided by the Ecological Footprint approach, from which the stark message that the world has already overshot derives (Wackernagel et al. 2019a, b). The trouble is that these measures are subject to measurement problems, practical rules of thumb and static assumptions, and remain open to question. In short, they may not be sufficient evidence to convince the target audience. For instance, one of the reasons why the ecological footprint suggests that the world has overshot (there is an impact inequality) arises from excessive carbon emissions when gauged against the 1.5C target of the Paris Agreement. When converted into the required hectares of reforestation for this target, our demands on the planet are pushed beyond global capacity (area). Taken literally, the implication seems to be that decarbonisation can only happen via land-based solutions. Technological change is also not always well captured by the measures. Furthermore, certain paradoxes raised about the lack of relationship between 'biocapacity' and GDP (see Wackernagel et al. (2019) for instance) are not convincing and neglect empirical evidence to the contrary (Brunnschweiler and Bulte 2008; van der Ploeg and Poelhekke 2010). However, on the other side, it can be argued that the ecological footprint underestimates the impact inequality, particularly in agriculture, where the global hectares required are probably not a good measure of its overall impact on the biosphere.
Many of these issues arise because of the scarcity of reliable data and the need for rough approximations to answer globally important questions. It is also an example of two literatures failing to talk to one another, ignoring the mutual gains of doing so to tighten up the arguments. Nevertheless, the fact that the Review has organised around the idea of embeddedness, and centred arguments for the impact inequality on contributions from the natural sciences will be influential in fixing ideas, guiding future research questions, and building the evidence base that can better illustrate the extent of the constraints. This will be the central point of debate with mainstream growth economists. So far, there are a few signs of engagement with these perspectives.Footnote 8
Addressing the Impact Inequality: Growth and Biodiversity Policy
The remaining question is, given that the embedded economy is beyond the planetary boundary, what policies can be implemented to address the impact inequality? There are clearly demand side and supply side issues that can be addressed, and a range of policies is required to address the absence of biodiversity values from everyday decisions.
The Review offers many of the standard policies available in the environmental and resource economist's toolkit, and some which are generally not. Microeconomic policy interventions include the use of corrective Pigouvian taxes or permit trading equivalents to reflect the shadow price of externalities, and the reduction of subsidies to harmful activities, such as agriculture and fossil fuels. Subsidies to biodiversity-harmful sectors such as agriculture, fossil fuels and fisheries are in the order of US$4-6 trillion annually, dwarfing the finance available for conservation. On macroeconomic policy the chief recommendation is to change the way that we measure economic performance, to focus less on GDP per capita and more on measures that reflect sustainability. This could include ensuring non-negative genuine investment/saving measures or non-declining measures of comprehensive wealth in national accounts. Since their potential was first realised, these policies have not really broken through into day-to-day macro policy despite the clear connection to sustainable development and numerous examples of how this could be done (e.g. Arrow et al. 2012, in India, and various chapters in Hamilton and Hepburn 2017).Footnote 9 Hopefully the recommendations of the Review will be the push that is needed for policy makers to change tack. There are data issues, and real difficulties in estimating shadow/accounting prices as the Review recommends, but a number of indicators have been proposed that try to reflect sustainability in current measures of performance. The Review points to Gross Ecosystem Product (China), the UN System of Environmental and Economic Accounts, and comprehensive wealth measures, even quantitative accounts, as fruitful ways forward. These approaches fit with the overall natural capital approach taken by the Review. Indeed, March 2021 saw the adoption by the UN of the System of Environmental-Economic Accounting Ecosystem Accounting (SEEA-EA), and of the Gross Ecosystem Product (GEP) measure, into the UN system of accounts (Ouyang et al. 2020). The former will complement GDP flow measures with measures of natural capital stocks, as recommended by the Review (and environmental economists since the 70s).Footnote 10
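To make the 'non-negative genuine saving' recommendation concrete, the sketch below computes a stylised adjusted net saving figure from hypothetical national-accounts entries. The line items follow the broad logic of the World Bank-style adjusted net savings indicator referred to in this literature, but the exact list of adjustments and all of the numbers are assumptions for exposition only, not figures from the Review.

```python
def genuine_saving(gross_saving, fixed_capital_consumption, education_spending,
                   natural_resource_depletion, pollution_damage):
    """Stylised 'genuine' (adjusted net) saving, all terms in the same units
    (e.g. % of GNI): depreciation of produced capital and depletion of natural
    capital are netted off gross saving; investment in human capital is added back."""
    return (gross_saving
            - fixed_capital_consumption
            + education_spending
            - natural_resource_depletion
            - pollution_damage)

# Hypothetical numbers (% of GNI), purely illustrative
gs = genuine_saving(gross_saving=24.0, fixed_capital_consumption=13.0,
                    education_spending=4.5, natural_resource_depletion=9.0,
                    pollution_damage=3.5)
print(f"Adjusted net saving: {gs:.1f}% of GNI")
# A persistently negative value signals that comprehensive wealth, and hence
# long-run well-being, is being run down even while GDP is growing.
```

The Review's preferred framing is the stock counterpart of this flow test: non-declining comprehensive wealth valued at accounting prices, which is where the difficult empirical work of estimating shadow prices for natural capital comes in.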
Yet in light of the previous section, these measures are necessary, but not sufficient. If we take the model and the evidence on NPP demands seriously, it is clear that a period of passive investment in the biosphere is required to restore natural capital and to reduce the impact inequality. Indeed, active investment in nature restoration is also required. Passive investment requires reducing Y or increasing \(\alpha _{Z}\) to obtain the materials balance. Passive investment (natural regrowth and restoration) will only happen when \(\frac{dS}{dt}>0\). In the absence of quick technological fixes to limit the impact on the biosphere, this will require a drop in income: degrowth, followed by a move to a long-run steady state. Is degrowth politically feasible, particularly for low income countries? Probably not. Growth in India and China is in the order of 7% per annum at present, and there is no sign of this reversing any time soon, particularly where poverty reduction remains a priority. Technological change will clearly be important, and whereas for climate change renewable energy is the key innovation, with regard to the biosphere it is food systems and agriculture that are likely to be the most important areas for relaxing the constraints of the biosphere and local ecosystems and reducing the pressure on biodiversity. Restoration and quantitative restrictions (protected areas, rules on trade) will also be required to prevent further biodiversity losses. The Review recognises this (See Chapter 16 on Trade and the Biosphere).
Finally, with regard to closing the impact inequality, the stylised macro-policy decision is: which steady state should be targeted? Chapters 11-13 of the Review speak to the relationship between wealth and natural capital accounting and the attainment of the optimal steady state. The guiding principle is that changes in wealth translate into changes in long-run well-being. Navigating the movement from an unsustainable impact inequality to a sustainable embedded economy can best be achieved (with minimum cost to society) by following these valuation principles, using appropriate accounting prices. Annex 13.1 shows some historical applications (Arrow et al. 2012), prefaced by the statement:
The publications should be viewed as reconnaissance exercises. You know they got it wrong, but you also know they are in the right territory. (p. 350)
There is much more work to be done in this area. The recent adoption of natural capital accounting in the UN-SEEA, and of the GEP measure, is a useful and pragmatic step in the right direction (see Ouyang et al. (2020) for an example of its application in China).
Consumption, Fertility and Socially Embedded Preferences
As Fig. 2 shows, population growth is also important. These two key issues, consumption levels in rich countries and population growth, deserve some attention, and the Review does not shy away from them. A key idea in the Review that is central to the proposed solutions in each case is that of socially embedded preferences. That is, what we do, and how we act, is driven in part by social norms, habituation, traditions and reference dependencies that are difficult to deviate from.
The Review argues that preferences over consumption and fertility are socially embedded, meaning that much of our behaviour is governed by what our peers in society do. What we end up doing collectively derives from social norms, habits and other references that arise for reasons that are sometimes long since forgotten, or coordinate people in arbitrary ways. Dasgupta provides evidence to show that our consumption patterns and, indeed, fertility decisions (how many children women have) are to a large extent determined in this way. Bluntly, if people around you buy a new car every year, then it is highly likely that you will. If people in society have large families, it is more likely that you will too. It is difficult not to conform. We see evidence from around the world on how changing social norms may change consumption and fertility patterns. In a remarkable study, for instance, Jensen and Oster (2009) provide evidence on the role of cable TV in changing attitudes to fertility in India, while La Ferrara (2012) shows similar effects on fertility in Brazil as a result of popular telenovelas. Drawing upon a wealth of his own research, Dasgupta takes on the difficult topic of population growth and the environment through the lens of socially embedded preferences, and argues for and evidences the case that family planning has an important part to play in reducing the impact inequality, more so because of the externalities that embedded preferences give rise to.
Relating back to the question of income growth and its limitations, the issue of over-consumption is given a similar treatment. In each case the implication is that, provided social norms can be augmented, new, lower consumption and fertility equilibria can be arrived at without significant loss of welfare. Part of the idea is that if well-being is obtained via conformism, conforming to other norms would leave people no worse off and reduce demands on NPP. Looked at another way, both consumption decisions and fertility decisions within the household have significant external costs. This is a simplification of otherwise nuanced arguments presented in Chapter 13, which cover issues of human rights, gender, and how, in the presence of externalities, clashes between rights arise, e.g. between fertility rights and the rights of future generations. Can cable TV be relied upon to shift our conformist equilibria to one with lower resource intensity, be it on the consumption or, bluntly, the population side? Analysis of apparent public policy successes in this area suggests that public policy can invoke more than marginal changes. Theoretical results on multiple equilibria in social coordination problems are numerous, e.g. Nyborg (2020) in relation to smoking. While experimental evidence on multiple equilibria also exists in relation to issues of smoking and diet, it suggests that the mechanisms are complicated. Was the change in smoking behaviour in many European countries a shift from one conformist equilibrium to another, or just a standard response to higher costs due to bans in public places? Where nudges appear to work, there remains a question as to how permanent they are and also whether they are the preferred intervention. For instance, Vringer et al. (2017) show that in deciding whether household budgets should be allocated to more sustainable consumption bundles, people are reluctant to impose restrictions on others, preferring regulatory interventions. On consumption a key area is diet, and more work is needed to see how diets can be shifted (Willett et al. 2019), and how conformism can inhibit simple nudges and cause inertia. Indeed, this is an area in which changes in consumption could also solve a 'nutrition crisis' of excess, the economic costs of which run to almost US$2Tr per year for obesity-related health care costs alone (Mande 2019, p. 14).Footnote 11 There is certainly room for behavioural public policy here, but there remain questions as to scale and permanence, and precisely how individual motivations can be aligned with public policy goals (Banerjee and John, forthcoming; Garnett et al. 2019). Ultimately, not all consumption and fertility decisions are socially embedded.
The Supply Side
Rebalancing could be addressed on the supply side too, through passive investment (allowing ecosystems to regenerate naturally via, e.g., moratoriums on use) or by directly addressing the restoration of habitats and ecosystems. This fits with the thrust of the UN Decade on Ecosystem Restoration (2021-2030) and is discussed at length in Section 4.7 and Ch 19 of the Review, which describe successes in peatlands, wetlands and coral reefs. Ch 18 speaks to the conservation of nature and the role of protected areas and, more broadly, quantitative non-price measures. Indeed, a number of recent studies have attempted to show not just the economic gains of expanding protected areas, but also the financial gains (Waldron et al. 2020).Footnote 12 The reason why such apparently low-hanging fruit has not so far been picked is an interesting question, the answer to which lies in the property rights structures of natural resources and issues of local political economy. The potential costs of protected areas to local communities were one aspect of Waldron et al. (2020) that came in for criticism.Footnote 13 The 30% target for protected areas is certainly a target that could coordinate activities at the CBD COP15, but the localised and distributional impacts should not be ignored given historical experiences.
While restoration and conservation activities serve an important supply side role, in aggregate and in the medium term it is technological change, and changes in consumption patterns, particularly diets, that will ease passive investment. The Review also explores the distributional implications of changes in consumption, and remarks that the burden of such changes ought not to fall on poor countries, for whom rising incomes address poverty alleviation (See Ch 14 of the Review).
Thinking about distributional issues requires unpacking the stylized representative agent type model discussed above. Another issue in this regard is the aggregated representation of the stock of primary producers and the global flow of NPP. Of course, this stylization is purely illustrative, but the introduction of natural capital into the model means that Total Factor Productivity (TFP) as a multiplier of productive inputs could well be insufficient.Footnote 14 With tightening natural capital options, we might be better served by separating out the efficiency with which elements of natural capital are used, asking whether the same output can be obtained with fewer natural inputs. To some extent this is recognized by the discussion of the dependence of \(\alpha _{Z}\) on A (TFP), but perhaps efficiency factors should enter the model as parameters directly on S, on R, or on both, depending on how we envision improvements in the efficient use of the environment in production being made, and on how restoration and passive investment are achieved. There are X-inefficiency type issues, and perhaps issues concerning the use of natural inputs that might be labelled Total Factor Efficiency (TFE). Both highlight the role that innovation and investment in efficiency and clean production can play in stabilizing, or even decreasing, our demands on the natural environment wholly independently of other production processes. The details will be context specific, but improvements could be sought via, for instance, the re-siting of productive processes: restricting production to fewer, more tightly regulated or less harmful sites.
Nevertheless, while such demand and supply side interventions can change the flows of demand and the impacts on the biosphere, and are important parts of the response to the impact inequality, they do not change the fundamentals of physical planetary boundaries and the associated materials balance. The fundamental truth, the Review argues, is that ultimately:
The efficiency with which its goods and services can be converted into produced goods and services is bounded. (p. 126)
Biodiversity as Reflected in the Financial Sector
Market failures with respect to nature and biodiversity are channelled through the financial sector, leading to the misallocation of capital and the facilitation of harmful activities. In Chapter 17 the Review spends some time working through the failings of the sector to account for nature when allocating capital. The sector is also one of the target audiences for the Review. Throughout, the Review uses the language of assets and returns, drawing the analogy of natural capital as an asset and ecosystem services as the return to nature. The idea of free passive investment (just leaving nature to regenerate and restore, as would happen in a moratorium on fisheries or forests, for example) is used to make the issue at stake intuitive to a finance and macro/growth audience. Evidence is then provided to suggest that the returns to nature and biodiversity are an order of magnitude higher than the returns to physical capital and typical financial assets, hence there is under-investment in biodiversity and nature compared to other more traditional assets and capitals. This is not to say that demonstrating these returns will lead the financial sector to respond, more that there is work for policy and regulation to do to address the imbalance.
The special features of biodiversity that are explained in detail in the Review, namely (i) that it is prone to tipping points and irreversible changes, and (ii) that higher levels of biodiversity tend to reduce risks to many sectors, including agricultural commodities, make biodiversity loss and the degradation of nature a concern for the financial sector. The physical risks associated with biodiversity loss have the potential to be material to many companies and investors. Box 17.1 in Ch 17 has some examples of the savings to insurance companies, for instance, provided by natural flood defences (wetlands including mangroves). Barbier et al. (2018) estimate savings of up to $52bn from global wetland conservation alone. Furthermore, with suitable policies, rules on trade (see Chapter 18) and laws in place, there are also potentially transition risks and litigation risks. Stranded assets may also arise in nature just as for fossil fuels in relation to climate change; agriculture and forestry may well be the industries most vulnerable to being stranded. All of this may have implications for the value of sovereign debt for countries dependent on nature-related industries and hence facing nature-related risks. The downturns shown in Fig. 1, arising from the impact inequality and the loss of biosphere natural capital, reflect in aggregate terms the kind of risks faced by an embedded economy. These risks affect economic activity, but the extent to which they are reflected in the financial sector at present is limited for biodiversity.
The potential connectedness of financial values to nature is well argued in Chapter 17. A great deal of work is required now to understand and price these risks in the financial sector as a whole. At present such values and risks are mispriced, capital is therefore misallocated, and damaging activities persist (Gostlow 2019). Indeed, a raft of other measures is required, according to the Review, to encourage companies and funds to understand the risks they face, measure the impacts their activities cause on biodiversity, and signal this information to investors and consumers. These include disclosure mechanisms like the Taskforce on Nature-related Financial Disclosures (TNFD); categorisations like the EU Taxonomy approach (which categorises financial funds according to their impact on nature and sustainability); liability rules to call financial intermediaries to account for lending to damaging activities; and better information and pricing of nature-related risks and nature-linked bonds.Footnote 15 These are some of the instruments available to embed biodiversity into investment decisions.Footnote 16
The Review is descriptive and supportive of some of the measures and approaches being taken in the financial sector, in relation to green bonds and other macroeconomic initiatives related to nature and biodiversity. The Review is not particularly prescriptive however. Responses to the Dasgupta Review have laid out a clear agenda for the financial sector in relation to biodiversity. In an excellent summary of the entry points and remedies for nature in the finance sector, the response from Vivid Economics tells of the difficult task ahead.Footnote 17 Here, it is argued inter alia that citizens should be empowered to make decisions that reflect their preferences over biodiversity and nature. This can be done in part by disclosure mechanisms that connect businesses and financial institutions to the impacts they have on biodiversity. Obviously rules are required to regulate particular behaviours. Liability mechanisms could be important here. Finally, biodiversity should be mainstreamed into the financial governance structure of financial institutions to ensure accountability for actions that cause damage, and promote stewardship. Regulations and mandates exist for disclosing other material risks, and these should be extended towards biodiversity related material risks and impacts. Perhaps current sources of information for material risks are insufficient.Footnote 18
There is much work to be done to establish the nature-related risks that companies and funds face, as well as to connect biodiversity-related impacts to companies and funds. Understanding which financial mechanisms and instruments will be successful in this area is something for which economists should now start establishing causal evidence. It is interesting to note the recent surge in companies and assets that fall under the Environmental, Social and Governance (ESG) disclosure/ratings umbrella (an increase of 34% since 2016), and the finding that such rated assets have weathered the COVID storm better (Albuquerque et al. 2020). Yet these findings, while promising, ought to be interpreted cautiously in terms of what is actually being achieved in each of the E, the S and the G.Footnote 19 For instance, Berg et al. (2020) show the inconsistencies that exist between different ESG ratings agencies, which sometimes come to different conclusions about the same companies, and which are more generally uncorrelated with one another across attributes.Footnote 20 Compare this to risk ratings, which are highly correlated across raters, and contain much of the same information.
There is also concern about ESG being an exercise in greenwash, given the difficulty in separating out the different components, and tying ratings to specific actions on the ground. One also may have to take the rough with the smooth, as it were. British American Tobacco is in the top 3 ESG-rated firms of the FTSE100 due to, it is reported, its net-zero pledges, commitment to reduced water use and more sustainable agriculture (E), and treatment of workers (G). It obviously performs less well in other dimensions (S).Footnote 21 Also in the top 3, AstraZeneca scores highly due to a commitment to sell vaccines at cost, while GlaxoSmithKline has various net-zero carbon and nature-neutral commitments for 2030. Such companies may not suit all investors that are interested in sustainability, climate change and nature.
Such is the concern about greenwash that the European Union has recently imposed higher levels of regulation on ESG ratings, via the Sustainable Finance Disclosure Regulation (SFDR), to ensure that ESG-rated funds publish their sustainability processes and have generally more rigorous disclosures that assist investors in making decisions. Yet, remarkably, lobbying removed deforestation from the list of issues that should be reported on.Footnote 22 The Review also lauds other disclosure-type mechanisms such as the TNFD. The TNFD is being constructed in the mould of a similar mechanism for climate disclosures. The success of such disclosure mechanisms, think also of the Transition Pathway Initiative (TPI), is often measured in terms of the asset values of the firms that sign up to disclose, rather than what is disclosed per se. The TPI reports sector by sector on carbon emissions against sector-level benchmarks for performance, and how these relate to the target of the Paris Agreement to limit temperatures to below 2C.Footnote 23 So the TPI is somewhat clear about its benchmarks. Yet among the US$23Tr of assets that have signed up to the TPI, only 14% are aligned to the 2C target of the Paris Agreement. While more are aligned with country-level Nationally Determined Contributions (NDCs), NDCs are typically insufficient to meet the target.Footnote 24 On the other hand, around 60% of firms have processes in place to manage their own material climate risks. So there is a long way to go to connect ESG ratings, which affect the demand side, with action on the ground on the supply side of biodiversity benefits. The Review provides a good overview of the work that is being undertaken in the sector. Overall though, some care is needed to ensure that conformity to one set of standards, e.g. ESG or TNFD, does not perversely provide even more space for those companies and investors who do not conform.
The Review is more than its focus on limits to growth and how to reverse the impact inequality. It rehearses all the economic aspects of biodiversity loss, recognises the role of institutions and property rights, the failure of markets, and the difficulties of bringing future generations, an important constituency in relation to biodiversity, to the table when making social choices. These are well known arguments in economics, and they are well articulated here in the context of nature and biodiversity.
The idea of limits to growth is controversial, and it will be debated further. It is difficult to envisage the future crashes predicted even by optimal growth paths in an embedded economy (See Figs. 1, 2 and 3). The prescriptions of the Review, which revolve around pricing biodiversity properly so that it affects our day to day decisions, the decisions of companies and investors and the programmes and policies that governments implement, are less controversial to economists. Yet, the fact that these prescriptions rely in great measure on valuation of biodiversity may sit uneasily with those who see biodiversity as more than the instrumental values that such valuations might imply. Some may find the very language of assets, prices and natural capital irksome for this reason. The use of the term capital conjures up an association with capitalism, and the sense that economists are the problem, rather than the solution here. The hackneyed phrase that an economist knows the price of everything and the value of nothing, may well make a comeback in the aftermath of the Review.Footnote 25
The opposite is closer to the objective for environmental and ecological economists. A major part of the process of valuation is to point out the low value that decision makers (consumers, financiers, governments, international agencies) place on biodiversity and the environment in their day to day decisions. Nor is the Review so narrow in its discussion of values: its opening chapters explain a plurality of views. Of course there are problems that economists need to recognise with commodification and invaluable goods in general, and with how valuation of such goods (e.g. rights, principles, perhaps biodiversity) may change us or our individual relationship with them (Pascual et al. 2021). Partly in light of some of these issues, the Review recognises the need for both quantitative as well as pricing measures. There are places that humans simply ought not to go, for reasons of intrinsic value as much as for instrumental reasons of, for instance, reducing contact with zoonoses. Yet, at the same time, in a paper entitled 'Invaluable Goods', Ken Arrow makes the point that:
politicising activities is no greater guarantee of preserving individuation as commodifying them. (Arrow 1997)
In short, rather than one or the other, balance is required between valuation and participation. Associating natural capital, valuation and shadow prices with capitalism potentially misses the point.Footnote 26 Firstly, the fact that markets, including financial markets, are failing biodiversity is central to the Review. Shadow or accounting prices are required to correct the misallocations that this leads to, both spatially and intertemporally, in their neglect of future generations. Of course, the theory of change which works through 'correcting externalities' or 'correcting markets' may not appeal to everyone, but shadow pricing and valuation in general is not best characterised as a capitalist endeavour. Leonid Kantorovich and Tjalling Koopmans won the Nobel Prize in 1975 for their work on allocation and planning in the macroeconomy. The former worked on planning within the Soviet Union, the latter in the US, back when the question of whether market or planned economies were better at solving allocation problems was at its height. Both found the use of shadow pricing central to solving allocation problems and making tradeoffs between different aspects of the plan. So the concept of a shadow price is not necessarily associated with market, capitalist economies. Shadow prices reflect the societal objectives, be they utilitarian, or focused on sustainability or fairness more generally (Turk et al. 2020). Both approaches fail if the impact of the economy on the environment is omitted from the plan, even more so if the economy is, as Dasgupta sees it, embedded in the environment. Understanding the constraints, defining the societal objectives, measuring performance properly, reflecting these in day to day decisions: this is the change in the economic grammar that the Review calls for.
Finally, the comparison of the Dasgupta Review with the Stern Review is irresistible. The UK now has the Climate Change Act and a net zero target for 2050, with interim targets to stop slippage. The Nordhaus-Stern-Weitzman debate in the aftermath of the Stern Review raised its profile tremendously, and raised the profile of climate economics in the profession. In the Paris Agreement we have some political consensus. The hope for the Dasgupta Review on the economics of biodiversity must be the same: that it will become the go-to source of information for policy makers, speak to the institutions that can effect aggregate change (finance ministries, central banks, large corporations and international organisations) in the run up to the COP15 of the Convention on Biological Diversity, and serve as the impetus for action. Rather than distracting discussions about the discount rate, however, the core controversy among economists will probably surround the new grammar proposed for the economics of growth, encapsulated by embeddedness and limits to growth. The evidence here looks beyond doubt when one focuses on headline figures for biodiversity loss, such as Bradshaw et al. (2021), but many will not put their faith in ecological footprints as the key measure upon which catastrophic predictions should turn. Furthermore, the arrow of causality between economists and policy change is a difficult one to evaluate (e.g. Groom and Hepburn 2017), and the theory of change in the Review weights heavily the correction of markets, when clearly other structural, institutional and political changes will be required. Not least among these will be a binding agreement at the COP15 which, unlike the Paris Agreement, has real teeth. Yet, while we wait for Dasgupta's colleagues in the economics profession to be persuaded, biodiversity should not be assigned a value of zero at the point of decision. Measuring natural capital, reorientating the financial sector, placing limits where necessary, rethinking our social norms, and yes, getting shadow prices right, will all help to safeguard biodiversity and future generations, as well as current generations. In the past year we have already witnessed the sharp changes in policy that are possible when faced with catastrophic events. Similar step changes are likely to be required in relation to biodiversity. Overall, the Review makes this case.
It is actually difficult to define 'mainstream' cleanly these days, but we have in mind the traditional training in the economics of growth which in our experience, and often still, barely touched on the environment but was heavily focused on neoclassical and endogenous growth.
In climate change economics, Integrated Assessment Models do not typically constrain growth (Nordhaus 2017).
The first four paragraphs of Ch 2 elegantly summarise these relationships.
E.g. a eucalypt plantation in a wetland increases NPP while reducing most ecosystem services, other than perhaps carbon sequestration.
Such measures can capture values of biodiversity in managing risk and well-being that biomass may not (Brock and Xepapadeas 2003), and Box 2.1 of the Review acknowledges the role of genetic diversity.
The evidence provided makes for quite stark reading. E.g. with current technologies if everyone on the planet had the same diets as those in high income countries, the amount of land required would exceed the surface area of the planet, sea included.
Solow has also criticised new growth theories of the 'AK' type (Solow 1994).
e.g. See Martin Wolf's "Humanity is the cuckoo in the nest" in the Financial Times: https://www.ft.com/content/a3285adf-6c5f-4ce4-b055-e85f39ff2988?.
The World Bank's WAVES program is another prominent example.
https://www.un.org/en/desa/un-adopts-landmark-framework-integrate-natural-capital-economic-reporting.
Mande (2019) estimate that the global externalities of the food system run to $12Tr annually.
The report is not peer reviewed at the time of writing: https://www.conservation.cam.ac.uk/files/waldron_report_30_by_30_publish.pdf.
See e.g. https://www.resilience.org/stories/2021-01-12/an-open-letter-to-the-lead-authors-of-protecting-30-of-the-planet-for-nature-costs-benefits-and-implications/
TFP is also a problematic concept in its own right (Fine 2016, ch. 5).
https://www.bloomberg.com/news/articles/2021-02-25/first-sovereign-nature-bonds-get-lift-from-world-bank-backed-hub?sref=HkIFZ0t4.
On April 1st 2021, 9 members of the EU Taxonomy advisory group resigned due to the apparent watering down of the EU taxonomy definitions of which industries can be classified as sustainable. https://www.reuters.com/article/europe-regulations-finance-idUSL4N2LT4LJ.
see https://www.f4b-initiative.net/publications-1/the-dasgupta-review%3A-what-it-means-for-the-global-financial-system.
Gostlow (2020) points out that Form 8-K may provide more pertinent sources of information about susceptibility to climate risk than the usual Form 10-K.
Establishing causality is also a major empirical issue.
The Aggregate Confusion Project: https://mitsloan.mit.edu/sustainability-initiative/aggregate-confusion-project.
https://www.hl.co.uk/news/articles/ftse-100-the-5-highest-esg-rated-companies.
https://www.ft.com/content/74888921-368d-42e1-91cd-c3c8ce64a05e.
see https://www.transitionpathwayinitiative.org/publications/77.pdf?type=Publication and the TPI tool.
https://www.transitionpathwayinitiative.org/publications/74.pdf?type=Publication.
The original quote refers to a cynic. Interestingly, the less heard part of the dialogue continues: "And a sentimentalist, my dear Darlington, is a man who sees an absurd value in everything and doesn't know the market price of any single thing."
Statements like this can be found in the IPBES Draft Values Assessment for instance.
Acemoglu D, Philippe A, Leonardo B, Hemous D (2012) The environment and directed technical change. Am Econ Rev 102(1):131–166
Acemoglu D, Aghion P, Bursztyn L, Hemous D, Johnson S, Robinson JA (2001) The colonial origins of comparative development: an empirical investigation. Am Econ Rev 91(5):1369–1401
Aghion P, Alesina A, Trebbi F (2004) Endogenous political institutions. Q J Econ 119(2):565–611
Albuquerque R, Koskinen Y, Yang S, Zhang C (2020) Resiliency of environmental and social stocks: an analysis of the exogenous COVID-19 market crash. Rev Corp Finance Stud 9(3):593–621
Arrow KJ (1997) Invaluable goods. J Econ Lit 35(2):757–765
Arrow KJ, Dasgupta P, Goulder LH, Mumford KJ, Oleson K (2012) Sustainability and the measurement of wealth. Environ Dev Econ 17(3):317–353
Arrow K, Dasgupta P, Goulder L, Daily G, Ehrlich P, Heal G, Levin S, Maler K-G, Schneider S, Starrett D, Walker B (2004) Are we consuming too much? J Econ Perspect 18(3):147–172
Banerjee S, John P (forthcoming) Nudge plus: incorporating reflection into behavioral public policy. Behav Public Policy 1–16
Bar-On YM, Phillips R, Milo R (2018) The biomass distribution on Earth. Proc Natl Acad Sci 115(25):6506–6511
Barbier EB, Burgess JC, Dean TJ (2018) How to pay for saving biodiversity. Science 360(6388):486–488
Berg F, Koelbel J, Rigobon R (2020) Aggregate confusion: the divergence of ESG ratings. SSRN
Boulding KE (1968) Beyond economics: essays on society, religion, and ethics
Bradshaw CJA, Ehrlich PR, Beattie A, Ceballos G, Crist E, Diamond J, Dirzo R, Ehrlich AH, Harte J, Harte ME, Pyke G, Raven PH, Ripple WJ, Saltre F, Turnbull C, Wackernagel M, Blumstein DT (2021) Underestimating the challenges of avoiding a ghastly future. Front Conserv Sci 1:9
Brander JA, Taylor MS (1998) The simple economics of Easter Island: a Ricardo-Malthus model of renewable resource use. Am Econ Rev 88(1):119–138
Brock WA, Xepapadeas A (2003) Valuing biodiversity from an economic perspective: a unified economic, ecological, and genetic approach. Am Econ Rev 93(5):1597–1614
Brunnschweiler CN, Bulte EH (2008) The resource curse revisited and revised: a tale of paradoxes and red herrings. J Environ Econ Manag 55(3):248–264
Ceballos G, Ehrlich PR, Barnosky AD, García A, Pringle RM, Palmer TM (2015) Accelerated modern human-induced species losses: Entering the sixth mass extinction. Sci Adv 1(5):e1400253
Copeland BR, Taylor MS (2004) Trade, growth, and the environment. J Econ Lit 42(1):7–71
Courtois P, Figuieres C, Mulier C (2014) Conservation priorities when species interact: the Noah's Ark metaphor revisited. PLoS One 9(9):e106073–e106073
Daily GC, Ehrlich PR (1996) Global change and human susceptibility to disease. Annu Rev Energy Environ 21(1):125–144
Dasgupta PS, Heal GM (1980) Economic theory and exhaustible resources. Cambridge University Press, Cambridge
Dasgupta P (2010) Nature's role in sustaining economic development. Philos Trans Biol Sci 365(1537):5–11
Dasgupta P, Heal G (1974) The optimal depletion of exhaustible resources. Rev Econ Stud 41:3–28
La Ferrara E, Chong A, Duryea S (2012) Soap operas and fertility: evidence from Brazil. Am Econ J Appl Econ 4(4):1–31
Fine B (2016) Microeconomics: a critical companion. Pluto Press, Febrero
Fleurbaey M, Blanchet D (2013) Beyond GDP: measuring welfare and assessing sustainability. Oxford University Press, Oxford
Garnett EE, Balmford A, Sandbrook C, Pilling MA, Marteau TM (2019) Impact of increasing vegetarian availability on meal selection and sales in cafeterias. Proc Natl Acad Sci 116(42):20923–20929
Georgescu-Roegen N (1971) The entropy law and the economic process. Harvard University Press, Cambridge
Gostlow G (2019) Pricing climate risk
Gostlow G (2020) The materiality and measurement of physical climate risk: evidence from form 8-K
Groom B, Hepburn C (2017) Reflections—looking back at social discounting policy: the influence of papers, presentations, political preconditions, and personalities. Rev Environ Econ Policy 11(2):336–356
Hamilton K, Hepburn C (2017) National wealth: what is missing, why it matters. Oxford University Press, Oxford
Hamilton K, Hartwick J (2014) Wealth and sustainability. Oxford Rev Econ Policy 30(1):170–187
Hamilton K, Clemens M (1999) Genuine savings rates in developing countries. World Bank Econ Rev 13(2):333–356
Hartwick JM (1977) Intergenerational equity and the investing of rents from exhaustible resources. Am Econ Rev 67(5):972–974
Jean-Louis F, Caroline A, Louis O (1998) An overview of the Weitzman approach to diversity. Genet Sel Evol 30(2):149–161
Jensen R, Oster E (2009) The power of TV: cable television and women's status in India. Q J Econ 124(3):1057–1094
Jones CI, Klenow PJ (2016) Beyond GDP? Welfare across countries and time. Am Econ Rev 106(9):2426–2457
Mace GM (2014) Biodiversity: its meanings, roles, and status. In: Nature in the balance. Oxford University Press, Oxford
Mande J et al (2019) Report of the 50th anniversary of the White House conference on food, nutrition, and health: honoring the past, taking actions for our future. Boston, MA
Neumayer E (2012) Human development and sustainability
Nordhaus WD (2017) Revisiting the social cost of carbon. Proc Natl Acad Sci 114(7):1518–1523
Nyborg K (2020) No man is an island: social coordination and the environment. Environ Resour Econ 76(1):177–193
Ouyang Z, Song C, Zheng H, Polasky S, Xiao Y, Bateman IJ, Liu J, Ruckelshaus M, Shi F, Xiao Y, Xu W, Zou Z, Daily GC (2020) Using gross ecosystem product (GEP) to value nature in decision making. Proc Natl Acad Sci 117(25):14593–14601
Pascual U, Adams WM, Diaz S et al. (2021) Biodiversity and the challenge of pluralism. Nature Sustain
Pepin J (2013) The origins of AIDS: from patient zero to ground zero. J Epidemiol Commun Health 67(6):473–475
Pezzey JCV (2004) One-sided sustainability tests with amenities, and changes in technology, trade and population. J Environ Econ Manag 48(1):613–631
Reist-Marti SB, Simianer H, Gibson J, Hanotte O, Rege JEO (2003) Weitzman's approach and conservation of breed diversity: an application to African cattle breeds. Conserv Biol 17(5):1299–1311
Rockstrom J, Steffen W, Noone K, Persson A, Stuart CF III, Lambin EF, Lenton TM, Scheffer M, Folke C, Schellnhuber HJ, Nykvist B, de Wit CA, Hughes T, van der Leeuw S, Rodhe H, Sorlin S, Snyder PK, Costanza R, Svedin U, Falkenmark M, Karlberg L, Corell RW, Fabry VJ, Hansen J, Walker B, Liverman D, Richardson K, Crutzen P, Foley JA (2009) A safe operating space for humanity. Nature 461(7263):472
Romer PM (1990) Endogenous technological change. J Political Econ 98(5):S71–S102
Samuel AF, Drucker AG, Andersen SB, Simianer H, van Zonneveld M (2013) Development of a cost-effective diversity-maximising decision-support tool for in situ crop genetic resources conservation: the case of cacao. Ecol Econ 96:155–164
Simianer H, Simianer H (2008) Accounting for non-independence of extinction probabilities in the derivation of conservation priorities based on Weitzman's diversity concept. Conserv Genetics 9(1):171–179
Solow RM (1994) Perspectives on growth theory. J Econ Perspect 8(1):45–54
Steffen W, Richardson K, Rockstrom J, Cornell SE, Fetzer I, Bennett E, Biggs R, de Vries W (2015) Planetary boundaries: guiding human development on a changing planet. Science 347(6223):1259855
Stiglitz J (1974) Growth with exhaustible natural resources: efficient and optimal growth paths. Rev Econ Stud 41:123–137
Turk Z, Groom B, Fenichel E (2020) Mean-spirited growth. Grantham Research Institute on Climate Change and the Environment working paper 351
van den Bergh JCJM (2011) Environment versus growth—a criticism of "degrowth" and a plea for "a-growth". Ecol Econ 70(5):881–890
van der Heide CM, van den Bergh JCJM, van Ierland EC (2005) Extending Weitzman's economic ranking of biodiversity protection: combining ecological and genetic considerations. Ecol Econ 55(2):218–223
van der Ploeg F, Poelhekke S (2010) The pungent smell of "red herrings": subsoil assets, rents, volatility and the resource curse. J Environ Econ Manag 60(1):44–55
Vringer K, Van Der Heijden E, Van Soest D, Vollebergh H, Dietz F (2017) Sustainable consumption dilemmas. Sustainability 9(6):942
Wackernagel M, Beyers B, Rout K (2019a) Ecological footprint: managing our biocapacity budget. New Society Publishers, Gabriola Island
Wackernagel M, Lin D, Evans M, Hanscom L, Raven P (2019b) Defying the footprint oracle: implications of country resource trends. Sustainability 11(7):2164
Waldron A et al (2020) Protecting 30% of the planet for nature: costs, benefits and economic implications. unpublished manuscript. https://www.conservation.cam.ac.uk/files/waldron_report_30_by_30_publish.pdf
Weitzman ML (2009) On modeling and interpreting the economics of catastrophic climate change. Rev Econ Stat 91(1):1–19
Willett W, Rockstrom J, Loken B, Springmann M, Lang T, Vermeulen S, Garnett T, Tilman D, DeClerck F, Wood A, Jonell M, Clark M, Gordon LJ, Fanzo J, Hawkes C, Zurayk R, Rivera JA, De Vries W, Sibanda LM, Afshin A, Chaudhary A, Herrero M, Agustina R, Branca F, Lartey A, Fan S, Crona B, Fox E, Bignet V, Troell M, Lindahl T, Singh T, Cornell SE, Reddy KS, Narain S, Nishtar S, Murray CJL (2019) Food in the Anthropocene: the EAT-Lancet Commission on healthy diets from sustainable food systems. Lancet 393(10170):447–492
Wu F, Zhao S, Bin Y, Chen Y-M, Wang W, Song Z-G, Yi H, Tao Z-W, Tian J-H, Pei Y-Y, Yuan M-L, Zhang Y-L, Dai F-H, Liu Y, Wang Q-M, Zheng J-J, Lin X, Holmes EC, Zhang Y-Z (2020) A new coronavirus associated with human respiratory disease in China. Nature 579(7798):265–269
Evolution and transition of expression trajectory during human brain development
Ming-Li Li 1,2,
Hui Tang 3,
Yong Shao 1,2,
Ming-Shan Wang 1,2,
Hai-Bo Xu 1,2,
Sheng Wang 1,
David M. Irwin 1,4,5,
Adeniyi C. Adeola 1,2,
Tao Zeng 3,
Luonan Chen (ORCID: 0000-0002-3960-0068) 3,6,7,
Yan Li 8 &
Dong-Dong Wu (ORCID: 0000-0001-7101-7297) 1,2,7
BMC Evolutionary Biology volume 20, Article number: 72 (2020)
The remarkable abilities of the human brain are distinctive features that set us apart from other animals. However, our understanding of how the brain has changed in the human lineage remains incomplete, but is essential for understanding cognition, behavior, and brain disorders in humans. Here, we compared the expression trajectory in brain development between humans and rhesus macaques (Macaca mulatta) to explore their divergent transcriptome profiles.
Results showed that brain development could be divided into two stages, with a demarcation date in a range between 25 and 26 postconception weeks (PCW) for humans and 17–23 PCW for rhesus macaques, rather than the time of birth, which has been widely used as a uniform demarcation point of neurodevelopment across species. Dynamic network biomarker (DNB) analysis revealed that the two demarcation dates were transition phases during brain development, after which the brain transcriptome profiles underwent critical transitions characterized by highly fluctuating DNB molecules. We also found that changes between early and later brain developmental stages (as defined by the demarcation points) were substantially greater in the human brain than in the macaque brain. To explore the molecular mechanism underlying prolonged timing during early human brain development, we carried out expression heterochrony tests. Results demonstrated that, compared to macaques, more heterochronic genes exhibited neoteny during early human brain development, consistent with the delayed demarcation time in the human lineage and showing that neoteny in human brain development can be traced to the prenatal period. We further constructed transcriptional networks to explore the profile of early human brain development and identified the hub gene RBFOX1 as playing an important role in regulating early brain development. We also found that RBFOX1 evolved rapidly in its non-coding regions, indicating that this gene played an important role in human brain evolution. Our findings provide evidence that RBFOX1 is a likely key hub gene in early human brain development and evolution.
By comparing gene expression profiles between humans and macaques, we found divergent expression trajectories between the two species, which deepens our understanding of the evolution of the human brain.
Our highly developed and distinctive brains, which set humans apart from other mammals, are the product of evolution [1, 2], the mechanism of which has fascinated people for centuries [3]. Based on compelling differences in cognitive and behavioral capacities, but relatively close phylogenetic relationship between humans and non-human primates (NHPs) [4,5,6], recent comparative analyses have provided a novel strategy to study human-specific neurodevelopment [7,8,9]. Increasingly persuasive evidence suggests that brain development is not static but is a continuous process of molecular changes throughout life, including changes in gene expression, glucose metabolism, and synaptic density [10,11,12,13,14]. Previous comparative analyses between humans and NHPs have only offered a snapshot in time [15, 16]. However, it is necessary to compare the whole process of brain development to provide a more objective and comprehensive understanding of human brain evolution.
Earlier research noted that neurodevelopmental timing is impacted by different developmental rates and life history strategies [2]. These differences in neurodevelopmental timing among species, also called heterochrony, have long been considered a crucial impetus for evolution [17,18,19]. Humans have an unusually extended childhood and slow rate of neurodevelopment (known as neoteny) relative to other animals, which is considered a possible mechanism for human brain evolution [18, 20]. While previous studies have primarily focused on heterochronic gene expression during postnatal brain development [20], the macroscopic layout of the brain is nearly complete at the time of birth [21]. Thus, extending comparative analysis to the prenatal stages is necessary for exploring the features of neurodevelopment.
Currently, it is widely accepted that changes in spatiotemporal gene expression play a critical role in the emergence of the sophisticated human brain, and several attempts have been made to estimate divergent gene expression patterns between humans and NHPs [15, 22, 23]. However, our understanding of how gene expression patterns have changed in the human lineage remains incomplete. With increasing high-quality brain transcriptome data [24,25,26], an unprecedented opportunity to investigate gene expression trajectory in multiple brain regions and different developmental stages among primates has become possible [22, 27]. In this study, we collected large-scale gene expression data from humans and macaques to systematically investigate and compare their divergent gene expression trajectory. We aimed to identify critical states during brain development as well as novel molecular mechanisms underlying human brain evolution.
Figure 1 highlights the strategy used to investigate evolution of gene expression trajectory in humans, including hierarchical clustering and dynamic network analyses to identify demarcation times of brain development in humans and macaques, expression heterochrony analysis to explore the mechanism of neurodevelopmental timing in humans, and differential expression and gene co-expression network analyses to identify key genes in human brain development and evolution.
Overview of study. Hierarchical clustering and dynamic network analyses were used to identify the demarcation time of brain development in humans and macaques. Expression heterochrony analysis was used to explore the mechanism of neurodevelopmental timing between humans and macaques. Differential expression and gene co-expression network analyses were used to identify key genes in human brain development and evolution. The clipart depicted in the figure is original
Human brain transcriptome data were used in this study, including RNA-sequencing (RNA-seq) and microarray data across multiple brain regions downloaded from the Allen Brain Atlas [24, 25] (Table 1; Additional file 1: Table S1 ~ Table S2). These data cover 14 brain regions spanning 27 different developmental ages from 8 postconception weeks (PCW) to 40 postnatal years (Table 2). In total, 30,881 and 17,280 genes had detectable expression signals in the RNA-seq and microarray data, respectively. We also used microarray data from macaques, which contained five brain regions (22 brain subregions) corresponding to brain regions in humans (Table 1; Additional file 1: Table S3 ~ Table S4) and spanning 8 PCW to 48 postnatal months (Table 2). In total, 15,381 genes exhibited detectable expression signals.
Table 1 Human and macaque tissues used in this study
Table 2 Age distribution of humans and macaques
Different developmental trajectories and demarcation times in humans and macaques
We performed hierarchical clustering analysis based on gene expression levels to determine whether transformation exists during brain development. In humans, the RNA-seq and microarray data supported the division of brain development into two stages, with a demarcation time of 25–26 PCW (Fig. 2a; Additional file 2: Figure S1). In macaques, gene expression levels from the microarray data in most brain regions were also clustered into two groups, with a demarcation time between 17 PCW and birth (17–23 PCW) (Fig. 2a; Additional file 2: Figure S2). These results suggest that brain development in both humans and macaques could be divided into two major stages, separated by species-specific demarcation points that occurred prior to birth (25–26 PCW in humans and 17–23 PCW in macaques), rather than at birth, which has been widely used as a uniform demarcation time of neurodevelopment across species [28, 29].
Different gene expression trajectories during brain development in humans and rhesus macaques. a Hierarchical clustering analysis revealed different expression demarcation time points in humans and macaques based on primary visual cortex transcriptome data. b Time course for neurogenesis in humans and macaques. Data are from a previous study [10]. c–d Detection of transition phases during brain development in humans (c) and macaques (d) using dynamic network biomarkers (DNBs). Plot represents composite index of DNB (see Materials and methods, CI in Eq. (1)), which indicates a transition phase at around 26 PCW in humans and around 17 PCW in macaques
Transitional state and critical transitions during brain development based on dynamic network biomarker (DNB) analysis
Having identified the demarcation points of brain development in humans and macaques, we further applied DNB analysis to verify whether the above demarcation points were also transitional states of brain development. Based on nonlinear dynamic theory, biological processes, such as brain development, are not always smooth, but can exhibit dramatic transitions from one state to another [30, 31]. When the process occurs near a critical transition phase, a dominant group of genes/molecules, i.e., DNBs, can drive transition of the dynamic process [30, 32, 33]. We thus performed a genome-wide DNB analysis to identify the transitional states and DNB genes in both humans and macaques.
The DNB results demonstrated that the transitional states of brain development in humans and macaques occurred at around 26 PCW and around 17 PCW, respectively (Fig. 2b-c; Additional file 2: Figure S3). After the transitional state, gene expression patterns changed markedly. The transitional states identified by DNB analysis were largely consistent with the demarcation points above (Fig. 2a), thus supporting the robustness of our results. We also obtained the corresponding DNBs, which included 369 DNB genes in humans and 34 DNB genes in macaques (Additional file 1: Table S5). The greater number of DNB genes in humans suggests more dramatic changes between the early and later stages of brain development in humans relative to that in macaques.
Transcriptional profile and cell fate change from early to later stages of brain development
We used differential expression analysis to compare the degree of change in early and later gene expression between humans and macaques. We selected five brain regions (i.e., hippocampus (HIP), striatum (STR), anterior cingulate cortex (ACC), amygdala (AMY), and primary visual cortex (V1C)) that coexist and contain similar sample sizes in the two species. Based on microarray probes, which matched between humans and macaques, we found a larger number of differentially expressed genes (DEGs) (Benjamini-Hochberg FDR < 0.05, fold change [FC] > 1.5) between early and later stages in humans than in macaques (Fig. 3a; Additional file 1: Table S6). These results also suggest more dramatic changes between early and later stages of brain development in humans relative to macaques.
Transcriptional profiles across early to later stages during brain development. a DEGs among five brain regions (HIP (hippocampus), V1C (primary visual cortex), ACC (anterior cingulate cortex), STR (striatum), and AMY (amygdala)). b Enriched categories for up-regulated genes in early human brain development. c Enriched categories for up-regulated genes in later human brain development. d Matrix summary of enrichment of oligodendrocyte, neuron, microglia, endothelial, or astrocyte genes [34] in DEGs up-regulated (red) and down-regulated (blue) in each human tissue
To better understand brain developmental processes in the human lineage, we conducted functional enrichment analysis of the DEGs between early and later stages in humans. Results showed that up-regulated genes in the early stage were mainly involved in cell cycle, DNA packaging, and meiosis (Fig. 3b), whereas, up-regulated genes in the later stage were enriched in synaptic signaling, myelination, and axon establishment (Fig. 3c). Remarkably, the DEG patterns well matched the reported properties of the neurodevelopmental timeline in humans [35].
Increasing evidence suggests that a cell fate switch from neurons to glial cells is operational in prenatal brains and represents a key process in brain development [36,37,38]. We thus considered whether the demarcation points in humans correspond to the timing of the known neuron-to-glial cell fate switch. We found that up-regulated genes in the early stage were predominantly enriched in neuronal genes (Fig. 3d), whereas up-regulated genes in the later stage represented a diversity of cell-type-associated genes, including astrocytes, oligodendrocytes, and neurons (Fig. 3d), which likely reflects the cell fate switch from neurons to glial cells around the demarcation point. This conclusion is supported by a previous single-cell transcriptome analysis [39], which reported that neurons developed from neural progenitor cells in early gestational weeks (GW8, GW9, GW10, GW12, GW13, GW16, GW19, GW23), whereas glial cells (oligodendrocyte progenitor cells and astrocytes) differentiated from neural progenitor cells in later weeks (GW26).
Earlier studies have also reported several pathways that govern the neuron to astrocyte cell switch, including the gp130/JAK/STAT and MEK/MAPK pathways … [38, 40]. We found that DEGs between early and later stages were significantly enriched in the MEK/MAPK pathway (P = 4.6e-04, Fisher's exact test) (Additional file 1: Table S7). Although DNBs were not significantly enriched in the MEK/MAPK pathway (P = 0.15, Fisher's exact test), 15 DNBs were still involved (Additional file 1: Table S7). This suggests that the MEK/MAPK pathway likely plays an important role in the shift of cell fate from neurons to glial cells around the human demarcation point (25–26 PCW).
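As a concrete illustration of the enrichment test used here, the sketch below runs a 2×2 Fisher's exact test in Python; the gene counts are placeholders chosen only to show the procedure, not values from this study.

```python
# Minimal sketch of the pathway-enrichment test described above: a 2x2 Fisher's
# exact test asking whether DEGs are over-represented among MEK/MAPK pathway
# genes. All counts below are hypothetical placeholders.
from scipy.stats import fisher_exact

pathway_deg = 40        # DEGs that are pathway members (hypothetical)
pathway_non_deg = 60    # pathway members that are not DEGs (hypothetical)
other_deg = 2000        # DEGs outside the pathway (hypothetical)
other_non_deg = 12000   # remaining tested genes (hypothetical)

table = [[pathway_deg, pathway_non_deg],
         [other_deg, other_non_deg]]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, P = {p_value:.2e}")
```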
Molecular mechanism underlying protracted timing of early human brain development
The different demarcation points (25–26 PCW in humans and 17–23 PCW in macaques) identified here reflect prolonged timing of early brain development in the human lineage. Thus, we performed heterochronic analysis to explore the molecular mechanism underlying different timing of early neurodevelopment between humans and macaques.
We again used five brain regions (i.e., HIP, STR, ACC, AMY, and V1C) that coexist in humans and macaques to test heterochronic gene expression. After rigorous quality control (see Methods), we retained 9758 genes with microarray probes well matched between humans and macaques. Among these genes, we selected several that showed both age-related and species-specific differences for each brain region (see Methods; Additional file 1: Table S8). We then sorted these genes into two categories: (i) human acceleration genes, whose expression change was significantly faster during human brain development than during macaque brain development (Fig. 4a); and, (ii) human neoteny genes, whose expression change was significantly delayed during human brain development compared with that during macaque brain development (Fig. 4b), as defined in a previous study [20]. Compared to macaques, more genes displayed a neotenic pattern in all five human brain regions (Fig. 4c; Additional file 1: Table S9), consistent with the delayed demarcation point in the human lineage described above. In addition, the results suggest that neoteny of human brain development can be traced to the prenatal period.
Analysis of expression heterochrony. a Example gene showing accelerated expression in humans. b Example gene showing neotenic expression in humans (right). c Number of genes showing acceleration and neoteny in early human brain development for five brain regions. d Enriched categories for neotenic genes in early human brain development
Interestingly, the neotenic genes from the five brain regions significantly overlapped (Additional file 1: Table S10), suggesting that neotenic mechanisms among different brain regions are largely convergent. The functions of these neotenic genes were mainly involved in neurodevelopment-related pathways (Fig. 4d), suggesting that more neurodevelopmental genes exhibited neotenic features, which may eventually help humans develop a more complex brain.
Co-expression analysis identifies gene network during early brain development in humans
Extended timing of early neurodevelopment in humans is important for brain evolution [2, 18]. We applied weighted gene co-expression network analysis (WGCNA) to further explore the transcriptional profile of early neurodevelopment in humans [41,42,43]. A total of 38 modules related to early human neurodevelopment were identified (see Methods; Fig. 5a; Additional file 1: Table S11). To quantify network reorganization across early and later brain development, we applied modular differential connectivity (MDC), which is the ratio of the average connectivity for any pair of modules sharing genes in the early stage compared to that in the later stage. Among the 38 early modules, five (GCM1–GCM5) showed gain of connectivity compared to later development, with co-regulation enhanced between genes in these modules. In contrast, 21 modules (LCM1–LCM21) showed loss of connectivity and 12 modules (NCM1–NCM12) (31.5%) showed no change in connectivity and were conserved during development (Additional file 2: Figure S4A-S4B).
Weighted gene co-expression network analysis (WGCNA). a Topological overlap matrix plots for early brain modules in human. Light color represents low topological overlap, with progressively darker red representing higher overlap. b Enrichment of DEGs across 13 brain regions among different modules. c Enrichment of genes located in human-accelerated conserved non-coding sequences (HACNSs) [44] and genes located in human DNA sequence accelerated regions (HARs) [45] in different modules. d Functional enrichment of genes in GCM1. e Cell specificity of genes in module GCM1
For modules with gains or losses of connectivity, we ranked them according to their degree of DEG enrichment across brain regions (Fig. 5a) and MDC scores (Additional file 1: Table S11). Module GCM1, which showed a gain of connectivity in the early stage, was identified as the most highly ranked module. The genes in this module were enriched in neurogenesis, neuron projection morphogenesis, and axon development (Fig. 5d). Additionally, GCM1 showed highly significant enrichment for known autism susceptibility markers (P = 5.49E-07; Fisher's exact test) [46], and the expression levels of genes in this module were significantly higher during early brain development (Additional file 2: Figure S4C). These results suggest that GCM1 likely plays an important role in early development of the human brain.
Remarkably, most genes in the GCM1 module were located in human-accelerated conserved non-coding sequences (HACNSs) (P = 3.19E-16; Fisher's exact test) [44] or in human DNA sequence accelerated regions (HARs) (P = 7.49E-12; Fisher's exact test) [45] (Fig. 5c), suggesting that genes in the GCM1 module also likely played an important role in human brain evolution.
We next mapped the genes in GCM1 to single-cell expression data derived from 20,262 prenatal human prefrontal cortex cells that ranged in age from 8 to 26 gestational weeks [39] (Additional file 2: Figure S5), which represent a broad diversity of cell types, including neural progenitor cells, interneurons, astrocytes, oligodendrocyte progenitor cells, microglia, and excitatory neurons. The expression patterns of the GCM1 genes closely matched the cell-type signature of excitatory neurons (see Methods; Fig. 5e), confirming that genes in the GCM1 module function through excitatory neurons.
We further reconstructed the network structure of genes within the GCM1 module based on their connectivity and identified 53 hub genes, 36 of which were early stage-specific hub genes [47] (see Methods, Fig. 6a). Among these genes, RBFOX1 (RNA Binding Fox-1 Homolog 1) was of particular interest. RBFOX1 is a highly conserved splicing regulator that displays higher expression in the brain than in other tissues [48] (Fig. 6b). RBFOX1 is implicated in autism, epilepsy syndromes, and Alzheimer's disease [49,50,51] and plays an important role in mammalian brain development [52]. Interestingly, RBFOX1 is also a DNB gene, and is therefore considered to play an important role in the critical transition during brain development [30].
Hub gene RBFOX1 in module GCM1. a Network plot of hub genes identified within GCM1 module. Blue nodes indicate all genes. Red nodes indicate hub genes. Yellow halos indicate early stage-specific hub genes. Cyan node indicates RBFOX1. Edges reflect significant interactions between genes based on mutual information. b Expression level of RBFOX1 in different tissues. c Location of HAEs at RBFOX1 locus in human genome and conservation of RBFOX1 among 17 mammals according to UCSC Genome Browser (www.genome.ucsc.edu). d Cell specificity of RBFOX1
We next used evolutionary analysis to test if RBFOX1 experienced positive selection in the human lineage. Although the protein coding sequence of RBFOX1 has not changed in humans compared to other primates (Additional file 1: Table S12; see Methods), six HARs were found in the non-coding regions of this gene (Fig. 6c). HARs are non-coding regions conserved across mammals, which have acquired many sequence changes in humans since their divergence from chimpanzees [45]. Only seven human RefSeq genes from the entire human genome (based on hg18) contain six or more HARs [53], suggesting that strong human-specific accelerated evolution occurred recently in the non-coding regions of the human RBFOX1 gene. Our findings provide evidence that RBFOX1 is a likely key hub gene in early human brain development and evolution. In addition, RBFOX1 also showed cell specificity to excitatory neurons in the single-cell transcriptome analysis (Fig. 6d), suggesting that RBFOX1 functions through excitatory neurons.
The remarkable abilities of the human brain set us apart from NHPs. With the advent of large-scale genomic, transcriptomic, and epigenomic data, many genetic underpinnings of the rapid evolution of the human brain have been revealed [54,55,56,57,58]. However, our understanding of how the brain has changed in the human lineage remains incomplete [3]. Based on large-scale transcriptomic and genomic data, the results of the current study provide new insight into the evolution and transition of gene expression trajectory in the human brain.
Firstly, we found that brain development could be divided into two stages in both humans and macaques; more specifically, demarcation times of 25–26 PCW and 17–23 PCW in humans and rhesus macaques, respectively. Further DNB analysis indicated that the demarcation points were nearly the same as the critical transitional states during brain development in humans and macaques. Previous studies on brain development have primarily used birth as the default boundary [28, 29, 59]. However, we suggest that the demarcation points identified here should be considered in the future to minimize biases in studies of brain development.
Secondly, we also found that neoteny of human brain development could be traced to the prenatal period. Previous studies have primarily focused on heterochronic gene expression during postnatal brain development [7]. The macroscopic layout of the brain is nearly complete at the time of birth [8]. Extending comparative analysis to the prenatal stages is therefore necessary to explore the features of neurodevelopment, but has been lacking in prior studies. In this paper, we performed heterochronic analysis across prenatal samples from humans and macaques and found that more genes displayed a neotenic pattern in humans than in macaques, consistent with the delayed demarcation time in the human lineage and showing that neoteny in human brain development can be traced to the prenatal period.
Thirdly, we used gene co-expression network analysis to identify transcription profiles in early human brain development and found that the RBFOX1 gene likely plays an important role in early human brain development and displays positive selection in its non-coding region [50,51,52]. Therefore, we speculate that RBFOX1 is a likely key hub gene in early human brain development and evolution. As such, we propose that RBFOX1 should be considered in further neurodevelopmental research.
Finally, we highlighted the importance of excitatory neurons in human brain development and evolution. Over the past few decades, the comparisons of excitatory neurons between humans and NHPs have mainly focused on their differences in morphology and abundance [60, 61]. Further molecular biology research on excitatory neurons is limited. In this paper, we found that the GCM1 module and RBFOX1 gene were related to early human brain development and evolution and were enriched in excitatory neurons. Therefore, studies on excitatory neurons would be promising for exploring human brain evolution.
We note that the study presented here is far from comprehensive. Firstly, based on currently available transcriptome data, we identified a demarcation time-frame of brain development in humans and macaques. The precise demarcation point could not be concluded from the existing data and should be explored in future studies.
Secondly, due to the relatively small sample sizes used in the current study, as well as the sparse distribution of samples across ages, we cannot rule out certain important changes in transcriptional profiles during neurodevelopment that may have occurred beyond the sampling range used in this study. For instance, previous studies have reported that during juvenile development in humans (1–8 postnatal years), the cerebral cortex consumes nearly twice the amount of glucose as observed during adulthood and is accompanied by dramatic changes in synaptic density during that developmental window [11, 12]. Thus, the transition state we identified is not absolute, with more saturated samples across different ages required to confirm our conclusions.
Thirdly, comparative analysis between humans and macaques was based on microarray data only, which rely on pre-existing knowledge of RNA sequences; as such, some important genes may be missed.
Finally, due to the lack of prenatal transcriptome data on brain development in hominoids, it is difficult to compare hominoids with humans, which would be valuable when exploring human brain evolution.
Further analyses, including expression data analysis across additional development stages, comparative analysis of RNA-seq data between humans and NHPs, as well as analysis incorporating more hominoids, are needed to expand our results.
In this study, we integrated transcriptomic analysis to reveal the evolution and transition of expression trajectories during human brain development. By comparing gene expression profiles between humans and rhesus macaques, our results provide new insights into the gene expression trajectory of human brain development, which will deepen our understanding on evolution of the human brain.
Dataset resources
The normalized human and rhesus macaque gene expression datasets were obtained from the Allen Brain Atlas (http://www.brain-map.org) (Table 1; Table 2) [24, 25]. We used two datasets for humans, including RNA-seq and microarray data, which contained 14 brain regions and 27 developmental stages. The RNA-seq data were summarized to Gencode 10 gene-level reads per kilobase million mapped reads (RPKM), whereas the microarray data were based on the Affymetrix GeneChip Human Exon 1.0 ST Array platform. Several quality control measures were implemented to reduce errors due to spatial artifacts on the chips, technical differences between chips in probe saturation, or other unaccounted-for batch effects. Detailed information can be found in the Supplementary Materials of Kang et al. (2011) [24]. For rhesus macaques, we used the microarray dataset based on GeneChip Rhesus Macaque Genome Arrays from Affymetrix. From the 52,865 probe sets included in the macaque microarray data, 12,441 high-confidence probe sets were retained after quality control filtering. A detailed description of the macaque data can be found in Bakken et al. (2016) [25]. The macaque microarray dataset contained five brain regions (22 brain subregions) and nine developmental stages (Table 1; Table 2).
Clustering of genes in each tissue
The human microarray and RNA-seq datasets and rhesus macaque microarray dataset were used to cluster genes for each brain region according to their expression levels. To reduce the influence of technical noise, only genes with expression values of more than 0 in 80% of the available samples were considered expressed. Before clustering, we log2 transformed and then z-transformed the expression levels (normalized the mean to 0 and variance to 1). Using agglomerative hierarchical clustering with the average and complete methods in the flashClust R package [62], the RNA-seq and microarray data from most human tissues were clustered into two groups, separated with a demarcation point of 25 PCW. The microarray data from the rhesus macaques were also clustered into two groups, with 17 PCW as the demarcation point.
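For orientation, the following Python sketch mirrors the preprocessing and clustering just described for a single brain region. It assumes a genes × samples expression matrix `expr` and uses scipy's agglomerative clustering in place of the flashClust R package used in the study; the pseudocount of 1 in the log transform is an assumption.

```python
# Sketch of the per-tissue clustering step (assumed inputs; not the study's code).
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_samples(expr: pd.DataFrame, n_clusters: int = 2) -> pd.Series:
    # keep genes with expression > 0 in at least 80% of samples
    keep = (expr > 0).mean(axis=1) >= 0.8
    x = np.log2(expr.loc[keep] + 1)                                  # log2 transform (pseudocount assumed)
    x = x.sub(x.mean(axis=1), axis=0).div(x.std(axis=1), axis=0)     # z-transform each gene
    # agglomerative clustering of samples; the study used average and complete linkage
    z = linkage(x.T.values, method="average", metric="correlation")
    labels = fcluster(z, t=n_clusters, criterion="maxclust")
    return pd.Series(labels, index=expr.columns)
```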
Dynamic network biomarker (DNB) analysis
Based on DNB theoretical analysis [30, 32, 33], we can prove that when a system is near the critical state/transition phase, a dominant group of genes/molecules, i.e., DNBs, can drive transition of the dynamic process. These molecules must satisfy the following three criteria [30]:
Deviation (or fluctuation) for each molecule inside the dominant group (SDd: standard deviation) drastically increases.
Correlation between molecules inside the dominant group (PCCd: Pearson correlation coefficients in absolute values) rapidly increases.
Correlation between molecules inside and outside the dominant group (PCCo: Pearson correlation coefficients in absolute values) rapidly decreases.
The dominant group is considered a DNB and plays an important role in phase transition. A quantification index (CI) considering all three criteria can then be used as the numerical signal of the critical state or transition phase and also for the identification of DNB members/molecule, with the following equation:
$$CI = \frac{SD_d \cdot {PCC}_d}{PCC_o}$$
where, PCCd is the average Pearson's correlation coefficient (PCC) between the genes in the dominant group (or DNB) of the same time stage in absolute value; PCCo is the average PCC between the dominant group (or DNB) and others of the same time stage in absolute value; and, SDd is the average standard deviation (SD) of genes in the dominant group (or DNB). The three criteria together construct the composite index (CI). The CI is expected to peak or increase sharply during the measured stages when the system approaches the critical state, thus indicating imminent transition or transition phase of the biological process [30].
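A minimal numpy sketch of this index for one candidate gene group at a single stage is given below; it assumes a genes × samples matrix of replicates for that stage and omits the selection of candidate groups and the sliding-window smoothing described next.

```python
# Sketch of the composite index CI = SD_d * PCC_d / PCC_o for one candidate group.
import numpy as np

def composite_index(stage_expr: np.ndarray, genes: list, group: list) -> float:
    idx_in = [genes.index(g) for g in group]
    idx_out = [i for i in range(len(genes)) if i not in idx_in]

    x_in = stage_expr[idx_in, :]                       # candidate (dominant) group
    x_out = stage_expr[idx_out, :]                     # all other genes

    sd_d = x_in.std(axis=1).mean()                     # average SD inside the group

    corr = np.corrcoef(np.vstack([x_in, x_out]))       # gene-gene Pearson correlations
    n_in = len(idx_in)
    within = np.abs(corr[:n_in, :n_in])
    pcc_d = within[np.triu_indices(n_in, k=1)].mean()  # mean |PCC| inside the group
    pcc_o = np.abs(corr[:n_in, n_in:]).mean()          # mean |PCC| between group and others

    return sd_d * pcc_d / pcc_o
```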
We applied this DNB method to detect the critical points and DNB members in humans and macaques based on the transcriptome data of the primary visual cortex. In each sampling stage, there were 1–4 samples with gene expression profiles. To increase the reliability of the DNB results, the slide window method was incorporated into the DNB model to process data [30].
Differentially expressed genes (DEGs) between early and later stages of brain development
To remove the potential effect of different high-throughput platforms on gene expression values, we only used microarray data for DEG analysis for both species. Pairwise differential expression was investigated using the edgeR R package [63]. To determine the DEGs between the two developmental stages for humans and macaques, the demarcation times were set at 25 PCW and 17 PCW, respectively. A nominal significance threshold of Benjamini-Hochberg FDR < 0.05 and fold change [FC] > 1.5 was used to identify DEGs.
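The cutoff step can be sketched as follows in Python, assuming a results table with per-gene p-values and log2 fold changes (the actual testing in the study was done with edgeR in R):

```python
# Sketch of the DEG cutoff: Benjamini-Hochberg FDR < 0.05 and fold change > 1.5.
# `res` is an assumed pandas DataFrame with columns 'p_value' and 'log2_fc'.
import numpy as np
import pandas as pd
from statsmodels.stats.multitest import multipletests

def call_degs(res: pd.DataFrame, fdr: float = 0.05, fc: float = 1.5) -> pd.DataFrame:
    _, qvals, _, _ = multipletests(res["p_value"], alpha=fdr, method="fdr_bh")
    res = res.assign(fdr_bh=qvals)
    # keep genes passing both the FDR and the (absolute) fold-change thresholds
    return res[(res["fdr_bh"] < fdr) & (res["log2_fc"].abs() > np.log2(fc))]
```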
For DEGs in humans, we applied g:Profiler (https://biit.cs.ut.ee/gprofiler/) [64] for functional annotation analysis (GO and KEGG). To assess cell-type specificity in the 14 brain regions of humans, we used genes expressed at least five-fold higher in one cell type than in all other cell types (neuron, microglia, astrocyte, oligodendrocyte, endothelial) from brain-based RNA expression data as cell-type markers [65].
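The five-fold marker rule can be expressed compactly; the sketch below assumes a genes × cell-types table of mean expression values and returns each marker gene labeled with its cell type.

```python
# Sketch of the marker rule: a gene is a marker for a cell type if its expression
# there is at least five-fold higher than in every other cell type.
import pandas as pd

def cell_type_markers(mean_expr: pd.DataFrame, fold: float = 5.0) -> pd.Series:
    top = mean_expr.max(axis=1)                       # highest mean expression per gene
    top_type = mean_expr.idxmax(axis=1)               # cell type with that expression
    second = mean_expr.apply(lambda row: row.nlargest(2).iloc[-1], axis=1)
    is_marker = top >= fold * second                  # five-fold above the runner-up
    return top_type[is_marker]
```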
Heterochrony analyses with dynamic time warping algorithm (DTW-S)
We combined data from microarray probes of humans and macaques to study heterochronic gene expression in five brain regions (i.e., hippocampus (HIP), striatum (STR), anterior cingulate cortex (ACC), amygdala (AMY) and primary visual cortex (V1C)). We used the "sva" R package [66] to remove batch effects between microarray datasets of humans and rhesus macaques.
To choose age-related genes, we first used a log2 transformed age scale to ensure a more linear relationship between age and phenotype [67]. We tested the effect of age on the expression levels using polynomial regression models, as described previously [68]. We next tested each gene for expression divergence between humans and rhesus macaques using analysis of covariance [69] (F-test P < 0.05). Identification of age-related genes and species-specific genes was based on the adjusted r2 criterion. The identification methods have been described previously [68]. Consequently, we selected 892 genes in AMY, 2431 genes in HIP, 1961 genes in STR, 1899 genes in ACC, and 2416 genes in V1C as the test gene set for DTW-S, satisfying the following criteria: (i) significant expression change with age and (ii) significant expression difference between humans and macaques.
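For intuition, the sketch below applies the two screens to a single gene using nested F-tests in statsmodels; it assumes a data frame with 'expr', 'log_age', and 'species' columns, uses a fixed quadratic age term rather than the adjusted-r2 model selection of the published procedure, and is therefore only a simplified stand-in.

```python
# Simplified sketch of the age-effect and species-divergence screens for one gene.
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def age_and_species_tests(df):
    # (i) does expression change with (log-scaled) age?
    null_age = smf.ols("expr ~ 1", data=df).fit()
    with_age = smf.ols("expr ~ log_age + I(log_age**2)", data=df).fit()
    p_age = anova_lm(null_age, with_age)["Pr(>F)"].iloc[1]

    # (ii) does expression differ between species after accounting for age?
    no_species = smf.ols("expr ~ log_age + I(log_age**2)", data=df).fit()
    with_species = smf.ols("expr ~ log_age + I(log_age**2) + species", data=df).fit()
    p_species = anova_lm(no_species, with_species)["Pr(>F)"].iloc[1]

    return p_age, p_species
```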
The DTW-S algorithm was then used to analyze the data for heterochrony [68]. We defined genes showing significant heterochrony into two categories: (i) human acceleration genes, whose expression changes were significantly faster during human brain development than that during macaque brain development; and, (ii) human neoteny genes, whose expression changes were significantly delayed during human brain development compared with that during macaque brain development. Using those genes that showed significant age-related and species-specific differences, as defined above, we aligned the macaque and human expression trajectories and estimated the time-shift (heterochrony) between humans and macaques, with simulations conducted to estimate the significance of the shifts. We considered genes as 'significantly heterochronic' if they showed a shift at P < 0.05. A detailed description of the DTW-S algorithm can be found in Yuan et al. 2011 [68].
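The core idea of DTW can be illustrated with a plain dynamic-programming alignment of one gene's human and macaque trajectories; the published DTW-S method additionally scales the trajectories and estimates significance by simulation, which is not reproduced in this sketch.

```python
# Plain dynamic time warping between two expression trajectories (conceptual only).
import numpy as np

def dtw_cost(human: np.ndarray, macaque: np.ndarray) -> float:
    n, m = len(human), len(macaque)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(human[i - 1] - macaque[j - 1])      # local distance between points
            cost[i, j] = d + min(cost[i - 1, j],        # stretch one trajectory
                                 cost[i, j - 1],        # stretch the other
                                 cost[i - 1, j - 1])    # advance both in step
    return cost[n, m]                                   # total alignment cost
```

Tracing back through the accumulated cost matrix yields the warping path; a path that systematically maps human time points onto earlier macaque time points corresponds to the neotenic (delayed) pattern described above.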
Construction of gene co-expression modules for early human brain development
We constructed multi-tissue co-expression networks that simultaneously captured intra- and inter-tissue gene-gene interactions using the human RNA-seq expression data [42, 70]. Before identifying co-expressed gene modules, we used linear regression to correct sex and brain region covariates in the expression data. To quantify the differences in the transcript network organization between the early and late stages, we employed the modular differential connectivity (MDC) metric [71]. In brief, MDC represents the connectivity ratios of all gene pairs in a module from the early stage to the same gene pairs from the later stage: MDC > 0 indicates a gain of connectivity or enhanced co-regulation between genes in the early stage, whereas MDC < 0 indicates a loss of connectivity or reduced co-regulation between genes in the early stage.
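A rough version of the MDC calculation for one module is sketched below, approximating connectivity by the mean absolute Pearson correlation among module genes; the value is returned on a log2 scale so that positive values indicate a gain and negative values a loss of connectivity in the early stage, matching the sign convention used in the text (the published analysis followed the MDC formulation cited above).

```python
# Sketch of modular differential connectivity (MDC) for one module.
import numpy as np

def mean_connectivity(expr: np.ndarray) -> float:
    # expr: module genes x samples for one developmental stage
    corr = np.abs(np.corrcoef(expr))
    n = corr.shape[0]
    return corr[np.triu_indices(n, k=1)].mean()        # mean |correlation| over gene pairs

def mdc(early_expr: np.ndarray, later_expr: np.ndarray) -> float:
    return np.log2(mean_connectivity(early_expr) / mean_connectivity(later_expr))
```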
To identify key regulator (driver) genes of the GCM1 module, we applied key driver analysis to the module-based unweighted co-expression networks derived from ARACNE [47]. ARACNE first identified significant interactions between genes in the module based on their mutual information and then removed indirect interactions through data processing inequality (DPI). For each ARACNE-derived unweighted network, we further identified the key regulators by examining the number of N-hop neighborhood nodes for each gene.
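The neighborhood-size ranking used to nominate key drivers can be sketched with networkx, assuming an edge list from an ARACNE-style network; the full key driver analysis also includes the enrichment testing described in the cited method, which is omitted here.

```python
# Sketch of ranking module genes by the size of their N-hop neighborhood.
import networkx as nx

def rank_key_drivers(edges, n_hops: int = 2):
    g = nx.Graph(edges)                                # edges: list of (geneA, geneB) pairs
    scores = {}
    for node in g:
        reachable = nx.single_source_shortest_path_length(g, node, cutoff=n_hops)
        scores[node] = len(reachable) - 1              # exclude the node itself
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```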
Identification of cell type and subtype from single cell data
Single-cell RNA-seq data (accession number GSE104276) were reported in a previous study [39]. Transcript counts for each cell were normalized to transcripts per million (TPM), with the TPM values then normalized by log ((TPM/10) + 1) for subsequent analysis [39]. The Seurat package [72] v1.2.1 implemented in R was applied to identify major cell types among the 2394 single cells from the prefrontal cortex. Only cells that expressed more than 1000 genes were considered, and only genes with normalized expression levels greater than 1 and expressed in at least three single cells were included, which left 20,262 genes across 2344 samples for clustering analysis. After initial clustering, PAX6, NEUROD2, GAD1, PDGFRA, AQP4, and PTPRC were used as markers to identify the major cell types in the brain: i.e., neural progenitor cells, excitatory neurons, interneurons, oligodendrocyte progenitor cells, astrocytes, and microglia, respectively.
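A rough scanpy re-expression of these filtering and clustering steps is shown below (the original analysis used Seurat in R). The input filename is an assumption, thresholds follow the text, and the clustering and marker-inspection calls are standard scanpy functions rather than the exact Seurat workflow.

```python
# Rough sketch of the single-cell filtering and clustering steps (assumed input file).
import numpy as np
import scanpy as sc

adata = sc.read_csv("prefrontal_cortex_TPM.csv").T     # assumed genes x cells table -> cells x genes
adata.X = np.log(adata.X / 10 + 1)                     # log((TPM/10) + 1) normalization
sc.pp.filter_cells(adata, min_genes=1000)              # keep cells expressing >=1000 genes
sc.pp.filter_genes(adata, min_cells=3)                 # keep genes detected in >=3 cells

sc.pp.highly_variable_genes(adata, n_top_genes=2000)
sc.pp.pca(adata, n_comps=30)
sc.pp.neighbors(adata)
sc.tl.leiden(adata)                                    # initial clustering

markers = ["PAX6", "NEUROD2", "GAD1", "PDGFRA", "AQP4", "PTPRC"]
sc.pl.dotplot(adata, markers, groupby="leiden")        # label clusters using these markers
```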
Coding sequence evolutionary analysis of RBFOX1
To analyze the evolution of the coding regions of RBFOX1, we obtained the human, chimpanzee, rhesus macaque, marmoset, mouse lemur, mouse, rat, cow, dog, and opossum coding sequences for this gene from Ensembl [48]. The coding sequences were aligned using Prank [73]. Gblocks v0.91b was used to remove poorly aligned regions in the resulting nucleotide sequence alignments [74]. We then used the modified branch-site model A from the PAML package v4.9 to test positive selection of RBFOX1 in the human and primate lineages, respectively [75]. The null hypothesis of the branch test was that all lineages shared the same dN/dS ratio. The alternative hypothesis was that human or primate lineages had a different dN/dS ratio from other lineages, with w0, w1, and w2 representing codons under negative, null, and positive selection, respectively. The Chi-square test was used to calculate the P value, and P_adjust < 0.05 was considered significant.
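The final likelihood-ratio step can be sketched as follows; the log-likelihood values are placeholders, and one degree of freedom is assumed for the branch-site comparison, which is the usual convention for this test.

```python
# Sketch of the likelihood-ratio (Chi-square) test on codeml branch-site output.
from scipy.stats import chi2

lnL_null = -12345.67   # hypothetical lnL of the null model (omega2 fixed to 1)
lnL_alt = -12343.21    # hypothetical lnL of the alternative model (omega2 estimated)

lrt = 2 * (lnL_alt - lnL_null)          # likelihood-ratio statistic
p_value = chi2.sf(lrt, df=1)            # one degree of freedom assumed
print(f"LRT = {lrt:.2f}, P = {p_value:.3f}")
```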
The human RNA-seq and microarray datasets analysed during the current study are available in the http://www.brainspan.org/static/download.html repository [24]. The macaque microarray dataset analysed during the current study is available in the http://blueprintnhpatlas.org/static/download repository [25].
PCW: Postconception week
DNB: Dynamic network biomarker
NHPs: Non-human primates
RNA-seq: RNA-sequencing
HIP: Hippocampus
STR: Striatum
ACC: Anterior cingulate cortex
AMY: Amygdala
V1C: Primary visual cortex
DEGs: Differentially expressed genes
FC: Fold change
GW: Gestational week
WGCNA: Weighted gene co-expression network analysis
MDC: Modular differential connectivity
GCM: Modules that show gain of connectivity
LCM: Modules that show loss of connectivity
NCM: Modules that show no change of connectivity
RBFOX1: RNA Binding Fox-1 Homolog 1
HACNSs: Human-accelerated conserved non-coding sequences
HARs: Human DNA sequence accelerated regions
TPM: Transcripts per million
Sousa AMM, Meyer KA, Santpere G, Gulden FO, Sestan N. Evolution of the human nervous system function, structure, and development. Cell. 2017;170(2):226–47.
Workman AD, Charvet CJ, Clancy B, Darlington RB, Finlay BL. Modeling transformations of neurodevelopmental sequences across mammalian species. J Neurosci. 2013;33(17):7368.
Bae B-I, Jayaraman D, Walsh CA. Genetic changes shaping the human brain. Dev Cell. 2015;32(4):423–34.
Lieberman P. The evolution of language and thought. J Anthropol Sci. 2016;94:127–46.
Penn DC, Holyoak KJ, Povinelli DJ. Darwin's mistake: explaining the discontinuity between human and nonhuman minds. Behav Brain Sci. 2008;31(2):109–30 discussion 130-178.
Zhang M-L, et al. Conserved sequences identify the closest living relatives of primates. Zool Res. 2019;40(6):532–40.
Enard W. The molecular basis of human Brain evolution. Curr Biol. 2016;26(20):R1109–17.
Franchini LS, Pollard K. Genomic approaches to studying human-specific developmental traits. Development. 2015;142:3100–12.
Silver DL. Genomic divergence and brain evolution: how regulatory DNA influences development of the cerebral cortex. BioEssays. 2016;38(2):162–71.
Chugani HT, Phelps ME, Mazziotta JC. Positron emission tomography study of human brain functional development. Ann Neurol. 1987;22(4):487–97.
Peter RH. Synaptic density in human frontal cortex — developmental changes and effects of aging. Brain Res. 1979;163(2):195–205.
Sterner K, Weckle AT, Chugani H, Tarca A, Sherwood C, Hof P, Kuzawa C, Boddy A, Abbas A, Raaum R, et al. Dynamic Gene Expression in the Human Cerebral Cortex Distinguishes Children from Adults. PloS one. 2012;7:e37714.
Jacobs B, Chugani HT, Allada V, Chen S, Phelps ME, Pollack DB, Raleigh MJ. Developmental changes in Brain metabolism in sedated rhesus macaques and Vervet monkeys revealed by positron emission tomography. Cereb Cortex. 1995;5(3):222–33.
Ye L-Q, Zhao H, Zhou H-J, Ren X-D, Liu L-L, Otecko NO, Wang Z-B, Yang M-M, Zeng L, Hu X-T, et al. The RNA editome of Macaca mulatta and functional characterization of RNA editing in mitochondria. Sci Bull. 2017;62(12):820–30.
Cáceres M, Lachuer J, Zapala MA, Redmond JC, Kudo L, Geschwind DH, Lockhart DJ, Preuss TM, Barlow C. Elevated gene expression levels distinguish human from non-human primate brains. Proc Natl Acad Sci. 2003;100(22):13030.
Gu J, Gu X. Induced gene expression in human brain after the split from chimpanzee. Trends Genet. 2003;19(2):63–5.
Kim J, Kerr JQ, Min G-S. Molecular heterochrony in the early development of Drosophila. Proc Natl Acad Sci. 2000;97(1):212.
Langer J. The Heterochronic evolution of primate cognitive development. Biol Theory. 2006;1(1):41–3.
Moss EG. Heterochronic genes and the nature of developmental time. Curr Biol. 2007;17(11):R425–34.
Somel M, Franz H, Yan Z, Lorenc A, Guo S, Giger T, Kelso J, Nickel B, Dannemann M, Bahn S, et al. Transcriptional neoteny in the human brain. Proc Natl Acad Sci U S A. 2009;106(14):5743–8.
Stiles J, Jernigan TL. The basics of brain development. Neuropsychol Rev. 2010;20(4):327–48.
Zhu Y, Sousa AMM, Gao T, Skarica M, Li M, Santpere G, Esteller-Cucala P, Juan D, Ferrández-Peral L, Gulden FO, et al. Spatiotemporal transcriptomic divergence across human and macaque brain development. Science. 2018;362(6420):eaat8077.
Khaitovich P, Muetzel B, She X, Lachmann M, Hellmann I, Dietzsch J, Steigele S, Do H-H, Weiss G, Enard W, et al. Regional patterns of gene expression in human and chimpanzee brains. Genome Res. 2004;14(8):1462–73.
Kang HJ, Kawasawa YI, Cheng F, Zhu Y, Xu X, Li M, Sousa AM, Pletikos M, Meyer KA, Sedmak G, et al. Spatio-temporal transcriptome of the human brain. Nature. 2011;478(7370):483–9.
Bakken TE, Miller JA, Ding S-L, Sunkin SM, Smith KA, Ng L, Szafer A, Dalley RA, Royall JJ, Lemon T, et al. A comprehensive transcriptional map of primate brain development. Nature. 2016;535:367.
Sunkin SM, Ng L, Lau C, Dolbeare T, Gilbert TL, Thompson CL, Hawrylycz M, Dang C. Allen Brain atlas: an integrated spatio-temporal portal for exploring the central nervous system. Nucleic Acids Res. 2013;41(Database issue):D996–D1008.
Li M-L, Wu S-H, Zhang J-J, Tian H-Y, Shao Y, Wang Z-B, Irwin DM, Li J-L, Hu X-T, Wu D-D. 547 transcriptomes from 44 brain areas reveal features of the aging brain in non-human primates. Genome Biol. 2019;20(1):258.
Raznahan A, Greenstein D, Lee NR, Clasen LS, Giedd JN. Prenatal growth in humans and postnatal brain maturation into late adolescence. Proc Natl Acad Sci. 2012;109(28):11366.
Somel M, Guo S, Fu N, Yan Z, Hu HY, Xu Y, Yuan Y, Ning Z, Hu Y, Menzel C, et al. MicroRNA, mRNA, and protein expression link development and aging in human and macaque brain. Genome Res. 2010;20(9):1207–18.
Chen L, Liu R, Liu ZP, Li M, Aihara K. Detecting early-warning signals for sudden deterioration of complex diseases by dynamical network biomarkers. Sci Rep. 2012;2:342.
Richard A, Boullu L, Herbach U, Bonnafoux A, Morin V, Vallin E, Guillemin A, Papili Gao N, Gunawan R, Cosette J, et al. Single-cell-based analysis highlights a surge in cell-to-cell molecular variability preceding irreversible commitment in a differentiation process. PLoS Biol. 2016;14(12):e1002585.
Liu X, Chang X, Liu R, Yu X, Chen L, Aihara K. Quantifying critical states of complex diseases using single-sample dynamic network biomarkers. PLoS Comput Biol. 2017;13(7):e1005633.
Liu R, Wang X, Aihara K, Chen L. Early diagnosis of complex diseases by molecular biomarkers, network biomarkers, and dynamical network biomarkers. Med Res Rev. 2014;34(3):455–78.
Cahoy JD, Emery B, Kaushal A, Foo LC, Zamanian JL, Christopherson KS, Xing Y, Lubischer JL, Krieg PA, Krupenko SA, et al. A transcriptome database for astrocytes, neurons, and oligodendrocytes: a new resource for understanding brain development and function. J Neurosci. 2008;28(1):264–78.
Silbereis JC, Pochareddy S, Zhu Y, Li M, Sestan N. The cellular and molecular landscapes of the developing human central nervous system. Neuron. 2016;89(2):248–68.
Shen Q, Wang Y, Dimos JT, Fasano CA, Phoenix TN, Lemischka IR, Ivanova NB, Stifani S, Morrisey EE, Temple S. The timing of cortical neurogenesis is encoded within lineages of individual progenitor cells. Nat Neurosci. 2006;9(6):743–51.
Qian X, Shen Q, Goderie SK, He W, Capela A, Davis AA, Temple S. Timing of CNS cell generation: a programmed sequence of neuron and glial cell production from isolated murine cortical stem cells. Neuron. 2000;28(1):69–80.
Miller FD, Gauthier AS. Timing is everything: making neurons versus glia in the developing cortex. Neuron. 2007;54(3):357–69.
Zhong S, Zhang S, Fan X, Wu Q, Yan L, Dong J, Zhang H, Li L, Sun L, Pan N, et al. A single-cell RNA-seq survey of the developmental landscape of the human prefrontal cortex. Nature. 2018;555(7697):524–8.
Li X, Newbern JM, Wu Y, Morgan-Smith M, Zhong J, Charron J, Snider WD. MEK is a key regulator of Gliogenesis in the developing Brain. Neuron. 2012;75(6):1035–50.
Bagot RC, Cates HM, Purushothaman I, Lorsch ZS, Walker DM, Wang J, Huang X, Schluter OM, Maze I, Pena CJ, et al. Circuit-wide transcriptional profiling reveals Brain region-specific gene networks regulating depression susceptibility. Neuron. 2016;90(5):969–83.
Langfelder P, Horvath S. WGCNA: an R package for weighted correlation network analysis. BMC bioinformatics. 2008;9:559.
Parikshak NN, Luo R, Zhang A, Won H, Lowe JK, Chandran V, Horvath S, Geschwind DH. Integrative functional genomic analyses implicate specific molecular pathways and circuits in autism. Cell. 2013;155(5):1008–21.
Meyer KA, Marques-Bonet T, Sestan N. Differential gene expression in the human Brain is associated with conserved, but not accelerated, noncoding sequences. Mol Biol Evol. 2017;34(5):1217–29.
Lindblad-Toh K, Garber M, Zuk O, Lin MF, Parker BJ, Washietl S, Kheradpour P, Ernst J, Jordan G, Mauceli E, et al. A high-resolution map of human evolutionary constraint using 29 mammals. Nature. 2011;478(7370):476–82.
Willsey AJ, Sanders SJ, Li M, Dong S, Tebbenkamp AT, Muhle RA, Reilly SK, Lin L, Fertuzinhos S, Miller JA, et al. Coexpression networks implicate human midfetal deep cortical projection neurons in the pathogenesis of autism. Cell. 2013;155(5):997–1007.
Margolin AA, Nemenman I, Basso K, Wiggins C, Stolovitzky G, Dalla Favera R, Califano A. ARACNE: an algorithm for the reconstruction of gene regulatory networks in a mammalian cellular context. BMC bioinformatics. 2006;7(Suppl 1):S7.
Zerbino DR, Achuthan P, Akanni W, Amode MR, Barrell D, Bhai J, Billis K, Cummins C, Gall A, Girón CG, et al. Ensembl 2018. Nucleic Acids Res. 2018;46(D1):D754–61.
Alkallas R, Fish L, Goodarzi H, Najafabadi HS. Inference of RNA decay rate from transcriptional profiling highlights the regulatory programs of Alzheimer's disease. Nat Commun. 2017;8(1):909.
Lal D, Reinthaler EM, Altmuller J, Toliat MR, Thiele H, Nurnberg P, Lerche H, Hahn A, Moller RS, Muhle H, et al. RBFOX1 and RBFOX3 mutations in rolandic epilepsy. PLoS One. 2013;8(9):e73323.
Lee JA, Damianov A, Lin CH, Fontes M, Parikshak NN, Anderson ES, Geschwind DH, Black DL, Martin KC. Cytoplasmic Rbfox1 regulates the expression of synaptic and autism-related genes. Neuron. 2016;89(1):113–28.
Gehman LT, Stoilov P, Maguire J, Damianov A, Lin CH, Shiue L, Ares M Jr, Mody I, Black DL. The splicing regulator Rbfox1 (A2BP1) controls neuronal excitation in the mammalian brain. Nat Genet. 2011;43(7):706–11.
Kamm GB, Pisciottano F, Kliger R, Franchini LF. The developmental brain gene NPAS3 contains the largest number of accelerated regulatory sequences in the human genome. Mol Biol Evol. 2013;30(5):1088–102.
He Z, Han D, Efimova O, Guijarro P, Yu Q, Oleksiak A, Jiang S, Anokhin K, Velichkovsky B, Grünewald S, et al. Comprehensive transcriptome analysis of neocortical layers in humans, chimpanzees and macaques. Nat Neurosci. 2017;20:886.
Kronenberg ZN, Fiddes IT, Gordon D, Murali S, Cantsilieris S, Meyerson OS, Underwood JG, Nelson BJ, Chaisson MJP, Dougherty ML, et al. High-resolution comparative analysis of great ape genomes. Science. 2018;360:eaar6343.
Sousa AMM, Zhu Y, Raghanti MA, Kitchen RR, Onorati M, Tebbenkamp ATN, Stutz B, Meyer KA, Li M, Kawasawa YI, et al. Molecular and cellular reorganization of neural circuits in the human lineage. Science. 2017;358(6366):1027–32.
Vermunt MW, Tan SC, Castelijns B, Geeven G, Reinink P, de Bruijn E, Kondova I, Persengiev S, Netherlands Brain B, Bontrop R, et al. Epigenomic annotation of gene regulatory alterations during evolution of the primate brain. Nat Neurosci. 2016;19:494.
Xu C, Li Q, Efimova O, He L, Tatsumoto S, Stepanova V, Oishi T, Udono T, Yamaguchi K, Shigenobu S, et al. Human-specific features of spatial gene expression and regulation in eight brain regions. Genome Res. 2018;28(8):1097–110.
Bakken TE, Miller JA, Luo R, Bernard A, Bennett JL, Lee C-K, Bertagnolli D, Parikshak NN, Smith KA, Sunkin SM, et al. Spatiotemporal dynamics of the postnatal developing primate brain transcriptome. Hum Mol Genet. 2015;24(15):4327–39.
Bianchi S, Stimpson CD, Bauernfeind AL, Schapiro SJ, Baze WB, McArthur MJ, Bronson E, Hopkins WD, Semendeferi K, Jacobs B, et al. Dendritic morphology of pyramidal neurons in the chimpanzee neocortex: regional specializations and comparison to humans. Cereb Cortex. 2013;23(10):2429–36.
Elston G, Benavides-Piccione R, Elston A, Manger P, Defelipe J. Pyramidal cells in prefrontal cortex of Primates: marked differences in neuronal structure among species. Front Neuroanat. 2011;5:2.
Langfelder P, Horvath S. Fast R functions for robust correlations and hierarchical clustering. J Stat Softw. 2012;46(11):i11.
Guttman M, Garber M, Levin JZ, Donaghey J, Robinson J, Adiconis X, Fan L, Koziol MJ, Gnirke A, Nusbaum C, et al. Ab initio reconstruction of cell type-specific transcriptomes in mouse reveals the conserved multi-exonic structure of lincRNAs. Nat Biotechnol. 2010;28(5):503–10.
Reimand J, Arak T, Adler P, Kolberg L, Reisberg S, Peterson H, Vilo J. g:profiler-a web server for functional interpretation of gene lists (2016 update). Nucleic Acids Res. 2016;44(W1):W83–9.
Zhang Y, Chen K, Sloan SA, Bennett ML, Scholze AR, O'Keeffe S, Phatnani HP, Guarnieri P, Caneda C, Ruderisch N, et al. An RNA-sequencing transcriptome and splicing database of glia, neurons, and vascular cells of the cerebral cortex. J Neurosci. 2014;34(36):11929–47.
Leek JT, Johnson WE, Parker HS, Jaffe AE, Storey JD. The sva package for removing batch effects and other unwanted variation in high-throughput experiments. Bioinformatics. 2012;28(6):882–3.
Clancy B, Darlington RB, Finlay BL. Translating developmental time across mammalian species. Neuroscience. 2001;105(1):7–17.
Yuan Y, Chen Y-PP, Ni S, Xu AG, Tang L, Vingron M, Somel M, Khaitovich P. Development and application of a modified dynamic time warping algorithm (DTW-S) to analyses of primate brain expression time series. BMC bioinformatics. 2011;12:347.
Faraway JJ. Practical regression and ANOVA using R; 2002.
Zhang B, Gaiteri C, Bodea LG, Wang Z, McElwee J, Podtelezhnikov AA, Zhang C, Xie T, Tran L, Dobrin R, et al. Integrated systems approach identifies genetic nodes and networks in late-onset Alzheimer's disease. Cell. 2013;153(3):707–20.
McKenzie AT, Katsyv I, Song WM, Wang M, Zhang B. DGCA: a comprehensive R package for differential gene correlation analysis. BMC Syst Biol. 2016;10(1):106.
Butler A, Hoffman P, Smibert P, Papalexi E, Satija R. Integrating single-cell transcriptomic data across different conditions, technologies, and species. Nat Biotechnol. 2018;36(5):411–20.
Loytynoja A. Phylogeny-aware alignment with PRANK. Methods Mol Biol. 2014;1079:155–70.
Castresana J. Selection of conserved blocks from multiple alignments for their use in phylogenetic analysis. Mol Biol Evol. 2000;17(4):540–52.
Yang Z, dos Reis M. Statistical properties of the branch-site test of positive selection. Mol Biol Evol. 2011;28(3):1217–28.
This work was supported by the Animal Branch of the Germplasm Bank of Wild Species, Chinese Academy of Sciences (the Large Research Infrastructure Funding).
This work was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB13000000, XDB38040400), National Key R&D Program of China (2017YFA0505500), National Natural Science Foundation of China (31671325, 31822048, 31771476, 31930022), and the Bureau of Science and Technology of Yunnan Province (2019FI010). The funding bodies played no role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript. Y.L. was supported by the Young Academic and Technical Leader Raising Foundation of Yunnan Province.
Ming-Li Li, Hui Tang, Luonan Chen, Yan Li and Dong-Dong Wu contributed equally to this work.
State Key Laboratory of Genetic Resources and Evolution, Kunming Institute of Zoology, Chinese Academy of Sciences, Kunming, 650223, Yunnan, China
Ming-Li Li, Yong Shao, Ming-Shan Wang, Hai-Bo Xu, Sheng Wang, David M. Irwin, Adeniyi C. Adeola & Dong-Dong Wu
Kunming College of Life Science, University of the Chinese Academy of Sciences, Kunming, 650223, Yunnan, China
Ming-Li Li, Yong Shao, Ming-Shan Wang, Hai-Bo Xu, Adeniyi C. Adeola & Dong-Dong Wu
State Key Laboratory of Cell Biology, Shanghai Institute of Biochemistry and Cell Biology, Center for Excellence in Molecular Cell Science, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Shanghai, 200031, China
Hui Tang, Tao Zeng & Luonan Chen
Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, M5S 1A8, Canada
David M. Irwin
Banting and Best Diabetes Centre, University of Toronto, Toronto, Ontario, M5G 2C4, Canada
Key Laboratory of Systems Biology, Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, Chinese Academy of Sciences, Hangzhou, 310024, China
Luonan Chen
Center for Excellence in Animal Evolution and Genetics, Chinese Academy of Sciences, Kunming, 650223, Yunnan, China
Luonan Chen & Dong-Dong Wu
State Key Laboratory for Conservation and Utilization of Bio-Resource, Yunnan University, Kunming, 650091, Yunnan, China
D.D.W. led the project. D.D.W., Y.L., and L.N.C. designed and conceived the study. L.M.L. performed primary analyses. H.T. conducted DNB analysis. L.M.L., Y.S., and M.S.W. contributed to the positive selection analysis of RBFOX1. L.M.L., H.B.X., and S.W. contributed to the single-cell analysis. D.D.W., L.M.L., D.M.I., and A.A. prepared the paper. All authors read and approved the manuscript.
Correspondence to Luonan Chen or Yan Li or Dong-Dong Wu.
Additional file 1: Supplementary Tables S1–S12.
Additional file 2: Supplementary Figure S1–S5.
Li, ML., Tang, H., Shao, Y. et al. Evolution and transition of expression trajectory during human brain development. BMC Evol Biol 20, 72 (2020). https://doi.org/10.1186/s12862-020-01633-4
Macaques
Expression trajectory
Brain evolution
Evolutionary developmental biology and morphology
Are Changes in Physical Work Capacity Induced by High-Intensity Functional Training Related to Changes in Associated Physiologic Measures?
Derek A. Crawford, Nicholas B. Drake, Michael J. Carper, Justin Deblauw, Katie M. Heinrich
Subject: Biology, Physiology Keywords: high-intensity functional training; work capacity; performance
High-Intensity Functional Training (HIFT) is a novel exercise intervention that may test body systems in a balanced and integrated fashion by challenging individuals' abilities to complete mechanical work. However, research has not previously determined whether physical work capacity is distinct from traditional physiologic measures of fitness. Twenty-five healthy men and women completed a six-week HIFT intervention, with physical work capacity and various physiologic measures of fitness assessed pre- and post-intervention. At baseline, these physiologic measures of fitness (e.g., aerobic capacity) were significantly associated with physical work capacity, and this relationship was even stronger at the post-intervention assessment. Further, there were significant improvements across these physiologic measures in response to the delivered intervention. However, the change in these physiologic measures failed to predict the change in physical work capacity induced via HIFT. These findings point to the potential utility of HIFT as a unique challenge to individuals' physiology beyond traditional resistance or aerobic training. Elucidating the translational impact of increasing work capacity via HIFT may be of great interest to health and fitness practitioners, ranging from strength and conditioning coaches to physical therapists.
Relationship of Work Context and Work Stress among Sonographers in Riyadh, KSA
Uzma Zaidi, Lena F. Hammad, Salwa S. Awad, Safaa M. A. Elkholi, Hind D. Qasem
Subject: Behavioral Sciences, Clinical Psychology Keywords: work context; work conditions; work stress; job satisfaction; lifestyle; sonographers; ergonomics
Work context is essential to understand in relation to handling stress at work, which ultimately creates a feeling of satisfaction or dissatisfaction among health professionals. The current study was conducted to investigate the relationship between work context and work stress among sonographers (n=153) in Riyadh, Saudi Arabia. Additionally, the study provided a gender-based comparison of both variables among sonographers. Work context was measured by administering the work context subscale derived from the Work Design Questionnaire, whereas work stress was measured by the Job Stress Scale. In addition, the relationship of lifestyle with work context and work stress was explored. Data were collected through survey research forms. Results revealed a significant relationship between work context and work stress (r=.251, p=.002). Among lifestyle variables, perceived good health (r=.214, p=.008) and sleep (r=.242, p=.003) were positively related to satisfaction with work, whereas a strong positive correlation was found between work context and frequency of physical activity (r=.255, p=.005). No significant difference was found between male and female sonographers. The findings of this study contribute to evaluating the working conditions of sonographers in relation to work stress. Effective strategies for better working settings, as well as strategies for achieving satisfaction at work, are discussed to enhance the performance of sonographers.
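The associations reported above are plain Pearson correlations, so a minimal sketch of how such coefficients and their p-values can be computed is given below; the column names and the toy scores are illustrative assumptions, not the study's data.

# Minimal sketch: Pearson correlations of the kind reported above.
# The column names and values are illustrative assumptions, not study data.
import pandas as pd
from scipy.stats import pearsonr

df = pd.DataFrame({
    "work_context": [3.2, 2.8, 4.1, 3.6, 2.9, 3.8, 3.1, 4.0],
    "work_stress":  [2.9, 2.5, 3.8, 3.2, 2.7, 3.5, 3.0, 3.9],
    "physical_activity": [3.5, 3.0, 4.2, 3.9, 2.8, 4.0, 3.2, 4.1],
})

for x, y in [("work_context", "work_stress"), ("work_context", "physical_activity")]:
    r, p = pearsonr(df[x], df[y])          # two-sided test of H0: rho = 0
    print(f"r({x}, {y}) = {r:.3f}, p = {p:.3f}")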
Older Physical Education Teachers' Wellbeing at Work and its Challenges
Henry Lipponen, Mirja Hirvensalo, Kasper Salin
Subject: Medicine & Pharmacology, Sport Sciences & Therapy Keywords: physical education; older employee; ageing; work ability; coping at work; wellbeing at work
This article examines older physical education (PE) teachers' wellbeing over the course of their careers in Finland. The study highlights challenges to physical and mental functioning as well as how teachers respond to these challenges. The six interviewees were PE teachers over 55 years of age whose careers had lasted for more than 30 years. Qualitative methods were used in the collection, transcription and analysis of the research data. The qualitative analysis consisted of a series of interpretations that visualised the world described by the interviewees. All the research participants had physical problems that affected their teaching and made them consider a potential career change. To be able to teach, the teachers adapted their ways of working according to the challenges brought by age and injuries. The research participants found that the challenges caused by musculoskeletal problems and ageing were an inevitable part of the profession. They emphasised the positive sides of the work: the profession permits varied workdays. In addition, the teachers noted that their work provides them with opportunities to remain physically fit. Teaching health education is a means to lighten the workload of older teachers. PE teachers enjoy their profession and are dedicated to it, despite all the challenges. The interviewed participants clearly experienced work engagement. Our development proposal for teacher education is that future PE teachers be informed about the risks involved in the profession. Such information helps young teachers reflect proactively on measures to maintain their functioning during their career and on perspectives related to their ways of working.
Offices after the COVID-19 Pandemic and Changes in Perception of Flexible Office Space
Matus Barath, Dusana Alshatti Schmidt
Subject: Social Sciences, Organizational Economics & Management Keywords: work environment; employers; office space; remote work; COVID-19
The pandemic is fast moving, accelerating rapid changes that lead to new challenges and leave organizations suffering the impact. A big mark has been left on workplaces, the places where we do business, because the ongoing shift to remote work challenges the role of the office. It is highly possible that as this change progresses, it is not only the workplace that will change its design, but also the way in which work is planned, organized, done and controlled. However, as the restrictions ease, questions appear: What is the potential of office sustainability? How has the perception of flexible office space changed due to the COVID-19 pandemic? This paper used an online survey as a quantitative research method. In this paper, we looked at the employer's vision of the office. We investigated employers' perspectives on where and in what settings work will be done in the post-pandemic time. Specifically, we discussed the changes employers will apply in terms of work environment and office layout. The findings suggest that an increasingly mobile workforce and the expansion of the new workstyle will not mean an office exodus, but will certainly have an impact on office utilization.
Working from Home, Telework, and Psychological Wellbeing? A Systematic Review
Joseph Crawford
Subject: Social Sciences, Business And Administrative Sciences Keywords: remote work; telework; systematic literature review; work design; workforce planning
Online: 2 September 2022 (05:54:08 CEST)
The practice of telework, remote work, and working from home has grown significantly across the pandemic era (2020+). These practices offer new ways of working but come with a lack of clarity as to the role they play in supporting the wellbeing of staff. (1) Background: the purpose of this study is to examine the current literature on the wellbeing outcomes and effects of telework; (2) Methods: this study adopts a systematic literature review from 2000-2022 using the PRISMA approach and thematic analysis guided by the United Nations Sustainable Development Goals (Wellbeing, Decent Work, Gender Equality, and Inclusive Production); (3) Results: it was evident that there is a lack of clarity on the actual effects of telework on employee wellbeing, but it appeared to have a generally positive effect on the short-term wellbeing of staff and to create more flexible and proactive work design opportunities; (4) Conclusions: there is a need for more targeted research into work designs that support the wellbeing and productivity of staff, and that consider the environmental sustainability changes from reduced office and onsite work and increased working from home.
Sustainable Development of an Individual as a Result of Mutual Enrichment of Professional and Personal Life
Katarzyna Mikołajczyk
Subject: Social Sciences, Accounting Keywords: work-life balance; work-life enrichment; outside-of-work activity; sustainable human capital development; COVID-19 pandemic
Nowadays, the development of civilization requires a vision of balancing the interests of employees and employers in the sphere of work as never before. Work-life balance is directly linked to social sustainability. The aim of this article is to analyse various dimensions of mutual enrichment of the professional and private life of an individual and to describe how positive experiences in professional and non-professional life influence the improvement of satisfaction, health and achievements, thus enabling the sustainable development of the individual. The conducted research was of a qualitative nature. Thematic exploration was used to analyse the findings of 34 in-depth interviews with experienced HR managers and employees at various levels of enterprises in Poland. The research shows that the work and personal life of the respondents interact, complement, and enrich in different ways, depending on the stage of the employee's life. Habits developed by practicing a specific sport discipline or other type of hobby are helpful in the effective implementation of professional tasks. Also, non-professional interests, including communing with culture and art have a positive impact on professional activities. On the other hand, the respondents emphasized that thanks to their professional activities, specific to the type of work they perform, they are sometimes more extroverted, meticulous, organized and consistent when performing activities outside of work and in other aspects of private life.
Discipline and Work Environment Affect Employee Productivity: Evidence From Indonesia
Rusdiyanto Rusdiyanto
Subject: Social Sciences, Accounting Keywords: Discipline; Work Environment; Productivity
Online: 6 May 2021 (16:11:55 CEST)
Objective: This paper aims to test and evaluate the effect of discipline and work environment on the employee productivity of state-owned public bodies. Design/methodology/approach: This paper uses a quantitative survey approach. The survey was conducted on the population of employees of state-owned public agency companies, with samples drawn from that population, to identify how the discipline and work environment variables can affect the employee productivity variable. The influence of the discipline and work environment variables on employee productivity was analysed using a statistical regression approach; this method describes and tests how much influence the discipline and work environment variables have on the employee productivity variable. Findings: The findings of this study show that discipline has an influence on the productivity of employees of publicly owned companies, that the work environment has an influence on the productivity of employees of publicly owned companies, and that, taken together, discipline and the work environment have an influence on the productivity of employees of publicly owned companies. Practical implications: The results of the study are recommended to employees for improving the effectiveness and efficiency of the performance of state-owned public bodies. Originality: Previous research tested the influence of discipline and work environment on the productivity of employees of manufacturing companies listed on the Indonesia Stock Exchange, and concluded that discipline and work environment influence the work productivity of those employees. This research takes state-owned, publicly owned companies as its object of study.
Too Committed to Switch off – Capturing and Organizing the Full Range of Work-Related Rumination from Detachment to Overcommitment
Oliver Weigelt, J. Charlotte Seidel, Lucy Erber, Johannes Wendsche, Yasemin Z. Varol, Gerald M. Weiher, Petra Gierer, Claudia Sciannimanica, Richard Janzen, Christine J. Syrek
Subject: Behavioral Sciences, Applied Psychology Keywords: work-related rumination; overcommitment; psychological detachment; burnout; irritation; problem-solving pondering; positive work reflection; negative work reflection; affective rumination; satisfaction with life
Work-related thoughts in off-job time have been studied extensively in occupational health psychology and related fields. We provide a focused review of research on overcommitment – a component within the effort-reward imbalance model – and aim to connect this line of research to the most commonly studied aspects of work-related rumination. Drawing on this integrative review, we analyze survey data on ten facets of work-related rumination, namely (1) overcommitment, (2) psychological detachment, (3) affective rumination, (4) problem-solving pondering, (5) positive work reflection, (6) negative work reflection, (7) distraction, (8) cognitive irritation, (9) emotional irritation, and (10) inability to recover. First, we leverage exploratory factor analysis to self-report survey data from 357 employees to calibrate overcommitment items and to position overcommitment within the nomological net of work-related rumination constructs. Second, we leverage confirmatory factor analysis to self-report survey data from 388 employees to provide a more specific test of uniqueness vs. overlap among these constructs. Third, we apply relative weight analysis to quantify the unique criterion-related validity of each work-related rumination facet regarding (1) physical fatigue, (2) cognitive fatigue, (3) emotional fatigue, (4) burnout, (5) psychosomatic complaints, and (6) satisfaction with life. Our results suggest that several measures of work-related rumination (e.g., overcommitment and cognitive irritation) can be used interchangeably. Emotional irritation and affective rumination emerge as the strongest unique predictors of fatigue, burnout, psychosomatic complaints, and satisfaction with life. Our study assists researchers in making informed decisions on selecting scales for their research and paves the way for integrating research on effort-reward imbalance and work-related rumination.
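As a rough illustration of the exploratory step described above, the sketch below fits a 10-factor exploratory factor analysis to simulated item responses using scikit-learn's FactorAnalysis; the item count, the simulated data and the choice of library are assumptions, not the authors' exact procedure.

# Minimal sketch of an exploratory factor-analysis step like the one described above.
# Item names and the simulated responses are assumptions for illustration only.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items = 357, 30          # e.g., 3 items per rumination facet
X = rng.normal(size=(n_respondents, n_items))

fa = FactorAnalysis(n_components=10, rotation="varimax")  # rotation requires scikit-learn >= 0.24
fa.fit(X)

loadings = fa.components_.T               # items x factors loading matrix
print(loadings.shape)                     # (30, 10)
print(np.round(loadings[:3], 2))          # loadings of the first three items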
Working Paper BRIEF REPORT
Experiences of being a Couple and Working by Shifts in the Mining Industry: Continuities
Jimena Silva, Pablo Zuleta, Estefany Castillo, Tarut Segovia-Chinga
Subject: Social Sciences, Sociology Keywords: couple; shift work; gender; Chile
This study seeks to understand, from a gender perspective, the experiences of mining couples in Antofagasta, Chile, especially the negotiation between their intimate lives and the absences of their partners due to the shift work modality. We analyzed testimonies from men and women living in Antofagasta, considered one of the three largest mining regions in the world. Among the main findings, power relations based on the hegemonic gender model supported by the sexual division of labor are identified, which persist in this mining area, despite progress in equity issues in Chile. We propose that, although there are differences between the discourses of men and women and their subjective positioning, both actively collaborate with the reproduction of social gender relations marked by male domination. We observe that this way of living as a couple is associated with the organization of mining work, which is central to the reproduction of the gender order with a hetero-patriarchal tone.
Psychological Resilience and Occupational Injuries
Simo Salminen, Pia Perttula, Vuokko Puro
Subject: Keywords: work accidents; drivers; waste; Finland
Resilience embodies the personal qualities that enable one to thrive in the face of adversity. A previous Italian study showed that injured workers had a lower level of resilience than non-injured workers. The aim of this paper is to examine the relationship between occupational injuries and psychological resilience. The subjects were 197 drivers from two Finnish waste transport companies. As part of a larger questionnaire, they completed the Connor-Davidson Resilience Scale, which consists of 25 items. The drivers reported their occupational injuries during the last three years. The drivers involved in occupational injuries had a higher score (average 69.3) on the Connor-Davidson Resilience Scale than the drivers who avoided injuries (67.7). According to Student's t-test, the difference between the groups was highly significant (t = 40.44, df = 196, p<0.001). The result of this study contradicts the earlier Italian study. One explanation may be that the Italian study was conducted in a traumatic context with seriously injured patients, whereas the waste transport drivers were rather young and fit males who had suffered only minor injuries.
Telecommuting, Off-Time Work, and Intrusive Leadership in Workers' Well-Being
Nicola Magnavita, Giovanni Tripepi, Carlo Chiorri
Subject: Medicine & Pharmacology, Allergology Keywords: smart work; psychosocial stressors; health promotion; work-related stress; Covid-19; anxiety; depression; happiness
Telecommuting is a flexible form of work that has progressively spread over the last 40 years and that has been strongly encouraged by the measures taken to limit the COVID-19 pandemic. There is still limited evidence on the effects it has on workers' health. In this survey we invited 905 workers of companies that made limited use of telework to fill out a questionnaire evaluating: intrusive leadership of managers (IL), requests for work outside traditional hours (OFF-TAJD), workaholism (BWAS), effort/reward imbalance (ERI), happiness, and common mental issues (CMIs), namely anxiety and depression assessed by the Goldberg scale (GADS). The interaction between these variables was studied by structural equation modeling (SEM). Intrusive leadership and working after hours were significantly associated with occupational stress. Workaholism is a relevant moderator of this interaction: intrusive leadership significantly increased the stress of workaholic workers. Intrusive leadership and overtime work were associated with reduced happiness and with anxiety and depression. These results indicate the need to guarantee the right to disconnect in order to limit the effect of the OFF-TAJD. In addition, companies should implement policies to prevent intrusive leadership and workaholism.
Continuous Health Promotion and Participatory Ergonomics in a Small Company
Nicola Magnavita
Subject: Medicine & Pharmacology, Other Keywords: workplace; health promotion; work-related stress; anxiety; depression; participatory ergonomics; wellbeing; best practice; work organization
The workplace is an ideal setting for health promotion. The regular medical examination of workers enables us to screen for numerous diseases, spread good practices and correct lifestyles, and obtain a favourable risk/benefit ratio. The continuous monitoring of the level of workers' wellbeing using a holistic approach that goes beyond the simple prevention of occupational risks enables us to promptly identify problems in work organization and the company climate. Problems of this kind can be adequately managed by using a participatory approach. In this study participatory ergonomics groups were used to improve occupational life in a small company. After intervention we observed a reduction in levels of perceived occupational stress measured with the effort / reward imbalance model, and an improvement in psychological wellbeing assessed by means of the Goldberg anxiety / depression scale. Although the limited size of the sample calls for a cautious evaluation of this study, the GEP© strategy proved to be a useful tool due to its cost-effectiveness.
Intersectional Stigma, Identity, and Culture: A Grounded Theory of Female Escort Perspectives from Brazil and Pakistan
Belinda Brooks-Gordon, Nasra Poli
Subject: Social Sciences, Sociology Keywords: sex work; stigma; intersectionality; migrant; culture
Intersectional experiences, socio-cultural meanings, ethnic traditions and morals compound stigma-related stress (Jackson et al., 2020; Schmitz 2019). Sex workers are subject to various stigmatizing forces which can lead to secrecy, isolation and a lack of social and cultural support (Koken 2012). Stigmatizing forces include structural humanitarian governance and aid interventions that conflate migration and sex work with insidious constraints and coercion. This study explored how migrant female sex workers from distinctive ethnic cultures manage their identity on a day-to-day basis in relation to the separation of work and home life. Methods: The perspectives of female sex workers were collected via a series of in-depth semi-structured interviews. The inclusion criteria were that the women had worked in sex work for over 18 months, defined their involvement in sex work as voluntary, and were over 18 years of age. The perspectives of seven women from South Asian (Pakistani), Brazilian, and British backgrounds were analyzed using Grounded Theory (Glaser and Strauss, 1967). Ethnicity was considered in order to explore how the women experienced stigma, how it impacted the management of their identity, and how the process of change occurred. Results: The women used a variety of methods to maintain boundaries between work and home life and processes to switch into a role; all experienced stigma and tried to deal with it in ways such as concealment from friends and family. Two core categories and properties emerged from the data: participants felt guilt and/or shame, but only the South Asian participants spoke of this with reference to their culture and religion. Conclusion: It was not migration per se but rather the relationship of migration to culture that was key to identity management. Participants reflected that, as their country was considered a collectivist one with interdependent thought, any negativity felt could be reflected not only on the individual but also on the entire family. For these reasons, Pakistani sex workers were subject to more complex stigmatizing forces, shame and guilt as regards risk and exposure. The discussion focuses on the processes and management strategies used to extend social and cultural support.
Personality and the Moderating Effect of Mood on Verbal Aggressiveness Risk Factor from Work Activity
María del Mar Molero Jurado, María del Carmen Pérez-Fuentes, Ana Belén Barragán Martín, María del Mar Simón Márquez, África Martos Martínez, José Jesús Gázquez Linares
Subject: Behavioral Sciences, Social Psychology Keywords: personality; emotional aspects; communication; work activity
One of the trends in current research in psychology explores how personal variables can determine a person's communication style. Our objective was to determine the moderating effect of Mood on the relationship between the five big personality traits and an aggressive verbal communication style, a risk factor arising from work activity, in a sample of nursing professionals. This study used a quantitative descriptive design. The final sample comprised 596 nurses aged 22 to 56 years. An ad hoc questionnaire was used to collect sociodemographic data, together with the 10-item Big Five Inventory, the Communication Styles Inventory, and the Brief Emotional Intelligence Inventory for Senior Citizens. This study showed that, for nursing professionals, the "Agreeableness", "Conscientiousness" and "Neuroticism" traits have a close relationship with aggressive verbal communication. Even though Mood moderates this relationship, it is only significant for individuals with high scores in "Neuroticism". Because personality dimensions are considered relatively stable over time and consistent from one situation to another, organizations should hold workshops and other practical activities to train workers in communication skills and Emotional Intelligence in order to promote the health of employees and their patients and to avoid this risk factor arising from work activity in nursing.
Striking a Balance between Work and Play: The Effects of Work-life Interference and Burnout on Faculty Turnover Intentions and Career Satisfaction
Sheila A. Boamah, Hanadi Hamadi, Farinaz Havaei, Hailey Smith, Fern Webb
Subject: Medicine & Pharmacology, Nursing & Health Studies Keywords: Burnout; career satisfaction; COVID-19; faculty shortage; nursing faculty; turnover intentions; work environment; work-life interference
Online: 10 January 2022 (13:58:18 CET)
The interactions between work and personal life are important for ensuring well-being, especially during COVID-19, when the lines between work and home are blurred. Work-life interference/imbalance can result in work-related burnout, which has been shown to have negative effects on faculty members' physical and psychological health. Although our understanding of burnout has advanced considerably in recent years, little is known about the effects of burnout on nursing faculty turnover intentions and career satisfaction. Thus, this study aimed to test a hypothesized model examining the effects of work-life interference on nursing faculty burnout (emotional exhaustion and cynicism), turnover intentions and, ultimately, career satisfaction. A predictive cross-sectional design was used. An online national survey of nursing faculty members was administered throughout Canada in summer 2021. Nursing faculty who held full-time or part-time positions in Canadian academic settings were invited via email to participate in the study. Data were collected from an anonymous survey housed on Qualtrics. Descriptive statistics and reliability estimates were computed, and the hypothesized model was tested using structural equation modeling. The data suggest that work-life interference significantly increases burnout, which contributes to both higher turnover intentions and lower career satisfaction. Turnover intentions, in turn, were negatively associated with career satisfaction. The findings add to the growing body of literature linking burnout to turnover and dissatisfaction, highlighting key antecedents and drivers of burnout among nurse academics. These results suggest suitable areas for the development of interventions and policies within the organizational structure to reduce the risk of burnout during and post-COVID-19 and to improve faculty retention.
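As a rough sketch of the hypothesized model (work-life interference -> burnout -> turnover intentions and career satisfaction), the snippet below chains ordinary least-squares regressions as a simplified stand-in for full structural equation modeling; the variable names and simulated data are assumptions, not the study's measures.

# Simplified path-analysis sketch of the hypothesized model above.
# Chained OLS regressions stand in for full SEM; column names are assumed.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
wli = rng.normal(size=n)
burnout = 0.6 * wli + rng.normal(scale=0.8, size=n)
turnover = 0.5 * burnout + rng.normal(scale=0.8, size=n)
satisfaction = -0.4 * burnout - 0.3 * turnover + rng.normal(scale=0.8, size=n)
df = pd.DataFrame({"wli": wli, "burnout": burnout,
                   "turnover": turnover, "satisfaction": satisfaction})

m1 = smf.ols("burnout ~ wli", df).fit()                     # path: interference -> burnout
m2 = smf.ols("turnover ~ burnout", df).fit()                # path: burnout -> turnover
m3 = smf.ols("satisfaction ~ burnout + turnover", df).fit() # paths into career satisfaction
for m in (m1, m2, m3):
    print(m.params.round(2))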
Maritime Workers' Quality of Life: Organizational Culture, Self-efficacy, and Perceived Fatigue
Jae hee Kim, Soong-nang Jang
Subject: Medicine & Pharmacology, Other Keywords: quality of work life; organizational culture; organizational support; self-efficacy; maritime workers; culture-work-health model
Using the culture-work-health model, this study investigates the factors influencing the quality of life of maritime workers. This study conducted a survey of 320 maritime workers who have experience living and working on a ship for more than six months. This self-administered questionnaire included questions on organizational culture and support, self-efficacy, perceived fatigue, as well as the quality of work life. Organizational culture and self-efficacy were identified as factors affecting the quality of work life, while organizational support was found to have an indirect effect after passing through self-efficacy and perceived fatigue. The final model accounts for 63.1% of the variance in maritime workers' quality of life. As such, this study shows that self-efficacy is important for the quality of life of maritime workers, having both direct and indirect effects. Moreover, organizational support may prove the primary intervention point for relieving perceived fatigue and enhancing self-efficacy, thus improving the quality of work life.
Balancing Work and Life When Self-Employed: The Role of Business Characteristics, Time Demands and Gender Contexts
Emma Hagqvist, Susanna Toivanen, Claudia Bernhard-Oettel
Subject: Social Sciences, Sociology Keywords: contextual risk factors; gender; individual risk factors; life-work interference; self-employed; wellbeing; work-life interference
This study explores individual and contextual risk factors for the onset of work interfering with private life (WIL) and private life interfering with work (LIW) among self-employed men and women across European countries. It also studies the relationship between interference (LIW and WIL) and wellbeing among self-employed men and women and the effect of macro-level risk factors. Data from the fifth round of the European Working Conditions Survey were utilized, and a sample of self-employed men and women with active businesses was extracted. Multilevel regressions show that although business characteristics are important for the level of WIL, time demands are the most evident risk factor for WIL and LIW. There is a relationship between wellbeing and WIL and LIW, respectively, and time demands are the most important factor in this relationship. Gender equality in the labor market did not relate to the level of interference, nor did it mediate the relationship between interference and wellbeing. However, the main and most important risk factors for experiencing WIL and LIW, and for how interference relates to wellbeing, are gender relation processes in work and life, at both the individual and contextual level.
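A minimal sketch of the multilevel setup described above (self-employed individuals nested within countries, with a random intercept per country) is given below using statsmodels' MixedLM; the variable names and simulated values are assumptions, not the European Working Conditions Survey data.

# Minimal multilevel-regression sketch: individuals nested within countries,
# random intercept per country. Variable names and data are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_countries, n_per = 30, 40
country = np.repeat(np.arange(n_countries), n_per)
country_effect = rng.normal(scale=0.5, size=n_countries)[country]
time_demand = rng.normal(size=n_countries * n_per)
gender = rng.integers(0, 2, size=n_countries * n_per)
wil = 0.4 * time_demand + 0.1 * gender + country_effect + rng.normal(size=n_countries * n_per)

df = pd.DataFrame({"wil": wil, "time_demand": time_demand,
                   "gender": gender, "country": country})
model = smf.mixedlm("wil ~ time_demand + gender", df, groups=df["country"])
result = model.fit()
print(result.summary())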
My Mind is Working Overtime – Towards an Integrative Perspective of Psychological Detachment, Work-Related Rumination and Work Reflection
Oliver Weigelt, Petra Gierer, Christine J. Syrek
Subject: Behavioral Sciences, Applied Psychology Keywords: rumination; psychological detachment; perseverative cognition; work reflection; vitality; burnout; thriving; work engagement; employee well-being; mental health
In the literature on occupational stress and recovery from work several facets of thinking about work in off-job time have been conceptualized. However, research on the focal concepts is currently rather disintegrated. In this study we take a closer look at the five most established concepts, namely (1) psychological detachment, (2) affective rumination, (3) problem-solving pondering, (4) positive work reflection, and (5) negative work reflection. More specifically, we scrutinized (1) whether the five facets of work-related rumination are empirically distinct, (2) whether they yield differential associations with different facets of employee well-being (burnout, work engagement, thriving, satisfaction with life, and flourishing), and (3) to what extent the five facets can be distinguished from and relate to conceptually similar constructs, such as irritation, worry, and neuroticism. We applied structural equation modeling techniques to cross-sectional survey data from 474 employees. Our results provide evidence that (1) the five facets of work-related rumination are highly related, yet empirically distinct, (2) that each facet contributes uniquely to explain variance in certain aspects of employee well-being, and (3) that they are distinct from related concepts, albeit there is a high overlap between (lower levels of) psychological detachment and cognitive irritation. Our study contributes to clarify the structure of work-related rumination and extends the nomological network around different types of thinking about work in off-job time and employee well-being.
Preprint COMMUNICATION | doi:10.20944/preprints202205.0253.v1
No Signs of Excessive Burnout in Public Forest Officers Working in the Temperate Region During the Covid-19 Pandemic
Ernest Bielinis, Emilia Janeczko, Aneta Anna Omelan, Grażyna Furgała-Selezniow
Subject: Social Sciences, Sociology Keywords: burnout; foresters; OLBI; Sars-Cov-2; work
The COVID-19 pandemic has influenced the style of work of many people. However, it remains a question to what extent it has influenced the work of outdoor workers such as forestry workers. Therefore, the objective of this study was to assess the level of professional burnout among forestry workers, as a lack of burnout symptoms is a dimension of well-being at work. The Oldenburg Burnout Inventory was administered to 42 respondents. Both subscales of the inventory were reliable: Cronbach's alpha was 0.806 for disengagement and 0.865 for exhaustion. The mean number of overtime hours was 10.13 hours per month. The mean disengagement score of 2.24 was lower than the reference value of 2.25, but the mean exhaustion score of 2.33 was higher than the reference value of 2.1. Age correlated significantly with stage of work, as did exhaustion with stage of work, and overtime hours with disengagement. The average forestry officer had no symptoms of disengagement and slight symptoms of exhaustion. These results suggest that being in the forest can help prevent burnout. Overtime work and a heavy workload appear to threaten forestry workers' well-being, as they can cause exhaustion and lower commitment.
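The reliability figures quoted above are Cronbach's alpha values, and the sketch below shows how such a coefficient can be computed from an item-response matrix; the simulated responses and the assumption of eight items per OLBI subscale are illustrative, only the formula itself is standard.

# Sketch of the reliability estimate quoted above (Cronbach's alpha).
# The simulated item responses are assumptions; only the formula is fixed.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(3)
true_score = rng.normal(size=(42, 1))                                 # 42 respondents, as above
exhaustion_items = true_score + rng.normal(scale=0.7, size=(42, 8))   # assumed 8 items per subscale
print(round(cronbach_alpha(exhaustion_items), 3))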
Preparing Returning Workplaces for COVID-19: An Occupational Health Perspective
Mikhael Yosia, Nuri Purwito Adi
Subject: Medicine & Pharmacology, Allergology Keywords: COVID-19; occupational health; returning to work
Online: 13 April 2021 (11:15:02 CEST)
With the COVID-19 pandemic continuing and the resulting economic burden increasingly apparent, the Indonesian government began to prepare a "new normal" phase and to make peace with COVID-19. From this new decision arises the question of the readiness of businesses and the industrial sector to resume operations amid COVID-19. This article aims to provide concise and precise information, based on existing scientific studies and literature, about the preparations that businesses can make to operate safely amid COVID-19. From the literature reviewed, it can be concluded that the transmission and danger of the COVID-19 pandemic can be prevented through: creation of an infectious disease prevention and response plan, implementing basic infection prevention measures, policies and procedures for the proper identification and isolation of sick people, applying flexibility in policies, and protections in the workplace.
The Effect of Heat Treatments on the Fatigue Strength of H13 Hot Work Tool Steel
Ruhi Yeşildal
Subject: Materials Science, General Materials Science Keywords: Fatigue, heat treatment, hot-work tool steel
The fatigue strength of hot work steel depends on various factors, including the mechanical properties and behavior of the bulk and the surface under-layer and the microstructural features, as well as the heat treatments applied. The influence of a series of heat treatments on the fatigue strength of H13 hot work steel was investigated. Different preheating, quenching and tempering treatments were applied to four sets of specimens, and fatigue tests were conducted at room temperature using a rotating bending test machine. All heat treatments resulted in a certain improvement of the fatigue strength. The highest fatigue strength was obtained by applying a double tempering heat treatment (first tempering at 550 °C for two hours and second tempering at 610 °C for two hours) after the initial preheating and quenching. A single tempering treatment (550 °C for two hours after preheating and quenching) did not significantly improve the fatigue strength.
The Mediating Role of Perceived Stress in the Relationship of Self-Efficacy and Work Engagement in Nurses
María del Carmen Pérez-Fuentes, María del Mar Molero Jurado, Ana Belén Barragán Martín, María del Mar Simón Márquez, África Martos Martínez, José Jesús Gázquez Linares
Subject: Medicine & Pharmacology, Nursing & Health Studies Keywords: stress perceived; self-efficacy; engagement; work; nursing
Positive Occupational Health Psychology (POHP) examines the mechanisms that promote workers' health and wellbeing, in addition to risk factors arising from work activity. The aim of this study was to analyze the mediating role of perceived stress in the effect that self-efficacy has on engagement in nurses. The sample comprised 1777 currently working nurses. We administered the Utrecht Work Engagement Scale (UWES), the Perceived Stress Questionnaire and the General Self-Efficacy Scale. Following bivariate correlational analysis, multiple linear regression analysis, and simple and multiple mediation analyses, the results showed self-efficacy to be a powerful personal resource that positively predicts employees' engagement, although the effect diminishes when mediating variables of stress are present. We found differences in the way the different aspects of stress mediated the relationship between self-efficacy and the engagement dimensions. "Energy–joy" was the strongest mediating variable for all of the engagement dimensions, and this, together with "harassment–social acceptance", dampened the effect of self-efficacy on vigor and dedication, whereas "overload" was a mediator only for dedication. Because nurses work in a stressful environment, a risk factor arising from work activity, hospital management should design interventions to enhance their workers' personal resources and improve personal and organizational wellbeing.
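A minimal sketch of the kind of mediation test described above (self-efficacy -> perceived stress -> engagement) is given below, estimating the product-of-coefficients indirect effect with a percentile bootstrap; the variable names, effect sizes and simulated data are assumptions, not the study's results.

# Sketch of a simple mediation test using the product-of-coefficients
# indirect effect (a*b) with a percentile bootstrap. All values are simulated.
import numpy as np

rng = np.random.default_rng(4)
n = 500
self_eff = rng.normal(size=n)
stress = -0.5 * self_eff + rng.normal(scale=0.9, size=n)                 # path a
engage = 0.3 * self_eff - 0.4 * stress + rng.normal(scale=0.9, size=n)   # paths c' and b

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                 # slope of m ~ x
    b = np.linalg.lstsq(np.column_stack([x, m, np.ones_like(x)]), y, rcond=None)[0][1]
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)           # resample cases with replacement
    boot.append(indirect(self_eff[idx], stress[idx], engage[idx]))
print("indirect effect 95% CI:", np.percentile(boot, [2.5, 97.5]).round(3))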
Southern African Social Work Students' Acceptance of Rape Myths: Results from an Exploratory Study
John Matthews, Lisa Avery, Johanna Nashandi
Subject: Social Sciences, Sociology Keywords: Rape Myths; Africa; Social Work; Students; Attitudes
Online: 6 August 2018 (10:59:49 CEST)
Despite numerous interventions to promote gender equality, sub-Saharan Africa has one of the highest prevalence rates of non-partner sexual assault in the world, thus constituting a major social and public health issue in the region. As social workers frequently provide services to this population, an exploratory cross-sectional study was conducted to explore rape myth acceptance among undergraduate social work students studying in Namibia. Findings revealed the positive influence of social work education in reducing rape myth acceptance as well as highlighted the influence of age, gender, country of origin, self-identification as a feminist, and religiosity on rape myth acceptance among this population.
Stress and Burnout Among Social Workers in The VUCA world of COVID-19 Pandemic
Gabriela Dima, Luiza Meseșan Schmitz, Marinela Cristina Șimon
Subject: Social Sciences, Accounting Keywords: COVID-19; social work/er; stress; burnout; VUCA
This paper aims to contribute to the advancement of scientific knowledge about the impact of the COVID-19 pandemic on social workers and the social work profession in Romania. Research has shown that social work is a profession at high risk of developing burnout syndrome, which has many detrimental effects on both social workers and the clients they serve. Two conceptual models are used to frame the discussion: the theoretical framework of VUCA (volatility, uncertainty, complexity, and ambiguity), to discuss the challenges of the unprecedented context the COVID-19 pandemic has created for social workers; and stress and burnout, to explain the negative impact of this period. Based on convergent mixed methods, the study sample consisted of 83 social workers employed in statutory and private social services in Romania, from different fields of intervention. Results show that 25.3% of respondents suffer from a high level of burnout and 44.6% scored in a range that indicates a medium level of burnout. A group of 31.1% have managed to handle stress factors in a healthy manner. The main stressors found are above all personal factors (fear of contamination, personal and family concerns) and work-related factors (workload, new legislative rules and decisions, inconsistency, instability, ambiguity of managerial decisions or even their absence or non-assumption, lack of clarity of working procedures, limited managerial and supervisory support, limited financial resources), and to a lesser extent client-related factors (lack of direct contact, two-way risk of contamination, managing beneficiaries' fears, difficulties related to technology and digital skills). The study results point to the importance of organizational support and of developing a self-care plan that helps protect against occupational stress and burnout. Recommendations are made that put forward the voice of fieldworkers and managers, fostering initiatives and applications of sustainability-based measures and activities designed to deal with the challenges of the VUCA environment.
Technical and Economic Aspects of Stone Pine (Pinus pinea L.) Maintenance in Urban Environments
Marcello Biocca, Pietro Gallo, Giulio Sperandio
Subject: Biology, Anatomy & Morphology Keywords: urban forestry; work analysis; residual biomass; pruning costs
The Italian Stone Pine (Pinus pinea L.) is one of the most employed ornamental trees in towns with Mediterranean climates. For example, in the city of Rome, Pinus is the most common genus, with more than 51,000 trees. This study investigates technical and economic features of maintenance operations of Stone Pines and evaluates the productivity and costs of the observed yards. Pruning and felling are the most frequent management operations of trees in towns and this study analyzes the features of these operations carried out in 14 work sites. The operations were carried out either with aerial platforms (19 trees) or ascending the crown by tree-climbing (6 trees). The operations were sampled with time studies (12 trees for pruning and 13 for felling). Work time was measured from the beginning of operations to the transport of the residual biomass to the collection and loading point, using centesimal stopwatches and video recording. The total residual biomass was weighed or assessed. Total observation time amounted to 63.1 hours. The evaluation of the costs of each work site considered the fixed and the variable costs and the costs for the labor force. A Multiple Linear Regression model (statistics: determination coefficient R2: 0.74, adjusted R2: 0.67, p-value < 0.001) which utilizes four regressors easily evaluable before the work, was adopted to predict the gross time of the operations. This paper can contribute to optimize trees maintenance methods in urban sites and to assess the potential residual wood biomass attainable from urban forestry maintenance in the city of Rome.
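As an illustration of the kind of multiple linear regression reported above (four regressors, easily evaluated before the work, predicting gross operation time), the sketch below fits an OLS model and computes the adjusted R^2 explicitly; the regressor names and simulated values are assumptions, not the study's measurements.

# Sketch of a four-regressor multiple linear regression with adjusted R^2.
# Regressor names and simulated values are assumptions, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 25   # number of observed trees
df = pd.DataFrame({
    "trunk_diam": rng.uniform(30, 90, n),
    "height": rng.uniform(10, 25, n),
    "crown_diam": rng.uniform(5, 15, n),
    "platform": rng.integers(0, 2, n),   # 1 = aerial platform, 0 = tree climbing
})
df["gross_time"] = (2 + 0.05 * df.trunk_diam + 0.1 * df.height
                    + 0.2 * df.crown_diam - 0.5 * df.platform
                    + rng.normal(scale=0.8, size=n))

fit = smf.ols("gross_time ~ trunk_diam + height + crown_diam + platform", df).fit()
k = 4                                                    # number of regressors
adj_r2 = 1 - (1 - fit.rsquared) * (n - 1) / (n - k - 1)  # adjusted R^2 by hand
print(round(fit.rsquared, 2), round(adj_r2, 2), round(fit.rsquared_adj, 2))  # the two adjusted values agree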
Effects of 3-Week Work-Matched High-Intensity Intermittent Cycling Training with Different Cadences on VO2max in University Athletes
Nobuyasu Tomabechi, Kazuki Takizawa, Keisuke Shibata, Masao Mizuno
Subject: Medicine & Pharmacology, Sport Sciences & Therapy Keywords: aerobic capacity, graded‑exercise test, total work-load
The aim of this study was to clarify the effects of 3-week work-matched high-intensity intermittent cycling training (HIICT) with different cadences on VO2max in university athletes. Eighteen university athletes performed HIICT at either 60 rpm (n = 9) or 120 rpm (n = 9). HIICT consisted of eight sets of 20-s exercise with a 10-s passive rest between sets. The initial training intensity was set at 135% of VO2max and was decreased by 5% every two sets. Athletes in both groups performed 9 sessions of HIICT over the 3 weeks. The total work-load and the achievement rate of the work-load calculated before the experiments were used for analysis in each group. VO2max was measured pre- and post-training. After 3 weeks of training, no significant differences in the total work-load or the achievement rate of the work-load were found between the two groups. VO2max increased similarly in both groups from pre- to post-training (p = 0.016), with no significant difference between the groups (p = 0.680). These results suggest that cadence during HIICT is not a training variable affecting the VO2max response.
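The work-matching logic described above can be made concrete with a short worked example: with intensity anchored to the power output at VO2max and reduced by 5% every two sets, the total mechanical work of a session is fixed in advance and therefore identical for both cadence groups; the 250 W power at VO2max below is an assumed value for illustration.

# Worked example of the "work-matched" protocol: per-set intensity and total work.
# The 250 W power at VO2max is an assumed value, not a figure from the study.
p_vo2max_watts = 250.0
set_duration_s = 20.0

intensities = [1.35 - 0.05 * (i // 2) for i in range(8)]   # 135, 135, 130, 130, ... percent of VO2max power
work_per_set_j = [frac * p_vo2max_watts * set_duration_s for frac in intensities]
total_work_kj = sum(work_per_set_j) / 1000.0

print([round(f * 100) for f in intensities])      # [135, 135, 130, 130, 125, 125, 120, 120]
print(round(total_work_kj, 1), "kJ per session")  # 51.0 kJ with the assumed 250 W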
Researching Teacher Work Motivation in Ghana through the lens of COVID-19
Michael Agyemang Adarkwah
Subject: Social Sciences, Education Studies Keywords: teacher motivation; work motivation; job satisfaction; COVID-19; Ghana
Teachers, particularly in developing contexts, were a vulnerable population during the COVID-19 pandemic. As natural parental figures for students, they had to reconcile the dual role of ensuring the safety and health of students with their own and their families' well-being. The external crisis of COVID-19 heightened the negative experiences of teachers in their work environments during both online and physical instruction. This qualitative phenomenological study involving thirty (30) secondary school teachers in Ghana took a comprehensive and fresh look at how COVID-19 impacted the work motivation of teachers. It was found that teachers suffered a great deal of stress in the wake of the pandemic and faced mounting concerns about their working conditions. The low morale of teachers precipitated by COVID-19 led them to develop attrition intentions. However, intrinsic and altruistic traits such as passion, a feeling of responsibility, and the desire to contribute to society and foster student development made teachers resilient to the deleterious effects of the pandemic and able to promote optimal teaching. Future studies should investigate the installation of support structures that strengthen the motivation of teachers in unforeseen crises.
Impact of Inclusive Leadership on Innovative Work Behavior: The Mediating Role of Job Crafting
Yinping Guo, Junge Jin, Sanghyuk Lim
Subject: Social Sciences, Business And Administrative Sciences Keywords: Inclusive leadership; Job crafting; Innovative work behavior; Belongingness; Uniqueness
The study aims to examine the mediating role of job crafting between inclusive leadership and innovative work behavior. The data were collected from 314 workers employed in China's small and medium-sized enterprises (SMEs) through a survey design. The data analysis was conducted through structural equation modeling using SPSS 26 and Mplus 8. Inclusive leadership was found to be related to the job crafting and innovative work behavior of employees, and job crafting was found to mediate the relationship between inclusive leadership and innovative work behavior. The study delineates the linking mechanism between inclusive leadership and innovative work behavior. Studying inclusive leadership in the context of Chinese culture is a powerful complement to inclusive leadership theory. This paper provides the managers of SMEs with significant managerial insights into how inclusive leadership can effectively motivate employees' innovative work behaviors.
The Impact of COVID-19 That Outbreak on the Quality Education vs. Role of Social Worker in the Context of Nepal
Rajesh Tamang, Som Nepali
Subject: Arts & Humanities, General Humanities Keywords: COVID-19; Quality education; social work; students; implication level
The article discusses the current situation of the novel coronavirus, COVID-19, which has disrupted every aspect of human life, including education. The rapidly escalating spread of COVID-19 has caused havoc in quality education, and every educational institution has been closed. According to a UNESCO report, 1.6 billion children have been affected by the closure of institutions across 191 countries. As an alternative, educational institutions have started blended learning and virtual classes in order to maintain a continuous learning environment for students. The article investigates the impact of COVID-19 on the quality of students' education in Nepal and the implications for social work. The findings of the study show that COVID-19 has had serious effects on students' learning environment and has revealed a huge gap in access to good education in Nepal. However, Nepal has also introduced policies to provide equal, quality education to all children through ICT and encourages social workers to participate actively in providing education to all groups of children in Nepal. Social work applies micro-, mezzo-, and macro-level practice to provide education for children in remote areas of Nepal.
Work Function Tuning in Hydrothermally Synthesized Vanadium-Doped MoO3 and Co3O4 Mesostructures for Energy Conversion Devices
Pietro Dalle Feste, Matteo Crisci, Federico Barbon, Marco Salerno, Filippo Drago, Mirko Prato, Silvia Gross, Teresa Gatti, Francesco Lamberti
Subject: Materials Science, Biomaterials Keywords: Metal oxide; doping; semiconductor; work function tuning; energy device
The wide interest in developing green energy technologies stimulates the scientific community to seek, for devices, new substitute material platforms with low environmental impact, ease of production and processing and long-term stability. The synthesis of metal oxide (MO) semiconductors fulfils these requirements and efforts are addressed at optimizing their functional properties, through improvement of charge mobility or energy level alignment. Two MOs have rising perspectives for application in light harvesting devices, mainly for the role of charge selective layers but also as light absorbers, namely MoO3 (an electron blocking layer) and Co3O4 (a small band gap semiconductor). The need to achieve better charge transport has prompted us to attempt doping strategies with vanadium (V) ions that, when combined with oxygen in V2O5, produce a high work function MO. We report on subcritical hydrothermal synthesis of V-doped mesostructures of MoO3 and of Co3O4, in which a tight control of the doping is exerted by tuning the relative amounts of reactants. We accomplished a full analytical characterization of these V-doped MOs that unambiguously demonstrates incorporation of the vanadium ions in the MO crystal lattice, as well as effects on the optical properties and work function. We foresee a promising future use of these materials as charge selective materials in energy devices based on multilayer structures.
COVID-19 Anxiety – A Longitudinal Survey Study of Psychological and Situational Risks among Finnish Workers
Iina Savolainen, Reetta Oksa, Nina Savela, Magdalena Celuch, Atte Oksanen
Subject: Medicine & Pharmacology, Allergology Keywords: COVID-19; mental health; anxiety, work; stress; personality; loneliness
Background: The COVID-19 crisis has changed the conditions of many people throughout the globe. One negative consequence of the ongoing pandemic is anxiety brought about by uncertainty and the COVID-19 disease itself. Increased anxiety is a potential risk factor for wellbeing at work. This study investigated psychological, situational, and socio-demographic predictors of COVID-19 anxiety using longitudinal data. Methods: A nationally representative sample of Finnish workers (N = 1308) was collected before and during the COVID-19 crisis. Eighty percent of the participants responded to the follow-up study (N = 1044). COVID-19 anxiety was measured with a modified Spielberger State–Trait Anxiety Inventory. Psychological and situational predictors included perceived loneliness, psychological distress, technostress, personality, social support received from the work community, and remote working. A number of socio-demographic factors were also investigated. Results: Perceived loneliness, psychological distress, technostress, and neuroticism were identified as robust psychological predictors of COVID-19 anxiety. Increases in psychological distress and technostress during the COVID-19 crisis predicted higher COVID-19 anxiety. A recent change of work field and decreased social support from the work community also predicted COVID-19 anxiety. Women and young people experienced higher anxiety. Conclusions: Different factors explain workers' COVID-19 anxiety. Increased anxiety can disrupt wellbeing at work, emphasizing organizations' role in maintaining an inclusive and caring work culture and providing technical and psychological support to workers during a crisis.
The Role of Inclusive Leadership Behaviours on Innovative Workplace Behaviour with Emphasis on the Mediator Role of Work Engagement
Dheyaa Falih Bannay, Mohammed Jabbar Hadi al-Thalami, Ahmed Abdullah Al–Shammari
Subject: Behavioral Sciences, Applied Psychology Keywords: innovative; inclusive leadership behaviour; work engagement; innovative workplace behaviour
(1) Background: Work creativity, manifested in innovative workplace behaviour (IWB) and employee work engagement, is fundamental to maintaining firms' sustainability and competitiveness. In this regard, this study aims at investigating the supporting effect of inclusive leadership on IWB and employee engagement through maximising employee vigour, dedication and absorption. (2) Methods: The study data were collected from questionnaires administered to 150 respondents working in mobile phone companies in southern and central Iraq. The statistical analyses were conducted with the Statistical Package for the Social Sciences (SPSS) and SmartPLS. In analysing the measurement model and testing the proposed hypotheses, the study found that inclusive leadership and work engagement were intimately connected to IWB. (3) Results: Work engagement played a mediating role between inclusive leadership and IWB. The questionnaire data indicated that inclusive leadership behaviours, such as openness, accessibility and availability, motivated subordinates to engage in IWB. (4) Conclusions: To promote IWB, company leaders need to engage their followers effectively by taking pride and satisfaction in employee output, which may aid employee engagement in the workplace and in IWB.
Modeling the Impact of Mentoring on Women's Work-Life Balance: A Grounded Theory Approach
Parvaneh Bahrami, Saeed Nosratabadi, Khodayar Palouzian, Szilárd Hegedűs
Subject: Social Sciences, Business And Administrative Sciences Keywords: mentoring; women studies; work-life balance; role management; grounded theory
The purpose of this study was to model the impact of mentoring on women's work-life balance. Indeed, this study considered mentoring as a solution to create a work-life balance of women. For this purpose, semi-structured interviews with both mentors and mentees of Tehran Municipality were conducted and the collected data were analyzed using constructivist grounded theory. Findings provided a model of how mentoring affects women's work-life balance. According to this model, role management is the key criterion for balancing work-life of women. In this model, antecedents of role management and the contextual factors affecting role management, the constraints of mentoring in the organization, as well as the consequences of effective mentoring in the organization are described. The findings of this research contribute to the mentoring literature as well as to the role management literature and provide recommendations for organizations and for future research.
Recognising the Embedded Child: Children's Participation, Child Protection Inequities and Cultural Capital in Child Protection
Emily Keddell
Subject: Social Sciences, Other Keywords: Child protection, social work, participation, child abuse, inequalities, cultural capital
Children's right to participation in child protection decision-making is supported by moral imperatives and international conventions. The fragmented implementation of this right reflects an already-conflicted discursive terrain that attempts to incorporate both children's agency and their need for protection. This article uses two key theoretical lenses to further examine this terrain: child welfare inequalities and cultural capital. These theories draw attention to how social inequities and cultural capital relating to culture and class affect how participation plays out. An unintended consequence of constructing children within a traditional liberal account of rights, within neoliberal and 'child focussed' policy paradigms, is the reduction of an acknowledgment of the culturally contested nature of an individualistic construction of children, excising children from their social backgrounds and promoting the notion of a 'universal child'. With a particular focus on class, culture and professional paradigms, I argue that the ways children's views are elicited, the content of those views and how they are interpreted become subject to a set of professional assumptions that tend to take little cognisance of the social background of children, including norms relating to class, ethnicity and the oppressive structural relations relating to those two factors. This process is shored up with concepts such as attachment theory, the 'adultification' of children of colour, the diminishing of Indigenous concepts of children and childhood, and the pre-eminence of the 'concerted cultivation' middle-class parenting style. The child's cultural worldview and manner of expressing it may clash with professional cultures that emphasise an ability for verbal expression, independence, and entitlement when negotiating preferences with representatives of powerful social institutions such as child protection systems. Many children may not comply with this expectation due to both cultural and class socialisation processes and the histories of the oppressive functions of child protection systems. The unspoken power of child protection organisations, which must engage in constant translation of children's cultural capital to ensure participation, may instead be better deployed to serve children's participation aims by devolving authority to affected communities. Communities reflecting children's own may be better able to offer recognition to children and enable their participation more effectively.
Preschool Teachers' Psychological Distress and Work Engagement During the COVID-19 Outbreak: The Protective Role of Mindfulness and Emotion Regulation
Mor Keleynikov, Joy Benatov, Rony Berger
Subject: Social Sciences, Education Studies Keywords: Teachers; Mindfulness; Emotion regulation; COVID-19; Work engagement; Emotional distress
The COVID-19 pandemic has dramatically affected the mental health and work environment of many labor sectors, including the educational sector. Our primary aim was to investigate preschool teachers' psychological distress and work engagement during the early stages of the COVID-19 outbreak, while examining the possible protective role of participating in a mindfulness-based intervention (C2C-IT) and of emotion regulation. Emotional distress, work engagement and the prevalence of COVID-19 concerns were evaluated among 165 preschool teachers in the early stages of the COVID-19 outbreak in Israel, using self-report questionnaires. Findings show that preschool teachers experienced increased emotional distress. Teachers who had participated in the C2C-IT intervention six months before the pandemic outbreak (N = 41) reported lower emotional distress, higher use of adaptive emotion regulation strategies and higher work engagement, compared to their counterparts who had not participated in the mindfulness training (N = 124). Emotion regulation strategies mediated the link between participation in the C2C-IT intervention and both emotional distress and work engagement. Teaching is a highly demanding occupation, especially during a pandemic; therefore, it is important to invest resources in empowering this population. According to the findings of the current study, implementation of a mindfulness-based intervention during the school year may benefit teachers' well-being, even during stressful events such as the COVID-19 pandemic.
Alternative Exhibition Spaces. Multi-criteria Method of Comparative Analysis
Joanna Stefańska, Paulina Kowalczyk, Agata Gawlak
Subject: Arts & Humanities, Anthropology & Ethnography Keywords: white cube; exhibition space; space; work; architecture; site specific; interaction
The aim of this article is to make a multi-criteria analysis of various exhibition spaces of an originally non-exhibition character and to determine how these spaces affect the selection of works and the exhibition concept. The analysis is based on the exhibitions of art objects at collective exhibitions in unconventional architectural spaces: commercial, i.e. the modern office building of PBG Gallery Skalar Office Centre in Poznań; post-industrial, i.e. the former Zakłady Przemysłu Ziemniaczanego Lubanta S.A.; and the historic interior of the "U Jezuitów" Gallery of the Cultural Integration Centre in Poznań. The multi-criteria comparative analysis shows a variety of features of the studied spaces as well as the relationship between architecture and art and their mutual interaction. The participatory role of the non-exhibition space in the process of creating an exhibition and selecting works has been proven. It has also been confirmed that the presentation of works of art in originally non-exhibition spaces creates a new quality of the artwork. Unconventional architectural space, when used for the exhibition of works of art, expands and strengthens the area of their influence through the interaction between the work and the architectural space. The specificity of the space adapted for exhibition needs, the presence and type of architectural details in the interior, the quantity and quality of light and its distribution in space, and the volume and colour of the interior determine the exhibition space and influence the shape of the exhibitions organised and the reception of the artworks. The only condition for the change of the original function of an architectural space into that of an exhibition space is a coherent artistic vision of the creator. This should take into account the appropriate selection of the exhibited objects, where the process of searching for the relationship between architecture and art determines the features of the architectural space as integral components influencing the realisation of the exhibition.
Human Resource Information System and Work Stress during COVID-19 Pandemic
Misnal Munir, Amaliyah Amaliyah, Moses Glorino Rumambo Pandin
Subject: Keywords: Hi-Tech HR; Human resources management; Work Stress; Covid-19
The increasingly rapid pace of technological development has become a tool for overcoming backwardness, renewing systems, and improving various aspects of life. For companies, the development of information and communication technology offers new innovations and breakthroughs for improving service quality. Technology and information provide various benefits; however, if not handled properly, they can become a boomerang for an organization. HRIS is the application of HRM functions through information and communication technology. The concept of HRIS as a digital form of HRM has gained prominence with the outbreak of the Covid-19 pandemic, which has caused large losses across all areas of life. The restriction of social interaction to prevent the wider spread of Covid-19, together with the rising number of casualties, has caused business activities to stall. This research uses quantitative methods with a sample of 100 randomly selected respondents. The study aims to examine how clarity of high-tech goals in human resource management, perceived benefits, perceived ease of use, and company conditions influence attitudes towards HRIS.
Social (in) Mobility and Social Work with Families with Children. Case Study of a Disadvantaged Microregion in Hungary
Andrea Rácz, Dorottya Sik
Subject: Social Sciences, Sociology Keywords: social work; families with children; child welfare services; social mobility
The aim of our study is to analyse the perceptions of families and of the social workers concerned. The research was conducted in an underprivileged and disadvantaged microregion in North Hungary. The main focus was the perception of the available health, educational, child welfare and social services and supports. The starting point was to enquire about the target group's knowledge of these services. The study examines the extent to which social work is able to provide support to disadvantaged, marginalized families with children, and how the dysfunctional operation of the system contributes to the perpetuation of the clients' life conditions. Analysing the quality of these services and supports is crucial to understanding the social mobility chances of the children living in this microregion. The results show that, without capability and talent development for the children and given the lack of welfare services, the mobility chances and opportunities of these families are extremely low in Hungary.
Evaluation of the Geometallurgical Indexes for Comminution Properties at Sarcheshmeh Porphyry Copper Mine
Saiwan Mohammadi, Bahram Rezai, Aliakbar Abdollahzadeh, Sayed Mojtaba Mortazavi
Subject: Materials Science, General Materials Science Keywords: geometallurgy; geometallurgical index; ore variability; bond work index; comminution; Sarcheshmeh
Geometallurgy has become an important tool to predict the process behavior of ores and to decrease the production risks caused by the variability of geological settings. In this paper, a geometallurgical index for the grinding properties of the ore is investigated. In a comprehensive study at the Sarcheshmeh porphyry copper mine, the geological features expected to affect the main process responses, including product grade-recovery and plant throughput, were investigated as possible geometallurgical indexes. The rock breakage variability in a ball mill grinding circuit, and its effects on plant throughput and energy consumption, are presented. Ninety samples were collected based on geological features including lithology, hydrothermal alteration, and geological structures. The samples were characterized using XRD, XRF, and electron and optical microscopy. A simulated test method for the Bond ball mill work index (BWI) was used to perform the comminution test. The results showed that BWI values vary from 5.67 kWh/t to 20.21 kWh/t. Examination of the possible correlations between BWI and the geological features showed that the key geological feature related to comminution variability is lithology. In addition, hydrothermal alteration becomes an influential parameter during periods when the plant is fed with a single lithology.
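For orientation, the sketch below shows how a Bond ball mill work index value translates into a specific grinding energy through Bond's third theory of comminution; the BWI extremes are the ones quoted above, while the feed and product sizes are illustrative assumptions rather than values from the study.

```python
def bond_specific_energy(bwi_kwh_t, f80_um, p80_um):
    """Bond's third theory: W = 10 * Wi * (1/sqrt(P80) - 1/sqrt(F80)),
    with W and Wi in kWh/t and the 80%-passing sizes in micrometres."""
    return 10.0 * bwi_kwh_t * (1.0 / p80_um ** 0.5 - 1.0 / f80_um ** 0.5)

# Illustrative grinding duty (assumed, not taken from the paper):
F80, P80 = 2000.0, 150.0  # micrometres
for bwi in (5.67, 20.21):  # BWI extremes reported in the abstract
    w = bond_specific_energy(bwi, F80, P80)
    print(f"BWI = {bwi:5.2f} kWh/t  ->  specific energy ~ {w:4.1f} kWh/t")
```

For the same duty, the hardest sample would require roughly 3.6 times the grinding energy of the softest one, which is why such ore variability matters for throughput planning.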
Authentic Student Laboratory Classes in Science Education
Jurgen Schulte
Subject: Physical Sciences, Other Keywords: authentic learning; work integrated learning; curriculum development; laboratory classes; proxemics
The traditional hands-on nature of science laboratory classes creates a sense of immediacy and a presence of authenticity in such learning experiences. The handling of physical objects in a laboratory class, and the immediate responses provided by these experiments, are certainly real-life observations, yet may be far from instilling an authentic learning experience in students. This paper explores the presence of authenticity in hands-on laboratory classes in introductory science laboratories. With our own laboratory program as a backdrop, we introduce four general types of hands-on laboratory experiences and assign degrees of authenticity according to the processes and student engagement associated with them. We present a newly developed type of hands-on experiment which takes a somewhat different view of the concept of hands-on in a laboratory class. A proxemics-based study of teacher-student interactions in the hands-on laboratory classes provides insights into the design of the different types of laboratory classes and the pedagogical presumptions we made. A step-by-step guide on how to embed industry engagement in the curriculum and the design of an authentic laboratory program is presented to highlight some minimum requirements for the sustainability of such a program and pitfalls to avoid.
Preprint CASE REPORT | doi:10.20944/preprints201810.0465.v1
Improving Distribution Process Using Lean Manufacturing and Simulation: A Case of Mexican Seafood Packer Company
Julián I. Aguilar-Duque, Juan L. Hernandez-Arellano, Cesar Omar Balderrama-Armendariz, Guillermo Amaya-Parra, Liliana Avelar
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: production system; simulation manufacturing process; simulation model; work in process
During the last decades, production systems have developed different strategies to increase their competitiveness in the global market. In manufacturing and service systems, Lean Manufacturing has been consolidated through the correct implementation of its tools. This paper presents a case study developed in a food packer company, where a simulation model was considered as an alternative to reduce the waste time generated by the poor distribution of operations and transportation areas for a product within the factory. The company had detected problems in the layout distribution that prevented it from fulfilling market demand. The principal aim was therefore to create a simulation model to test different hypothetical scenarios and alternative designs for the layout distribution without modifying the facilities. The implemented methodology was based on classical models of simulation projects and a compendium of manufacturing system optimization by simulation used during the last ten years. A mathematical model supported by the Promodel® simulation software was developed considering the company characteristics; along with the model development, it was possible to compare production system performance in terms of the percentage of used locations, the percentage of resource utilization, the number of finished products, and the level of Work in Process (WIP). Verification and validation stages were performed before running the scenarios in the real production area. The results of the project represent an increase of 68% in production capacity and a reduction of 5% in WIP. Both outcomes are associated with resource management, as resources were reassigned to other production areas.
How Did Poor Sleep Quality and Working from Home Influence the Prevalence of Leisure‑time Physical Inactivity During the Covid-19 Pandemic? COVID‑Inconfidentes
Samara Silva de Moura, Luiz Antônio Alves de Menezes-Júnior, Julia Cristina Cardoso Carraro, George Luiz Lins Machado-Coelho, Adriana Lúcia Meireles
Subject: Life Sciences, Other Keywords: Physical inactivity; work from home; sleep; Covid-19 and public health.
To examine the association of sleep quality and work from home (WFH) with physical inactivity (PI) in leisure time during the Covid-19 pandemic, a population-based household survey was conducted in two Brazilian municipalities from October to December 2020. Leisure-time physical activity (PA) was self-reported, and individuals who practiced less than 150 minutes of moderate PA or 75 minutes of vigorous PA weekly were classified as PI. Sleep quality was measured using the Pittsburgh Sleep Quality Index (PSQI). WFH was assessed with the question: "Currently, how is your work routine regarding location?" Associations were investigated using logistic regression, with directed acyclic graphs (DAG) guiding the multivariate models. A total of 1,750 adults were interviewed; 69.1% were PI and 51.9% had poor sleep quality. Furthermore, 79.8% were not in WFH. In the multivariate analysis, leisure PI was associated with poor sleep quality (OR: 1.59; 95% CI: 1.02-2.48) and with not being in WFH (OR: 1.62; 95% CI: 1.05-2.50). In the combined analysis of these two factors, individuals with poor sleep quality who were not in WFH were four times more likely to be PI at leisure (OR = 4.22; 95% CI: 2.05-8.65). The results indicate a high prevalence of PI, with poor sleep quality and non-WFH associated with leisure PI; these combined factors exacerbated the occurrence of PI.
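As a minimal sketch of how such adjusted odds ratios are typically obtained (here with statsmodels on synthetic data and hypothetical column names, not the survey's actual variables or covariate set):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; a real model would adjust for the DAG-selected covariates.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "physically_inactive": rng.binomial(1, 0.69, 1750),
    "poor_sleep": rng.binomial(1, 0.52, 1750),
    "not_wfh": rng.binomial(1, 0.80, 1750),
})

fit = smf.logit("physically_inactive ~ poor_sleep + not_wfh", data=df).fit()
odds_ratios = np.exp(fit.params)      # exponentiated coefficients = odds ratios
conf_int = np.exp(fit.conf_int())     # 95% confidence intervals on the OR scale
print(pd.concat([odds_ratios, conf_int], axis=1))
```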
Enterprise Work Safety Standardization Optimization Mode on VSM in China
Dunwen Liu, Chun Gong, Yinghua Jian
Subject: Earth Sciences, Atmospheric Science Keywords: Work safety standardization; Viable System Model; Chinese enterprise; Safety process control
The work safety standardization of enterprises, based on traditional work safety theory, has played a significant role in reducing the number of accidents and improving work safety in China. However, some problems have emerged as the work safety standardization of enterprises has developed. On the one hand, it is often not adapted to the actual situation of the enterprise and lacks pertinence and specificity; as a result, it is not integrated with the enterprise's original safety production management system and is difficult to carry out. On the other hand, there is a lack of systematic management methods for the enterprise work safety management system; most enterprises pay attention only to the inspection result rather than to process control, which means that after a government check many enterprises relax their implementation of the system. This paper puts forward a new method for optimizing the work safety standardization management mode based on the Viable System Model (VSM), which can address these defects. An optimization model of work safety standardization based on VSM was constructed to explain the process optimization and control of work safety standardization management. It can also improve the connectivity between the enterprise and the government. The conclusions of this paper can provide a reference for the development and optimization of enterprise work safety standardization in China.
A Method Based on NLP for Twitter Spam Detection
Ratul Chowdhury, Kumar Gourav Das, Banani Saha, Samir Kumar Bandyopadhyay
Subject: Keywords: Twitter; Social Media; NLP; Tweet; User Categorizations and Mathematical Framework
Social networking applications such as Twitter have increasingly gained significance in the socio-economic, political, religious and entertainment sectors. This, in turn, has produced an explosion of information in the social networking realm that can be both useful and misleading at the same time. Spam detection is one solution that caters to this problem through the identification of irrelevant users and their data. However, existing research has so far focused primarily on user profile information through activity detection and related techniques, which may underperform when profiles exhibit temporal dependency, poor reflection of the generated content, and similar characteristics. This is the primary motivation for this paper, which addresses the aforementioned problem by focusing on both profile information and content-based spam detection. To this end, this work delivers three significant contributions. Firstly, Natural Language Processing (NLP) techniques have been used extensively to create a new comprehensive dataset with a wide range of content-based features. Secondly, this dataset has been fed into a customized state-of-the-art hybrid model built using a combination of machine learning and deep learning techniques. Extensive simulation-based analysis not only records over 98% accuracy but also establishes the practical applicability of this proposal by showing that modeling based on mixed profile and content-generated data is more capable of spam detection than either of these standalone approaches. Finally, a novel methodology based on logistic regression is proposed and supported by analytical formulations. This paves the way for the custom-built dataset to be analyzed and the corresponding probabilities that differentiate legitimate users from spammers to be obtained. The resulting mathematical outcome can henceforth be used for future prediction of user categories through appropriate parameter tuning for any given dataset, making our method a truly generic one capable of identifying and classifying different user categories.
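As a rough illustration of how profile information and NLP-derived content features can be fused in a single classifier, the pipeline below combines TF-IDF text features with numeric profile features (generic scikit-learn, hypothetical column names); it sketches the general idea only, not the authors' customized hybrid machine/deep learning model.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical schema: tweet text plus two simple profile features, label 1 = spam.
df = pd.DataFrame({
    "text": ["win a free prize now!!!", "lunch with the team today",
             "click here for cheap followers", "great seminar on NLP this week"],
    "followers": [3, 250, 8, 420],
    "account_age_days": [2, 900, 5, 1300],
    "label": [1, 0, 1, 0],
})

features = ColumnTransformer([
    ("content", TfidfVectorizer(ngram_range=(1, 2)), "text"),
    ("profile", StandardScaler(), ["followers", "account_age_days"]),
])
clf = Pipeline([("features", features),
                ("model", LogisticRegression(max_iter=1000))])

X, y = df[["text", "followers", "account_age_days"]], df["label"]
clf.fit(X, y)
print(clf.predict(X))   # toy prediction on the training rows
```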
Work-related Stress and Coping Profiles among Workers in the Outer Garment Sector: A Cross-sectional Study
Ozlem Koseoglu Ornek, Erdem Sevim
Subject: Medicine & Pharmacology, Other Keywords: Work-related stress; occupational stress; coping profile; garment workers; textile workers
Online: 7 February 2018 (10:26:49 CET)
The garment sector is a crucial field of work in Turkey, and it involves very high occupational health and safety risks. The objective of this study is to define the level of job stress, work-related stress symptoms, social support and coping mechanisms of garment workers, and to determine any related factors. The study is descriptive and cross-sectional. The study population comprised garment workers in the 16-65 age range. The data were collected with an Assessment Form, the Brief Stress Coping Profile and the Brief Job Stress Questionnaire. The level of work-related stress was statistically higher among workers who had a chronic disease, low economic and education status, and poor quality of sleep. Psychological and physical reactions to stress were higher among women workers and those with a chronic disease. Job stress scores had a meaningful relationship with the "emotional expression involving others" (r = .20) and "avoidance and suppression" (r = .16; p < .01) coping profiles. Psychological symptom scores had a low-level meaningful relationship with the "seeking help for solution" (r = -.08), "changing point of view" (r = .13) and "emotional expression involving others" (r = .21) coping profiles. Work-related stress causes many health and behavioral problems, and work-related factors and coping profiles have powerful effects on stress.
Correlation between Engagement and Quality of Life at Work in Nursing Professionals: Cross-Sectional Study in a Brazilian Hospital at the Beginning of the Covid-19 Pandemic
Taisa Moitinho de Carvalho, Luciano Garcia Lourenção, Maria Helena Pinto, Renata Andrea Pietro Pereira Viana, Ana Maria Batista da Silva Gonçalves Moreira, Leticia Pepineli de Mello, Carla Graziela Carvalho Matos, Lucia Marinilza Beccaria, Cristina Prata Amendola, Amanda Maria Ribas Rosa de Oliveira, Maria Aurélia da Silveira Assoni, Eliana Fazuoli Chubaci, Luciana de Souza Lima, Katia Jaira Galisteu, Franciso Rosemiro Guimarães Ximenes Neto, Natalia Sperli Geraldes Marin dos Santos Sasaki, Maria de Lourdes Sperli Geraldes Santos, Jacqueline Flores de Oliveira, Carlos Leonardo Figueiredo Cunha, Flávio Adriano Borges, Juliana Lima da Cunha
Subject: Medicine & Pharmacology, Nursing & Health Studies Keywords: work engagement; job satisfaction; quality of life; occupational health; nursing practitioners; nursing
Objective: To investigate the correlation between engagement and quality of life at work among nursing professionals from a public hospital in the interior of the state of São Paulo, Brazil, at the beginning of the Covid-19 pandemic. Methods: Cross-sectional, descriptive, and correlational study with nursing professionals, conducted between December 2020 and January 2021. We used the Brazilian versions of the Utrecht Work Engagement Scale and the Walton Model scale. Results: There was a strong positive correlation (r ≥ 0.70) between the social integration domain of QWL and the vigor dimension of work engagement (r = 0.88; p < 0.001), and moderate positive correlations (r = 0.40-0.69) between QWL working conditions and vigor (r = 0.40; p < 0.001), dedication (r = 0.40; p < 0.001) and the overall score (r = 0.41; p < 0.001) of work engagement. The correlations were positive and weak (r ≤ 0.39) for the other domains of QWL and dimensions of work engagement. Conclusion: Professionals with satisfactory levels of quality of life tended to have higher levels of engagement at work. Professionals were strongly engaged and satisfied with their quality of life at work at the beginning of the Covid-19 pandemic.
Quality of Work Life and Work Process of Assistance Nurses
Denisse Parra-Giordano, Denisse Quijada Sánchez, Patricia Grau Mascayano, Daniela Pinto-Galleguillos
Subject: Medicine & Pharmacology, Nursing & Health Studies Keywords: Occupational Health; Quality of Life; Nurses; Working Conditions; Work; Primary Health Care.
Background: The concept of Quality of Work Life (QWL) has been built multidimensionally through social reproduction; it is shaped by the perceptions of each individual and by the relationship between workers and the work environment. Objective: To analyze the work process and QWL of assistance nurses in public health. Methods: Research within a critical paradigm; descriptive and exploratory, with a qualitative approach. The population corresponds to nurses engaged in care work. Semi-structured guiding questions were applied and analyzed using content analysis. Results: All seven participants declared themselves female and all are Chilean; seven are young adults; six are single; only one has children, and one has a person dependent on her care; six are heads of household, and five receive help with housework. All have a nursing degree, five have a diploma, but none has a postgraduate degree. The work process category has the following subcategories: work object, instrument, organization, and work conditions; the QWL category has six subcategories: definition and perception of QWL, QWL-potentiating factors, QWL-exhausting factors, QWL improvement strategies, the emotional burden associated with QWL, and health problems. Conclusions: The lifestyle built around care work in the health area has repercussions on quality of life and health in general.
Analysis of the Mediating Role of Psychological Empowerment in Perceived Trust and Work Performance
Xiaoli Liu, Xiaopeng Ren
Subject: Behavioral Sciences, Social Psychology Keywords: Perceived Trust; Psychological Empowerment; Work Performance; Perceived Information Disclosure; Perceived Superior Dependence
As a potential motivator, psychological empowerment stimulates employees' work behaviors and determines the degree of effort and the duration of employees' work. Only when employees are psychologically empowered will their belief that they are trusted have an impact on their behavior. This paper set the independent variable as employees' perceived trust and the dependent variable as work performance, and explored the mediating role of psychological empowerment between the two. The psychological empowerment of employees had a positive impact on work performance: employees with high psychological empowerment tended to be proactive in their work and invested more in it, which in turn encouraged higher work performance. The four dimensions of psychological empowerment positively affected employee task performance, and the ability and influence dimensions of psychological empowerment had a positive impact on relationship performance. Psychological empowerment as a whole partially mediated the relationship between perceived superior dependence and task performance, as well as the relationship between perceived superior dependence and relationship performance. It also partially mediated the relationships between perceived information disclosure and task performance and between perceived information disclosure and relationship performance. In studying perceived trust and work performance, this article focused on the mediating role of psychological empowerment in order to further clarify the internal mechanism of perceived trust.
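For readers unfamiliar with how a partial mediating role is quantified, the sketch below bootstraps the product-of-coefficients indirect effect for a single mediator; the data and variable names are synthetic stand-ins and do not reproduce the study's measures or results.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
trust = rng.normal(size=n)                                      # X: perceived trust
empower = 0.5 * trust + rng.normal(size=n)                      # M: psychological empowerment
performance = 0.3 * trust + 0.4 * empower + rng.normal(size=n)  # Y: work performance

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                        # X -> M
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]  # M -> Y given X
    return a * b

boot = []
for _ in range(2000):
    s = rng.integers(0, n, size=n)            # resample with replacement
    boot.append(indirect_effect(trust[s], empower[s], performance[s]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {indirect_effect(trust, empower, performance):.3f}, "
      f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```

A confidence interval that excludes zero, alongside a remaining direct effect of X on Y, is the usual pattern behind a claim of partial mediation.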
The Impact of Human Resource Management Practices of Sharing Workers on Service Performance
Liping Liao, Yinhua Gu, Jing Wang
Subject: Social Sciences, Accounting Keywords: sharing economy; sharing workers; human resource management practices; service performance; work engagement
Based on Organizational Support Theory, this study examines the relationship between human resource management practices and the service performance of sharing workers by demonstrating the mediating role of work engagement. We tested this theoretical model using an in-person interview questionnaire survey of 318 downwind (ride-sharing) drivers. Results showed that: (1) the main effect of sharing workers' human resource management practices on service performance was significant; (2) work engagement played a prominent mediating role between human resource management practices and the service performance of sharing workers; (3) the mediating role of employee vigor between platform incentives and employee service performance was significant; (4) employee dedication had a clear and indirectly positive mediating effect between the dimensions of sharing workers' human resource management practice and their service performance; (5) the mediating role of employee absorption between the various dimensions of sharing human resource management practices (platform support, platform incentives and platform constraints) and employee service performance was significant. This study has important value for research on human resource management practices in the context of the sharing economy, and provides practical insights for employee management on sharing economy platforms.
Neurobehavioral Alterations in Occupational Noise Exposure: A Systematic Review
Veronica Traversini, Nicola Mucci, Lucrezia Ginevra Lulli, Eleonora Tommasi, Luigi Vimercati, Raymond Paul Galea, Simone De Sio, Giulio Arcangeli
Subject: Medicine & Pharmacology, Allergology Keywords: occupational noise; job; work; behavioral disorders; psychological disorders; annoyance; occupational medicine; prevention.
Online: 30 March 2021 (13:39:23 CEST)
Chronic exposure to noise can cause several extra-auditory effects involving all the systems of the human organism. In addition to cardiovascular, gastrointestinal and immune effects, data in the literature show alterations in behaviour, memory capacity and cognitive performance. Through this systematic review, the authors aim to identify the main neurobehavioral alterations in cases of occupational exposure to noise. The literature review included articles published in the major databases (PubMed, Cochrane Library, Scopus), using a combination of relevant keywords. This online search yielded 4434 references; after selection, the authors analyzed 41 articles (4 narrative reviews and 37 original articles). From this analysis, the main symptoms appear to be related to psychological distress, annoyance, sleep disturbances and reduced cognitive performance. Regarding occupations, the most frequently affected workers were school staff, followed by employees from various industrial sectors and office workers. Although the causes are still widely debated, it is essential to protect these workers against chronic exposure to noise. In addition to hearing loss, they can manifest many other related disorders over time that compromise their working capacity and expose them to a greater risk of accidents or absences from work.
Towards Ultra-Tough Oxide Glasses with Heterogeneities by Consolidation of Nanoparticles
Yanming Zhang, Liping Huang, Yunfeng Shi
Subject: Materials Science, Metallurgy Keywords: oxide glasses; strength; ductility; work hardening ability; nanoscale heterogeneities; molecular dynamics simulations
We prepared heterogeneous alumina-silicate glasses by consolidating nanoparticles using molecular dynamics simulations. Consolidated glasses from either low or high alumina content alumina-silicate glasses show significantly improved ductility around a consolidation pressure of ~3 GPa. The introduced structural heterogeneities, namely over-coordinated network formers and their neighboring oxygen atoms, are identified as plasticity carriers due to their high rearrangement propensity. In addition, consolidated oxide glasses from both 23.4Al2O3-76.6SiO2 and 73.1Al2O3-26.9SiO2 nanoparticles show improved flow strength (up to 1 GPa) due to the introduction of chemical heterogeneities. Last but not least, apparent hardening behavior appears upon cold work in consolidated glasses, with an increase of yield strength from ~3.3 GPa to ~6.4 GPa. This method is a significant advance toward ultra-strong and ultra-tough glasses, as it breaks the structure, composition and size limitations of the traditional melt-quench process.
Crisis Self-Efficacy and Work Commitment of Education Workers among Public Schools during COVID-19 Pandemic
Erick Baloran, Jenny Hernan
Subject: Social Sciences, Other Keywords: crisis self-efficacy; work commitment; education workers; public schools; COVID-19 pandemic
COVID-19 pandemic has affected the public educational sectors in terms of adjustment in educational modalities of instructional delivery, school operations, and policies. With this emerging paradigm shift, teachers' crisis self-efficacy and work commitment are relevant for research. This study's main objective was to determine the significant influence of crisis self-efficacy on the work commitment of public school teachers in Region XI (Davao Region), Philippines, during the COVID-19 pandemic. The sample consisted of 1,340 public school teachers across the Davao Region. The researchers collected the data through adapted questionnaires contextualized to the local setting and administered through online Google forms with appended consent. Mean, standard deviation, Pearson r, and regression analysis were used to analyze data. Results revealed that crisis self-efficacy significantly influences the work commitment of public school teachers during the COVID-19 pandemic. Uncertainty management during this crisis, in particular, best predicts teachers' work commitment. Data also showed a high level of crisis self-efficacy in terms of action, preventive, achievement and uncertainty management, and high level of teachers' work commitment in terms of commitment to school, commitment to students, commitment to teaching, and commitment to profession. Correlation results also showed a link between crisis self-efficacy and the work commitment of teachers amid pandemic. Finally, the study concluded with practical recommendations and directions for future research.
Escape Education: A Systematic Review on Escape Rooms in Education
Alice Veldkamp, Liesbeth van de Grint, Marie-Christine Knippels, Wouter van Joolingen
Subject: Social Sciences, Education Studies Keywords: escape room; escape game; game design; team work; collaborative learning; student engagement
The global increase of recreational escape rooms has inspired teachers around the world to implement escape rooms in educational settings. As escape rooms become increasingly popular in education, there is a need to evaluate their use and a need for guidelines for developing and implementing escape rooms in the classroom. This systematic review synthesizes current practices and experiences, focussing on important educational and game design aspects. Subsequently, relations between the game design aspects and the educational aspects are studied. Finally, student outcomes are related to the intended goals. In different disciplines, educators appear to have different motives for using aspects such as time constraints or teamwork, and these educators make different choices for related game aspects such as the structuring of the puzzles. Other educators base their choices on common practices in recreational escape rooms. However, in educational escape rooms players need to reach the game goal by achieving the educational goals. More alignment between game mechanics and pedagogical approaches is recommended. These and other results lead to recommendations for developing and implementing escape rooms in education, which will help educators create these new learning environments and eventually help students foster knowledge and skills more effectively.
Telework, Hybrid Work and the United Nations' Sustainable Development Goals: Towards Policy Coherence
Magnus Moglia, John Hopkins, Anne Bardoel
Subject: Social Sciences, Accounting Keywords: Telework; hybrid work; working from home; sustainability; UN Sustainable Development Goals; policy coherence
With increased participation in telework expected to continue and to support emerging hybrid work models in the aftermath of Covid-19, it is important to consider the long-term impact this practice could have on sustainability outcomes. This paper describes a systematic review of 113 academic journal articles and identifies associations between telework and sustainability explored by previous researchers. Those associations were categorized and discussed based on their contributions to the different United Nations Sustainable Development Goals. Most of the research was found to focus on countries classified as having a very high human development index status, while regions with a low, medium or high human development index were largely ignored. The SWOT matrix technique was used to illustrate the strengths and weaknesses identified in the current literature as well as threats and opportunities for future work. This can help to ensure policy coherence, so that strategies to promote one outcome, such as economic productivity improvements, do not undermine another, such as improved health. Practical implications and potential research opportunities were identified across a range of SDG impact areas, including good health and well-being, gender equality, reduced inequality, climate mitigation, sustainable cities and resilient communities. On the whole, our impression is that increased rates of telework present an important opportunity to improve sustainability outcomes; however, it will be important that integrated and holistic policy is developed to mitigate key risks.
Did Biology Emerge from Biotite in Micaceous Clay?
Helen Greenwood Hansma
Subject: Keywords: clay; mica; biotite; muscovite; origin of life; abiogenesis; mechanical energy; work; wet-dry
This paper presents a hypothesis about the origins of life in a clay mineral, starting with the earliest molecules, continuing through the increasing complexity of the development, in neighboring clay niches, of "Metabolism First," "RNA World," and other necessary components of life, to the encapsulation by membranes of the components in the niches, to the interaction and fusion of these membrane-bound protocells, resulting finally in a living cell, capable of reproduction and evolution. Biotite (black mica) in micaceous clay is the proposed site for this origin of life. Mechanical energy of moving biotite sheets provides one endless source of energy. Potassium ions between biotite sheets would be the source of the high intracellular potassium ion concentrations in all living cells.
Failure Analysis of Fractured Fixing Bolts of a Mobile Elevating Work Platform Using Finite Element Methods
DongHoon Choi, Jae-Hoon Kim
Subject: Engineering, Mechanical Engineering Keywords: Fatigue Analysis; Finite Element Analysis (FEA); Mobile Elevating Work Platforms (MEWPs); Fixing bolt
Mobile elevating work platforms (MEWPs) consist of a work platform, extending structure, and chassis, and are used to move persons to working positions. MEWPs are useful, but they are assemblies of mechanical components, and accidents do occur owing to equipment defects. Among these defects, fractures of the bolts fixing the extension structure and swing system are an increasing cause of accidents. This paper presents a failure analysis of the fixing bolts of an MEWP. A standard failure analysis procedure was employed in this investigation. Visual inspection, chemical analysis, tensile strength measurement, and finite element analysis (FEA) were used to analyze the failure of the fixing bolts. Using this approach, we found the root cause of failure and propose a means of preventing this type of failure in the future. First, the chemical composition of the fixing bolt was obtained by spectroscopic chemical analysis, which determined that the composition matched the required standard. The tensile test showed that the tensile and yield strengths were within the required capacity. The stress analysis was carried out at five different boom angles, and it was determined that the fixing bolt of the MEWP can withstand the loads at all boom angles. The fatigue analysis, however, revealed that the fixing bolt fails before reaching the design requirements, indicating that the failure of the fixing bolt was primarily due to fatigue. A visual inspection of the fractured section of the fixing bolt also confirmed fatigue failure. We propose methods to prevent failure of the fixing bolt of the MEWP from four different standpoints: the manufacturer, the safety certification authority, the safety inspection agency, and the owner.
Inflation Propensity of Collatz Orbits: A New Proof-of-Work for Blockchain Applications
Fabian Bocart
Subject: Mathematics & Computer Science, Computational Mathematics Keywords: geometric distribution; collatz conjecture; inflation propensity; systemic risk; cryptocurrency; blockchain; proof-of-work
Cryptocurrencies like Bitcoin rely on a proof-of-work system to validate transactions and prevent attacks or double-spending. A new proof-of-work is introduced which appears to be the first number-theoretic proof-of-work unrelated to primes. It is based on a new metric associated with the Collatz algorithm, whose natural generalization is algorithmically undecidable: the inflation propensity is defined as the cardinality of new maxima in a developing Collatz orbit. It is numerically verified that the distribution of inflation propensity slowly converges to a geometric distribution of parameter $0.714 \approx \frac{(\pi - 1)}{3}$ as the sample size increases. This pseudo-randomness opens the door to a new class of proofs-of-work based on congruential graphs.
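Because the metric is defined directly on the Collatz orbit, it is easy to reproduce; the sketch below counts new maxima along an orbit and tabulates their empirical distribution over a small sample of starting values (the exact counting convention, for instance whether the starting value itself counts, may differ from the paper's).

```python
from collections import Counter

def inflation_propensity(n: int) -> int:
    """Count the new (strictly larger) maxima reached while iterating the
    Collatz map x -> x/2 (x even) or 3x+1 (x odd) from n down to 1."""
    count, current_max = 0, n
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        if n > current_max:
            current_max = n
            count += 1
    return count

# Empirical distribution of the propensity for starting values 2..20000.
hist = Counter(inflation_propensity(n) for n in range(2, 20001))
total = sum(hist.values())
print({k: round(v / total, 3) for k, v in sorted(hist.items())})
```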
Potassium at the Origins of Life: Did Biology Emerge From Biotite in Micaceous Clay?
Subject: Life Sciences, Biophysics Keywords: clay; mica; biotite; muscovite; origin of life; abiogenesis; mechanical energy; work; wet-dry cycles
Intracellular potassium concentrations, [K+], are high in all types of living cells, but the origins of this K+ are unknown. The simplest hypothesis is that life emerged in an environment that was high in K+. One such environment is the spaces between the sheets of the clay mineral, mica. The best mica for life's origins is the black mica, biotite, because it has a high content of Mg++ and it has iron in various oxidation states. Life also has many of the characteristics of the environment between mica sheets, giving further support for the possibility that mica was the substrate on and within which life emerged.
Enable Fair Proof-of-Work (PoW) Consensus for Blockchains in IoT by Miner Twins (MinT)
Qian Qu, Ronghua Xu, Yu Chen, Erik Blasch, Alexander Aved
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: Digital Twin; Blockchain; Proof-of-Work; Microservices; Singular Spectrum Analysis (SSA); Byzantine Fault Tolerance
Blockchain technology has been recognized as a promising solution to enhance the security and privacy of Internet of Things (IoT) and edge computing scenarios. Taking advantage of the Proof-of-Work (PoW) consensus protocol, which solves a computation-intensive hashing puzzle, blockchain assures the security of the system by establishing a digital ledger. However, the computation-intensive PoW favors members possessing more computing power. In the IoT paradigm, fairness in highly heterogeneous network edge environments must consider devices with various constraints on computation power. Inspired by the advanced features of Digital Twins (DT), an emerging concept that mirrors the lifespan and operational characteristics of physical objects, we propose a novel Miner-Twins (MinT) architecture to enable a fair PoW consensus mechanism for blockchains in IoT environments. MinT adopts an edge-fog-cloud hierarchy. All physical miners of the blockchain are deployed as microservices on distributed edge devices, while fog/cloud servers maintain digital twins that periodically update the miners' running status. By timely monitoring of the miners' footage mirrored by the twins, a lightweight Singular Spectrum Analysis (SSA) based detection identifies individual misbehaving miners that violate fair mining. Moreover, we also design a novel Proof-of-Behavior (PoB) consensus algorithm to detect byzantine miners that collude to compromise a fair mining network. A preliminary study is conducted on a proof-of-concept prototype implementation, and experimental evaluation shows the feasibility and effectiveness of the proposed MinT scheme under a distributed byzantine network environment.
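The SSA-based monitoring step can be pictured as flagging points of a miner's behavioural time series that deviate from its low-rank reconstruction. The sketch below is a generic SSA residual detector in NumPy; the window length, rank, threshold and injected burst are all illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def ssa_reconstruct(x, window, rank):
    """Reconstruct x from the leading `rank` singular triples of its Hankel
    (trajectory) matrix, followed by diagonal averaging."""
    n = len(x)
    k = n - window + 1
    X = np.column_stack([x[i:i + window] for i in range(k)])   # window x k
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    recon, counts = np.zeros(n), np.zeros(n)
    for i in range(window):               # anti-diagonal averaging back to a series
        for j in range(k):
            recon[i + j] += Xr[i, j]
            counts[i + j] += 1
    return recon / counts

rng = np.random.default_rng(1)
t = np.arange(400)
series = np.sin(2 * np.pi * t / 50) + 0.1 * rng.normal(size=t.size)
series[300:310] += 1.5                    # injected "misbehaviour" burst

smooth = ssa_reconstruct(series, window=60, rank=4)
residual = series - smooth
flagged = np.where(np.abs(residual) > 3 * residual.std())[0]
print("suspicious time steps:", flagged)
```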
Educating Tomorrow's Workforce for the Fourth Industrial Revolution – The Necessary Breakthrough in Mindset and Culture of the Engineering Profession
Michael Max Bühler, Konrad Nübel, Thorsten Jelinek
Subject: Engineering, Automotive Engineering Keywords: engineering education; Fourth Industrial Revolution; 4IR; skills gap; future of work; e-learning; didactics
We are calling for a paradigm shift in engineering education. In times of the Fourth Industrial Revolution ("4IR"), a myriad of potential changes is affecting all industrial sectors, leading to increased ambiguity that makes it impossible to predict what lies ahead of us. Thus, incremental culture change in education is no longer an option. The vast majority of engineering education and training systems, having remained mostly static and underinvested in for decades, are largely inadequate for the new 4IR labor markets. Some positive developments in changing the direction of the engineering education sector can be observed. Novel approaches to engineering education already deliver distinctive, student-centered curricular experiences within an integrated and unified educational approach. We must educate engineering students for a future whose main characteristics are volatility, uncertainty, complexity and ambiguity. Talent and skills gaps across all industries are poised to grow in the years to come. The authors promote an engineering curriculum that combines timeless didactic traditions, such as Socratic inquiry, project-based learning and first-principles thinking, with novel elements (e.g. student-centered active learning and e-learning focused on case study and apprenticeship pedagogical methods) as well as a refocused engineering skillset and knowledge base. These capabilities reinforce engineering students' perceptions of the world and the subsequent decisions they make. This 4IR engineering curriculum will prepare engineering students to become curious engineers and excellent communicators who can better navigate increasingly complex multistakeholder ecosystems.
The Impact of New Ways of Working on Organizations and Employees: A Systematic Review of Literature
Karine Renard, Frederic Cornu, Yves Emery, David Giauque
Subject: Social Sciences, Accounting Keywords: New Ways of Working; Flexible Work Arrangements; Activity-Based Offices; Flexitime; Telework; Knowledge Workers
A new research stream dedicated to flexible work arrangements in public and private organizations, called "new ways of working" (NWW), emerged in the 2000s. This article aims to examine NWW from both a theoretical and an empirical perspective, focusing on the outcomes of this new concept and the debate between "mutual gains" and "conflicting outcomes" views. Through a literature review, it examines this research field's innovation and its rather vague theoretical foundations. Findings demonstrate that NWW definitions are diverse and somewhat imprecise, leading to fragmented research designs and findings; the research stream's theoretical foundations should be better addressed. Findings also highlight the current lack of empirical data, which therefore does not allow any firm conclusions on NWW's effects on employees' and organizations' well-being and performance.
Development of a Field-deployable RT-qPCR Workflow for COVID-19 Detection
Raphael Nyaruaba, Bo Zhang, Caroline Muema, Elishiba Muturi, Greater Oyejobi, Jin Xiong, Bei Li, Zhengli Shi, Caroline Mwaliko, Junping Yu, Xiaohong Li, Hongping Wei
Subject: Life Sciences, Microbiology Keywords: COVID-19; SARS-CoV-2; field work; community; diagnosis; rapid detection; inactivation; RT-qPCR
Outbreaks of coronavirus disease 2019 (COVID-19) have been recorded in different countries across the globe. The virus is highly contagious; hence, early detection, isolation, and quarantine of infected patients will play an important role in containing the viral spread. Diagnosis in a mobile lab can help to find infected patients in time. Here, we develop a field-deployable diagnostic workflow that can reliably detect COVID-19. The instruments used in this workflow could easily fit in a mobile cabin hospital and could also be installed in the community. The different steps, from sample inactivation to detection, were optimized to find the fastest procedures and most portable instruments for the detection of COVID-19. Each step was compared to that of the normal laboratory diagnostic set-up. From the results, our proposed workflow (80 min) was twice as fast as the normal laboratory workflow (183 min), and a maximum of 32 samples could be processed in each run. Additionally, we showed that 1% Rewocid WK-30 could inactivate the novel coronavirus directly without affecting the overall detection results. Comparison of our workflow using an in-house assay with a commercially acquired assay produced highly reliable results. Of the 250 hospital samples tested, there was a high concordance of 247/250 (98.8%) between the two assays. The in-house assay sensitivity and specificity were 116/116 (100%) and 131/134 (97.8%), respectively, compared to the commercial assay. Based on these results, we believe that our workflow is fast, reliable, adaptable and, most importantly, field-deployable.
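The reported agreement statistics follow directly from the 2x2 comparison against the commercial assay; a quick arithmetic check using the counts given in the abstract:

```python
# Counts from the abstract, with the commercial assay taken as the reference.
true_pos, false_neg = 116, 0    # commercial-positive samples: detected / missed in-house
true_neg, false_pos = 131, 3    # commercial-negative samples: negative / positive in-house

total = true_pos + false_neg + true_neg + false_pos     # 250 samples
concordance = (true_pos + true_neg) / total             # 247/250
sensitivity = true_pos / (true_pos + false_neg)         # 116/116
specificity = true_neg / (true_neg + false_pos)         # 131/134

print(f"concordance {concordance:.1%}, sensitivity {sensitivity:.1%}, "
      f"specificity {specificity:.1%}")                 # 98.8%, 100.0%, 97.8%
```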
Examining the Impact of Workplace Spirituality and Procedural Justice on Work Locus of Control, Employee Job Satisfaction, and Employee Organisational Commitment
Eugine Tafadzwa Maziriri, Miston Mapuranga, Nkosivile Welcome Madinga
Subject: Social Sciences, Organizational Economics & Management Keywords: Workplace spirituality; procedural justice; Work Locus of Control; Employee Job Satisfaction; Employee Organisational Commitment
The present study explored the impact of workplace spirituality and procedural justice on work locus of control, employee job satisfaction and employee organisational commitment among employees of private institutions of higher learning in South Africa, given the limited research that has concentrated on these relationships in developing countries, especially in Southern Africa. A survey was conducted and data were collected by means of questionnaires from a sample of 150 academics and support staff in a private university setting in Gauteng, South Africa. Structural equation modelling was employed to analyse the data using the SmartPLS (Partial Least Squares) software. By means of a partial least squares structural equation modelling approach, this study validates that elements such as workplace spirituality, procedural justice and work locus of control are instrumental in stimulating employee job satisfaction and employee organisational commitment. The present investigation offers suggestions for academicians in the field of human resource management by upgrading their comprehension of how workplace spirituality and procedural justice impact work locus of control, employee job satisfaction and employee organisational commitment.
Uneven Use of Remote Work to Prevent the Spread of COVID-19 in South Korea's Stratified Labor Market
Joonmo Cho, Sanghee Lee, Saejung Park
Subject: Social Sciences, Economics Keywords: COVID-19; remote work; dual labor market; polarization; collective bargaining; rule revision unfavorably to workers
This research analyzed South Korean companies' adoption of remote work during the COVID-19 pandemic, focusing on the dual labor market structure comprising the primary sector (large corporations) and the secondary sector (small and medium enterprises, SMEs). Companies in the dual labor market were classified by firm size. We used Statistics Korea's August supplementary data from the Economically Active Population Survey, covering 2017-2020. This empirical study analyzed the factors affecting remote work in 2020, after the outbreak of the pandemic. The results showed that the probability of large corporations introducing remote work during the pandemic increased by a significantly larger margin than for small and medium-sized firms. This suggests that the polarization within the dual labor market structure between large corporations and SMEs also spilled over into companies' adoption of remote work, which was initially introduced to prevent the spread of the pandemic. Additionally, the polarization in the use of digital technology is likely to persist even after the pandemic. Hence, based on our analysis of remote work adoption in the dual labor market, this study examined the system and the labor-management relations factors contributing to such polarization and presented policy directions for the current labor market structure.
Unwrapping Ethics: Framing Effects Within the Construction of Team Ethics in Online Discourse at the Workplace
Maria Cristina Gatti
Subject: Arts & Humanities, Anthropology & Ethnography Keywords: framing; online discourse strategies; ethical behaviour; work-life blurred boundaries; effective teamwork; individual virtuousness; alignment
The present paper brings to the fore issues relating to the meaning and construction of ethics in online team communication by exploring the discursive strategies that contribute to the construction of a team's sense of duty and individual virtuousness. The study relies on a complex toolkit which includes ethnolinguistics, sociolinguistics, and discourse and conversation analysis. Data consist of a one-day interaction unit that is part of a larger set of real communication exchanges (ca. 34,000) collected over a period of six months, observation notes, as well as unstructured interviews. Our empirical analysis has revealed that individual virtuousness and sense of duty are interrelated. A virtuous team climate leads team members to share positive perceptions about the team, which in turn increases team commitment. Furthermore, we argue that the blurring of private and professional life not only allows for the enactment of ethics-driven discourse strategies that result in enhanced cooperation and improved team performance but also for high levels of interconnectivity and improved social interaction. The results of the analysis supplement organisational literature based on ethics-centred observations on the effectiveness of virtual work, and show how a discourse-driven approach can provide tools for further theorisation about the practices and the ecology of digital communication.
Remote Indoor Construction Progress Monitoring Using Extended Reality
Ahmed Khairadeen Ali, One Jae Lee, DoYeop Lee, Chansik Park
Subject: Engineering, Automotive Engineering Keywords: virtual reality; point cloud data; building information modeling; work inspection; progress monitoring; quality assurance inspection
Despite recent developments in monitoring and visualizing construction progress, the data exchange between the construction jobsite and the office lacks automation and real-time recording. To address this issue, a near real-time construction work inspection system called iVR is proposed; this system integrates 3D scanning, extended reality, and visual programming to visualize interactive onsite inspection of indoor activities and provide numeric data. iVR comprises five modules: iVR-location finder (locating the laser scanner on the construction site), iVR-scan (capturing point cloud data of indoor jobsite activity), iVR-prepare (processing and converting 3D scan data into a 3D model), iVR-inspect (conducting immersive visual reality inspection in the construction office), and iVR-feedback (visualizing inspection feedback on the jobsite using augmented reality). An experimental lab test was conducted to validate the applicability of the iVR process; it successfully exchanged the required information between the construction jobsite and the office within a specific time. This system is expected to assist engineers and workers in quality assessment, progress assessment, and decision-making, and can provide a productive and practical communication platform, unlike conventional monitoring or data capturing, processing, and storage methods, which involve storage, compatibility, and time-consumption issues.
Organizational Climate and Work Style: The Missing Links for Sustainability of Leadership and Satisfied Employees
Massoud Moslehpour, Purevdulam Altantsetseg, WeiMing Mou, Wing-Keung Wong
Subject: Social Sciences, Organizational Economics & Management Keywords: job satisfaction (JS); work style (WS); leadership style (LS); organizational climate (OC); register office; Mongolia
Purpose - The purpose of the study is to investigate the missing link between leadership style and job satisfaction among Mongolian public sector employees. This study reiterates the mediating role of organizational climate (OC) and work style (WS) in a newly proposed model. Methodology - The questionnaire was designed by synthesizing existing constructs from the current relevant literature. The research sample consisted of 143 officers who work in the primary and middle units of territory and administration of Mongolia. Factor analysis, reliability tests, collinearity tests, and correlation analyses confirm the validity and reliability of the model. Multiple regression analysis, using Structural Equation Modeling (SEM), tests the hypotheses of the study. Practical implications - This study has several important implications for studies related to organizational behavior and job satisfaction. Furthermore, the implications of the findings are beneficial to organizations aiming to improve policies and practices related to organizational behavior and human resource management. Regulators and supervisors of private or public organizations aiming to increase the level of their employees' job satisfaction will also benefit from the findings. Therefore, this study's newly proposed model can be the basis of fundamental research to build a better human resource policy. Although leadership style is an influential factor for job satisfaction, this study identifies the mediating missing links between leadership style and employees' job satisfaction. Findings - The findings of this research indicate that organizational climate and work style complement and fully mediate the relationship between leadership style and job satisfaction. Appropriate leadership style is most effective when it matches the organizational climate as well as employees' work style. Furthermore, a suitable organizational climate will increase the level of job satisfaction. If the work style of employees is respected and taken into consideration, leadership style can find its way into job satisfaction. Originality/value - This study is the first to examine the motivators of job satisfaction in the government sector of Mongolia. It offers valuable findings for executive officers and for junior and primary unit officers of the register sector of the government of Mongolia. The findings of this study help managers and executives in their efforts to develop and implement successful human resource strategies.
Research on a Precision and Accuracy Estimation Method for Close-range Photogrammetry
Kai-feng Ma, Gui-ping Huang, Hai-jun Xu, Wei-feng Wang
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: surveying; close-range photogrammetry; internal coincidence precision estimation; external coincidence accuracy estimation; experimental work; testing
Precision and accuracy estimation is an important index used to reflect the measurement performance and quality of a measurement system. To reveal the significance and connotations of the precision and accuracy estimation index of a close-range photogrammetry system, several common precision and accuracy estimation methods used in close-range photogrammetry are explained from a theoretical perspective, and the mechanism of the internal coincidence precision estimation and the external coincidence accuracy estimation are deduced, respectively. Through detailed experimental design and testing, the validity and reliability of the proposed precision and accuracy estimation methods are verified, which provides strong evidence for the quality control, optimisation, and evaluation of the measurement results from a close-range photogrammetry system. At the same time, it has significance for the further development of precision and accuracy estimation analysis of close-range photogrammetry systems.
Increasing Evidence of Impaired Team Mindfulness in Online Academic Meetings Intended to Reduce Burnout
Carol Nash
Subject: Behavioral Sciences, Social Psychology Keywords: burnout; team mindfulness; work engagement; online meetings; academic meetings; writing prompts; doodling; COVID-19; online games
Online: 31 October 2022 (06:55:37 CET)
Burnout, a negative job-related psychological state particularly associated with the health professions, equates to a loss of valuable research in healthcare researchers. Team mindfulness, recognized to enhance personal fulfilment through work engagement, represents one important aspect found effective in reducing burnout. In a specific series of diverse-membership academic meetings intended to reduce research burnout—employing writing prompts, doodling and continuous developmental feedback to do so—team mindfulness was demonstrated when conducted in person. Therefore, determining if team mindfulness is evident when holding such academic meetings online is relevant. When COVID-19 limitations required moving these academic meetings online, it was previously noted and reported that team mindfulness was affected in no longer being present during the first eighteen months of restrictions. To discover if this result persisted, the question asking, doodles submitted and feedback responses from the following year's academic meetings for the same group were analyzed, both quantitatively and qualitatively. Finding the team mindfulness of these meetings additionally compromised in the second full year, online practices actually found successful at creating and supporting team mindfulness—online games—are identified and considered. Concluding implications are noted and recommendations made regarding team mindfulness in reducing burnout for future online academic meetings.
Mat-O-Covid: Validation of a SARS-CoV-2 Job Exposure Matrix (JEM) Using Data from a National Compensation System for Occupational Covid-19
Alexis Descatha, Grace Sembajwe, Fabien Gilbert, Marc Fadel
Subject: Medicine & Pharmacology, Other Keywords: public health; occupational; Covid; SARS-CoV-2; work; job exposure matrix; JEM; compensation; predictivity; validity; accuracy
Background. We aimed to assess the validity of the Mat-O-Covid Job Exposure Matrix (JEM) on SARS-CoV2 using compensation data from the French National Health Insurance compensation system for occupational-related COVID-19. Methods. Deidentified compensation data for occupational COVID-19 in France were obtained between August 2020 and August 2021. The acceptance was considered as the reference. Mat-O-Covid is an expert based French JEM on workplace exposure to SARS-CoV2. Bivariate and multivariate models were used to study the association between the exposure assessed by Mat-O-Covid and the reference, as well as the Area Under Curves (AUC), sensitivity, specificity, predictive values, and likelihood ratios. Results. In the 1140 cases included, there was a close association between the Mat-O-Covid index and the reference (p<0.0001). The overall predictivity was good, with an AUC of 0.78 and an optimal threshold at 13 per thousand. Using Youden's J statistic resulted in 0.67 sensitivity and 0.87 specificity. Both positive and negative likelihood ratios were significant: respectively 4.9 [2.4-6.4] and 0.4 [0.3-0.4]. Discussion. It was possible to assess Mat-O-Covid's validity using data from the national compensation system for occupational COVID-19. Though further studies are needed, Mat-O-Covid exposure assessment appears to be accurate enough to be used in research.
Discrete Element Method Modeling for the Failure Analysis of Dry Mono-Size Coke Aggregates
Alireza Sadeghi Chahardeh, Roozbeh Mollaabbasi, Donald Picard, Seyed Mohammad Taghavi, Houshang Alamdari
Subject: Engineering, Automotive Engineering Keywords: Carbon anode production; Crack generation; Discrete element method; Failure analysis; Second-order work criterion; Strain localization
An in-depth study of the failure of granular materials, which is known as a mechanism to generate defects, can reveal the facts about the origin of the imperfections such as cracks in the carbon anodes. The initiation and propagation of the cracks in the carbon anode, especially the horizontal cracks below the stub-holes, reduce the anode efficiency during the electrolysis process. In order to avoid the formation of cracks in the carbon anodes, the failure analysis of coke aggregates can be employed to determine the appropriate recipe and operating conditions. In this paper, it will be shown that a particular failure mode can be responsible for the crack generation in the carbon anodes. The second-order work criterion is employed to analyze the failure of the coke aggregate specimens and the relationships between the second-order work, the kinetic energy, and the instability of the granular material are investigated. In addition, the coke aggregates are modeled by exploiting the discrete element method (DEM) to reveal the micro-mechanical behavior of the dry coke aggregates during the compaction process. The optimal number of particles required for the failure analysis in the DEM simulations is determined. The effects of the confining pressure and the strain rate as two important compaction process parameters on the failure are studied. The results reveal that increasing the confining pressure enhances the probability of the diffusing mode of the failure in the specimen. On the other hand, the increase of strain rate augments the chance of the strain localization mode of the failure in the specimen.
Perceived Factors of Stress and its Outcomes among Hotel Housekeepers in The Balearic Islands: A Qualitative Approach from a Gender Perspective
Xènia Chela-Alvarez, Oana Bulilete, M. Esther García-Buades, Victoria A. Ferrer Pérez, and Joan Llobera-Canaves
Subject: Behavioral Sciences, Applied Psychology Keywords: hotel housekeepers; stress; occupational health; job demands-resources model; qualitative research; work- life balance; gender perspective.
Tourism is the main economic sector in the Balearic Islands (Spain), and hotel housekeepers (HHs) are a large occupational group in which stress is becoming a major issue. This study aims at exploring in depth the factors perceived as stressors by HHs and key informants, and their effects on work-life balance (WLB). A qualitative design with a phenomenological approach was used, conducting six focus groups with 34 HHs and 10 individual interviews with key informants. Results were analyzed adopting the job demands-resources model and a gender perspective. High demands (e.g., work overload, time pressure, physical burden), lack of sufficient resources and little control (derived from role conflict, unexpected events, etc.) were the most important factors explaining HHs' stress. Additionally, this imbalance was perceived as leading to health problems, mainly musculoskeletal disorders. The working schedule was mentioned as a facilitator of WLB, whereas an imbalance between job demands and resources led to work-home conflict, preventing them from enjoying leisure time. Multiple roles at work and at home increased their stress. HHs experienced their job as invisible and unrecognised. Regarding practical implications, our recommendations for hotel organizations include reducing workload and increasing resources, which would improve the job demands-resources balance, diminish negative mental and physical outcomes and improve WLB.
Public Health Intervention Framework for Reviving Economy Amid the COVID-19 Pandemic (1): A Concept
Jianqing Wu, Zha Ping
Subject: Social Sciences, Business And Administrative Sciences Keywords: coronavirus; COVID-19; public health intervention; revive economy; disease severity; transmission route; influenza; ventilation; work environment
The COVID-19 pandemic has had great adverse impacts on personal life, the U.S. economy, and the world economy. Freezing all human activities is not a sustainable measure. Thus we want to develop a public intervention framework that allows people to resume personal and economic activities. In this article, we examined transmission routes, disease severity, personal vulnerability, available treatments, and person-to-person interactions to establish a general public intervention framework. We divide people into a risk group, a non-risk group, and a group that may serve as viral transmitters; explore interactions between individual persons within each group and between different groups; and propose interaction behavior modifications to mitigate viral exposures. For the non-risk groups, we identified preventive measures that can help them avoid the most serious exposures and infections that pose higher death risks. The intervention measures for the vulnerable groups include prior-exposure measures, heightened protective measures, interaction behavior changes, post-exposure remedial measures, and multiple-factor treatments to reduce death and disability risks. The multiple interventions and two-way defensive behavior modifications are expected to result in a reduced rate of detectable infections and lowered disease severity for the vulnerable groups. In this framework, most human activities and economic activities can continue as normal. With time passing, the population acquires population immunity against the COVID-19 virus. Implementation of this intervention framework requires considerable resources and governmental effort, while the multiple-factor treatment protocol requires the support of health care professionals.
Regularized Reconstruction of HBIM for Built Heritage—Case Study with Chinese Ancient Architecture
Que Raner, Wang Xi, Wu Cong, Bai Chengjun
Subject: Arts & Humanities, Architecture And Design Keywords: Chinese ancient architecture; bracket set; tile work; regularized reconstruction; parametric; algorithm modeling; Grasshopper; HBIM; built heritage
Based on a study of the pattern book Ying Zao Fa Shi (building regulations of the Song Dynasty, 1103 AD), and an analysis of the combining and dimensioning rules of the timber framework and tile work, a self-generating model program has been compiled for the first time. The operating framework is first defined, solving the issues of the clustering principle, connecting method, output classification, etc., with a detailed description of the algorithm theory. Taking the corner bracket set and the nine-ridge roof as examples, after compilation and debugging in Grasshopper, various models have been generated automatically by the plugin according to various input parameters, proving the speed and accuracy of the algorithm.
Exploring the Vital Worker Over Time – A Week-Level Study on How Positive and Negative Work Events Contribute to Affect and Sustain Work Engagement
Oliver Weigelt, Antje Schmitt, Christine J. Syrek, Sandra Ohly
Subject: Behavioral Sciences, Applied Psychology Keywords: affective events; work engagement; sensitization-satiation effects; job demands-resources model; experience sampling; growth curve modeling
Online: 3 October 2019 (04:37:58 CEST)
Although work events can be regarded as pivotal elements of organizational life, only a few studies have examined how positive and negative events relate to and combine to affect work engagement over time. Theory suggests that to better understand how current events affect work engagement (WE), we have to account for recent events that have preceded these current events. We present competing theoretical views on how recent and current work events may affect employees (e.g., getting used to a high frequency of negative events or becoming more sensitive to negative events). Although the occurrence of events implies discrete changes in the experience of work, prior research has not considered whether work events actually accumulate to sustained mid-term changes in WE. To address these gaps in the literature, we conducted a week-level longitudinal study across a period of 15 consecutive weeks among 135 employees, which yielded 849 weekly observations. While positive events were associated with higher levels of WE within the same week, negative events were not. Our results support neither satiation nor sensitization processes. However, high frequencies of negative events in the preceding week amplified the beneficial effects of positive events on WE in the current week. Growth curve analyses show that the benefits of positive events accumulate to sustain high levels of WE. WE dissipates in the absence of continuous experience of positive events. Our study adds a temporal component and informs research that has taken a feature-oriented perspective on the dynamic interplay of job demands and resources.
SSWiS: An Information System for Graduate Education in Social Work
Oleg Kapeljushnik, Larry Rosenfeld, Manuel E. Garcia, Rebecca Brigham, Sarah Naylor, Karamarie Fecho, Charles P. Schmitt
Subject: Social Sciences, Other Keywords: learning management system; integrated planning and advising system; information system; field education; social work; graduate education
In graduate programs such as social work, field education is the signature pedagogy of education. As such, student placement with an appropriate field education agency is critical to ensure academic success and career readiness. A variety of Learning Management System (LMS) and Integrated Planning and Advising Service (IPAS) technologies have been developed to fully integrate technology into the educational system and streamline and improve the learning experience for students, educators, and administrators. Few (if any) of the existing solutions have capabilities to match students with field educators on the basis of an individual student's completed coursework and area of specialization, as well as field educator needs and opportunities. This paper describes our experience developing a custom LMS/IPAS system—the School of Social Work information System (SSWiS)—that was designed specifically for student learning, faculty advising, and academic administration within our social work graduate program. We present the challenges that motivated the design of the SSWiS before describing the architecture and functionality of our solution. We then discuss our preliminary evaluation results. We conclude with a discussion of the benefits and limitations of our system in the context of today's technical needs in graduate education in social work and other fields.
Digital Transition in Rural Emergency Medicine: Impact of Job Satisfaction and Workload on Communication and Technology Acceptance
Joachim Hasebrook, Leonie Michalak, Dorothea Kohnen, Bibiana Metelmann, Camilla Metelmann, Peter Brinkrolf, Steffen Flessa, Klaus Hahnenkamp
Subject: Behavioral Sciences, Applied Psychology Keywords: telemedicine; emergency medicine; emergency medical services; workload; work job satisfaction; technology acceptance; knowledge sharing; Dunning-Kruger effect
Background: Tele-emergency physicians (TEPs) take an increasingly important role in the need-oriented provision of emergency patient care. To improve emergency medicine in rural areas, we set up the project Land|Rettung (English: Rural|Rescue), which uses TEPs to restructure professional rescue services using information and communication technologies (ICTs) in order to reduce the therapy-free interval. Successful implementation of ICTs relies on user acceptance and knowledge sharing behavior. Methods and findings: We conducted a factorial design with active knowledge transfer and technology acceptance as a function of work satisfaction (high vs. low), workload (high vs. low) and point in time (prior to vs. after digitalization). Data were collected via machine readable questionnaires issued to 755 persons (411 pre, 344 post), of which 304 or 40.3% of these persons responded (194 pre, 115 post).Technology acceptance was higher after the implementation of TEP for nurses but not for other professional groups, and it was higher when the workload was high. Regarding active communication and knowledge sharing, employees with low work satisfaction are more likely to share their digital knowledge as compared to employees with high work satisfaction. Additional and more detailed analyses reveal that this is an effect of previous knowledge concerning digitalization. After implementing the new technology, work satisfaction increased for the more experienced employees, but not for the less experienced ones. Results are discussed considering the Dunning-Kruger effect. The Dunning-Kruger effect describes a cognitive bias. People with high expertise often underestimate their actual skill level. They have a more critical attitude towards their performance and feel the urgent need to fill possible knowledge gaps they notice. Conclusions: Our research illustrates that employees' workload has an impact on the intention of using digital applications. The higher the workload, the more people are willing to use TEPs. Regarding active knowledge sharing, we see that employees with low work satisfaction are more likely to share their digital knowledge compared to employees with high work satisfaction. This might be attributed to the Dunning-Kruger effect. Highly knowledgeable employees initially feel uncertain about the change, which translates into temporarily lower work satisfaction. They feel the urge to fill even small knowledge gaps, which in return leads to higher work satisfaction. Those responsible need to acknowledge that digital change affects their employees' workflow and work satisfaction. During such times, employees need time and support to gather information and knowledge in order to cope with digitally changed tasks.
Theory of the Academic Blockchain
Martin Wright
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: cryptography; timestamping; cryptocurrency; proof-of-knowledge; proof-of-work; proof-of-stake; proof-of-authority; litcoin; bitcoin
This article integrates existing theory from distributed computing and cryptology with anecdotal material from the cryptocurrency industry, to provide a comprehensive description of the minimum requirements of the hypothetical academic blockchain. The paper argues that such a community could significantly reduce the biases and misconduct that now exist in the academic peer review process. Theory suggests such a system could operate effectively as a distributed encrypted telecommunications network where nodes are anonymous, do not trust each other, and there is minimal central authority. To incentivize the academic community to join such a proposed community, the paper proposes a pseudo-cryptocurrency called litcoin (literature coin). This litcoin-based system would create economic scarcity based on proof of knowledge (POK), which is a synthesis of the proof of work (POW) mechanism used in bitcoin, and the proof of stake (POS) mechanism used in various altcoin communities. The paper argues that the proposed POK system would enable the academic community to more effectively develop the research it finds valuable.
The Pathways to Participation (P2P) Program: A Pilot Outcomes Study
Danielle Hitch, Lindsay Vernon, Rachel Collins, Carolyn Dun, Sarah Palexas, Kate Lhuede
Subject: Medicine & Pharmacology, Psychiatry & Mental Health Studies Keywords: recovery; mental illness; mental health; psychiatry; social inclusion; occupational therapy; occupations; time use; activities of daily living; work.
Research has consistently found that people with mental illness (known as consumers) experience lower levels of participation in meaningful activities, which can limit their opportunities for recovery support. The aim of this study was to describe the outcomes of participation in a group program designed to address all stages of activity participation, known as Pathways to Participation (P2P). A descriptive longitudinal design was utilized, collecting data at three time points. Outcomes were measured by the Camberwell Assessment of Need Short Appraisal (CANSAS), Recovery Assessment Scale – Domains and Stages (RAS-DS), Behaviour and Symptom Identification Scale (BASIS-24), Living in the Community Questionnaire (LCQ) and time use diaries. All data were analysed using descriptive statistics and chi-square analyses. Seventeen consumers completed baseline data, eleven contributed post-program data and eight provided follow-up data. Most were female (63.64%) and had been living with mental illness for 11.50 (± 7.74) years on average. Reductions in unmet needs and improvements in self-rated recovery scores were reported, but no changes were identified in either time use or psychosocial health. The findings indicate the P2P program may enable consumers to achieve positive activity and participation outcomes as part of their personal recovery.
Tribological Behaviour of K340 Steel PVD Coated with CrAlSiN Versus Popular Tool Steel Grades
Kazimierz Drozd, Mariusz Walczak, Mirosław Szala, Kamil Gancarczyk
Subject: Materials Science, General Materials Science Keywords: cold/hot-work steel; sliding; friction; wear testing; XRD analysis; wear mechanism; hardness; heat treatment; thin film; abrasion
The tribological performance of metalwork steel tools is of vital importance in both cold and hot working processes. One solution for improving metal tool life is the application of coatings. This paper investigates the effect of CrAlSiN thin-film PVD-deposition on the tribological behaviour of tool steel K340. The sliding wear performance of the coated K340 steel is analysed in relation to both the uncoated K340 steel and a range of tool steels dedicated to hot- and cold-working, such as X155CrVMo12-1, X37CrMoV5-1, X40CrMoV5-1, 40CrMnMo7 and 90MnCrV8. The investigated tool steels were heat-treated, while K340 was subjected to thermochemical treatment and then coated with a CrAlSiN hard film (K340/CrAlSiN). The hardness, chemical composition, phase structure and microstructure of steels K340 and K340/CrAlSiN are examined. Tribological tests were conducted using the ball-on-disc tester in compliance with the ASTM G99 standard. The tests were performed under dry unidirectional sliding conditions, using an Al2O3 ball as a counterbody. The wear factor and coefficient of friction are estimated and analysed with respect to hardness and alloying composition of the materials under study. SEM observations are made to identify the sliding wear mechanisms of the analysed tool steels and PVD-coated K340 steel. In contrast to the harsh abrasive-adhesive wear mechanism observed for the uncoated tool steels, abrasive wear dominates in the case of the CrAlSiN coating. The deposited thin film effectively protects the K340 substrate from severe wear degradation. Moreover, thanks to the deposited coating, the K340/CrAlSiN sample has a COF of 0.529 and a wear factor of K=5.68×10−7 m3 N−1 m−1, while the COF of the reference tool steels ranges from 0.702 to 0.885 and their wear factor ranges from 1.68×10−5 m3 N−1 m−1 to 3.67×10−5 m3 N−1 m−1. The CrAlSiN deposition reduces the wear of the K340 steel and improves its sliding properties, which makes it a promising method for prolonging the service life of metalwork tools.
A Study Analysis on Effect of Software Scope Management and Scope creeping Factors in Software Project Management
Mehreen Sirshar, Muneeza Khalid
Subject: Engineering, Other Keywords: scope creep; software engineering; software project management; work breakdown structure; agile method; traditional methodology; functional point analysis; stakeholders
Scope, time, and cost constantly affect each other, and most Information Technology projects fail due to these three factors. Scope shifting mostly occurs due to time and cost. At project start, a lack of understanding of the project and product scope is a focal issue that leads to unsuccessful projects. A complete software scope definition determines the quality of the project. Defining the customer requirements and the definite scope of the project plays a key role in the implementation of project management. Complications originate when systems are developed from impractical expectations and misunderstood requirements. These problems cause many of the changes that occur in system development and lead to poor scope management. Scope creep is one of the most significant parameters influencing the success of a project. Failure to manage scope creep accounts for 80 percent of software project failures. However, using an agile approach, the impact of scope creep on projects becomes insignificant. A correctly defined scope helps us develop a quality product within the identified plans and the cost agreed with the stakeholders.
Mechanical Energy before Chemical Energy at the Origins of Life?
H. Greenwood Hansma
Subject: Life Sciences, Molecular Biology Keywords: origin of life; origins of life; mechanical energy; work; entropic forces; mica; biotite; Muscovite; wet/dry cycles; clay
Forces and mechanical energy are prevalent in living cells. This may be because forces and mechanical energy preceded chemical energy at life's origins. Mechanical energy is more readily available in non-living systems than the various other forms of energy used by living systems. Two possible prebiotic environments that might have provided mechanical energy are hot pools that experience wet/dry cycles and mica sheets as they move, open and shut, as heat pumps or in response to water movements.
Dialectic Critical Realism: Grounded Values and Reflexivity in Social Science Research
Christopher Bagley, Alice Sawyerr, Mahmoud Abubaker
Subject: Social Sciences, Sociology Keywords: Dialectical Critical Realism; Education; Islam; Childhood Studies; Child Abuse; Work-Life-Balance; Roy Bhaskar; Priscilla Alderson; Margaret Archer
Critical realism emerged from the philosophical writings of Roy Bhaskar, and has evolved into a philosophy of social science research using the model of "dialectical critical realism" (DCR), which begins with the researcher's assumption that the structures being researched have a real, ontological grounding which is independent of the researcher. This approach has proved fruitful in British and European social science research, but has had less influence in North America. We outline DCR's four-level model for understanding society and its changing social structures through "the pulse of freedom". DCR has been used by Marxists, Muslims, Catholics and secular scholars who engage fruitfully in morphogenic dialogues leading to a critical realist understanding of society and social research, which transcends positivist and social constructionist models. Examples of DCR's application in the fields of childhood research, child abuse, education, and research on organisations are outlined to illustrate the working of this new research paradigm. We are enthusiastic in our advocacy of DCR as a model of qualitative research, and for constructing models of positive social change, and are particularly impressed by the substantive and theoretical expositions of DCR by Priscilla Alderson, Matthew Wilkinson and Margaret Archer, whose work we document and review.
Electrocardiogram, Echocardiogram and NT-proBNP in Screening for Thromboembolism Pulmonary Hypertension in Patients after Pulmonary Embolism
Olga Dzikowska-Diduch, Katarzyna Kurnicka, Barbara Lichodziejewska, Iwona Dudzik-Niewiadomska, Michał Machowski, Marek Roik, Małgorzata Wiśniewska, Jan Siwiec, Izabela Magdalena Staniszewska, Piotr Pruszczyk
Subject: Medicine & Pharmacology, Cardiology Keywords: screening after pulmonary embolism; chronic thromboembolic pulmonary disease; chronic thromboembolic pulmonary hypertension; diagnostic work-up of post-pulmonary syndrome
Background: The annual mortality of patients with untreated chronic thromboembolic pulmonary hypertension (CTEPH) is approximately 50% unless a timely diagnosis is followed by adequate treatment. In pulmonary embolism (PE) survivors with functional limitation, the diagnostic work-up starts with echocardiography, followed by lung scintigraphy and right heart catheterization. However, a noninvasive test providing diagnostic clues to CTEPH, or ascertaining this diagnosis as very unlikely, would be extremely useful, since the majority of post-PE functional limitations are caused by deconditioning. Methods: Patients after acute PE underwent a structured clinical evaluation with electrocardiogram, routine laboratory tests including NT-proBNP, and echocardiography. The aim of the study was to verify whether parameters from the echocardiographic or perhaps the electrocardiographic examination and the NT-proBNP concentration best determine the risk of CTEPH. Results: A total of 261 patients (123 male) after PE were included in the study; in the group of 155 patients (59.4%) with reported functional impairment, 13 patients (8.4%) had CTEPH and 7 PE survivors (4.5%) had chronic thromboembolic pulmonary disease (CTEPD). Echo parameters differed significantly between CTEPH/CTEPD cases and other symptomatic PE survivors. Patients with CTEPH/CTEPD also had a higher level of NT-proBNP (p = 0.022), but an NT-proBNP concentration above 125 pg/ml did not differentiate patients with CTEPH/CTEPD (p>0.05). Additionally, the proportion of patients with right bundle branch block registered on ECG was higher in the CTEPH/CTEPD group (23.5% vs. 5.8%, p = 0.034), but there were no differences in other ECG characteristics of right ventricle overload. Conclusion: Screening for CTEPH/CTEPD should be performed in patients with reduced exercise tolerance compared to the pre-PE period; it is not effective in asymptomatic PE survivors. Patients with CTEPH/CTEPD had predominantly abnormalities indicating chronic thromboembolism in the echocardiographic assessment. NT-proBNP and electrocardiographic characteristics of right ventricle overload proved to be insufficient in predicting CTEPH/CTEPD development.
Impact of COVID-19 Vaccination on Healthcare Worker Infection Rate and Outcome during SARS-CoV-2 Omicron Variant Outbreak in Hong Kong
Sze Tsing Jonpaul Zee, Lam Fung Kwok, Carmen Ka Man Kee, Ling Hiu Fung, Luke Wing Pan Luk, Chris Tsun Leung Chan, Alex Chin Pang Leung, Bella Pik Wa YU, Jhan Raymond L Hung, Kit Ying SzeTo, Queenie Wai Leng Chan, Bone Siu Fai Tang, Ada Wai Chi Lin, Edmond Shiu Kwan Ma, Koon Hung Lee, Chor Chiu Lau, Raymond Wai Hung Yung
Subject: Life Sciences, Microbiology Keywords: SARS-CoV-2; Omicron variant of concern; homologous boosting; heterologous boosting; Coro-naVac; BNT162b2; healthcare worker; return-to-work
Immune escape is observed with SARS-CoV-2 Omicron (Pango lineage B.1.1.529), the predominant circulating strain worldwide. Booster dose was shown to restore immunity against Omicron infection, however, real world data comparing mRNA (BNT162b2; Comirnaty) and inactivated vaccine (CoronaVac; Sinovac) homologous and heterologous boosting is lacking. A retrospective study was performed to compare the rate and outcome of COVID-19 in healthcare workers (HCWs) with various vaccination regime during a territory-wide Omicron outbreak in Hong Kong. During the study period 1 Feb – 31 Mar 2022, 3167 HCWs were recruited, 871 HCWs reported 746 and 183 episodes of significant household and non-household close contact. 737 HCWs acquired COVID-19 which were all clinically mild. Time dependent Cox regression showed that, comparing with 2-dose vaccination, 3-dose vaccination reduced infection risk by 31.7% and 89.3% in household contact and non-household close contact respectively. Using 2-dose BNT162b2 as reference, 2-dose CoronaVac recipient had significantly higher risk of being infected (HR 1.69 P<0.0001). Three-dose BNT162b2 (HR 0.4778 P<0.0001) and 2-dose CoronaVac + BNT162b2 booster (HR 0.4862 P=0.0157) were associated with lower risk of infection. Three-dose CoronaVac and 2-dose BNT162b2 + CoronaVac booster were not significantly different from 2-dose BNT162b2. The mean time to achieve negative RT-PCR or E gene cycle threshold 31 or above was not affected by age, number of vaccine dose taken, vaccine type and timing of the last dose. In summary, we have demonstrated lower risk of breakthrough SARS-CoV-2 infection in HCWs given BNT162b2 as booster after 2 doses of BNT162b2 or CoronaVac.
Preprint ESSAY | doi:10.20944/preprints202108.0066.v1
Impact of Spiritual Intelligence in Leadership: Some Biblical Cases
Pitshou Moleka
Subject: Arts & Humanities, Anthropology & Ethnography Keywords: spiritual intelligence; leadership; Bible; project management; supply chain; workplace spirituality; theology of work; construction; neuroscience; cognitive psychology; psychoanalysis; neurology
Spiritual intelligence had an impact on different biblical leaders, and in this text we examine some cases that serve as a sample (Joseph, Bezalel, and Daniel). In the Bible, this impact is demonstrated in the innovations introduced by Joseph in Egypt, by Bezalel as the manager of a large building project in a time of crisis, and by Daniel as a politician. It is the supreme intelligence, and leaders are invited to make a shift from rationality to spirituality. The more the leaders of organizations use spiritual intelligence, the more leaders and followers will experience satisfaction, joy, and accomplishment.
Primary School Physical Education at the Time of the COVID-19 Pandemic: Could Online Teaching Undermine Teachers' Self-Efficacy and Work Engagement?
Erica Gobbi, Maurizio Bertollo, Alessandra Colangelo, Attilio Carraro, Selenia di Fronso
Subject: Social Sciences, Accounting Keywords: Physical education; COVID-19; primary school; self-efficacy; work engagement; school closure; classroom teachers; digital competence; online teaching; lockdown
This study aimed to evaluate whether primary school classroom teachers reported changes in physical education teaching self-efficacy (SE-PE) and work engagement (WE) during the first COVID-19 wave. Six-hundred-twenty-two classroom teachers filled in an online questionnaire on SE-PE and WE, referring to before and during the lockdown, and on perceived digital competence. While controlling for perceived digital competence, a mixed between-within Repeated Measures Multivariate Analysis of Covariance (RM-MANCOVA) was performed, with a two-time (before vs. during the lockdown) and three age-categories (≤40 vs. 41-50 vs. ≥51 years) factorial design. The RM-MANCOVA revealed that perceived digital competence significantly adjusted teachers' SE-PE and WE values (p<0.001). The analysis yielded a significant multivariate main effect by time (p< 0.001) and by time × age-categories (p=0.001). Follow-up univariate ANCOVA showed significant differences by time in teachers' SE-PE (p<0.001) and WE (p < 0.001), with a reduction of both values from before to during the lockdown. A Bonferroni post hoc pairwise comparisons showed teachers' SE-PE significantly decreased in all age categories (p<0.001). The present findings confirm the importance of promoting SE-PE among primary school teachers, regardless of the crisis due to the COVID-19 pandemic. Teachers' self-efficacy and WE are essential to master the challenges of PE teaching.
Robust Dynamics of Synthetic Molecular Systems as A Consequence of Broken Symmetry
Yoshiyuki Kageyama
Subject: Physical Sciences, General & Theoretical Physics Keywords: dissipative structure; energy conversion; mechanical work; self-oscillation; collective dynamics; autonomous motion; self-replication; autocatalysis; molecular motor; molecular robot
The construction of molecular robotic-like objects that imitate living things is an important challenge for current chemists. Such molecular devices are expected to perform their duties robustly to carry out mechanical motion, process information, and make independent decisions. Dissipative self-organization plays an essential role in meeting these purposes. To produce a micro-robot that can perform the above tasks autonomously as a single entity, a function generator is required. Although many elegant review articles featuring chemical devices that mimic biological mechanical functions have been published recently, the dissipative structure, which is the minimum requirement, has not been sufficiently discussed. This article aims to show clearly that dissipative self-organization is a phenomenon involving autonomy, robustness, mechanical functions, and energy transformation. Moreover, the author details the recent experimental results of an autonomous light-driven molecular device that achieves all of these features. In addition, a chemical model of cell-amplification is also discussed to focus on the generation of hierarchical movement by dissipative self-organization. By reviewing this research, it may be perceived that mainstream approaches to synthetic chemistry have not always been appropriate. In summary, the author proposes that the integration of catalytic functions is a key issue for the creation of autonomous microarchitecture.
Preprint ESSAY | doi:10.3390/sci2020019
Helen Hansma
Subject: Keywords: origin of life; origins of life; mechanical energy; mechanochemistry; work; entropic forces; mica; biotite; Muscovite; wet/dry cycles; clay
Mechanical forces and mechanical energy are prevalent in living cells. This may be because mechanical forces and mechanical energy preceded chemical energy at life's origins. Mechanical energy is more readily available in non-living systems than the various forms of chemical energy used by living systems. Two possible prebiotic environments that might have provided mechanical energy are hot pools that experience wet/dry cycles and mica sheets as they move, open and shut, as heat pumps or in response to water movements.
Personality, Work-Life Balance, Hardiness, and Vocation: A Typology of Nurses and Nursing Values in a Special Sample of English Hospital Nurses
Christopher Bagley, Mahmoud Abubaker, Alice Sawyerr
Subject: Behavioral Sciences, Applied Psychology Keywords: nursing values; burnout; hardy personality; work-life balance; nursing stress; co-counselling; critical realism; nurse education; nurse-patient ratios
This initial report of a longitudinal study of 192 English hospital nurses has measured Nursing Values (the 6Cs of nursing); Personality, Self-Esteem and Depression; Burnout Potential; Work-Life Balance Stress; 'Hardy Personality'; and Intention to Leave Nursing. Correlational, component and cluster analysis identifies four groups: "The Soldiers" (N = 79) , with medium scores on most measures, who bravely 'soldier on' in their nursing roles, in the face of numerous financial cuts to the National Health Service, and worsening nurse-patient ratios; "Cheerful Professionals" (N = 54), coping successfully with nursing roles, and a variety of challenges, in upwardly mobile careers; "High Achievers" (N = 39), senior nurses with strong profiles of a 'hardy personality', and commitment to fundamental nursing values; "Highly Stressed, Potential Leavers" (N = 20), with indicators of significant psychological distress, and difficulty in coping with nursing role challenges. We propose a model of co-counselling and social support for this distressed group, by nurses who are coping more successfully with multiple challenges. We discuss the role of nurse educators in fostering nursing values, and developing and supporting 'hardy personality' and emotional resilience in recruits to nursing. This study is framed within the disciplinary approach of Critical Realism, which identifies the value basis for research and dialogue in developing strategies for social change.
The Human Sustainability of ICT and Management Changes: Evidence for the French Public and Private Sectors
Maëlezig Bigi, Nathalie Greenan, Sylvie Hamon-Cholet, Joseph Lanfranchi
Subject: Social Sciences, Economics Keywords: organizational changes; ICT; management tools; work experience; employee outcomes; comparison of public and private sectors; linked employer-employee survey
We investigate the human sustainability of ICT and management changes using a French linked employer-employee survey on organizational changes and computerization (COI). We approach the human sustainability of changes through the evolutions of work intensity, skill utilization and the subjective relationship to work. We compare in the private sector and the State civil service the impacts of ICT and management changes on the evolution of these three dimensions of work experience. We find that when ICT and management changes are intense, they are positively associated in the public sector with work intensification and new knowledge. In the private sector ICT and management changes increase the use of skills, but at a rate decreasing with their intensity and without favoring the accumulation of new knowledge. However, their impacts on the subjective relationship to work are much stronger, with public sector employees expressing discouragement as well as the feeling of an increased effort-reward imbalance when private sector employees become more committed. We tested that the self-selection of employees, the specific sources and paths of changes and the implementation of performance pay did not explain this divergence. We identify two partial explanations: one is related with employee turnover in the private sector, the other one with the role of trade unions. These results suggest that the human sustainability of ICT and management changes depends on their intensity and on how their implementation takes into account the institutional context of the organization.
Statistical Data Set and Data Acquisition System for Monitoring The Voltage and Frequency of The Electrical Network in An Environment Based On Python and Grafana
Javier Fernández-Morales, Juan-José González-de-la Rosa, José-María Sierra-Fernández, Manuel-Jesús Espinosa-Gavira, Olivia Florencias-Oliveros, Agustín Agüera-Pérez, and José-Carlos Palomares-Salas, Paula Remigio Carmona
Subject: Engineering, Electrical & Electronic Engineering Keywords: Grid frequency; GrafanaTM; Higher-order statistics; LabVIEWTM; Low-cost instrument; Net-work-attached storage; Power Quality; PythonTM; Statistical Signal Processing; Voltage monitoring
This article presents a unique set of voltage and current data from a public building and acquired using a hybrid measurement solution that combines Python and Grafana. The transversal purpose consists of contributing to the community with a vision of the quality of the supply more oriented to the monitoring of the state of the network, providing a more realistic vision, which allows a better understanding, and the adoption of the best decisions to achieve the efficient energy management and thus optimize the operation and maintenance of power systems. The work focuses on higher order statistical estimators that, combined with exploratory data analysis techniques, improve the characterization of the shape of the stress signal. These techniques and data, together with the acquisition and monitoring system, present a unique combination in the line of low-cost measurement solutions. It also incorporates the underlying benefit of the contribution to industrial benchmarking. The paper also includes a computational comparison between Python and LabVIEW to elicit the performance of the measurement solution.
Impact of Mass Workplace COVID-19 Rapid Testing on Health and Healthcare Resource Savings
Francesc López Seguí, Jose Maria Navarrete Duran, Albert Tuldrà Niño, Maria Sarquella, Boris Revollo, Josep Maria Llibre, Jordi Ara del Rey, Oriol Estrada Cuxart, Roger Paredes, Guillem Hernández Guillamet, Bonaventura Clotet, Josep Vidal Alaball, Patricia Such Faro
Subject: Life Sciences, Biochemistry Keywords: workplace testing; economic analysis; COVID-19; asymptomatic screening; mass testing; employee population health; return to work practices; SARS-CoV-2; surveillance; workplace mitigation
Background: The epidemiological situation generated by COVID-19 has cast into sharp relief the delicate balance between public health priorities and the economy, with businesses obliged to toe a line between employee health and continued production. In an effort to detect as many cases as possible, isolate contacts, cut transmission chains and limit the spread of the virus in the workplace, mass testing strategies have been implemented in both public health and industrial contexts to minimize the risk of disruption in activity. Objective: To evaluate the economic impact of mass workplace testing strategy as carried out by a large automotive company in Catalonia in terms of health and healthcare resource savings. Methodology: Analysis of health costs and impacts based on the estimation of mortality and morbidity avoided because of screening and the resulting savings in healthcare costs. Results: The economic impact of the mass workplace testing strategies (using both PCR and RAT tests) was approximately €10.44 per test performed or €5,575.49 per positive detected. 38% of this figure corresponds to savings derived from better use of health resources (hospital beds, ICU beds and follow-up of infected cases), while the remaining 62% corresponds to improved health rates due to avoided morbidity and mortality. In scenarios with higher positivity rates and a greater impact of the infection on health and the use of health resources, these results could be up to ten times higher (€130.24 per test performed or €69,565.59 per positive detected). Conclusion: In the context of COVID-19, preventive actions carried out by the private sector to safeguard industrial production also have concomitant public benefits in the form of savings in healthcare costs. Thus, governmental bodies need to recognize the value of implementing such strategies in private settings and facilitate them through, for example, subsidies.
An ultradian feeding schedule in rats affects metabolic gene expression in liver, brown adipose tissue and skeletal muscle with only mild effects on circadian clocks
Paul de Goede, Satish Sen, Yan Su, Ewout Foppen, Vincet-Joseph Poirel, Etienne Challet, Andries Kalsbeek
Subject: Life Sciences, Molecular Biology Keywords: Suprachiasmatic nucleus (SCN); Circadian clock; Soleus Muscle (SM); Brown adipose tissue (BAT); liver; 6-meal feeding; Respiratory exchange ratio (RER); Clock genes; metabolic genes; Shift work.
Restricted feeding is well known to affect expression profiles of both clock and metabolic genes. However, it is unknown whether these changes in metabolic gene expression result from changes in the molecular clock or in feeding behavior. Here we eliminated the daily rhythm in feeding behavior by providing 6-meals evenly distributed over the light/dark-cycle. Animals on this 6-meals-a-day feeding schedule retained the normal day/night difference in physiological parameters including body temperature and locomotor activity. The daily rhythm in respiratory exchange ratio (RER), however, was significantly phase-shifted through increased utilization of carbohydrates during the light phase and increased lipid oxidation during the dark phase. This 6-meals-a-day feeding schedule did not have a major impact on the clock gene expression rhythms in the master clock but did have mild effects on peripheral clocks. By contrast, genes involved in glucose and lipid metabolism showed differential expression. Concluding, eliminating the daily rhythm in feeding behavior in rats does not affect the master clock and only mildly affects peripheral clocks, but disturbs metabolic rhythms in liver, skeletal muscle and brown adipose tissue in a tissue-dependent manner. Thereby a clear daily rhythm in feeding behavior strongly regulates timing of peripheral metabolism, separately from circadian clocks.
2D tolerance stack-up analysis with examples
Wahyudin Syam
Oct 12, 2021 • 16 min read
Every part constituting an assembled product has tolerances assigned to the part's features. These assigned tolerances cause variations on the key characteristics (KCs) of the assembled part.
How can we know how large the variations of the KCs will be before the parts are manufactured?
This post will explain the method used to analyse tolerance/variation stack-ups in 2D. Examples are given to provide a clear understanding of how to perform 2D tolerance stack-up analysis on parts.
(You may read a separate post explaining assemblies and key characteristics.)
For 3D tolerance stack-up analysis, you can read here.
Tolerance stack-up analysis
Tolerance stack-up analysis is a method used to evaluate the cumulative effect of tolerances allocated on the features of components and to assure that the cumulative effect is acceptable to guarantee the functionality of a product after assembly processes (see this book).
Different names refer to the same tolerance stack-up analysis, such as: tolerance analysis, tolerance-chain analysis, variation stack-up analysis and assembly-chain analysis.
The main goal of tolerance analysis is to check that the dimensions and tolerances of components are correct so that after the components are assembled, the assembled product can function as desired.
Tolerance stack-up analysis is unique because this analysis is half science and half art (it depends on how we determine a tolerance chain).
Tolerance allocation and tolerance stack-up analysis always come "hand-in-hand" and cannot be separated from each other. Tolerance stack-up analysis and tolerance allocation are iterative processes that are carried out until the desired KC of an assembly is satisfied based on the geometrical and dimensional tolerances (GD&T) given on the parts.
A systematic approach on tolerance stack-up analysis is necessary to reduce the number of iteration processes to determine correct tolerance values on part's features.
The experience of design and manufacturing engineers plays an important role in the initial determination of tolerance values, because tolerance values are directly related to how difficult a part is to make and how much it costs to make the part.
The smaller the tolerance value, the tighter the tolerance and the higher the production and inspection cost, and vice versa.
Questions to be answered by performing tolerance stack-up analysis
By performing tolerance stack-up analysis, important questions regarding the assembly process and the final KC of a product can be answered before manufacturing, for example:
What is the effect on a final assembled product when the location of a hole on a bracket deviates a few millimetres from its nominal position?
How much material needs to be preserved in a machining process so that there is still material left for post-processing, for example a boring process, to obtain a smooth surface finish or high dimensional accuracy on a feature?
What is the effect if a manufactured hole is made larger than its nominal diameter?
What is the effect if the number of components constituting an assembly is increased?
Do the surfaces of the rotor and stator of a motor touch each other during operation?
How large is the gap or clearance variation between two surfaces of a part after an assembly process?
What should the optimal temperature of the assembly process of a micro-scale product be to eliminate or reduce the effect of component thermal expansion during the assembly process, so that the KC of the product can be maintained?
General steps or procedures to perform tolerance stack-up analysis
Define the critical dimension (KC) of the assembly feature of a product to be analysed, such as clearance between two plates
Construct the tolerance chain of the product, that is the chain of components that affect the assembly feature or critical dimensions
Define the model of the tolerance stack-up
Consider all possible variation sources on the model
Add all the variation sources, either with worst-case or statistical based methods (see the following section).
Tolerance analysis methods: worst-case and statistical based
In general, there are two types of methods to add all variations in tolerance stack-up analysis, that are:
Worst-case based analysis
Worst-case analysis is a tolerance analysis method that adds all maximum values of allocated tolerances. Worst-case analysis is formulated as:
Total variation = $Tol_{1} + Tol_{2} + Tol_{3} + \ldots + Tol_{n} = \sum_{i=1}^{n} Tol_{i}$
Where $Tol_{i}$ is the $i$-th tolerance in equal-bilateral format ($\pm Tol_{i}$).
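As a minimal sketch in Python, assuming a hypothetical chain of equal-bilateral tolerances (the values below are illustrative only, not taken from any drawing), the worst-case total is simply the sum of the tolerance contributions:

```python
def worst_case_total(tolerances):
    """Worst-case stack-up: add the equal-bilateral tolerances (+/- Tol_i) of every contributor."""
    return sum(abs(t) for t in tolerances)

# Hypothetical chain of +/- tolerances in mm (illustrative values only)
chain = [0.1, 0.25, 0.05, 0.2]
print(worst_case_total(chain))  # 0.6 -> the KC varies by +/- 0.6 mm at worst
```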
The properties of worst-case based analysis are:
This method represents the largest possible variation on an assembled product based on allocated tolerance values
This method assumes that all components are at their largest deviation at the same time during the assembly process (in reality, this situation rarely happens; very often one feature deviates at its maximum while the other features do not)
This method requires all parts should be inspected one-by-one to assure that there is no single part that is out of tolerance.
This method is suitable for low-volume and high-value products such as jet engines that need inspection for each engine.
This method implies that when all parts constituting an assembly are in conformance then the assembly (KC) will be assured to be in tolerance.
Statistical based analysis
Statistical-based analysis is a tolerance analysis method that combines all allocated tolerance values as a root sum of squares (RSS). This method assumes some degree of confidence in the estimated root-sum-square total variation ($2\sigma$ corresponds to approximately 95% confidence).
The formula for statistical-based analysis is:
Total variation = $k\sqrt{Tol_{1}^{2} + Tol_{2}^{2} + Tol_{3}^{2} + \ldots + Tol_{n}^{2}} = k\sqrt{\sum_{i=1}^{n} Tol_{i}^{2}}$
Where $Tol_{i}$ is the $i$-th tolerance in equal-bilateral format ($\pm Tol_{i}$) and $k$ is a safety factor that takes into account variation from components supplied by other companies. In general, $k = 1.5$ if components are supplied by other companies. If all components are made in-house, $k$ can be set to 1 (meaning the variation of the components can be controlled because the components are made in-house).
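The same hypothetical chain used in the worst-case sketch above can be run through this root-sum-square formula; the sketch below assumes those illustrative values and a selectable safety factor $k$:

```python
import math

def statistical_total(tolerances, k=1.5):
    """Statistical (root-sum-square) stack-up of equal-bilateral tolerances.

    k is the safety factor described above: about 1.5 when components come
    from external suppliers, 1.0 when all components are made in-house.
    """
    return k * math.sqrt(sum(t ** 2 for t in tolerances))

# Same hypothetical chain as in the worst-case sketch (mm)
chain = [0.1, 0.25, 0.05, 0.2]
print(statistical_total(chain, k=1.0))  # ~0.34 mm
print(statistical_total(chain, k=1.5))  # ~0.51 mm, still below the 0.6 mm worst-case total
```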
The properties of statistical based analysis are:
This method requires that the production processes of the products to be analysed are in control, that is, operating under normal process conditions. To determine whether a process is in or out of control, process capability index calculations can be performed ($C_{p}>1$); a short sketch of this check is given after this list
This method requires that there is no mean-shift in the production processes of the products ($C_{pk}>1$)
This method assumes that all dimensions of features are very likely to be at their nominal values because the production processes of the features are controlled
This method gives total variation values that are smaller than those calculated with the worst-case method. This lower total variation means that the tolerances allocated to features can be made larger (than when using the worst-case method) so that production and inspection costs can be reduced
This method assumes that inspection processes are not performed for all parts. Instead, the inspection is performed on a random sample of products from each batch
This method implies that when all parts constituting an assembly are in conformance, there is only a low probability that the assembly (KC) is out of tolerance.
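The capability check mentioned in the list above ($C_{p}>1$, $C_{pk}>1$) can be sketched with the standard definitions $C_{p}=\frac{USL-LSL}{6\sigma}$ and $C_{pk}=\frac{\min(USL-\mu,\ \mu-LSL)}{3\sigma}$; the measurement data and specification limits below are hypothetical:

```python
import statistics

def process_capability(samples, lsl, usl):
    """Standard capability indices: Cp ignores centering, Cpk penalizes a mean-shift."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical measurements of a 10.00 +/- 0.10 mm dimension (illustrative only)
data = [9.98, 10.02, 9.99, 10.01, 10.03, 9.97, 10.00, 10.02]
cp, cpk = process_capability(data, lsl=9.90, usl=10.10)
print(cp, cpk)  # both come out above 1 for this centered, low-spread sample
```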
Figure 1 shows an illustration of the effect of a mean-shift in the production process of a component. In this illustration, even though the process variation in both the red and the green situation is within the $3\sigma$ limits, the mean-shift causes the produced component to fall outside its tolerance.
Figure 1: Illustration of mean-shift of the production process of a component.
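As a rough sketch of the capability checks mentioned above ($C_p$ and $C_{pk}$ in their standard definitions; the specification limits and process statistics below are made-up numbers):

```python
# Process capability indices for a two-sided specification.
# Cp ignores centring; Cpk penalizes a mean shift (the situation in Figure 1).
def cp(usl, lsl, sigma):
    return (usl - lsl) / (6.0 * sigma)

def cpk(usl, lsl, mean, sigma):
    return min(usl - mean, mean - lsl) / (3.0 * sigma)

usl, lsl = 10.2, 9.8  # hypothetical specification limits (mm)
print(cp(usl, lsl, sigma=0.05))              # 1.33 -> process spread is capable
print(cpk(usl, lsl, mean=10.1, sigma=0.05))  # 0.67 -> the mean shift makes it incapable
```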
EXAMPLE 1: 2D tolerance analysis with "+/-" tolerance
In this example, the slot feature of a component made by a milling process is analysed. Figure 2 shows the dimensions and tolerances of the features on the component. The KC is the distance between the two slots.
In figure 2, the tolerance is shown in two formats: unequal bilateral (figure 2 left) and equal bilateral (figure 2 right) formats. Both worst-case and statistical-based analyses require tolerances to be in equal bilateral format.
Figure 2: (left) unequal bilateral format and (right) equal bilateral format.
The format of equal bilateral tolerance is
$X\pm T$
Where $X$ is the nominal dimension and $T$ is the tolerance of $X$.
To convert the format of tolerances from unequal bilateral to equal bilateral, the following procedure is used:
$X=\frac{min+max}{2}$
$T=\frac{max-min}{2}$
Where $min$ and $max$ are the minimum and maximum limits, respectively, of the toleranced dimension.
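A small sketch of this conversion, using a made-up dimension of $20^{+0.3}_{-0.1} mm$ for illustration:

```python
# Convert an unequal-bilateral tolerance (nominal +upper/-lower) into the
# equal-bilateral format X +/- T used by the stack-up formulas above.
def to_equal_bilateral(nominal, upper, lower):
    hi = nominal + upper   # maximum limit of the dimension
    lo = nominal - lower   # minimum limit of the dimension
    x = (lo + hi) / 2.0    # new nominal (mid-point)
    t = (hi - lo) / 2.0    # equal-bilateral tolerance
    return x, t

print(to_equal_bilateral(20.0, 0.3, 0.1))  # (20.1, 0.2) -> 20.1 +/- 0.2 mm
```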
To perform the tolerance stack-up analysis, we need to define the tolerance chain of the part. Figure 3 shows the KC and tolerance chain of the part in figure 2. Tolerance chain describes the propagation of accumulated tolerances from point A to point B.
Figure 3: The tolerance chain from point A to point B.
From figure 3, we can perform the tolerance analysis with both worst-case and statistical-based methods.
Table 1 shows the analysis based on the worst-case method. Only tolerance propagation in the horizontal direction is relevant here. From table 1, the result of the worst-case analysis is that the distance variation between point A and B is $5.325mm$ and the nominal distance is $50.33mm$.
Table 1: Tolerance stack-up analysis based on worst-case method (for part in figure 3).
Meanwhile, table 2 shows the analysis based on statistical method. From table 2, the total variation calculated by statistical-based method is $2.82mm$ with the same nominal dimension of $50.33mm$.
As explained before, with statistical-based method, the total variation is smaller than that calculated from worst-case method. In this case, the total variation calculated by using statistical-based method is 47% lower than that of worst-case method.
This smaller total variation implies that with the statistical-based method, we can allocate larger tolerance values than with the worst-case method. These larger tolerance values mean that the part can be manufactured and inspected at lower cost than one with tighter tolerances.
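The 47% figure quoted above can be reproduced directly from the two reported totals:

```python
worst_case = 5.325   # mm, total variation from the worst-case method
statistical = 2.82   # mm, total variation from the statistical method
reduction = 100.0 * (worst_case - statistical) / worst_case
print(f"{reduction:.0f}% lower than the worst-case total")  # 47%
```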
Table 2: Tolerance stack-up analysis based on statistical-based method (for part in figure 3).
EXAMPLE 2: 2D tolerance analysis with GD&T tolerance
In this example, an assembly consisting of two parts is used. The assembly uses not only "+/-" tolerances, but also geometric tolerances (GD&T).
Figure 4 shows the two parts used in this example. The assembly features are the two pins and holes of the parts. These pins and holes are the features that join the two parts.
Figure 4: the two components example considering GD&T.
It is worth noting that, in general, with additional geometric tolerances the total variation will be higher than in the case where only "+/-" tolerancing is used. The reason is that with geometric tolerances there are more sources of tolerance values than with "+/-" tolerances alone.
However, with GD&T, the tolerancing of the part is better than with "+/-" tolerances only, because with GD&T the representation of the parts and assembly considers the real conditions of the manufacturing process of the parts, so the results of the tolerance analysis will be significantly more accurate than an analysis that only considers "+/-" tolerances (you may read this post for more explanations).
The detailed dimensions and tolerances of the two parts are shown in figures 5 and 6. It is worth noting that in figures 5 and 6 there are datums. These datums are required when using GD&T, especially for tolerancing related features.
In figure 5, datum A (the most stable and easiest-to-access surface on a part) on part 1 is the top surface of the part and is given a flatness tolerance, which is an unrelated tolerance (the family of form tolerances).
Datums B and C are assigned to the side surfaces of datum A and are given perpendicularity tolerances with respect to datum A. All geometrical tolerances on this part refer to datums A, B and/or C.
In figure 6, datum A (again, the most stable and easiest-to-access surface on a part) on part 2 is the bottom surface of the part and is also given a flatness tolerance. Similar to part 1, the other datums B and C are assigned to the side surfaces of datum A.
It is important to note that datum A on both parts 1 and 2 has the smallest (tightest) tolerance value compared to the other tolerances. The reason is that datum A is the main reference for the other datums and tolerances and should be manufactured with the highest accuracy compared to the other features on parts 1 and 2.
Figure 5: The nominal dimension and the tolerance for part 1.
In figures 5 and 6, position tolerances are used to tolerance the pin and hole features. The reason is that the position tolerance also controls the axes of the pins and holes.
Figure 7 shows the tolerance chain of the two parts' assembly. The KC is the vertical distance between point A and point B and is also shown in figure 7. In figure 7, we can observe that there are many variation sources related to the centres of the pins and holes, which are the assembly features.
Figure 7: The tolerance chain for the two parts' assembly.
Table 3 shows the results of the 2D tolerance stack-up analysis based on the worst-case method and table 4 shows the results of the analysis based on the statistical method. As can be observed from tables 3 and 4, besides the assembly shift, other variations occur that are caused by bonus tolerances.
These bonus tolerances are obtained because the hole and pin features deviate from the maximum material condition (MMC) of the holes and pins.
The results of the tolerance analyses with the two methods are $(70\pm 2.485) mm$ for the calculation based on worst-case method and $(70\pm 1.26) mm$ for the calculation based on statistical method.
EXAMPLE 3: 2D tolerance analysis of a real product with GD&T tolerance
A belt-tensioner assembly is presented in this example. The function of a belt-tensioner is to provide force so that the tension level of a belt can be maintained.
Belt-tensioners are commonly found in various applications, such as timing-belt systems in car engines, chain tensioners on bicycles and conveyor belts in factories. The main components of a belt-tensioner are pulleys that bear the belt so that tension can be applied to it.
The distance between the pulleys and the base of the tensioner should be maintained, because the pulley should not touch the base when the tensioner is in operation.
In this example, both tolerance analysis and allocation will be presented. The 2D tolerance analyses of the parts constituting the belt-tensioner assembly use both worst-case and statistical-based methods.
The design of the belt-tensioner
The design and assembly of the belt tensioner are shown in figure 8. Meanwhile, figure 9 shows the different 2D projection views of the belt-tensioner assembly.
There are four main parts constituting the assembly: base, support, rotor and pulley. The KC or the assembly key characteristic that should be controlled is the clearance between the pulley and the base so that there is no friction between the pulley and base during operation.
Figure 8: the assembly of the belt-tensioner.
Figure 9: 2D projection views of the belt-tensioner.
The nominal dimensions and tolerances (both "+/-" and GD&T) of the parts are shown in figure 10. In figure 10, for the base, there are three datums: A, B and C. Datum A (selected because its surface is the most stable and easiest to access) is the main reference and has a flatness tolerance of $0.01 mm$, which is the tightest tolerance because datum A is the main reference for all geometric tolerances on the base.
The other geometric tolerances on the base are a profile tolerance and a position tolerance. The position tolerance controls the accuracy of the holes, which are the features used to assemble other parts to the base.
For the support (figure 10), the most stable surface for datum A is the bottom surface of the support, with a flatness tolerance of $0.01 mm$. The other datums are B and C, which have perpendicularity tolerances with respect to datum A.
Datums B and C on the support are used as the references for the position of the hole into which the rotor part is inserted. The other geometrical tolerances on the support are profile tolerances that control the surface profiles.
For the rotor (figure 10), the most stable feature for datum A is the main axis of the cylinder, because the rotor is cylindrical and does not have a large stable or flat surface. Geometric tolerances applied to the rotor are a cylindricity tolerance, to control the cylinder shape including the axis, and position tolerances, to control the axis of the middle cylinder.
Finally, the dimensions and tolerances for the pulley (figure 10) are similar to those of the rotor, as both the pulley and the rotor are cylindrical parts. Datum A is the internal (middle) cylinder of the pulley. Position tolerances are applied to control the axis of the big cylinder so that it is coaxial with the middle cylinder (datum A).
Figure 10: the nominal dimension and tolerances of the base, support, rotor and pulley.
Tolerance chain of the belt-tensioner
The tolerance chain of the belt-tensioner is shown in figure 11. In figure 11, the KC is the clearance or gap between the pulley and the base, as mentioned above, and is shown as a green arrow.
Since the analysis is 2D, we only consider variation in the horizontal and vertical directions. For this case specifically, only variation chains in the vertical direction are relevant, since the clearance (KC) is in the vertical direction.
The tolerance chain is shown as red arrows and flows through the features affecting the KC. The determination of this tolerance chain is half science and half art, because very often there are several ways to define the tolerance chain of an assembly.
Figure 11: the tolerance chain of the belt-tensioner assembly.
Tolerance analysis and allocation
From figure 11, the tolerance chain is A—B—C—D—E—F—G—H—I—J—K—L—M—N—O. Points B, C, E and H are nominal dimensions, so their variations are zero. Points A, D, F, G, I, J, K, L, M, N and O arise from dimensional and geometrical tolerances, so their mean values are zero.
Table 5 shows the detailed calculation of the mean ($X_{n}$) and variation ($T_{x}$) for each point on the tolerance chain in figure 11. In table 5, the mean and variation value for each point on the chain are presented. Note that the tolerance format is in equal-bilateral format.
Table 5: Detailed calculation for the mean and variation values of each point on the tolerance chain.
As can be seen in table 5, the nominal values ($X_{n}$) for all geometrical tolerances (at points A, D, F, G, I, J, K, L, M, N) are always zero, because in the perfect condition all geometrical variations are zero.
The nominal value of the KC (the clearance between the pulley and base) can be calculated by summing all the nominal values ($X_{n}$) for every feature (points) in the tolerance chain. The nominal value of the KC is:
$X_{n}=A-B-C+D+E+F+G+H+I+J+K+L+M+N-O$
$ X_{n}=0-6-15+0+15+0+0+80+0+0+0+0+0+0-66.5=7.5 mm$
The next step is to calculate the total variation with respect to the nominal clearance.
Worst-case method
With this method, the total variation is calculated by summing the absolute values of all $T_{x}$. Also with this method, all manufactured parts (base, support, pulley and rotor) should be inspected to assure that all parts are in tolerance.
The total variation, based on worst-case, due to the given tolerances is (based on figure 11 and table 5):
$T_{x}=T_{XA}+ T_{XB}+ T_{XC}+ T_{XD}+ T_{XE}+…+ T_{XO}$
$ T_{x}=0.05+0+0+0.05+\dots+0.35+0.2=0.925$
Finally, the nominal dimension and total variation of the KC = $X_{n}\pm T_{x}=7.5\pm 0.925$.
That is, the KC will have values ranging between $6.575mm-8.425mm$.
Statistical-based method
For this analysis, the total variation is calculated by root-sum-squaring all the $T_{x}$ values. The safety factor in this analysis is 1.5, since some parts are made by other manufacturers.
Then, the total variation of the KC is calculated as (based on figure 11 and table 5):
$T_{x}=1.5\sqrt{T_{XA}^2+ T_{XB}^2+ T_{XC}^2+ T_{XD}^2+…+ T_{XO}^2}$
$ T_{x}=1.5\sqrt{0.05^2+0^2+0^2+0.05^2+…+0.2^2}$
$ T_{x}=1.5\sqrt{0.1931}=0.66$
Finally, the nominal dimension and total variation of the KC = $X_{n}\pm T_{x}=7.5\pm 0.66$.
That is, the KC will have values ranging between $6.84mm-8.16mm$.
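The chain evaluation above can also be scripted. The sketch below uses the signs and a few of the nominal values from the chain calculation above, but the per-point tolerances are shortened placeholders, not the actual Table 5 entries:

```python
import math

# Each chain point carries a sign (+1/-1, as in X_n = A - B - C + D + ...),
# a nominal value X_n and an equal-bilateral variation T_x.
chain = [
    (+1, 0.0, 0.05),   # A: geometric tolerance, nominal 0 (placeholder T_x)
    (-1, 6.0, 0.0),    # B: nominal dimension, no variation
    (-1, 15.0, 0.0),   # C: nominal dimension, no variation
    (+1, 15.0, 0.0),   # E: nominal dimension, no variation
    (+1, 80.0, 0.0),   # H: nominal dimension, no variation
    (+1, 0.0, 0.10),   # placeholder geometric tolerance point
    (-1, 66.5, 0.20),  # O: dimension with a placeholder tolerance
]

nominal = sum(sign * x for sign, x, _ in chain)
worst_case = sum(abs(t) for _, _, t in chain)
statistical = 1.5 * math.sqrt(sum(t ** 2 for _, _, t in chain))

print(f"KC = {nominal} +/- {worst_case} mm (worst case)")
print(f"KC = {nominal} +/- {statistical:.2f} mm (statistical, k = 1.5)")
```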
Note that with this statistical-based method, we do not need to inspect all parts to verify that they are within their tolerances. Instead, we can sample parts from a production batch and inspect only those.
Sampling inspection can save significant time in the production chain.
But remember, there will be a small probability that the KC is not met after assembly, because there is a chance that some defective parts pass the inspection.
Tolerance stack-up analysis is a powerful method to estimate the final variation on the key characteristic (KC) of an assembly before manufacturing.
With the capability of estimating the final variation, design correction and improvement can be done at early design stages and can significantly save product development costs.
Tolerance stack-up analysis cannot be separated from tolerance allocation, because analysis and allocation form an iterative process that is repeated until the variation of the final KC is below a threshold.
In this post we considered tolerance stack-up analysis in 2D; that is, variation is considered in a single plane and rotational variations are not considered.
We sell tutorials (containing PDF files, MATLAB scripts and CAD files) about 3D tolerance stack-up analysis based on statistical method (Monte-Carlo/MC Simulation).
Maximum acceleration to prevent rotation of body when held by a gripper
MAIN QUESTION
A body is held in place by a friction force. The object that generates this friction accelerates upwards. Because the acceleration does not act in line with the centre of mass of the body, the body starts to rotate around its contact point; this happens because the moment of friction is not big enough to keep the body in place.
The main question is: how do I calculate the maximum linear acceleration that prevents any angular displacement of the body? (The acceleration is applied to the object that holds the body through the friction.)
Some sub-questions that would help me are:
How to calculate the moment of friction?
How to translate the linear acceleration to a moment or angular momentum when it accelerates from the center of the friction connection?
Is it correct to use the center of the friction connection as the "origin" of the calculation? (Moments, inertia and angular acceleration would be calculated relative to this "origin")
Below is the previous formulation of the question. It gives more background information and an example situation to clarify the question.
I am currently writing software that automatically calculates the maximum linear acceleration of a 6-axis robot when holding a product. The robot uses a gripper to pick and place products. The product may not displace or rotate relative to the gripper.
In the first place I assumed it would be as simple as: $\sum F=m*a$. But this would only suffice if the product is picked at its center of mass. When the product is not picked at its center of mass, the product starts to rotate a bit when the robot accelerates or decelerates (an example situation can be seen in figure 1).
Thoughts so far...
Assumed is that the clamping force is evenly spread over the contact area between the product and gripper. The gravitational force generates a moment relative to the center of the contact area. The friction force is spread over the area, which would generate an opposing moment. The acceleration of the gripper would generate an angular momentum.
These moments, together with the differential of the momentum, would then be put into: $$\sum M + \dfrac{\Delta L}{\Delta t} = I * \alpha$$ Then assume $\alpha = 0 \hspace{2mm}rad/s^2$ and calculate the maximal moment, which would then need to be translated back to a linear acceleration ...
No idea if I am heading in the right direction. I cannot find any examples where a clamped body rotates around the center of the friction contact area due to linear motion.
homework-and-exercises angular-momentum mass rotational-dynamics friction
$\begingroup$ The question in its present state may be marked as off-topic. Can you rephrase it to refer to the concepts you're trying to apply to the problem? $\endgroup$ – user191954 Jun 21 '18 at 12:05
$\begingroup$ Thanks for the tip. Thought it was better to describe the problem directly in an example situation, because I find it hard to explain the problem without it. Hopefully, it is better now... $\endgroup$ – Jasper Jun 21 '18 at 13:01
$\begingroup$ It's better, particularly because your new first paragraph sets up a somewhat general situation. But the question " how to calculate the maximum linear acceleration of the object..." is still a bit sticky. I don't know how much this'll help, but maybe you could try asking about the factors to be considered while calculating the acceleration, or the causes of different forces that are experienced by the object... $\endgroup$ – user191954 Jun 21 '18 at 13:07
$\begingroup$ Edited the question again. Hopefully the main question is better described. Added some sub-questions which would help me in the right direction. $\endgroup$ – Jasper Jun 21 '18 at 13:50
$\begingroup$ If your interest is purely practical, you can always increase the tangential frictional force $F$ and torque $M$ applied by the gripper by increasing the normal gripping force $N$. From a theoretical point of view the problem is much more difficult. Sliding friction is dealt with in the article Frictional Coupling between Sliding and Spinning Motion. In that case the local direction of friction can be found from the local direction of relative motion, and where there is relative motion the friction force is defined by $F=\mu N$. ... $\endgroup$ – sammy gerbil Jun 22 '18 at 13:48
Your intuition is correct. The problem is as "simple" as applying Newton's 2nd Law for linear and rotational motions : $$\vec{F}-m\vec{g}=m\vec{a}$$ $$M-mgx=I\alpha$$ where $F$ is the resultant static friction force, $M$ is the moment of the friction forces, and $x$ is the horizontal distance of the centre of the friction force from the centre of mass of the block.
$\vec{F}$ can be deduced quite easily from the 1st equation. If there were no torques involved we could then assume that $F$ is spread uniformly over the gripping surface, and is everywhere in the same direction. Then we could apply $F\le \mu N$ to calculate the maximum acceleration for a given gripping force $N$.
Likewise if the applied forces amount to a pure torque, and we know the centre of rotation so that we can find $x$, then we could again assume that this torque is spread uniformly [1] across the face of the gripper but in concentric rings. For a circular gripper we would find the maximum moment of friction is $M \le \frac23 \mu NR$ (see Single Friction Disk Clutch). Even for a rectangular or irregular shaped gripper, it would not be difficult to integrate over its surface to find the maximum torque it can supply by friction.
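For the uncoupled case sketched above (resultant friction bounded by $F\le\mu N$ and, for a circular gripper pad, moment bounded by $M\le\frac{2}{3}\mu NR$), a rough numerical estimate of the acceleration limits can be written as below. All the numbers are made-up inputs, and the decoupling itself is only an approximation, as the rest of the answer explains:

```python
# Decoupled estimate of the maximum upward acceleration before the part
# either slides (force limit) or rotates (moment limit) in the gripper.
# Ignores the coupling between sliding and spinning friction, so it is
# only an upper-bound sketch, not an exact solution.
MU = 0.6       # friction coefficient (assumed)
N = 200.0      # normal gripping force, N (assumed)
R = 0.02       # radius of a circular gripper pad, m (assumed)
X = 0.05       # horizontal offset of the centre of mass, m (assumed)
M_PART = 1.5   # mass of the held part, kg (assumed)
G = 9.81

# Sliding limit: m*(g + a) <= mu*N
a_slide = MU * N / M_PART - G

# Rotation limit: m*(g + a)*x <= (2/3)*mu*N*R for a circular pad
a_rotate = (2.0 / 3.0) * MU * N * R / (M_PART * X) - G

print(f"a_max (sliding)  = {a_slide:.1f} m/s^2")
print(f"a_max (rotation) = {a_rotate:.1f} m/s^2")
print(f"usable limit     = {min(a_slide, a_rotate):.1f} m/s^2")
```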
The difficulty comes when trying to combine translational and rotational forces acting on the gripper. We cannot deal with those forces separately because the friction forces which oppose them are not independent, they are coupled : the resultant friction force at each point on the gripper cannot exceed the static limit.
This situation is addressed for sliding friction in Rotational physics of a playing card. It persists for static friction because the direction of friction can still vary from point to point while its magnitude is limited. In the cited article Frictional coupling between sliding and spinning motion the analysis involves elliptic integrals.
This is not an easy problem to solve.
Note [1] :
How can friction act uniformly across the gripper if a torque is applied to it? Won't points furthest from the axis bear the most load?
Yes, outer points of the gripper furthest from the axis of rotation would have to supply the most torque at first. This is because microscopically the amount of friction force increases with shear displacement until there is slippage, then it is constant (assuming kinetic friction is the same as static).
Outer points reach the limit of friction first then slip microscopically but continue to supply maximum friction. Inner points progressively closer to the axis gradually reach the limit of friction but slip less. Microscopic slippage continues at each radius inwards until the total friction torque equals the applied torque. Friction has reached maximum at every radius except for a disk at the centre, but this radius will be small if the amount of slippage at maximum static friction is small.
The same mechanism also works when applied forces are not purely rotational. There will be microscopic slippage at some points to enable other points to provide more friction. This continues until either (i) the total friction force equals the applied forces and torques, or (ii) every point has reached the static frictional limit so that no more frictional force or torque can be supplied.
sammy gerbil
As long as the acting forces do not pass through the barycenter of the object being manipulated, an unbalanced torque will arise. This torque will rotate the manipulated object whenever the friction forces, and therefore the moment of friction, are not enough to counteract the moment generated by the accelerations with which the manipulator moves the object. Therefore it is primarily a friction problem.
Cesareo
$\begingroup$ Thanks for your response. I know that the moment of friction limits the acceleration. But my question is how I can calculate the maximum acceleration if the friction force is already defined. How can I calculate the moment of friction? And how to translate the acceleration to an angular momentum if the object accelerates from the center of the moment of friction? $\endgroup$ – Jasper Jun 21 '18 at 12:19
$\begingroup$ This may be more suitable as a comment. $\endgroup$ – user191954 Jun 25 '18 at 3:55
$\begingroup$ @Chair The times I answered the questions explicitly, I was advised that it should be shorter. I'm really not understanding the criteria adopted for a good response. $\endgroup$ – Cesareo Jun 25 '18 at 10:49
$\begingroup$ The length usually isn't a solid standard: sometimes a short answer is appropriate, sometimes a long description is needed. More frequently, all the answers to the same question will be of extremely different lengths, each providing a different degree of specificity. The thing is that in this answer, you don't answer the explicit question: How do you calculate the moment of friction. Instead, you gave tips about how to think of the problem, along with a description of what's happening, which is hence more suitable as a comment. $\endgroup$ – user191954 Jun 25 '18 at 11:26
$\begingroup$ More generally, regarding the criteria for a good response, you can have any length, just think twice before posting a single-sentence answer: it's very rarely appropriate. Conversely, if your answer is really, really long, you may want to include some bold text in the beginning which gives a short, 3-sentence summary of what you're trying to explain. Feel free to look at Physics SE meta, there's tonnes of advice there. $\endgroup$ – user191954 Jun 25 '18 at 11:28
Base change for semiorthogonal decompositions
Part of: Algebraic geometry: Foundations Abelian categories
Published online by Cambridge University Press: 15 February 2011
Alexander Kuznetsov*
Algebra Section, Steklov Mathematical Institute, 8 Gubkin str., Moscow 119991, Russia (email: [email protected]) The Poncelet Laboratory, Independent University of Moscow, 119002, Bolshoy, Vlasyevskiy Pereulok 11, Moscow, Russia
Let $X$ be an algebraic variety over a base scheme $S$ and $\phi:T\to S$ a base change. Given an admissible subcategory $\mathcal{A}$ in $\mathcal{D}^b(X)$, the bounded derived category of coherent sheaves on $X$, we construct under some technical conditions an admissible subcategory $\mathcal{A}_T$ in $\mathcal{D}^b(X\times_S T)$, called the base change of $\mathcal{A}$, in such a way that the following base change theorem holds: if a semiorthogonal decomposition of $\mathcal{D}^b(X)$ is given, then the base changes of its components form a semiorthogonal decomposition of $\mathcal{D}^b(X\times_S T)$. As an intermediate step, we construct a compatible system of semiorthogonal decompositions of the unbounded derived category of quasicoherent sheaves on $X$ and of the category of perfect complexes on $X$. As an application, we prove that the projection functors of a semiorthogonal decomposition are kernel functors.
Keywords: base change, semiorthogonal decomposition
MSC classification
Secondary: 14A22: Noncommutative algebraic geometry 18E30: Derived categories, triangulated categories
Compositio Mathematica , Volume 147 , Issue 3 , May 2011 , pp. 852 - 876
DOI: https://doi.org/10.1112/S0010437X10005166
Copyright © Foundation Compositio Mathematica 2011
Efficacy of Thai indigenous entomopathogenic nematodes for controlling fall armyworm, Spodoptera frugiperda (J. E. Smith) (Lepidoptera: Noctuidae)
Wandee Wattanachaiyingcharoen ORCID: orcid.org/0000-0003-4524-36231,
Ongpo Lepcha3,
Apichat Vitta2,4 &
Det Wattanachaiyingcharoen3
Egyptian Journal of Biological Pest Control volume 31, Article number: 149 (2021)
Under laboratory and greenhouse conditions, the virulence of 2 isolates of Thai indigenous entomopathogenic nematodes (EPNs) in controlling the fall armyworm (FAW), Spodoptera frugiperda (J. E. Smith) (Lepidoptera: Noctuidae), was demonstrated. Six EPN dosages were tested against 2 larval instars of FAW under laboratory conditions, while 2 different concentrations were tested under greenhouse conditions.
The results of the laboratory experiment revealed that the 2 Thai indigenous EPN isolates (Heterorhabditis indica isolate AUT 13.2 and Steinernema siamkayai isolate APL 12.3) were efficient against the 2nd and 5th larval instars of FAW. Six different nematode concentrations (50, 100, 150, 200, 250 and 300 infective juveniles (IJs) ml−1) were evaluated, and all proved effective, with mortality increasing with concentration. Inoculated larvae in the 2nd instar were more vulnerable than those in the 5th instar. H. indica isolate AUT 13.2 was more destructive than S. siamkayai isolate APL 12.3. The greatest mortality rate of 2nd instar larvae was 83% when H. indica AUT 13.2 was applied at a concentration of 250 IJs ml−1, and 68% when S. siamkayai APL 12.3 was used at a concentration of 300 IJs ml−1. At 250 IJs ml−1, the highest mortality rate of the 5th instar larvae was 45% for H. indica AUT 13.2 and 33% for S. siamkayai APL 12.3. The most sensitive stage of FAW and the concentration that caused the highest mortality were used to set the concentration and volume of nematode suspension evaluated under greenhouse conditions. The concentrations of both indigenous nematode isolates were 20,000 and 50,000 IJs ml−1 per pot, and the results showed that the mortality rates were lower than in the laboratory. The FAW mortality rate was highest (58%) for the nematode H. indica isolate AUT 13.2, against 45% for S. siamkayai isolate APL 12.3, at the 50,000 IJs ml−1 concentration.
The study revealed that the 2 Thai indigenous EPN isolates (H. indica isolate AUT 13.2 and S. siamkayai isolate APL 12.3) were capable of controlling the FAW in both laboratory and greenhouse environments. The 2 Thai EPNs therefore have potential as biological control agents.
In Thailand, maize forms an essential part of the food and feed system and contributes significantly to income generation for rural households (Ekasingh et al. 2004). Fall armyworm (FAW), Spodoptera frugiperda (J.E. Smith) (Lepidoptera: Noctuidae), is indigenous to tropical and subtropical regions of the Americas (Day et al. 2017). This pest had not been reported outside the Americas until 2016, when it was reported for the first time in Africa (Goergen et al. 2016). Since its invasion, the pest has been causing considerable economic losses in Africa (De Groote et al. 2020). It was first detected in Thailand in December 2018, in a few sub-districts of Kanchanaburi and Tak provinces in the west of the country, near the Myanmar border (IPPC 2018). This insect is a polyphagous pest that has been documented to affect over 353 plant species from 76 plant families (Montezano et al. 2018). It is a significant economic pest of maize and other Poaceae crops (Silva et al. 2017). Younger larvae feed on the leaf tissue, while older ones cause severe defoliation. The pest also burrows into the growing point (bud, whorl, etc.), leading to "dead heart", wilting and death of the unfurled leaves (Day et al. 2017).
Pesticides are currently being utilized to control and minimize the spread of FAW in maize crops. Chemical pesticides may reduce insect pest attacks in the short term, but they may not be sustainable in the long run. Many studies have shown insecticide-resistant populations of FAW (Yu et al. 2003). Furthermore, synthetic chemical pesticides have the potential to harm both humans and the environment (Carvalho 2017). As a result, a new method of controlling FAW is required to decrease the damage caused by this destructive insect pest. Entomopathogens, such as bacteria, nematodes, fungi and viruses, are an essential option for the management of diverse arthropod species and are appropriate approaches for long-term sustainability of the ecosystem (Charnley and Collins 2007). Entomopathogenic nematodes (EPNs) are roundworms that live as parasites in insects (Lacey and Georgis 2012). Out of 23 nematode families, Steinernematidae and Heterorhabditidae are the two most prevalent families studied as biological agents (Lacey and Georgis 2012). The two most important genera within these families are Steinernema and Heterorhabditis. All Steinernema species are symbiotic with Xenorhabdus bacteria, whereas all Heterorhabditis species are symbiotic with Photorhabdus bacteria (Boemare et al. 1993). These symbiotic bacteria are vital in the death of insect hosts (Hominick et al. 1996). EPNs have a wide host range, making them a viable option for biological insect pest control (Arthurs et al. 2004). EPNs can be used solely as a biological control agent or combined with other biocontrol agents, such as entomopathogenic bacteria and fungi, in order to improve their efficacy in controlling insect pests (Laznik et al. 2012).
In the Americas and freshly invaded areas like Africa, FAW larvae have been found to be vulnerable to EPN species (Lacey and Georgis 2012). Noosidum et al. (2010) documented the existence of EPNs in Thailand. Other researchers have recently discovered several species of indigenous entomopathogenic nematodes in Thailand (Thanwisai et al. 2021).
The goal of this study was to test the efficacy of several Thai indigenous EPNs isolates against the FAW in both laboratory and greenhouse environments.
Fall armyworm collection and rearing
Fall armyworm larvae were collected from maize fields in Thailand's Phitsanulok (16°49′29.32" N, 100°15′30.89" E, elevation: 50 msl.), Sukhothai (17° 19′1.6608" N, 9°33′42.12" E, elevation: 93.54 msl.) and Uttaradit provinces (17°54′1.4537" N, 100°30′48.89" E, elevation: 108 msl.). The larvae were identified and confirmed according to the identification procedures provided by Visser (2017). Larvae were placed in a 20-ml plastic container and fed on fresh maize leaves grown without the use of chemical pesticides. The pupae were collected and placed in a plastic container inside a rearing cage (30 × 30 × 30 cm) once they had developed. Adults were fed on a 10% sugar solution in a rearing cage when they emerged, and then transferred for the experiments. For egg-laying, a young plant was dipped in a glass of water and placed inside the chamber. The larvae were transferred for the experiments when they had reached the relevant stages.
Entomopathogenic nematodes collection and multiplication
The study used 2 EPN isolates, Heterorhabditis indica isolate AUT 13.2 and Steinernema siamkayai isolate APL 12.3. These EPNs were collected from agricultural areas: H. indica isolate AUT13.2 from a mango orchard (17°26′13.4" N, 100°05′40.4" E, elevation 57 msl.) and S. siamkayai isolate APL12.3 from a vegetable garden (17°02′12.8" N, 100°10′01.0" E, elevation 48 msl.). Final instar larvae of the greater wax moth, Galleria mellonella L., were used to multiply the EPNs. The White trap technique (White 1927; Kaya and Stock 1997) was utilized to obtain infective juveniles (IJs) of the EPNs from dead larvae for use in the experiments.
Testing of the efficacy of EPNs in the laboratory
Experimental design and application of the EPNs
The experiment was arranged in a completely randomized design (CRD), with 6 treatments consisting of 6 different densities of IJ nematode suspension, namely 50, 100, 150, 200, 250 and 300 IJs ml−1, and a control consisting of the same volume of sterilized distilled water. The 2nd and 5th larval instars were tested separately for each EPN isolate and nematode suspension. The tests were repeated 4 times (4 replications), with 10 larvae per replication. The larvae were placed individually in a 5.5-cm-diameter Petri dish with a detached maize leaf as food. One milliliter of nematode suspension containing the respective density of IJs was applied topically to the larvae and maize leaf in each treatment, with a similar application in the control treatment. The food was changed daily and the larvae were kept at 25 °C under a 14:10 (light:dark) photoperiod with 60 ± 10% relative humidity in the insect rearing room.
Assessment of mortality
Mortality of the larvae was assessed 48 h after inoculation, and the observations were recorded for 10 days. When larvae failed to respond to the forceps' touch, they were marked as dead. The dead larvae were kept separately to observe emergence of nematodes from the cadaver using the White trap technique. Only those larvae that showed evidence of nematode emergence were recorded as nematode killed. The following formula was used to compute the 10-day accumulated mortality percentage of the tested samples.
$$Observed\; mortality = \frac{Total \;number \;of\; dead\; larvae}{{Total \;exposed \;larvae}} \times 100$$
The tests were rejected if the control treatment mortality was more than 20%. When control mortality was less than 20%, Abbott's (1925) formula was used to correct observed mortality, as shown below.
$$Corrected\;Mortality = \frac{\% test\; mortality - \% control\; mortality}{{100 - \% control\; mortality}} \times 100$$
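A minimal sketch of these two calculations, using hypothetical larval counts rather than data from this study:

```python
# Observed mortality and Abbott-corrected mortality, as defined above.
def observed_mortality(dead, exposed):
    return 100.0 * dead / exposed

def abbott_corrected(test_pct, control_pct):
    # Only applied when control mortality is below 20%
    return 100.0 * (test_pct - control_pct) / (100.0 - control_pct)

# Hypothetical counts: 8 of 10 treated larvae dead, 1 of 10 controls dead
test = observed_mortality(8, 10)        # 80.0 %
control = observed_mortality(1, 10)     # 10.0 %
print(abbott_corrected(test, control))  # ~77.8 %
```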
Testing of the efficacy of EPNs in the greenhouse
Planting maize in greenhouse and release of the FAW
A super-sweet corn maize variety was grown in earthen pots (50 cm in diameter), with planting soil filling about two-thirds of each pot's capacity. Initially, 10 maize seeds were sown in each pot and watered daily, and the seedlings were thinned to 5 per pot after germination. The seedlings were ready for testing when they reached the stage of 4 fully emerged leaves, which took around 2 weeks after emergence. Before the larvae reached their 2nd instar, 10 fully grown 1st instar larvae were manually placed onto the maize plants in each pot together with some detached maize leaves. Once infested, the pots were caged and covered on the sides and top with insect mesh.
The greenhouse experiments were carried out using a randomized complete block design (RCBD). There were 3 treatments: 2 different densities of EPNs, 20,000 IJs ml−1 and 50,000 IJs ml−1, and sterile water as a control. Each treatment was carried out 8 times. Because the FAW was in its dispersal stage, only 2nd instar larvae were chosen for the study. The nematode suspension was applied 24 h after the FAW larvae were released. Using a hand sprayer, each pot was sprayed with 100 ml of the specified density of EPN suspension directly onto the entire foliage. EPNs were applied 3 times, with a two-day interval between applications. In the greenhouse experiment, larval mortality was assessed following a procedure similar to that used in the laboratory experiment.
Mortality percentages of the FAW caused by the EPNs under both laboratory and greenhouse conditions were normalized using a square-root transformation. The number of dead larvae in the different treatments (EPN densities) was subjected to analysis of variance (ANOVA). For the laboratory data, mean comparison was made using Duncan's multiple range test (DMRT) to find significant differences between treatments (p ≤ 0.05). For the greenhouse experiment, the Tukey test was carried out to determine significant differences between treatment means (p ≤ 0.05).
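A rough sketch of this analysis pipeline (the mortality values are made-up placeholders, scipy and statsmodels are assumed to be available, and this is not the software actually used in the study):

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Made-up mortality percentages, 4 replications per treatment
treatments = {
    "50 IJs/ml": [25.0, 30.0, 27.5, 22.5],
    "150 IJs/ml": [50.0, 55.0, 47.5, 52.5],
    "300 IJs/ml": [80.0, 82.5, 77.5, 85.0],
}

# Square-root transformation to normalize the percentages
transformed = {k: np.sqrt(v) for k, v in treatments.items()}

# One-way ANOVA across treatments
f_stat, p_value = f_oneway(*transformed.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey HSD pairwise comparison (as used for the greenhouse data)
values = np.concatenate(list(transformed.values()))
labels = np.repeat(list(transformed.keys()), [len(v) for v in transformed.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```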
Efficacy of EPNs in the laboratory
In the laboratory experiments, the efficacy of the EPNs H. indica isolate AUT 13.2 and S. siamkayai isolate APL 12.3 against the 2nd and 5th larval instars of FAW was determined. Both isolates were found to be capable of infecting and killing larvae at the various EPN densities used in the investigation. Mortality of the larvae was observed after 48 h and was influenced by the EPN isolate applied and the density of IJs used. One-way ANOVA revealed a statistically significant difference between treatments for both isolates, with F(6,21) = 118, p = 0.05 for H. indica isolate AUT 13.2 and F(6,21) = 102.7, p = 0.05 for S. siamkayai isolate APL 12.3. Over the 10 days of exposure, H. indica isolate AUT 13.2 and S. siamkayai isolate APL 12.3 caused 27.5 and 17.5% mortality at 50 IJs ml−1, respectively (Table 1). The mortality rate rose as the concentration was raised. When applied at a concentration of 250 IJs ml−1, H. indica isolate AUT 13.2 killed 82.5% of the FAW larvae, which was not significantly different from the mortality at 300 IJs ml−1. However, S. siamkayai isolate APL 12.3 delivered its highest mortality percentage of 67.5% at the highest concentration of 300 IJs ml−1, which was not significantly different from the mortality at 250 IJs ml−1.
Table 1 The mean of accumulated mortality percentage of the 2nd instar larvae of FAW at 10 days after application of the EPNs Heterorhabditis indica isolate AUT 13.2 and Steinernema siamkayai isolate APL12.3 in the laboratory condition
The efficacy of H. indica isolate AUT 13.2 and S. siamkayai isolate APL 12.3 against the 5th instar larvae of FAW showed a high mortality percentage when the larvae were treated with a high density of the EPNs (Table 2). The mortality percentage differed significantly between treatments as determined by one-way ANOVA (F(6,21) = 13.75, p = 0.05 for H. indica isolate AUT 13.2 and F = 39.25; df = 6, 21; p = 0.05 for S. siamkayai APL 12.3, respectively). The highest mortality of the FAW larvae (45.0%) occurred when they were infected by the EPN H. indica isolate AUT 13.2 at a density of 250 IJs ml−1, which was not significantly different from the application at 300 IJs ml−1. The EPN S. siamkayai isolate APL 12.3 caused its highest mortality percentage at 250 IJs ml−1 (32.5%), which was not different from that at 300 IJs ml−1 (42.5%). However, compared to the 2nd instar larvae, this larval stage had a lower mortality rate.
Table 2 Mean of accumulated mortality percentage of the fifth instar larvae of FAW at 10 days after application of the EPNs Heterorhabditis indica isolate AUT 13.2 and Steinernema siamkayai isolate APL12.3 in the laboratory condition
Efficacy of EPNs in the greenhouse
The efficacy of H. indica isolate AUT 13.2 and S. siamkayai isolate APL 12.3 on the mortality of 2nd instar FAW larvae under greenhouse conditions was determined by comparing the mean percentage mortality among the treatments at the end of 10 days. At the EPN density of 20,000 IJs ml−1, the results 10 days after application showed no significant difference between the two EPN isolates (Table 3). The EPN H. indica isolate AUT 13.2 caused a mortality rate of 37.92%, while S. siamkayai isolate APL 12.3 killed only 28.75% (F = 28.63; df = 2, 21; p = 0.000). Mortality percentages of the FAW rose when the EPN density was increased to 50,000 IJs ml−1, and a significant difference between the two isolates was observed. H. indica isolate AUT 13.2 had a higher mortality rate (57.78%) compared to 44.72% for S. siamkayai isolate APL 12.3 (F = 64.09; df = 2, 21; p = 0.05).
Table 3 The mean of accumulated mortality percentage of the 2nd instar larvae of FAW at 10 days after application of the EPNs Heterorhabditis indica isolate AUT 13.2 and Steinernema siamkayai isolate APL12.3 in the greenhouse condition
Entomopathogenic nematodes (EPNs) have already been found to be effective against a wide range of insect pests. They have been widely utilized to manage pests both below and above ground (Bhairavi et al. 2021). Many researchers have previously reported the FAW's susceptibility to EPNs (Caccia et al. 2014). However, the efficacy of EPNs is governed by their virulence and their capability to find their hosts (Cutler and Webster 2003). In Thailand, several isolates of indigenous EPNs have been documented (Thanwisai et al. 2021).
This is the first study evaluating the efficacy of the Thai indigenous EPNs H. indica isolate AUT 13.2 and S. siamkayai isolate APL 12.3 against the FAW. The results demonstrated that both isolates were capable of infecting and killing the 2nd and 5th larval instars of FAW under laboratory conditions. Both isolates showed varied efficacy depending on the concentration used. The isolate H. indica AUT 13.2 caused higher larval mortality than S. siamkayai APL 12.3. With an increase in the density of infective juveniles per milliliter of pre-sterilized distilled water, there was a proportional increase in FAW larval mortality; however, increasing the concentration above 250 IJs ml−1 produced no significant difference. The 2nd instar larvae were particularly sensitive to both EPN isolates examined.
Environment, host stage and innate characteristics of the nematodes, such as the capability to find the host and the presence of symbiotic bacteria, all influence the potential of EPNs (Batalla-Carrera et al. 2010). The differences in efficiency observed in the laboratory, where most settings are controlled and uniform, may be attributable to the ability of the nematode isolates to infect the host and to the efficacy of the symbiotic bacteria responsible for killing the host. The presence of the symbiotic bacterium Photorhabdus in H. indica isolate AUT 13.2, as opposed to Xenorhabdus in S. siamkayai isolate APL 12.3, may explain the higher mortality caused by H. indica isolate AUT 13.2. It therefore appears that the greater ability of H. indica isolate AUT 13.2 to kill FAW larvae compared with S. siamkayai isolate APL 12.3 was most likely owing to the symbiotic bacterium Photorhabdus carried by this EPN. The mortality of FAW larvae has been demonstrated to vary in previous studies using different strains and isolates of Heterorhabditis spp. and Steinernema spp. Acharya et al. (2020) investigated the effectiveness of 7 EPN species and found that H. indica, S. carpocapsae, S. arenarium and S. longicaudum were highly virulent against various stages of the FAW larvae.
In the present study, the 2nd instar FAW larvae were more vulnerable to both EPN isolates than the 5th instar larvae. Some authors have reported differences in susceptibility between different stages of the FAW. For example, Acharya et al. (2020) reported that younger larvae (e.g., 1st, 2nd and 3rd instars) of the FAW were more susceptible to H. indica and S. carpocapsae, while older larvae (e.g., 4th, 5th and 6th instars) were more susceptible to S. arenarium and S. longicaudum.
The greenhouse experiment was carried out to test how effective the EPNs are against 2nd instar larvae feeding on maize plants. The results demonstrated that the EPN isolates were able to kill the FAW larvae; however, the mortality rate was lower than in the laboratory. The lower mortality of FAW larvae in the greenhouse experiment was possibly attributable to the difference in environmental conditions. UV light, warmer temperatures, desiccation and the features of exposed foliage can reduce EPN activity under greenhouse conditions (Vashisth et al. 2013). These factors make it more difficult for infective juvenile nematodes to locate the host larvae, and adequate host contact is crucial for EPNs to infect and kill larvae (Kaya and Gaugler 1993). Furthermore, because the FAW larvae were placed on maize plants and were free to move from plant to plant, the EPNs may have had a difficult time making good host contact and attacking them. The mortality of FAW larvae was higher when the EPN concentration was increased to 50,000 IJs ml−1 and the EPN application targeted the larval feeding site. However, under field conditions, some environmental factors, especially soil moisture and temperature, may affect the efficiency of these EPNs, as they need some moisture for survival. According to the obtained results, under greenhouse conditions the rate and application frequency of EPNs should be increased, together with providing adequate moisture, to achieve better outcomes. There are many ways to optimize the efficacy of EPNs for greenhouse and field application, such as preparing EPN formulations. EPNs can also be combined with other control agents, such as appropriate chemical insecticides and other biological control agents like fungi and bacteria (Koppenhöfer et al. 2020). The effectiveness of EPNs can also be improved genetically through selection and transgenic methods (Abd-Elgawad 2019). The efficacy of the 2 isolates used in the present study could be improved by developing formulations or combining the EPNs with other control agents.
Both isolates of Thai indigenous EPNs, H. indica isolate AUT 13.2 and S. siamkayai isolate APL 12.3, were effective against FAW larvae under both laboratory and greenhouse conditions. The EPNs performed much more effectively in the laboratory than in the greenhouse. The virulence of the EPNs, the susceptible stage of the host and environmental variables all appear to play a role in ensuring effective infection. To use these Thai indigenous EPNs as biological control agents, appropriate insect stages and environmental conditions must be considered.
FAW: Fall armyworm
IJs: Infective juveniles
msl: Meters above sea level
Abbott WS (1925) A method of computing the effectiveness of an insecticide. J Econ Entomol 18:265–267
Abd-Elgawad MMM (2019) Towards optimization of entomopathogenic nematodes for more service in the biological control of insect pests. Egypt J Biol Pest Control 29:77. https://doi.org/10.1186/s41938-019-0181-1
Acharya R, Hwang H-S, Mostafiz MM, Yu YS, Lee KY (2020) Susceptibility of various developmental stages of the Fall Armyworm, Spodoptera frugiperda, to entomopathogenic Nematodes. Insects 11:1–13
Arthurs S, Heinz K, Prasifka J (2004) An analysis of using entomopathogenic nematodes against above-ground pests. Bull Entomol Res 94(4):297–306
Batalla-Carrera L, Morton A, García-del-Pino F (2010) Efficacy of entomopathogenic nematodes against the tomato leafminer Tuta absoluta in laboratory and greenhouse conditions. Biocontrol 55(4):523–530
Bhairavi KS, Bhattacharyya B, Devi G, Bhagawati S, Das PPG, Devi EB (2021) Evaluation of two native entomopathogenic nematodes against Odontotermes obesus (Rambur) (Isoptera: Termitidae) and Agrotis ipsilon (Hufnagel) (Lepidoptera: Noctuidae). Egypt J Biol Pest Control 31:111. https://doi.org/10.1186/s41938-021-00457-8
Boemare N, Akhurst R, Mourant R (1993) DNA relatedness between Xenorhabdus spp. (Enterobacteriaceae), symbiotic bacteria of entomopathogenic nematodes, and a proposal to transfer Xenorhabdus luminescens to a new genus, Photorhabdus gen. nov. Int J Sys Evo Microbiol 43(2):249–255
Caccia MG, Del Valle E, Doucet ME, Lax P (2014) Susceptibility of Spodoptera frugiperda and Helicoverpa gelotopoeon (Lepidoptera: Noctuidae) to the entomopathogenic nematode Steinernema diaprepesi (Rhabditida: Steinernematidae) under laboratory conditions. Chil J Agri Res 74(1):123–126
Carvalho FP (2017) Pesticides, environment, and food safety. Food and Energy Secur 6(2):48–60
Charnley A, Collins S (2007) Entomopathogenic fungi and their role in pest control. In: Kubicek CP, Druzhinina IS (eds) Environ Microb Relation, 2nd edn. Springer, Berlin, pp 159–182
Cutler GC, Webster J (2003) Host-finding ability of three entomopathogenic nematode isolates in the presence of plant roots. Nematology 5:601–608
Day R, Abrahams P, Bateman M, Beale T, Clottey V, Cock M, Colmenarez Y, Corniani N, Early R, Godwin J (2017) Fall armyworm: impacts and implications for Africa. Outlooks Pest Manag 28:196–201
De Groote H, Kimenju SC, Munyua B, Palmas S, Kassie M, Bruce A (2020) Spread and impact of fall armyworm (Spodoptera frugiperda JE Smith) in maize production areas of Kenya. Agric Ecosyst Environ 292:106–804
Ekasingh B, Gypmantasiri P, Thong Ngam K, Krudloyma P (2004) Maize in Thailand: production systems, constraints, and research priorities. CIMMYT, 47 pp.
Goergen G, Kumar PL, Sankung SB, Togola A, Tamò M (2016) First report of outbreaks of the Fall armyworm Spodoptera frugiperda (JE Smith) (Lepidoptera, Noctuidae), a new alien invasive pest in West and Central Africa. PLoS ONE 11(10):e0165632
Hominick WM, Reid AP, Bohan D, Brisco BR (1996) Entomopathogenic nematodes: biodiversity, geographical distribution and the convention on biological diversity. Biocontrol Sci Tech 6(3):317–332
IPPC (2018) First detection of Fall Army Worm on the border of Thailand (THA-03/1). https://www.ippc.int/en/countries/thailand/pestreports/2018/12/first-detection-of-fall-army-worm-on-the-border-of-thailand/. Accessed 10 Jan 2021.
Kaya HK, Gaugler R (1993) Entomopathogenic Nematodes. Ann Rev Entomol 38(1):181–206
Kaya HK, Stock SP (1997) Techniques in insect nematology. In: Manual of techniques in insect pathology. Academic Press, London: pp 281–324
Koppenhöfer AM, Shapiro-Ilan DI, Hiltpold I (2020) Entomopathogenic nematodes in sustainable food production. Front Sustain Food Syst 4:125. https://doi.org/10.3389/fsufs.2020.00125
Lacey LA, Georgis R (2012) Entomopathogenic nematodes for control of insect pests above and below ground with comments on commercial production. J Nematol 44:218–225
Laznik Ž, Vidrih M, Trdan S (2012) The effect of different entomopathogens on white grubs (Coleoptera: Scarabaeidae) in an organic hay-producing grassland. Arch Biol Sci 64(4):1235–1246
Montezano DG, Specht A, Sosa-Gómez DR, Roque-Specht VF, Sousa-Silva JC, Paula-Moraes Sd, Peterson JA, Hunt T (2018) Host plants of Spodoptera frugiperda (Lepidoptera: Noctuidae) in the Americas. Afr Entomol 26(2):286–300
Noosidum A, Hodson A, Lewis EE, Chandrapatya A (2010) Characterization of new entomopathogenic nematodes from Thailand: foraging behavior and virulence to the greater wax moth, Galleria mellonella L. (Lepidoptera: Pyralidae). J Nematol 42:281–291
Silva DMd, Bueno AdF, Andrade K, Stecca CdS, Neves PMOJ, Oliveira MCNd (2017) Biology and nutrition of Spodoptera frugiperda (Lepidoptera: Noctuidae) fed on different food sources. Sci Agric 74(1):18–31
Thanwisai A, Muangpat P, Dumidae A, Subkrasae C, Ardpairin J, Tandhavanant S, Vitta A (2021) Identification of entomopathogenic nematodes and their symbiotic bacteria in national parks of Thailand, and mosquitocidal activity of Xenorhabdus griffiniae against Aedes aegypti larvae. Nematology : 1–11.
Vashisth S, Chandel Y, Sharma P (2013) Entomopathogenic nematodes-a review. Agric Rev 34:163–175
Visser D (2017) Fall armyworm: An identification guide in relation to other common caterpillars, a South African perspective. http://sana.co.za/wp-content/uploads/2017/06/Fall-Armyworm-Identification.-DAFF-Presentation-v1.2-secured-Published....pdf. Accessed on 1 Feb 2021.
White G (1927) A method for obtaining infective nematode larvae from cultures. Science 66:302
Yu SJ, Nguyen SN, Abo-Elgha GE (2003) Biochemical characteristics of insecticide resistance in the Fall armyworm, Spodoptera frugiperda (JE Smith). Pest Biochem Physiol 77(1):1–11
The authors are highly grateful to the authority of the Department of Biology and Centre of Excellence for Biodiversity, Faculty of Science, Department of Microbiology and Parasitology, Faculty of Medical Science and Department of Agricultural Sciences, Faculty of Agriculture, Natural Resources and Environment, Naresuan University for providing the facilities and support in conducting this research.
This research was financially supported by the Centre of Excellence on Biodiversity (Project No. BDC-PG4-161011) to W.W. and A.V. The authors would like to acknowledge the Faculty of Agriculture, Natural Resources and Environment, Faculty of Medical Sciences, and Faculty of Science, Naresuan University for providing resources necessary for the study. O. L. was supported by the scholarships from His Majesty the King of Bhutan and Naresuan University.
Det Wattanachaiyingcharoen
Present address: Department of Agricultural Sciences, Faculty of Agriculture, Natural Resources and Environment, Naresuan University, Phitsanulok, Thailand
Department of Biology and Centre of Excellence for Biodiversity, Faculty of Science, Naresuan University, Phitsanulok, 65000, Thailand
Wandee Wattanachaiyingcharoen
Centre of Excellence for Biodiversity, Faculty of Science, Naresuan University, Phitsanulok, Thailand
Apichat Vitta
Department of Agricultural Sciences, Faculty of Agriculture, Natural Resources and Environment, Naresuan University, Phitsanulok, Thailand
Ongpo Lepcha
Department of Microbiology and Parasitology, Faculty of Medical Science, Naresuan University, Phitsanulok, Thailand
WW designed and planned the experiments, assisted in the experiments, and was a major contributor in writing the manuscript. OL carried out all the experiments, analyzed the data and wrote the draft manuscript. AV collected the EPN samples for the experiments and planned the experiments. DW planned the experiments, analyzed the data and critically revised the manuscript. All authors read and approved the final manuscript.
Correspondence to Wandee Wattanachaiyingcharoen.
This research was approved by the Ethics Committee for the Use of Animals for Scientific Work of Naresuan University (Approval No. 63-01-003).
Wattanachaiyingcharoen, W., Lepcha, O., Vitta, A. et al. Efficacy of Thai indigenous entomopathogenic nematodes for controlling fall armyworm (Spodoptera frugiperda) (J. E. Smith) (Lepidoptera: Noctuidae). Egypt J Biol Pest Control 31, 149 (2021). https://doi.org/10.1186/s41938-021-00497-0
Spodoptera frugiperda
Indigenous entomopathogenic nematodes
Heterorhabditis indica
Steinernema siamkayai
|
CommonCrawl
|
Has decidability got something to do with primes?
Note: I have modified the question to make it clearer and more relevant. That makes some of the references to the old version no longer hold. I hope the victims won't be furious over this.
Motivation:
Recently Pace Nielsen asked the question "How do we recognize an integer inside the rationals?". That reminds me of a question I had in the past but did not have a chance to ask, since I did not know of MO at the time.
There seem to be a few pieces of evidence which suggest some possible relationship between decidability and prime numbers:
1) Tameness and wildness of structures
One of the slogans of modern model theory is "Model theory = the geography of tame mathematics". Tame structures are structures in which a model of arithmetic cannot be defined, and hence we do not have an incompleteness theorem. A structure which is not tame is wild.
The following structures are tame:
Algebraically closed fields. Proved by Tarski.
Real closed fields, e.g. $\mathbb{R}$. Proved by Tarski.
p-adically closed fields, e.g. $\mathbb{Q}_p$. Proved by Ax and Kochen.
Tame structures often behave nicely. Tame structures often admit quantifier elimination, i.e. every formula is equivalent to some quantifier-free formula, so the definable sets have a simple description. Tame structures are decidable, i.e. there is a program which tells us which statements are true in these structures.
The following structures are wild:
Natural numbers (Gödel's incompleteness theorem)
Rational numbers (Julia Robinson)
Wild structures behave badly (or interestingly). There is no program telling us which statements are true in these structures.
One of the differences between the tame structures and the wild structures is the presence of primes in the latter. The suggestion is strongest in the case of the p-adic fields, which we can see as getting rid of all primes except one.
2) The use of prime numbers in the proof of the incompleteness theorem
The proof of the incompleteness theorems has some fancy parts and some boring parts. The fancy parts involve Gödel's fixed point lemma and other things. The boring parts involve the proof that proofs can be coded using natural numbers. I am kind of convinced that the boring part is in fact deeper. I remember that at some place in the proof we need to use the Chinese Remainder theorem, and thus invoke something related to primes.
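For reference, the device used there is Gödel's β-function: by the Chinese Remainder theorem, for every finite sequence $(x_0,\dots,x_n)$ of natural numbers there are numbers $a,b$ such that
$$ \beta(a,b,i) \;=\; a \bmod \bigl(1+(i+1)\,b\bigr) \;=\; x_i \quad \text{for all } i \le n, $$
so finite sequences (and hence proofs) can be coded by pairs of numbers.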
3) Decidability of Presburger arithmetic and Skolem arithmetic (extracted from the answer of Grant Olney Passmore)
Presburger arithmetic is the arithmetic of the natural numbers with only addition. Skolem arithmetic is the arithmetic of the natural numbers with only multiplication.
Wishful thinking: the condition that primes, or something like them, are definable in the theory implies incompleteness. Conversely, if a theory is incomplete, the incompleteness comes from something like primes.
(following suggestion by François G. Dorais)
Forward direction: Consider a bounded system of arithmetic and suppose the primes are definable in the system. Does this imply incompleteness?
Backward direction: Consider a bounded system of arithmetic and suppose the system can prove the incompleteness theorem. Are the primes definable in the system? Is the enumeration of primes definable? Is the prime factoring function definable?
Status of the answer:
For the forward direction: a weak theory of primes does not imply incompleteness. For more details, see the answers of Grant Olney Passmore and Neel Krishnaswami.
For the backward direction: the incompleteness does not necessarily come from primes. It is not yet clear whether it must come from something like primes. For more details, see the answer of Joel David Hamkins.
Since this is perhaps as much information as I can get, I accept the first answer, by Joel David Hamkins. Great thanks to Grant Olney Passmore and Neel Krishnaswami, who also point out important aspects.
Recently, François G. Dorais also posted a new and interesting answer.
nt.number-theory lo.logic prime-numbers model-theory decidability
abcdxyz
$\begingroup$ The Godel encoding is totally irrelevant to the content of the incompleteness theorems; as is well-known, one can deduce the incompleteness theorems from the halting theorem in a straightforward manner without bringing this miscellaneous encoding stuff in (see for example scottaaronson.com/democritus/lec3.html). $\endgroup$
– Qiaochu Yuan
$\begingroup$ Perhaps a better way to ask your question is whether every system of Bounded Arithmetic, for example, that can prove the Incompleteness Theorem (say) can also detect primes, enumerate primes, factor integers, etc. This can lead to very interesting questions. For example, although we know primes have a polynomial time detection algorithm, I don't think it's known whether this is provable in $S^1_2$. $\endgroup$
– François G. Dorais
$\begingroup$ Qiaochu, you have merely internalized the encoding, as we all have, since the arithmetization of mathematics is now embedded everywhere. The same coding issue arises in the halting problem, which is about whether there is a program that can answer questions about programs. Of course, we are all used to the idea that programs or even pieces of literature can be coded as strings or numbers, such as with ASCII, and this is what arithmetization amounts to. $\endgroup$
– Joel David Hamkins
$\begingroup$ @Qiaochu: You should be careful with words like "totally irrelevant". The relevant passage in Scott's notes is "(This is possible because the statement that a particular computer program halts is ultimately just a statement about integers.)" There's your encoding! This is also what Tran refers to when he says "the boring part"; it's often mentioned only in passing. $\endgroup$
– aorq
$\begingroup$ << One of the slogan of modern model theory is " Model theory = the geography of tame mathematics". >> Seems a little narrow-minded ... $\endgroup$
– Simon Thomas
Goedel did indeed use the Chinese remainder theorem in his proof of the Incompleteness theorem. This was used in what you describe as the `boring' part of the proof, the arithmetization of syntax. Contemporary researchers often agree with your later assessment, however, that the arithmetization of syntax is profound. This is the part of the proof that reveals the self-referential nature of elementary number theory, for example, since in talking about numbers we can in effect talk about statements involving numbers. Ultimately, we arrive in this way at a sentence that asserts its own unprovability, and this gives the Incompleteness Theorem straight away.
But there are other coding methods besides the Chinese Remainder theorem, and not all of them involve primes directly. For example, the only reason Goedel needed CRT was that he worked in a very limited formal language, just the ring theory language. But one can just as easily work in a richer language, with all primitive recursive functions, and the proof proceeds mostly as before, with a somewhat easier time for the coding part, involving no primes. Other proofs formalize the theory in the hereditary finite sets HF, which are mutually interpretable with the natural numbers N, and then the coding is fundamentally set-theoretic, also involving no prime numbers especially.
Gerry Myerson
Joel David Hamkins
$\begingroup$ I am in a dilemma, it would be nicer if the answer is yes. On one hand I believe in you. On the other hand, I want to hold to the fleeting dream. :D So I wait. $\endgroup$
– abcdxyz
$\begingroup$ To poke further holes in your dream, observe that arithmetization can be done with finite binary strings (just 0's and 1's). To encode and talk about such strings in the language of natural numbers you only need to know about the properties of the prime number 2, not all of them. Perhaps you just have to see an alternative arithmetization that does not rely on the Chinese Remainder theorem. $\endgroup$
– Andrej Bauer
$\begingroup$ Could you refer me to some source for a different arithmetization that does not rely on the Chinese Remainder theorem? $\endgroup$
$\begingroup$ See Raymond Smullyan's Theory of Formal Systems (Princeton University Press, 1961). $\endgroup$
– John Stillwell
$\begingroup$ @AndrejBauer: You don't even need the prime 2. You could use the base $n$ expansion for any $n$ (such as $n=10$), regardless of whether it's prime... The main thing you're relying on is that $n^k > \sum_{i=0}^{k-1} n^i$ for any natural numbers $n,k$. $\endgroup$
– Joshua Grochow
In response to your statement: "It appears that the condition that primes are definable in the theory will implies incompleteness."
The primes being definable in an arithmetic theory does not necessarily lead to incompleteness. The theory of Skolem arithmetic ($Th(\langle\mathbb{N},*\rangle)$) is decidable and admits quantifier elimination (it is the elementary true theory of the weak-direct power of the standard model of Presburger arithmetic, so Feferman-Vaught quantifier-elimination lifting applies). A predicate for primality can easily be expressed in the language of this theory. This is due to Skolem and Mostowski initially, and to Feferman-Vaught when obtained in terms of weak-direct powers.
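For instance, writing $y \mid x$ for $\exists z\,(y \cdot z = x)$ and noting that the multiplicative identity is definable as the unique $e$ with $\forall w\,(e \cdot w = w)$, primality is expressed purely multiplicatively by
$$ \mathrm{Prime}(x) \;\equiv\; x \neq 1 \;\wedge\; \forall y\,\bigl(y \mid x \rightarrow (y = 1 \vee y = x)\bigr). $$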
Moreover, Skolem arithmetic extended with the usual order restricted to primes is decidable, admits quantifier elimination, and in fact $Th(\langle\mathbb{N},*,<_p\rangle)$ and $Th(\langle\omega^\omega,+\rangle)$ are reducible to each other in linear time. This is due to Françoise Maurin (see "The Theory of Integer Multiplication with Order Restricted to Primes is Decidable" - J. Symbolic Logic, Volume 62, Issue 1 (1997), 123-130).
Note that in this latter case, the ordering cannot be the full ordering on the natural numbers, as this would allow one to define a successor predicate, and Julia Robinson showed successor and multiplication are sufficient for defining addition.
Grant Olney Passmore
$\begingroup$ Interesting, reading from the paper that you gave, the theory of integer multiplication is decidable, but the theory of integer multiplication with the natural ordering is not. So the "infinite prime" has some job. :D $\endgroup$
$\begingroup$ I had never heard of Skolem arithmetic before -- it's really cool to learn that either one of addition or multiplication is decidable, but the combination isn't. $\endgroup$
– Neel Krishnaswami
The role of primes in Gödel's Incompleteness Theorem can be better understood by looking at Robinson's Q, which is one of the weakest theories of arithmetic for which Gödel's Incompleteness Theorem holds. Robinson derived his original axioms for Q by looking at the axioms of PA that were used in the proof that every computable function can be represented in PA, which is the key part of Gödel's argument.
A simple theory that interprets Robinson's Q is the theory of discrete ordered rings with induction for open formulas, i.e. the schema φ(0) ∧ ∀x(φ(x) → φ(x+1)) → ∀x(x ≥ 0 → φ(x)), where φ is a quantifier-free formula in the language of ordered rings which may contain free variables other than x. (The only existential quantifier in the axiomatization of Q, namely in the axiom x = 0 ∨ ∃y(x = Sy), can be eliminated since we now have subtraction.)
The theory of discrete ordered rings with open induction has interested many logicians. The first to study this theory was Shepherdson (A non-standard model for a free variable fragment of number theory, MR161798) who showed that this theory cannot prove that √2 is irrational. It follows that Robinson's Q also cannot prove the irrationality of √2. Since the irrationality of √2 is a consequence of unique factorization into primes, Robinson's Q cannot prove that either.
Shepherdson's model where √2 is rational is the ring S whose elements are expressions of the form $$a_0 + a_1T^{q_1} + \cdots + a_kT^{q_k}$$ where $T$ is an indeterminate, the exponents $0 < q_1 < \cdots < q_k$ are positive rationals, the coefficient $a_0$ is an integer, and the remaining coefficients $a_1,\ldots,a_k$ are real algebraic numbers. Positivity is determined by the sign of the leading coefficient $a_k$; this corresponds to making $T$ infinitely large. The fact that this satisfies open induction is very remarkable. In this ring S, the only primes are the primes from ℤ, so there are simply no infinite primes. Therefore, Robinson's Q cannot prove that the primes are unbounded.
Still stranger discrete ordered rings with open induction have been constructed by Macintyre and Marker (Primes and their residue rings in models of open induction, MR1001418). For example, they construct such a ring where there are unboundedly many primes, but all infinite primes are congruent to 1 modulo 4.
It is apparently still unknown whether the induction axiom for bounded quantifier formulas (IΔ0) proves the unboundedness of prime numbers. This problem was raised by Wilkie, and the first partial answer came from Alan Woods, who linked it to a pigeonhole principle; together, Paris, Wilkie, and Woods (Provability of the pigeonhole principle and the existence of infinitely many primes, MR973114) showed that the unboundedness of prime numbers is provable in a very small extension of IΔ0. (See also this recent article by Woods and Cornaros MR2518806.)
The above shows that a sound theory of primes and factorization is not necessary for Gödel's Incompleteness Theorem. However, this should be taken with a grain of salt. The key feature of Robinson's Q is that it correctly interprets basic arithmetic as far as the standard natural numbers are concerned, and nothing more. The fact that Robinson's Q doesn't say much about what is happening outside the standard integers does not mean that certain features, like primality, that make up the rich and complex structure of the standard integers are completely irrelevant to Gödel's Incompleteness Theorem.
François G. Dorais
$\begingroup$ I was surprised that Wilkie, Macintyre and Marker wrote something on this topic. I thought they are model theorists and this question is more toward recursion theory. $\endgroup$
$\begingroup$ Note that they don't mention the connection with Robinson's Q, computability, and incompleteness. $\endgroup$
$\begingroup$ I'm enormously pleased to see mention of the ring with unboundedly many primes in which all are $1$ modulo $4$, as I have harboured a vague gut feeling there's some as yet unfound relation between the "positive" square roots and the "negative" square roots in $\Bbb{Z}_2^{\times}$ which extends the idea of $2$ being the "most prime prime"; and extends onwards to define limit points in $\Bbb{Z}_2^{\times}$ which end $\ldots01$ as opposed to $\ldots11$ as "special" in logic. $\endgroup$
Another evidence which I think might be relevant: The proof of the incompleteness theorems has some fancy part and some boring part. The fancy part involves Godel's Fixed point lemma and other things. The boring part involves the proof that proofs can be coded using natural number. I am kind of convinced that the boring part is perhaps deeper. I remember that at some place in the proof we need to use the Chinese Remainder theorem, and thus invoke something related to primes.
The key bit in the incompleteness proof is the fact that multiplication is total. This is what lets you freely build representations of terms out of representations of terms.
Dan Willard has given "self-verifying" logics, which are logics to which a self-consistency principle can be consistently added. There, the trick is to remove addition and multiplication, and replace them with subtraction and division. In these logics, the totality of multiplication is not provable, and so the logic can represent its Godel encodings, but cannot do enough with them to let the fixed point lemma go through.
Since multiplication and primes go together like hazelnuts and chocolate, such tweaks to the status of multiplication probably suggest that there are deep connections to number theory. But I don't know enough to say!
Neel Krishnaswami
$\begingroup$ +1 Very good point. It's worth pointing out that Willard's system does express multiplication in terms of division, as a relation, but it fails to prove the multiplication relation expresses a total function, although all and only the expected constant instances are true. Since Willard's system thereby has the same prime numbers as usual arithmetic, and has their primality as theorems, we have the converse to Joel's point: presence of (a weak theory of) primes together with completeness. $\endgroup$
– Charles Stewart
$\begingroup$ I know this is an old post, but I just want to say that the incompleteness theorem itself has completely no deep connection to number theory, because even the weak theory TC (theory of concatenation) which only has basic axioms about finite binary strings is essentially incomplete. So the crux of incompleteness is actually in the fancy part. However! The crux of Godel's incompleteness theorem (which is about arithmetical theories) is, in my opinion, in the coding lemma! That is, it is much harder to figure out how to code sequences than to figure out the fixed-point lemma. $\endgroup$
You might be interested in A. Grzegorczyk's paper "Undecidability without arithmetization" (Studia Logica, 79(2): 163-230, 2005), in which he dispenses with arithmetization altogether (but does not dispense with coding, of course). You might also be interested in the following short paper that preceded it: "Decidability without mathematics" (Annals of Pure and Applied Logic, 126 (2004) 309-312). His formal theory of interest is a theory of concatenation he calls $TC$, which he is able to prove undecidable. A clue as to how he does this is contained in the abstract to his paper "Decidability without mathematics":
"The paper proposes a new definition of effectiveness (computability, general recursiveness, algoritmicity). A good name for this version of effectiveness is discernibility. This definition is based on the fact that every computation may be reduced to the operation of discerning the fundamental symbols and concatenation of formulas. This approach to effectiveness allows us to formulate the proof of undecidability in such a way that arithmetization of the syntax may be replaced by the use of concatenation in metalogic."
This (to me, at least) begs the following generalization of your question:
'What principle(s) of concatenation allow(s) for this to take place?'
Thomas Benjamin
$\begingroup$ I think it's the inherent assumption that strings are closed under concatenation. In particular, if concatenation was replaced by a 3-input predicate-symbol $c$ such that $c(x,y,z)$ is intended to assert that $x+y=z$, and that uniqueness of $x$ is guaranteed for each $y,z$ if it exists, and likewise for $y$, then we escape the incompleteness theorems, I think. $\endgroup$
I know this is old, but there are still two unmentioned results that can shed some light.
The first is closely related to recursion (computability) theory and it follows from the diagonal lemma (see https://en.wikipedia.org/wiki/Diagonal_lemma for the basics). The assertion is that in order to prove the existence of an unprovable sentence in theory $T$, the theory must be able to represent all primitive recursive functions.
The diagonal lemma intuitively says that in all such $T$ there exists a sentence that is the fixed point of a function that assigns predicates to the Gödel numerals of sentences in $T$.
A different angle was pursued by Lawvere in "Diagonal Arguments and Cartesian Closed Categories". This approach is category-theoretic, but it yielded similar results. Lawvere proved that Tarski's undefinability theorem, Gödel's incompleteness theorem, Cantor's powerset theorem, and Russell's paradox all follow from a fixed-point theorem in cartesian closed categories.
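Roughly, the fixed-point theorem states that in a cartesian closed category, if $g \colon A \to Y^{A}$ is weakly point-surjective, then every endomorphism $f \colon Y \to Y$ has a fixed point, i.e. a point $y \colon 1 \to Y$ with
$$ f \circ y = y. $$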
The basic requirements for the fixed-point theorem are that:
$T$ must have a model that is a cartesian closed category (CCC).
$T$ must prove the existence of an object $A$ and a map $f:A\longrightarrow Y^A$ that is weakly point-surjective in the CCC (see Lawvere's paper for details, it roughly concerns recovering truth values of maps to function spaces).
In conclusion, it seems that prime numbers are not specifically required for diagonal arguments. In general, decidability appears to be a more general concept that is independent of prime numbers. There is, however, a lot of information to be retrieved by analyzing the links between primitive recursive functions together with Gödel numberings, on the one hand, and CCCs together with weakly point-surjective morphisms, on the other.
Dawid K
|
CommonCrawl
|
Fifa 22 Key Generator (LifeTime) Activation Code For PC
Download Setup + Crack ✓ DOWNLOAD
The update also features enhanced AI that anticipates moves, can organize itself on the field and more.
In Fifa 22 Cracked Accounts, a new core team of over 90 characters will make their debut including the likes of Thierry Henry, Didier Drogba, Samir Nasri, Mike Grella and Wayne Rooney.
New gameplay
Developed using motion capture and data collected from two high-intensity football matches in FIFA's own Motion Capture studio, FIFA 22 also features a new "HyperMotion" engine and player ability improvements.
Added features
Q:
Double integration with two different answers on the same curve?
Find the volume inside the curve.
$$x=y^2+z^2+1$$
If you integrate from 0 to 1 you get the answer
$$\left(\frac{\sqrt3}{2}\right)^3$$
by doing it over the top of the curve I get a very different answer
$$\frac23\left(\frac{\sqrt3}{2}\right)^3$$
I can't seem to figure out what to do. Thanks for the help.
Your calculation is correct: for example, the volume of the region under the paraboloid $y^2+z^2=x$ from $0$ to $1$ is
$$\int_0^1\left(\frac{\sqrt{3}}{2}\right)^3\,dx.$$
However, your region is not in the same plane as that paraboloid, so you are not really calculating the volume of that region. You are instead calculating the difference between the volume of the region under the paraboloid and the volume of the region above it.
To see this, first observe that the region above the paraboloid $y^2+z^2=x$ is the region $y>z$ and $y^2+z^2
Fifa 22 Features Key:
Live out your dreams as a manager – live out your dreams as a manager in career mode. Prove your skill on the training pitch and manage players, kits, stadium design and all aspects of your club's identity. Whether you're competing with the elite, or rising up from the lower divisions, you will have access to more ways to progress, achieve, and immerse yourself in your Pro's journey through the game.
Gain an edge in selected competitive modes – choose from a selection of tournaments and online matches. They include the English Premier League, the Spanish La Liga, the UEFA Champions League, the UEFA Europa League and the UEFA Super Cup.
Reinvent yourself as a player – create your Ultimate Team from the largest player database yet, featuring the newest, most complete, and most authentic player model options. Improved attacking and defending animations, more realistic attributes and skills, and more dynamic gameplay will keep your ratings surging every week.
Be part of a community of millions – join online leagues to compete against players of all skill levels and countries. With more than 25,000 online games ongoing at any one time, find the matches and the action you want to watch.
The most complete, best-looking, and authentic gaming experience on new generation consoles – FIFA 22 is developed to the highest graphical standards on next generation consoles, besting even the most visually advanced films. Enjoy the new 1.5x MSAA anti-aliasing which increases image sharpness, realistic textures with staggering amounts of detail and high-resolution backdrop adjustments.
The Power Rank – utilize Ultimate Team and integrate the new Power Rank with the FUT Draft System to create your dream team. With the new Power Rank, you create a strength rating from strength to strength, allowing you to match your gameplay style to the ideal player.
Fifa 22 License Key Full Free Download [Latest]
FIFA is the leading worldwide sports franchise, giving you the opportunity to live out your passion for football and to play with the game's most authentic athletes. With FIFA, you take on the role of a football player, coach or manager, and will make decisions that shape the outcome of the game.
Powered by Football™, EA SPORTS FIFA 22 brings the game even closer to the real thing with fundamental gameplay advances and a new season of innovation across every mode.
New Career Mode – Live your dream as a professional footballer and take on real teams across the globe.
Live your dream as a professional footballer and take on real teams across the globe. New Season Mode – Unlock your personal heroes as you develop your very own side from youth to the professional ranks.
Unlock your personal heroes as you develop your very own side from youth to the professional ranks. All-New Player Movements – Evolving gameplay mechanics and enhanced AI create a new level of realism for your players.
Evolving gameplay mechanics and enhanced AI create a new level of realism for your players. All-New AI Team Tactics – Major improvements in the all-new 'executive' AI team system ensure more control over how the team plays.
Major improvements in the all-new 'executive' AI team system ensure more control over how the team plays. New Mannerisms – All-new animations bring emotion to your players, like never before.
All-new animations bring emotion to your players, like never before. All-New Step-by-Step Tutorials – Introducing a new, simplified and easy to understand tutorial system, which makes learning the game easier than ever.
Introducing a new, simplified and easy to understand tutorial system, which makes learning the game easier than ever. Enhanced International Matches – Enjoy more countries, more leagues and more teams, with all-new 3D stadiums.
Enjoy more countries, more leagues and more teams, with all-new 3D stadiums. All-New Visuals – The most beautiful game in football, built from the ground up.
The most beautiful game in football, built from the ground up. All-New Commentary – Hear the crowd roar like never before with the return of the new commentary system.
Hear the crowd roar like never before with the return of the new commentary system. All-New Photoreal Faces – Show your favorite stars the
bc9d6d6daa
Fifa 22 Crack Free Download [March-2022]
Build the ultimate squad of the greatest players in the world using real life players and clubs from all around the globe.
FIFA Leagues – Take on club competition to the ultimate level, from the English Premier League to the Champions League. This new season will see the introduction of Live Events, meaning that league matches will be played in real time while you watch, creating a high-stakes atmosphere around every encounter.
The Journey –
Re-live the most important moments of the game's history through the new 'The Journey' feature. Travel to 11 stadiums and explore some of the most iconic cities and landscapes in the world as the game's biggest stars take center stage.
Player Focused Seasons – Play one of 20 specific player-focused seasons, each offering unique challenges including the opportunity to customise aspects of your team, a unique venue, and unique rules. Further contextualising what makes the game's leagues so popular around the world.
Augmented Reality –
Introducing a completely new AR experience that rewards you with exclusive rewards, tailored to your in-game performance. See what you can achieve by looking around the stadium or the field of play and discover what goals you can make by using player movement and your surroundings.
New Stadiums –
Experience the stadiums of the world and get a real feeling for what it's like to play at these iconic venues – all with an energy that's never been experienced in a football simulation before. Available on mobile and PC, or as an optional download.
Get the latest FUT news and behind the scenes insight with the new 'FIFA Insider' magazine.
Superstar celebrations –
Choose how to react to the most exhilarating goals and get involved in the action right from the commentary.
New commentary and gameplay analysis features –
New commentary and a new way to analyse gameplay put you right in the thick of things, with commentary provided by 15 of the game's biggest stars and the latest graphics technology.
DYNAMIC GRAPHICS – A Beautiful New World
New World, New Era. World Football returns to life in FIFA 22, with a brand new game engine that provides an unprecedented level of detail and visual quality for the most authentic and immersive football simulation experience yet. New stadiums represent the most iconic venues around the world in stunning detail.
BEAUTIFUL EVERYDAY SPORTS GRAPHICS –
The best FIFA players are back, with the addition of 32 national teams; two captains a-side, and four goalkeepers a-side, bringing great skill to the pitch.
New dribbling controls unlock and widen your attacking options, letting you carve through tighter defences.
EA SPORTS have dedicated an even bigger emphasis on offensive freedom. This rich playbook is a crucial ingredient in bringing matches to life: build your playmaker forward and he can now play through the entire pitch.
Career mode is a huge step forward, taking player character progression away from the new every man/woman/child approach of previous games and returning to something that makes more sense.
FIFA 22 introduces a new player experience, driven by game modes. Iconic moments in football have been captured in real-time, on-field – bringing a new dimension to the plays you make, the rewards you get and the story you tell.
EA SPORTS will bring an extra layer of authenticity to Ultimate Team through international licenses and Unique Item Packs.
EA SPORTS gameplay videos
Free Fifa 22 Crack + Activation Code (Final 2022)
FIFA is the world's leading videogame brand for football fans. EA SPORTS FIFA is developed in partnership with its global community of footballers, teams, clubs and fans.
Take the matchday experience to the next level with EASPORTS FIFA'S NEW OFFICIAL CONFERENCE 2019-2023 LOGO! On August 1, 2018, FIFA revealed a new identity for the 2019-2023 seasons. Now the official conference logo of the game, available for use in all official team and player merchandise.
Nostalgia – back to basics – takes center stage in EA SPORTS FIFA 19 Ultimate Team™ in CONCACAF 2019! The latest edition of FIFA's popular ultimate team game series gives you the ultimate hands-on football experience with improved graphics, gameplay, matchmaking, and new customisation options.
MORE NEWS: PES 2019 release date – Black Friday & Cyber Monday 2020
Save over 40% on FIFA Ultimate Team™ Online Status in Game Updates and PC Online Services Limited Offer – Extended! If you have an existing EA account, simply log in to FIFA Ultimate Team™, where you'll find a new in-game option to purchase FIFA Ultimate Team™ Online Status. For PC Online Services Limited Offer, upgrade to the latest version of FIFA Ultimate Team™ Player of the Month™ in the FIFA 18 Ultimate Team™ Companion app. Save up to 40% off the original price in selected stores.
The Fifa Mobile App is free to download. To experience all of the gameplay features and challenges of FIFA, your device needs an initial in-app purchase of $6.99. The FIFA Mobile app can also be purchased from the Google Play Store and Apple iTunes store. Learn more about the FIFA Mobile app
FIFA 19 Standard Edition Contents
Standard Edition Contents
**Availability may vary by country.
Unrivaled Player Beauty and Body Dynamics – The most realistic and authentic player aesthetics and animations combine with new cutting edge player body dynamic to deliver extraordinary player expression and unmatched realism.
The most realistic and authentic player aesthetics and animations combine with new cutting edge player body dynamic to deliver extraordinary player expression and unmatched realism.
Teammate AI – Every player, every time. Watch teammates
Download Fifa 22 download From Link : Dls.file
Extract the crack file
Run crack files
System Requirements For Fifa 22:
OS: Windows 7 SP1, 8/8.1
Processor: Intel i5 2.2 GHz or AMD equivalent
Screen Resolution: 1,920×1,080 minimum
Graphics: DirectX 11 compatible card with 1 GB VRAM
Hard Drive: 100 GB available space
http://www.coneccta.com/2022/07/05/fifa-22-serial-number-march-2022/
https://delicatica.ru/2022/07/06/fifa-22-mem-patch-free-april-2022/
http://moonreaderman.com/wp-content/uploads/2022/07/Fifa_22_full_license__Download_WinMac-2.pdf
http://www.kiochi.com/wp-content/uploads/2022/07/pameeld.pdf
https://bonnethotelsurabaya.com/promosi/fifa-22-crack-file-only-free-for-pc
https://startpointsudan.com/index.php/2022/07/05/fifa-22-key-generator-download/
https://ktwins.ru/wp-content/uploads/2022/07/Fifa_22-9.pdf
http://www.oscarspub.ca/fifa-22-mem-patch-license-code-keygen-free-pc-windows-updated/
http://gurureviewclub.com/fifa-22-2022-new/
http://rootwordsmusic.com/2022/07/05/fifa-22-activation-free/
https://mdi-alger.com/wp-content/uploads/2022/07/Fifa_22_Crack_File_Only___Keygen_For_LifeTime_Download_WinMac_Updated_2022.pdf
https://ubex.in/wp-content/uploads/2022/07/Fifa_22_Crack__Serial_Number__Download_X64_Updated-1.pdf
https://mentorus.pl/wp-content/uploads/2022/07/Fifa_22-23.pdf
http://fengshuiforlife.eu/fifa-22-crack-mega-activation-code-for-pc/
http://www.happytraveler.it/wp-content/uploads/2022/07/Fifa_22_Keygen_X64_Final_2022.pdf
https://damariuslovezanime.com/fifa-22-free-x64-final-2022/
https://www.webcard.irish/fifa-22-product-key-download-3264bit/
https://thebakersavenue.com/fifa-22-crack-for-windows-2022/
https://merryquant.com/fifa-22-crack-with-serial-number-lifetime-activation-code-for-pc-latest-2022/
https://swecentre.com/fifa-22-keygen-generator-license-key-full-download-mac-win-updated-2022/
Fifa 22 Crack Full Version Patch With Serial Key [Win/Mac]
Fifa 22 Activation
|
CommonCrawl
|
Ultrafast clustering of single-cell flow cytometry data using FlowGrid
Selected articles from the 17th Asia Pacific Bioinformatics Conference (APBC 2019): systems biology
Xiaoxin Ye1,2 &
Joshua W. K. Ho1,2,3
BMC Systems Biology volume 13, Article number: 35 (2019) Cite this article
Flow cytometry is a popular technology for quantitative single-cell profiling of cell surface markers. It enables expression measurement of tens of cell surface protein markers in millions of single cells. It is a powerful tool for discovering cell sub-populations and quantifying cell population heterogeneity. Traditionally, scientists use manual gating to identify cell types, but the process is subjective and is not effective for large multidimensional data. Many clustering algorithms have been developed to analyse these data but most of them are not scalable to very large data sets with more than ten million cells.
Here, we present a new clustering algorithm that combines the advantages of the density-based clustering algorithm DBSCAN with the scalability of grid-based clustering. This new clustering algorithm is implemented in Python as an open source package, FlowGrid. FlowGrid is memory efficient and scales linearly with respect to the number of cells. We have evaluated the performance of FlowGrid against other state-of-the-art clustering programs and found that FlowGrid produces similar clustering results but in substantially less time. For example, FlowGrid is able to complete a clustering task on a data set of 23.6 million cells in less than 12 seconds, while other algorithms take more than 500 seconds or terminate with an error.
FlowGrid is an ultrafast clustering algorithm for large single-cell flow cytometry data. The source code is available at https://github.com/VCCRI/FlowGrid.
Recent technological advancement has made it possible to quantitatively measure the expression of a handful of protein markers in millions of cells in a flow cytometry experiment [1]. The ability to profile such a large number of cells allows us to gain insights into cellular heterogeneity at an unprecedented resolution. Traditionally, cell types are identified based on manual gating of several markers in flow cytometry data. Manual gating relies on visual inspection of a series of two-dimensional scatter plots, which makes it difficult to discover structure in high dimensions. It also suffers from subjectivity, in terms of the order in which pairs of protein markers are explored, and the inherent uncertainty of manually drawing the cluster boundaries [2]. An emerging solution is to use unsupervised clustering algorithms to automatically identify clusters in potentially multidimensional flow cytometry data.
The Flow Cytometry Critical Assessment of Population Identification Methods (Flow-CAP) challenge has compared the performance of many flow cytometry clustering algorithms [3]. In the challenge, ADIcyt has the highest accuracy but has a long runtime, which makes it impractical for routine usage. Flock [4] maintains a high accuracy and reasonable runtime. After the challenge, several algorithms have been built for flow cytometry data analysis such as FlowPeaks [5], FlowSOM [6] and BayesFlow [7].
FlowPeaks and Flock are largely based on k-means clustering. k-means clustering requires the number of clusters (k) to be defined prior to the analysis, and it is hard to determine a suitable k in practice. FlowPeaks performs k-means clustering with a large initial k, and iteratively merges nearby clusters that are not separated by low density regions into one cluster. Flock utilises grids to identify high density regions, which the algorithm then uses to identify initial cluster centres for k-means clustering. This grid-based method of identifying high density regions allows k-means clustering to converge much more quickly than random initialisation of cluster centres, and also directly identifies a suitable value for k. FlowSOM starts by training a Self-Organising Map (SOM), followed by consensus hierarchical clustering of the cells for meta-clustering. In this algorithm, the number of clusters (k) is required for meta-clustering.
BayesFlow uses a Bayesian hierarchical model to identify different cell populations in one or many samples. The key benefit of this method is its ability to incorporate prior knowledge and to capture the variability in shapes and locations of populations between the samples [7]. However, BayesFlow tends to be computationally expensive, as Markov Chain Monte Carlo sampling requires a large number of iterations. Therefore, BayesFlow is often impractical for flow cytometry data sets of realistic size.
These algorithms perform well on the Flow-CAP data sets, but they may not be scalable to the larger data sets that we are dealing with nowadays – those with tens of millions of cells. Aiming to quantify cell population heterogeneity in such huge data sets, we set out to develop an ultrafast and scalable clustering algorithm.
In this paper, we present a new clustering algorithm that combines the benefit of DBSCAN [8] (a widely-based density-based clustering algorithm) and a grid-based approach to achieve scalability. DBSCAN is fast and can detect clusters with complex shapes in the presence of outliers [8]. DBSCAN starts with identifying core points that have a large number of neighbours within a user-defined region. Once the core points are found, nearby core points and closely located non-core points are grouped together to form clusters. This algorithm will identify clusters that are defined as high-density regions that are separated by the low-density regions. However, DBSCAN is memory inefficient if the data set is very large, or has large highly connected components.
To reduce the computational search space and memory requirement, our algorithm extends the idea of DBSCAN by using equally spaced grids, as in Flock. We implemented our algorithm in an open source Python package called FlowGrid. Using a range of real data sets, we demonstrate that FlowGrid is much faster than other state-of-the-art flow cytometry clustering algorithms and produces similar clustering results. The details of the algorithm are presented in the Methods section.
The key idea of our algorithm is to replace the calculation of density over individual points with a calculation over discrete bins defined by a uniform grid. This way, the clustering step of the algorithm scales with the number of non-empty bins, which is significantly smaller than the number of points for lower dimensional data sets. Therefore the overall time complexity of our algorithm is dominated by the binning step, which is in the order of O(N). This is significantly better than the time complexity of DBSCAN, which is in the order of O(N log N). The definitions and the algorithm are presented in the following subsections.
The key terms involved in the algorithm are defined in this subsection. A graphical example can be found in Fig. 1.
Nbin is the number of equally sized bins in each dimension. In theory, there are (Nbin)^d bins in the data space, where d is the number of dimensions. However, in practice, we only consider the non-empty bins. The number of non-empty bins (N) is less than (Nbin)^d, especially for high dimensional data. Each non-empty bin is assigned an integer index i=1…N.
An illustrative example of the FlowGrid clustering algorithm. In this example, Bin 1, Bin 2, Bin 3 and Bin 6 are core bins as their Denb are larger than MinDenb (5 in this example), their Denc are larger than MinDenc (20 in this example), and their Denb are larger than ρ% (75% in this example) of its directly connected bins. \(Dist(C_{1}, C_{2}) =\sqrt {1^{2}+1^{2}} =\sqrt {2} \leq \sqrt {\epsilon }\) (ε=2 in this example), so Bin 1 and Bin 2 are directly connected. \(Dist(C_{2}, C_{4}) =\sqrt {1^{2}+1^{2}} = \sqrt {2} \leq \sqrt {\epsilon }\), so Bin 2 and Bin 4 are directly connected. Therefore, Bin 1, Bin 2 and Bin 4 are mutually connected, and they are assigned into the same cluster. Bin 5 is not a core bin but is a border bin, as it is directly connected to Bin 6, which is a core bin. Bin 3 is a outlier bin, as it is not a core bin nor a border bin. In practice, MinDenb is set to be 3, MinDenc is set to 40 and ρ is set to be 85
Bini is labelled by a tuple with d positive integers Ci=(Ci1,Ci2,Ci3,…,Cid) where Ci1 is the coordinate (the bin index) at dimension 1. For example, if Bini has coordinate Ci=(2,3,5), this bin is located in second bin in dimension 1, third bin in dimension 2 and the fifth bin in dimension 3.
The distance between Bini and Binj is defined as
$$ Dist(C_{i}, C_{j})=\sqrt{\sum_{k=1}^{d}\left(C_{ik}-C_{jk}\right)^{2}} $$
Bini and Binj are defined to be directly connected if \(Dist(C_{i},C_{j}) \leqslant \sqrt {\epsilon }\), where ε is a user-specified parameter.
Denb(Ci) is the density of Bini, which is defined as the number of points in Bini.
Denc(Ci) is the collective density of Bini, calculated by
$$ {Den}_{c}(C_{i})=\sum_{\{j \,|\, {Bin}_{j} \text{ and } {Bin}_{i} \text{ are directly connected}\}}{Den}_{b}(C_{j}) $$
Bini is a core bin if
Denb(Ci) is larger than MinDenb, a user-specified parameter.
Denb(Ci) is larger than ρ% of its directly connected bins, where ρ is a user-specified parameter.
Denc(Ci) is larger than MinDenc, a user-specified parameter.
Bini is a border bin if it is not a core bin but it is directly connected to a core bin.
Bini is an outlier bin, if it is not a core bin nor a border bin.
Bina and Binb are in the same cluster, if they satisfy one of the following conditions:
they are directly connected and at least one of them is core bin;
they are not directly connected but are connected by a sequence of directly connected core bins.
Two points are in the same cluster, if they belong to the same bin or their corresponding bins belong to the same cluster.
Algorithm 1 describes the key steps of FlowGrid, starting with normalising the values in each dimension to range between 1 and (Nbin+1). We then use the integer part of the normalised value as the coordinate of its corresponding bin. Next, the SearchCore algorithm is applied to discover the core bins and their directly connected bins. Once the core bins and connections are found, Breadth-First Search (BFS) is used to group the connected bins into clusters. The cells are labelled by the label of their corresponding bins.
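To make these steps concrete, the following is a minimal Python sketch of the binning, core-bin search and BFS grouping described above. It follows the definitions in this section only; the function name and parameter names (nbin, eps, min_den_b, min_den_c, rho) are illustrative and do not reflect the actual FlowGrid API, and the pairwise neighbour search is written for readability rather than speed.

import numpy as np
from collections import defaultdict, deque

def flowgrid_sketch(data, nbin=10, eps=2.0, min_den_b=3, min_den_c=40, rho=85):
    # Step 1: normalise each dimension to [1, nbin + 1) and take the integer
    # part as the bin coordinate in that dimension.
    lo, hi = data.min(axis=0), data.max(axis=0)
    coords = (1 + nbin * (data - lo) / (hi - lo + 1e-12)).astype(int)
    # Step 2: count points per non-empty bin (Den_b).
    den_b = defaultdict(int)
    for c in map(tuple, coords):
        den_b[c] += 1
    bins = list(den_b)
    # Step 3: directly connected bins satisfy Dist(Ci, Cj)^2 <= eps.
    # (A real implementation indexes the bins instead of this pairwise loop.)
    neighbours = {b: [] for b in bins}
    for i, bi in enumerate(bins):
        for bj in bins[i + 1:]:
            if sum((x - y) ** 2 for x, y in zip(bi, bj)) <= eps:
                neighbours[bi].append(bj)
                neighbours[bj].append(bi)
    # Step 4: core-bin test (local density, collective density, rho% rule).
    def is_core(b):
        nbrs = neighbours[b]
        den_c = den_b[b] + sum(den_b[n] for n in nbrs)  # collective density, counting the bin itself
        denser_than = sum(den_b[b] >= den_b[n] for n in nbrs)
        return (den_b[b] > min_den_b and den_c > min_den_c
                and (not nbrs or denser_than >= rho / 100.0 * len(nbrs)))
    core = {b for b in bins if is_core(b)}
    # Step 5: BFS from core bins; border bins inherit a neighbouring core
    # bin's label but do not propagate it further.
    label, next_label = {}, 0
    for b in core:
        if b in label:
            continue
        label[b] = next_label
        queue = deque([b])
        while queue:
            cur = queue.popleft()
            for n in neighbours[cur]:
                if n not in label:
                    label[n] = next_label
                    if n in core:
                        queue.append(n)
        next_label += 1
    # Step 6: cells take the label of their bin; unlabelled bins are outliers (-1).
    return np.array([label.get(tuple(c), -1) for c in coords])

For example, flowgrid_sketch(np.random.rand(10000, 4)) returns one integer label per cell, with -1 marking cells that fall in outlier bins.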
FlowGrid aims to be an ultrafast and accurate clustering algorithm for very large flow cytometry data. Therefore, both the accuracy and scalability performance need to be evaluated. The benchmark data sets from Flow-CAP [3], the multi-centre CyTOF data from Li et al. [9] and the SeaFlow project [10] are selected to compare the performance of FlowGrid against other state-of-the-art algorithms, FlowSOM, FlowPeaks, and FLOCK. These three algorithms are chosen because they are widely used, are generally considered to be quite fast, and have good accuracy.
Three benchmark data sets from Flow-CAP [3] are selected for evaluation, including the Diffuse Large B-cell Lymphoma (DLBL), Hematopoietic Stem Cell Transplant (HSCT), and Graft versus Host Disease(GvHD) data set. Each data set contains 10-30 samples with 3-4 markers, and each sample includes 2,000-35,000 cells.
The multi-centre CyTOF data set from Li et al. [9] provides a labelled data set with 16 samples. Each sample contains 40,000-70,000 cells and 26 markers. Since only 8 out of 26 markers are determined to be relevant markers in the original paper [9], only these 8 markers are used for clustering.
We also use three data sets from the SeaFlow project [10], each containing many samples. Instead of analysing the independent samples, we analyse the concatenated data sets as in the original paper [10]; these concatenated data sets contain 12.7, 22.7 and 23.6 million cells, respectively. Each data set includes 15 features, but the original study only uses four features for clustering analysis: forward scatter (small and perpendicular), phycoerythrin, and chlorophyll (small) [10].
In the evaluation, we treat the manual gating labels as the gold standard for measuring the quality of clustering. In the pre-processing step, we apply the inverse hyperbolic function with the factor of 5 to transform the multi-centre data and the SeaFlow data. As the Flow-CAP and multi-centre CyTOF data contain many samples, we treat each sample as a data set and run all algorithms on each sample. The performances are measured by the ARI and runtime, which are reported as the arithmetic mean (\(\bar {x}\)) and standard deviation (sd). For the SeaFlow data, we treat each concatenated data set as a single data set, and all algorithms are applied to these concatenated data sets.
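Assuming that the "inverse hyperbolic function with the factor of 5" refers to the arcsinh transform with cofactor 5 commonly applied to cytometry data (an assumption, since the exact form is not spelled out here), the pre-processing step corresponds to something like the following:

import numpy as np
raw = np.array([[0.0, 12.0, 250.0], [3.0, 40.0, 1.0e4]])  # toy marker intensities
transformed = np.arcsinh(raw / 5.0)  # cofactor-5 arcsinh transform (assumed form)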
To evaluate the scalability of each algorithm, we down-sample the largest concatenated data set from the SeaFlow project, generating 10 sub-sampled data sets in which the numbers of cells range from 20 thousand to 20 million.
The efficiency is measured by the runtime, while the clustering performance is measured by the Adjusted Rand Index (ARI). The ARI is the corrected-for-chance version of the Rand index [11]. Although it may take negative values when the index is less than its expected value, it tends to be more robust than many other measures such as the F-measure and the Rand index.
The ARI is calculated as follows. Given a set S of n elements and two groups of cluster labels (one group of ground truth labels and one group of predicted labels) for these elements, namely X={X1,X2,…,Xr} and Y={Y1,Y2,…,Ys}, the overlap between X and Y can be summarized by nij, where nij denotes the number of objects in common between Xi and Yj: nij=|Xi∩Yj|.
$$ ARI = \frac{\sum_{ij} \binom{n_{ij}}{2} - \left[\sum_{i} \binom{a_{i}}{2} \sum_{j} \binom{b_{j}}{2}\right] / \binom{n}{2}}{\frac{1}{2}\left[\sum_{i} \binom{a_{i}}{2} + \sum_{j} \binom{b_{j}}{2}\right] - \left[\sum_{i} \binom{a_{i}}{2} \sum_{j} \binom{b_{j}}{2}\right] / \binom{n}{2}} $$
where \(a_{i}=\sum _{j} n_{ij}\) and \(b_{j}=\sum _{i} n_{ij}\) are the row and column sums of the contingency table \(n_{ij}\).
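As a cross-check of the formula above, the following sketch computes ARI directly from the contingency table and compares it with scikit-learn's implementation; only NumPy and scikit-learn (already listed as requirements of FlowGrid) are used, and the toy labels are illustrative.

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

def ari(truth, pred):
    """ARI from the contingency table n_ij; a_i and b_j are its row/column sums."""
    c2 = lambda x: x * (x - 1) / 2.0          # "choose 2", element-wise
    _, class_idx = np.unique(truth, return_inverse=True)
    _, cluster_idx = np.unique(pred, return_inverse=True)
    n_ij = np.zeros((class_idx.max() + 1, cluster_idx.max() + 1), dtype=np.int64)
    np.add.at(n_ij, (class_idx, cluster_idx), 1)
    a, b, n = n_ij.sum(axis=1), n_ij.sum(axis=0), n_ij.sum()
    index = c2(n_ij).sum()
    expected = c2(a).sum() * c2(b).sum() / c2(n)
    max_index = 0.5 * (c2(a).sum() + c2(b).sum())
    return (index - expected) / (max_index - expected)

truth = [0, 0, 1, 1, 2, 2]
pred = [0, 0, 1, 2, 2, 2]
assert np.isclose(ari(truth, pred), adjusted_rand_score(truth, pred))
```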
FlowGrid is publicly available as an open-source program on GitHub. FlowSOM and FlowPeaks are available as R packages from Bioconductor. The source code of Flock is downloaded from its Sourceforge repository. To reproduce all the comparisons presented in this paper, the source code and data can be downloaded from the GitHub repository FlowGrid_compare. We ran all the experiments on a six-core 2.60 GHz CPU with 32 GB RAM.
FlowPeaks and Flock provide fully automated versions without any user-supplied parameters. FlowSOM requires one user-supplied parameter (k, the number of clusters in the meta-clustering step). FlowGrid requires two user-supplied parameters (binn and ε). To optimise the results, we try many values of k for FlowSOM and many combinations of binn and ε for our algorithm.
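A minimal sketch of this parameter search is given below; `run_flowgrid` is a hypothetical wrapper standing in for the actual tool invocation, and the parameter ranges follow the recommendations given later in the paper.

```python
from itertools import product
from sklearn.metrics import adjusted_rand_score

def tune_flowgrid(data, truth, run_flowgrid, bins=range(4, 16), eps_values=range(1, 6)):
    """Grid-search over (binn, eps) and keep the combination with the best ARI."""
    best_params, best_ari = None, -1.0
    for binn, eps in product(bins, eps_values):
        labels = run_flowgrid(data, binn=binn, eps=eps)
        score = adjusted_rand_score(truth, labels)
        if score > best_ari:
            best_params, best_ari = (binn, eps), score
    return best_params, best_ari
```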
Performance comparison
Table 1 summarises the runtime of our algorithm and the three other algorithms – FlowSOM, FlowPeaks, and FLOCK. Our algorithm is substantially faster than the other clustering algorithms on all the data sets; the improvement in runtime is especially substantial on the SeaFlow data sets. FLOCK and FlowPeaks sometimes fail to complete on some of the data sets. On a data set of 23.6 million cells, FlowSOM completes the execution in 572 s, whereas FlowGrid completes it in only 12 s, a roughly 50× speed-up. Table 2 summarises the clustering accuracy. On the Flow-CAP and multi-centre data sets, FlowGrid achieves similar clustering accuracy (in terms of ARI) to the other algorithms, but on the SeaFlow data sets it gives higher accuracy than the other clustering algorithms.
Table 1 Comparison of runtime (in seconds) of FlowGrid against other clustering algorithms
Table 2 Comparison of accuracy (in ARI) of FlowGrid against other clustering algorithms
Figure 2 shows the clustering results of our algorithm and the three other algorithms on an HSCT sample. FlowGrid, FlowSOM and FlowPeaks recover a similar number of clusters, and their clustering results are largely similar, whereas Flock generates too many clusters in this case. It is important to note that FlowGrid also identifies cells that do not belong to any main cluster (i.e., any high-density region). These cells can be viewed as 'outliers' and are labelled as '-1' in Fig. 2. This feature is not present in the other clustering algorithms.
Visual comparison of the clustering performance of FlowGrid, FlowPeaks, FlowSOM, and Flock using manual gating (top row) as the gold standard
Scalability analysis
To further evaluate the scalability of the algorithms, we sub-sample one SeaFlow data set so that the sampled data sets range from 20 thousand to 20 million cells. Figure 3 shows the scalability of our algorithm and the three other algorithms. Flock has a low runtime on small data sets, but its runtime increases dramatically to 6640 s for the 20 million-cell data set. FlowPeaks and FlowSOM share similar scalability, but FlowPeaks is not able to process the 20 million-cell data set. Our algorithm has the best performance in this evaluation, as FlowGrid is faster than the other algorithms by an order of magnitude on all the sampled data sets.
Comparison of the runtime of FlowGrid, FlowPeaks, FlowSOM, and Flock using data sets with different number of cells
Parameter robustness analysis
As with other density-based clustering algorithms, parameter setting is important. In our experience, Binn and ε are data-set-dependent. We recommend trying different combinations of Binn between 4 and 15 and ε between 1 and 5. To pick the best parameter combination, some prior knowledge is helpful, such as the expected number of clusters and the proportion of outliers, which in our experience should be less than 10 %.
We found that the other parameters, namely MinDenb, MinDenc and ρ, are mostly robust across a wide range of values.
To demonstrate this robustness, we use the benchmark data sets from Flow-CAP for a parameter sensitivity analysis. For these experiments, we first set 3, 40, 85, 4 and 1 as the default values for MinDenb, MinDenc, ρ, Binn and ε, respectively. In each experiment, we change only one parameter to test the sensitivity of the overall clustering result to it, measuring performance by ARI and runtime. In the first experiment, we vary MinDenb from 1 to 50 while fixing the other parameters. In the second experiment, we vary MinDenc from 10 to 300 while fixing the other parameters. In the third experiment, we vary ρ from 70 to 95 while fixing the other parameters.
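The one-parameter-at-a-time protocol can be sketched as follows. `run_flowgrid` is again a hypothetical wrapper, the keyword names are illustrative, and the sweep step sizes are assumptions (only the ranges and defaults are stated in the text).

```python
from sklearn.metrics import adjusted_rand_score

DEFAULTS = dict(min_den_b=3, min_den_c=40, rho=85, binn=4, eps=1)
SWEEPS = {"min_den_b": range(1, 51),
          "min_den_c": range(10, 301, 10),
          "rho": range(70, 96)}

def sensitivity(data, truth, run_flowgrid):
    """Vary one parameter at a time while holding the others at their defaults."""
    results = {}
    for name, values in SWEEPS.items():
        results[name] = [
            (v, adjusted_rand_score(truth, run_flowgrid(data, **{**DEFAULTS, name: v})))
            for v in values
        ]
    return results
```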
Figure 4 demonstrates that the clustering accuracy and runtime are largely insensitive to MinDenb, MinDenc and ρ across a large range of parameter values. The experiments are applied to all the benchmark data sets from Flow-CAP, and similar results are observed in all of them. In our experiments, when MinDenb, MinDenc and ρ are set to 3, 40 and 85 respectively, FlowGrid maintains good clustering accuracy and excellent runtime, so these values are set as the default parameters of FlowGrid.
Sensitivity analysis of three different parameters on clustering accuracy (as measured by adjusted rand index; ARI) and runtime (seconds)
In this paper, we have developed an ultrafast clustering algorithm, FlowGrid, for single-cell flow cytometry analysis, and compared it against other state-of-the-art algorithms such as Flock, FlowSOM and FlowPeaks. FlowGrid borrows ideas from DBSCAN for the detection of high-density regions and outliers. It not only performs well in the presence of outliers, but also has great scalability without running into memory issues; it is both time efficient and memory efficient. FlowGrid shares similar clustering accuracy with state-of-the-art flow cytometry clustering algorithms, but is substantially faster than them. For any given number of markers, the runtime of FlowGrid scales linearly with the number of cells, which is a useful property for extremely large data sets.
MinDenb and MinDenc are density threshold parameters used to reduce the search space of high-density bins. If these parameters are set very low, the runtime may increase slightly but the accuracy is unlikely to be affected. If they are set very high, the runtime decreases slightly, but real clusters may be split and spurious outliers created. In any case, we showed that the performance of FlowGrid is generally robust against changes in MinDenb, MinDenc and ρ.
The current implementation of FlowGrid is already very fast for most practical purposes. In the future, if data sizes grow even larger, it is possible to speed up FlowGrid further by parallelising the binning step, which is currently the most computationally intensive part of the algorithm.
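As a rough illustration of how the binning step could be parallelised, the sketch below splits the cells into chunks and assigns integer grid coordinates in worker processes. The equal-width binning shown here is an assumption standing in for FlowGrid's actual binning scheme, and the chunk and worker counts are arbitrary.

```python
import numpy as np
from multiprocessing import Pool

def bin_chunk(args):
    """Map each cell in a chunk to integer grid coordinates (binn bins per marker)."""
    chunk, mins, maxs, binn = args
    scaled = (chunk - mins) / (maxs - mins + 1e-12)
    return np.minimum((scaled * binn).astype(np.int32), binn - 1)

def parallel_binning(data, binn=4, n_workers=6, n_chunks=24):
    mins, maxs = data.min(axis=0), data.max(axis=0)
    chunks = np.array_split(data, n_chunks)
    args = [(c, mins, maxs, binn) for c in chunks]
    # On spawn-based platforms, call this under `if __name__ == "__main__":`.
    with Pool(n_workers) as pool:
        coords = pool.map(bin_chunk, args)
    return np.vstack(coords)
```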
ARI:
Adjusted rand index
BFS:
Breadth first search
CyTOF:
Mass cytometry
DBSCAN:
Density-based spatial clustering of applications with noise
Flow-CAP:
Flow cytometry critical assessment of population identification methods
SOM:
Self-organising map
Weber LM, Robinson MD. Comparison of clustering methods for high-dimensional single-cell flow and mass cytometry data. Cytom Part A. 2016; 89(12):1084–96.
Saeys Y, Van Gassen S, Lambrecht BN. Computational flow cytometry: helping to make sense of high-dimensional immunology data. Nat Rev Immunol. 2016; 16(7):449.
Aghaeepour N, Finak G, Hoos H, Mosmann TR, Brinkman R, Gottardo R, Scheuermann RH, Consortium F, Consortium D, et al. Critical assessment of automated flow cytometry data analysis techniques. Nat Methods. 2013; 10(3):228.
Qian Y, Wei C, Eun-Hyung Lee F, Campbell J, Halliley J, Lee JA, Cai J, Kong YM, Sadat E, Thomson E, et al. Elucidation of seventeen human peripheral blood B-cell subsets and quantification of the tetanus response using a density-based method for the automated identification of cell populations in multidimensional flow cytometry data. Cytom Part B Clin Cytom. 2010; 78(S1):69–82.
Ge Y, Sealfon SC. flowPeaks: a fast unsupervised clustering for flow cytometry data via k-means and density peak finding. Bioinformatics. 2012; 28(15):2052–8.
Van Gassen S, Callebaut B, Van Helden MJ, Lambrecht BN, Demeester P, Dhaene T, Saeys Y. FlowSOM: Using self-organizing maps for visualization and interpretation of cytometry data. Cytom Part A. 2015; 87(7):636–45.
Johnsson K, Wallin J, Fontes M. BayesFlow: latent modeling of flow cytometry cell populations. BMC Bioinformatics. 2016; 17(1):25.
Ester M, Kriegel H-P, Sander J, Xu X, et al. A density-based algorithm for discovering clusters in large spatial databases with noise. In: Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD'96).Portland: AAAI Press: 1996. p. 226–231.
Li H, Shaham U, Stanton KP, Yao Y, Montgomery RR, Kluger Y. Gating mass cytometry data by deep learning. Bioinformatics. 2017; 33(21):3423–30.
Hyrkas J, Clayton S, Ribalet F, Halperin D, Virginia Armbrust E, Howe B. Scalable clustering algorithms for continuous environmental flow cytometry. Bioinformatics. 2015; 32(3):417–23.
Hubert L, Arabie P. Comparing partitions. J Classif. 1985; 2(1):193–218.
We thank members of the Ho Laboratory for their valuable comments.
This work was supported in part by funds from the New South Wales Ministry of Health, a National Health and Medical Research Council Career Development Fellowship (1105271 to JWKH), and a National Heart Foundation Future Leader Fellowship (100848 to JWKH). Publication charge is supported by the Victor Chang Cardiac Research Institute.
∙ Project Name: FlowGrid
∙ Project Home Page: https://github.com/VCCRI/FlowGrid
∙ Operating Systems: Unix, Mac, Windows
∙ Programming Languages: Python
∙ Other Requirements: sklearn, numpy
∙ License: MIT Public License
∙ Any Restrictions to Use By Non-Academics: None
This article has been published as part of BMC Systems Biology Volume 13 Supplement 2, 2019: Selected articles from the 17th Asia Pacific Bioinformatics Conference (APBC 2019): systems biology. The full contents of the supplement are available online at https://bmcsystbiol.biomedcentral.com/articles/supplements/volume-13-supplement-2.
Victor Chang Cardiac Research Institute, Sydney, Australia
Xiaoxin Ye & Joshua W. K. Ho
University of New South Wales, Sydney, Australia
School of Biomedical Sciences, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Pokfulam, Hong Kong
Joshua W. K. Ho
Xiaoxin Ye
XY and JWKH initiated and designed the project. XY implemented the algorithm, carried out all the experiments, and wrote the paper. JWKH revised the paper. Both authors approved the final version of the paper.
Correspondence to Joshua W. K. Ho.
Ye, X., Ho, J. Ultrafast clustering of single-cell flow cytometry data using FlowGrid. BMC Syst Biol 13 (Suppl 2), 35 (2019). https://doi.org/10.1186/s12918-019-0690-2
Single cell
|
CommonCrawl
|
Give the name of the following simple binary ionic compounds
A simple binary compound is just what it seems – a compound containing two elements. An ionic compound is a compound held together by ionic bonds; positively charged ions are called cations and negatively charged ions are called anions. To name a simple binary ionic compound, the metal cation is named first, followed by the nonmetal anion with the suffix -ide added to it. Examples: BeO is beryllium oxide, LiF is lithium fluoride, LiBr is lithium bromide, CsF is cesium fluoride, NaBr is sodium bromide, MgCl2 is magnesium chloride, MgI2 is magnesium iodide, Al2S3 is aluminum sulfide, BaI2 is barium iodide, CaBr2 is calcium bromide, CsBr is cesium bromide, K2S is potassium sulfide, and S2- on its own is named sulfide.
Transition metals – the elements in the middle of the periodic table, such as zinc, iron and copper – can form more than one ion, so when compounds are formed with these metals the different ions have to be accounted for. These compounds are named like other binary compounds, with the cation first and the anion ending in -ide, but a Roman numeral in parentheses is added to the cation to specify its charge. For instance, Fe2+ is iron(II), so the compound with the formula FeCl2 is iron(II) chloride; saying only "iron chloride" would not give the full story. Likewise, SnCl4 is tin(IV) chloride and Ag2S is silver(I) sulfide.
If a nonmetal can form more than one oxyanion, the ion name gets a suffix of either -ate or -ite: when only two versions exist, the one with more oxygen atoms ends in -ate and the one with fewer ends in -ite. If there are four versions of the oxyanion, the smallest also gets the prefix hypo- (with the suffix -ite), and the fourth and largest gets the prefix per- (with the suffix -ate). For example, ClO- is hypochlorite because it is the smallest, while ClO3- is chlorate, so KClO3 is potassium chlorate – the metal cation named first, followed by the oxyanion.
Practice. Give the name of each of the following simple binary ionic compounds: a. NaI, b. CaF2, c. Al2S3, d. CaBr2, e. SrO, f. AgCl, g. CsI, h. Li2O. Write the name of each of the following ionic substances, using the system that includes a Roman numeral to specify the charge of the cation: a. MnCl2, b. HgO, c. SnCl2, d. PbCl4. Identify each case in which the name is incorrect, and give the correct name: a. CaH2, calcium hydride; b. PbCl2, lead(IV) chloride; c. CrI3, chromium(III) iodide; d. Na2S, disodium sulfide; e. CuBr2, cupric bromide.
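To make the rule concrete, here is a minimal Python sketch of the cation-first, -ide-ending convention; the lookup tables cover only the examples discussed above and the function name is purely illustrative.

```python
# Tiny lookup tables covering only the examples above.
CATION_NAMES = {"Na": "sodium", "Ca": "calcium", "Al": "aluminum", "Cs": "cesium",
                "Li": "lithium", "Mg": "magnesium", "K": "potassium",
                "Ba": "barium", "Fe": "iron"}
ANION_NAMES = {"I": "iodide", "F": "fluoride", "S": "sulfide", "Br": "bromide",
               "O": "oxide", "Cl": "chloride"}

def name_binary_ionic(cation, anion, roman=None):
    """Cation first (with a Roman numeral if the metal forms several ions),
    then the anion with its -ide ending."""
    metal = CATION_NAMES[cation]
    if roman:  # e.g. transition metals such as iron
        metal += f"({roman})"
    return f"{metal} {ANION_NAMES[anion]}"

print(name_binary_ionic("Na", "I"))               # sodium iodide
print(name_binary_ionic("Mg", "Cl"))              # magnesium chloride
print(name_binary_ionic("Fe", "Cl", roman="II"))  # iron(II) chloride (FeCl2)
```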
|
CommonCrawl
|
Choose the appropriate word/phrase, out of the four options given below, to complete the following sentence.
Dhoni, as well as the other team members of the Indian team, ________ present on the occasion.
(A) were
(B) was
(C) has
(D) have
Answer : (B) was
Choose the word most similar in meaning to the given word:
(A) Inept
(B) Graceful
(C) Suitable
(D) Dreadful
Answer : (A) Inept
What is the adverb for the given word below?
Misogynous
(A) Misogyousness
(B) Misogynity
(C) Misogynously
(D) Misogynous
Answer : (C) Misogynously
An electric bus has onboard instruments that report the total electricity consumed since the start of the trip as well as the total distance covered. During a single day of operation, the bus travels on stretches M, N, O and P, in that order. The cumulative distances travelled and the corresponding electricity consumption are shown in the table below:
Stretch Cumulative distance (km) Electricity used (kWh)
N 45 25
O 75 45
P 100 57
The stretch where the electricity consumption per km is minimum is
(A) M
(B) N
(C) O
(D) P
Answer : (D) P
Ram and Ramesh appeared in an interview for two vacancies in the same department. The probability of Ram's selection is 1/6 and that of Ramesh is 1/8. What is the probability that only one of them will be selected?
(A) 47/48
(B) 1/4
(C) 13/48
(D) 35/48
Answer : (B) 1/4
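A brief worked check of this answer, assuming the two selections are independent:
$$ P(\text{exactly one selected}) = \frac{1}{6}\cdot\frac{7}{8} + \frac{5}{6}\cdot\frac{1}{8} = \frac{7}{48} + \frac{5}{48} = \frac{12}{48} = \frac{1}{4} $$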
In the following sentence certain parts are underlined and marked P, Q and R. One of the parts may contain an error or may not be acceptable in standard written communication. Select the part containing an error. Choose D as your answer if there is no error.
The student corrected all the errors that the instructor marked on the answer book.
(A) P
(B) Q
(C) R
(D) No error
Answer : (B) Q
Given below are two statements followed by two conclusions. Assuming these statements to be true, decide which conclusions logically follow.
Statements:
I. All film stars are playback singers.
II. All film directors are film stars.
Conclusions :
I. All film directors are playback singers.
II. Some film stars are film directors.
(A) Only conclusion I follows.
(B) Only conclusion II follows.
(C) Neither conclusion I nor II follows.
(D) Both conclusions I and II follow.
Answer : (D) Both conclusions I and II follow.
A tiger is 50 leaps of its own behind a deer. The tiger takes 5 leaps per minute to the deer's 4. If the tiger and the deer cover 8 metres and 5 metres per leap respectively, what distance in metres will the tiger have to run before it catches the deer?
Answer : 800
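A short verification of the stated answer, assuming the tiger and deer start moving at the same time:
$$ \text{initial gap} = 50 \times 8 = 400\ \text{m}, \qquad v_{\text{tiger}} = 5 \times 8 = 40\ \text{m/min}, \qquad v_{\text{deer}} = 4 \times 5 = 20\ \text{m/min} $$
$$ t = \frac{400}{40-20} = 20\ \text{min}, \qquad \text{distance run by the tiger} = 40 \times 20 = 800\ \text{m} $$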
If $a^2+b^2+c^2 = 1$, then $ab+bc+ac$ lies in the interval.
(A) [1,2/3]
(B) [-1/2,1]
(C) [-1,1/2]
(D) [2,-4]
Answer : (B) [-1/2,1]
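One way to see why the bounds are $-\tfrac{1}{2}$ and $1$ under the constraint $a^2+b^2+c^2=1$:
$$ (a+b+c)^2 = 1 + 2(ab+bc+ca) \ge 0 \;\Rightarrow\; ab+bc+ca \ge -\tfrac{1}{2}, $$
$$ 1 - (ab+bc+ca) = \tfrac{1}{2}\left[(a-b)^2+(b-c)^2+(c-a)^2\right] \ge 0 \;\Rightarrow\; ab+bc+ca \le 1. $$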
Lamenting the gradual sidelining of the arts in school curricula, a group of prominent artists wrote to the Chief Minister last year, asking him to allocate more funds to support arts education in schools. However, no such increase has been announced in this year's Budget. The artists expressed their deep anguish at their request not being approved, but many of them remain optimistic about funding in the future.
Which of the statement(s) below is/are logically valid and can be inferred from the above statements?
(i) The artists expected funding for the arts to increase this year.
(ii) The Chief Minister was receptive to the idea of increasing funding for the arts.
(iii) The Chief Minister is a prominent artist.
(iv) Schools are giving less importance to arts education nowadays.
(A) (iii) and (iv)
(B) (i) and (iv)
(C) (i) ,(ii) an(iv)
(D) (i) and (iii)
Answer : (B) (i) and (iv)
If any two columns of the determinant $P=\begin{vmatrix}4 & 7 & 8\\ 3 & 1 & 5\\ 9 & 6 & 2\end{vmatrix}$ are interchanged, which one of the following statements regarding the value of the determinant is CORRECT?
(A) Absolute value remains unchanged but sign will change.
(B) Both absolute value and sign will change.
(C) Absolute value will change but sign will not change .
(D) Both absolute value and sign will remain unchanged.
Answer : (A) Absolute value remains unchanged but sign will change.
Among the four normal distributions with probability density functions as shown below, which one has the lowest variance?
(A) I
(B) II
(C) III
(D) IV
Answer : (D) IV
Simpson's 1/3 rule is used to integrate the function $f(x)=\frac{3}{5}x^2+\frac{9}{5}$ between $x = 0$ and $x = 1$ using the least number of equal sub-intervals. The value of the integral is ___________
Answer : 2
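The sketch below verifies the stated answer with a composite Simpson's 1/3 rule; the integrand is the one reconstructed above, and two sub-intervals suffice because the rule is exact for quadratics.

```python
def simpson_one_third(f, a, b, n=2):
    """Composite Simpson's 1/3 rule with n (even) equal sub-intervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

f = lambda x: (3 / 5) * x ** 2 + 9 / 5
print(simpson_one_third(f, 0.0, 1.0))  # 2.0
```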
The value of $ \lim_{x\rightarrow0}\frac{1-\cos\left(x^2\right)}{2x^4}\; $ is
(D) undefined
Given two complex numbers $z_1=5+5\sqrt{3}\,i$ and $z_2=\frac{2}{\sqrt{3}}+2i$, the argument of $\frac{z_1}{z_2}$ in degrees is
Answer : (A) 0
Consider fully developed flow in a circular pipe with negligible entrance length effects. Assuming the mass flow rate, density and friction factor to be constant, if the length of the pipe is doubled and the diameter is halved, the head loss due to friction will increase by a factor of
Answer : (D) 64
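A short derivation of the factor, assuming Darcy friction with constant $f$, $\rho$ and $\dot m$:
$$ h_f = \frac{fLV^2}{2gD}, \qquad V = \frac{\dot m}{\rho\,\pi D^2/4} \propto \frac{1}{D^2} \;\Rightarrow\; h_f \propto \frac{L}{D^5}, $$
$$ \frac{h_{f,2}}{h_{f,1}} = \left(\frac{2L}{L}\right)\left(\frac{D}{D/2}\right)^{5} = 2 \times 32 = 64. $$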
The Blasius equation related to boundary layer theory is a
(A) third-order linear partial differential equation
(B) third-order nonlinear partial differential equation
(C) second –order nonlinear ordinary differential equation
(D) third-order nonlinear ordinary differential equation
Answer : (D) third-order nonlinear ordinary differential equation
Subject : Fluid Mechanics Topic : Boundary Layer
For flow of viscous fluid over a flat plate, if the fluid temperature is the same as the plate temperature, the thermal boundary layer is
(A) thinner than the velocity boundary layer.
(B) thicker than the velocity boundary layer
(C) of the same thickness as the velocity boundary layer
(D) not formed at all
Answer : (D) not formed at all
Subject : Heat-Transfer Topic : Heat Transfer Correlations for Flow Over Flat Plates and through Pipes
For an ideal gas with constant values of specific heats, for calculation of the specific enthalpy,
(A) it is sufficient to know only the temperature
(B) both temperature and pressure are required to be known
(C) both temperature and volume are required to be known
(D) both temperature and mass are required to be known
Answer : (A) it is sufficient to know only the temperature
A Carnot engine (CE-1) works between two temperature reservoirs A and B, where $T_A$ = 900 K and $T_B$ = 500 K. A second Carnot engine (CE-2) works between temperature reservoirs B and C, where $T_C$ = 300 K. In each cycle of CE-1 and CE-2, all the heat rejected by CE-1 to reservoir B is used by CE-2. For one cycle of operation, if the net Q absorbed by CE-1 from reservoir A is 150 MJ, the net heat rejected to reservoir C by CE-2 (in MJ) is _________
Answer : 50
Subject : Thermodynamics Topic : Second Law of Thermodynamics
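A worked check of the answer, using the fact that a reversible engine exchanges heat in proportion to the reservoir temperatures:
$$ Q_B = Q_A\,\frac{T_B}{T_A} = 150 \times \frac{500}{900} \approx 83.3\ \text{MJ}, \qquad Q_C = Q_B\,\frac{T_C}{T_B} = 83.3 \times \frac{300}{500} = 50\ \text{MJ}. $$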
|
CommonCrawl
|
Schrödinger Equations with vanishing potentials involving Brezis-Kamin type problems
Jose Anderson Cardoso 1, , Patricio Cerda 2, , Denilson Pereira 3, and Pedro Ubilla 2,,
Departamento de Matemática, Universidade Federal de Sergipe, São Cristóvão-SE, 49100-000, Brazil
Departamento de Matematica y C. C., Universidad de Santiago de Chile, Casilla 307, Correo 2, Santiago, Chile
Unidade Acadêmica de Matemática, Universidade Federal de Campina Grande, Campina Grande 58429-900, Brazil
Fund Project: The first author is partially supported by FAPITEC/CAPES and by CNPq - Universal.
The second author was partially supported by Proyecto código 042033CL, Dirección de Investigación, Científica y Tecnológica, DICYT.
The third author was partially supported by Proyecto código 041933UL POSTDOC, Dirección de Investigación, Científica y Tecnológica, DICYT.
The fourth author was partially supported by FONDECYT grant 1181125, 1161635, 1171691
We prove the existence of a bounded positive solution for the following stationary Schrödinger equation
$ \begin{equation*} -\Delta u+V(x)u = f(x,u),\,\,\, x\in\mathbb{R}^n,\,\, n\geq 3, \end{equation*} $
where $ V $ is a vanishing potential and $ f $ has a sublinear growth at the origin (for example, if $ f(x,u) $ is a concave function near the origin). For this purpose we use a Brezis-Kamin argument included in [6]. In addition, if $ f $ has a superlinear growth at infinity, besides the first solution, we obtain a second solution. For this we introduce an auxiliary equation which is variational; however, new difficulties appear when handling the compactness. For instance, our approach can be applied to nonlinearities of the type $ \rho(x)f(u) $, where $ f $ is a concave-convex function and $ \rho $ satisfies the $ \mathrm{(H)} $ property introduced in [6]. We also note that we do not impose any integrability assumptions on the function $ \rho $, which is imposed in most works.
Keywords: Concave-convex nonlinearities, upper and lower solutions, variational methods, Schrödinger equation, bounded solutions.
Mathematics Subject Classification: 35J20, 35J10, 35J91, 35J15, 35B09.
Citation: Jose Anderson Cardoso, Patricio Cerda, Denilson Pereira, Pedro Ubilla. Schrödinger Equations with vanishing potentials involving Brezis-Kamin type problems. Discrete & Continuous Dynamical Systems - A, doi: 10.3934/dcds.2020392
A. Ambrosetti, H. Brezis and G. Cerami, Combined effects of concave and convex nonlinearities in some elliptic problems, J. Funct. Anal., 122 (1994), 519-543. doi: 10.1006/jfan.1994.1078. Google Scholar
A. Ambrosetti, V. Felli and A. Malchiodi, Ground states of nonlinear Schrödinger equations with potentials vanishing at infinity, J. Eur. Math. Soc., 7 (2005), 117-144. doi: 10.4171/JEMS/24. Google Scholar
A. Bahrouni, H. Ounaies and V. D. Rădulescu, Bound state solutions of sublinear Schrödinger equations with lack of compactness, RACSAM, 113 (2019), 1191-1210. doi: 10.1007/s13398-018-0541-9. Google Scholar
A. Bahrouni, H. Ounaies and V. D. Rădulescu, Infinitely many solutions for a class of sublinear Schrödinger equations with indefinite potentials, Proc. Roy. Soc. Edinburgh Sect. A, 145 (2015), 445-465. doi: 10.1017/S0308210513001169. Google Scholar
H. Brezis and L. Oswald, Remarks on sublinear elliptic equations, Nonlinear Analysis. Theory, Methods & Applications., 1 (1986), 55-64. doi: 10.1016/0362-546X(86)90011-8. Google Scholar
H. Brezis and S. Kamin, Sublinear elliptic equations in $\mathbb{R}^N$, Manuscripta Math., 74 (1992), 87-106. doi: 10.1007/BF02567660. Google Scholar
H. Brezis and L. Nirenberg, Positive solutions of nonlinear elliptic equations involving critical Sobolev exponents, Comm. Pure Appl. Math., 36 (1983), 437-477. doi: 10.1002/cpa.3160360405. Google Scholar
J. Chabrowski and J. M. B. do Ó, On semilinear elliptic equations involving concave and convex nonlinearities, Math. Nachr., 233/234 (2002), 55-76. doi: 10.1002/1522-2616(200201)233:1<55::AID-MANA55>3.0.CO;2-R. Google Scholar
D. G. de Figueiredo, J-P Gossez and P. Ubilla, Local superlinearity and sublinearity for indefinite semilinear elliptic problems, J. Funct. Anal., 199 (2003), 452-467. doi: 10.1016/S0022-1236(02)00060-5. Google Scholar
D. G. de Figueiredo, J-P Gossez and P. Ubilla, Multiplicity results for a family of semilinear elliptic problems under local superlinearity and sublinearity, J. Eur. Math. Soc., 8 (2006), 269-286. doi: 10.4171/JEMS/52. Google Scholar
F. Gazzola and A. Malchiodi, Some remark on the equation $-\Delta u = \lambda(1+u)^p$ for varying $\lambda, p$ and varying domains, Comm. Partial Differential Equations, 27 (2002), 809-845. doi: 10.1081/PDE-120002875. Google Scholar
D. Gilbarg and N. S. Trudinger, Elliptic Partial Differential Equations of Second Order, Springer-Verlag, 1983. doi: 10.1007/978-3-642-61798-0. Google Scholar
Q. Han and F. Lin, Elliptic Partial Differential Equations, Courant Lect. Notes Math., vol. 1, AMS, Providence, RI, 1997. Google Scholar
T-S Hsu and H-L Lin, Four positive solutions of semilinear elliptic equations involving concave and convex nonlinearities in $\mathbb{R}^n$, J. Math. Anal. Appl., 365 (2010), 758-775. doi: 10.1016/j.jmaa.2009.12.004. Google Scholar
Z. Liu and Z-Q Wang, Schrödinger equations with concave and convex nonlinearities, Z. angew. Math. Phys., 56 (2005), 609-629. doi: 10.1007/s00033-005-3115-6. Google Scholar
M. H. Protter and H. F. Weinberger, Maximum Principle in Differential Equations, Prentice Hall, Englewoood Cliffs, New Jersey, 1967. Google Scholar
T-F Wu, Multiple positive solutions for a class of concave-convex elliptic problems in $\mathbb{R}^n$ involving sign-changing weight, J. Funct. Anal., 258 (2010), 99-131. doi: 10.1016/j.jfa.2009.08.005. Google Scholar
Rim Bourguiba, Rosana Rodríguez-López. Existence results for fractional differential equations in presence of upper and lower solutions. Discrete & Continuous Dynamical Systems - B, 2021, 26 (3) : 1723-1747. doi: 10.3934/dcdsb.2020180
Alessandro Fonda, Rodica Toader. A dynamical approach to lower and upper solutions for planar systems "To the memory of Massimo Tarallo". Discrete & Continuous Dynamical Systems - A, 2021 doi: 10.3934/dcds.2021012
Xiyou Cheng, Zhitao Zhang. Structure of positive solutions to a class of Schrödinger systems. Discrete & Continuous Dynamical Systems - S, 2020 doi: 10.3934/dcdss.2020461
Claudianor O. Alves, Rodrigo C. M. Nemer, Sergio H. Monari Soares. The use of the Morse theory to estimate the number of nontrivial solutions of a nonlinear Schrödinger equation with a magnetic field. Communications on Pure & Applied Analysis, 2021, 20 (1) : 449-465. doi: 10.3934/cpaa.2020276
Alex H. Ardila, Mykael Cardoso. Blow-up solutions and strong instability of ground states for the inhomogeneous nonlinear Schrödinger equation. Communications on Pure & Applied Analysis, 2021, 20 (1) : 101-119. doi: 10.3934/cpaa.2020259
Haoyu Li, Zhi-Qiang Wang. Multiple positive solutions for coupled Schrödinger equations with perturbations. Communications on Pure & Applied Analysis, , () : -. doi: 10.3934/cpaa.2020294
Riadh Chteoui, Abdulrahman F. Aljohani, Anouar Ben Mabrouk. Classification and simulation of chaotic behaviour of the solutions of a mixed nonlinear Schrödinger system. Electronic Research Archive, , () : -. doi: 10.3934/era.2021002
Lingyu Li, Jianfu Yang, Jinge Yang. Solutions to Chern-Simons-Schrödinger systems with external potential. Discrete & Continuous Dynamical Systems - S, 2021 doi: 10.3934/dcdss.2021008
Zedong Yang, Guotao Wang, Ravi P. Agarwal, Haiyong Xu. Existence and nonexistence of entire positive radial solutions for a class of Schrödinger elliptic systems involving a nonlinear operator. Discrete & Continuous Dynamical Systems - S, 2020 doi: 10.3934/dcdss.2020436
Serge Dumont, Olivier Goubet, Youcef Mammeri. Decay of solutions to one dimensional nonlinear Schrödinger equations with white noise dispersion. Discrete & Continuous Dynamical Systems - S, 2020 doi: 10.3934/dcdss.2020456
Chungen Liu, Huabo Zhang. Ground state and nodal solutions for fractional Schrödinger-maxwell-kirchhoff systems with pure critical growth nonlinearity. Communications on Pure & Applied Analysis, , () : -. doi: 10.3934/cpaa.2020292
Juntao Sun, Tsung-fang Wu. The number of nodal solutions for the Schrödinger–Poisson system under the effect of the weight function. Discrete & Continuous Dynamical Systems - A, 2021 doi: 10.3934/dcds.2021011
Norman Noguera, Ademir Pastor. Scattering of radial solutions for quadratic-type Schrödinger systems in dimension five. Discrete & Continuous Dynamical Systems - A, 2021 doi: 10.3934/dcds.2021018
Oussama Landoulsi. Construction of a solitary wave solution of the nonlinear focusing schrödinger equation outside a strictly convex obstacle in the $ L^2 $-supercritical case. Discrete & Continuous Dynamical Systems - A, 2021, 41 (2) : 701-746. doi: 10.3934/dcds.2020298
Juliana Fernandes, Liliane Maia. Blow-up and bounded solutions for a semilinear parabolic problem in a saturable medium. Discrete & Continuous Dynamical Systems - A, 2021, 41 (3) : 1297-1318. doi: 10.3934/dcds.2020318
Philippe Laurençot, Christoph Walker. Variational solutions to an evolution model for MEMS with heterogeneous dielectric properties. Discrete & Continuous Dynamical Systems - S, 2021, 14 (2) : 677-694. doi: 10.3934/dcdss.2020360
Kihoon Seong. Low regularity a priori estimates for the fourth order cubic nonlinear Schrödinger equation. Communications on Pure & Applied Analysis, 2020, 19 (12) : 5437-5473. doi: 10.3934/cpaa.2020247
José Luis López. A quantum approach to Keller-Segel dynamics via a dissipative nonlinear Schrödinger equation. Discrete & Continuous Dynamical Systems - A, 2020 doi: 10.3934/dcds.2020376
|
CommonCrawl
|
Analytical Science and Technology (분석과학)
The Korean Society of Analytical Science (한국분석과학회)
Chemistry > Analytical Chemistry
Analytical Science and Technology is devoted to the publication of original and significant research on the fundamental theory, practice and application of analytical and bioanalytical science. Contributors from a broad spectrum of research fields, such as chemistry, chemical engineering, materials science, pharmaceuticals, agriculture, food and feed, and environmental science, are welcome.
http://acoms.kisti.re.kr/journal.do?method=journalintro&journalSeq=J000037&menuId=0200&introMenuId=0101 KSCI KCI
Characterization of carbon black nanoparticles using asymmetrical flow field-flow fractionation (AsFlFFF)
Kim, Kihyun;Lee, Seungho;Kim, Woonjung 77
https://doi.org/10.5806/AST.2019.32.3.77 PDF
High-viscosity carbon black dispersions are used in various industrial fields such as color cosmetics, rubber, tires, plastics and color filter inks. However, carbon black particles are inherently unstable to heat, and it is very difficult to keep product quality constant because the particles agglomerate. In general, particle size analysis during the dispersion process is performed by dynamic light scattering (DLS) in order to select the optimum dispersant. At low viscosity this provides reproducible particle size distributions, but at high viscosity the analysis is not reproducible, which makes it difficult to select the optimum dispersant. In this study, dynamic light scattering (DLS) and asymmetrical flow field-flow fractionation (AsFlFFF) were compared for reproducible particle size analysis of high-viscosity carbon black. First, the stability of the carbon black dispersion was investigated by particle size analysis with DLS and AsFlFFF as a function of milling time, and the validity of the analytical method for selecting the optimum dispersant for carbon black dispersion was confirmed. The correlation between the color of the dispersion and the particle size was investigated with a colorimeter, and the particle size distribution from AsFlFFF was consistent with the colorimetric results. This correlation between AsFlFFF and the colorimetric results confirms that AsFlFFF is a strong analytical method for determining the appropriate dispersant and milling time in high-viscosity carbon black dispersions. In addition, for nanoparticles with relatively broad particle size distributions, such as carbon black, AsFlFFF was found to provide a more accurate particle size distribution than DLS, because unlike DLS it separates particles by size and analyses each fraction.
Validation of an analytical method for cyanide determination in blood, urine, lung, and skin tissues of rats using gas chromatography mass spectrometry (GC-MS)
Shin, Min-Chul;Kwon, Young Sang;Kim, Jong-Hwan;Hwang, Kyunghwa;Seo, Jong-Su 88
This study was conducted to establish an analytical method for the determination of cyanide in blood, urine, lung and skin tissues of rats. To detect and quantify sodium cyanide in these biological matrices, it was derivatized to pentafluorobenzyl cyanide (PFB-CN) using pentafluorobenzyl bromide (PFB-Br), and the reaction product was analyzed by gas chromatography mass spectrometry (GC/MS) in selected ion monitoring (SIM) mode. The analytical method for cyanide determination was validated with respect to parameters such as selectivity, system suitability, linearity, accuracy and precision. No interfering peaks were observed in blank samples, zero samples or lower limit of quantification (LLOQ) samples. The limit of detection (LOD) for cyanide was $10{\mu}M$. The linear dynamic range was from 10 to $200{\mu}M$ with correlation coefficients higher than 0.99. For quality control samples at four different concentrations including the LLOQ, analyzed in quintuplicate on six separate occasions, the accuracy and precision ranged from -14.1 % to 14.5 % and from 2.7 % to 18.3 %, respectively. The GC/MS-based method established in this study could be applied to toxicokinetic studies of cyanide in biological matrices such as blood, urine, lung and skin tissues.
Establishment and validation of an analytical method for quality control of health functional foods derived from Agastache rugosa
Park, Keunbae;Jung, Dasom;Jin, Yan;Kim, Jin Hak;Geum, Jeong Ho;Lee, Jeongmi 96
Agastache rugosa, known as Korean mint, is a medicinal plant with many beneficial health effects. In this study, a simple and reliable HPLC-UV method was proposed for the quantification of rosmarinic acid (RA) in the aqueous extracts of A. rugosa. RA was selected as a quantification marker due to its easiness in procurement and analysis. The developed method involved chromatographic separation on a $C_{18}$ column ($250{\times}4.6mm$, $5{\mu}m$) at room temperature. The mobile phase consisted of water and acetonitrile both containing 2 % acetic acid and was run at a flow rate of $1mL\;min^{-1}$. The method was validated for specificity, linearity, precision, and accuracy. It was specific to RA and linear in the range of $50-300{\mu}g\;mL^{-1}$ ($r^2=0.9994$). Intra-day, inter-day, and inter-analyst precisions were ${\leq}0.91%\;RSD$, ${\leq}1.40%\;RSD$, and 1.94 % RSD, respectively. Accuracy was 93.3-95.9 % (${\leq}1.21%\;RSD$). The method could be applied to three batches of bulk samples and three batches of lab scale samples, which were found to be $0.64({\pm}0.04)mg\;g^{-1}$ and $0.48({\pm}0.02)mg\;g^{-1}$ for the dried raw materials of A. rugosa. The results show that the proposed method can be used as a readily applicable method for QC of health functional foods containing the aqueous extracts of A. rugosa.
Development of latent fingerprints contaminated with ethanol on paper surfaces
Park, Eun-Jung;Hong, Sungwook 105
https://doi.org/10.5806/AST.2019.32.3.105 PDF
Fingerprints may be contaminated with ethanol solutions, and to solve a case a law enforcement agency may need to visualize fingerprints from such samples, but development methods for them have not been studied. Paper bearing latent fingerprints was contaminated with ethanol solutions and the blurring of ridge detail was observed. When copy paper was contaminated with ethanol solutions of less than 75 % (v/v), the amino acid components of the latent fingerprint residue blurred but the lipid components did not. On the other hand, when the paper was contaminated with ethanol solutions of 80 % (v/v) or more, the amino acid components did not blur but the lipid components did. Therefore, paper contaminated with ethanol solutions of less than 75 % (v/v) should be treated with oil red O (ORO), which enhances lipid components, and paper contaminated with ethanol solutions of 80 % (v/v) or more should be treated with 1,2-indandione/zinc (1,2-IND/Zn), which enhances amino acid components. No blurring of ridge detail was observed when fingerprints were deposited by fingers contaminated with ethanol solution. These fingerprints were treated with 1,2-IND/Zn or ORO to compare their development ability, and 1,2-IND/Zn visualized the latent fingerprints more clearly than ORO.
Chemical enhancement of footwear impressions in urine on the surface of tiles
Kim, Sung Jin;Hong, Sungwook 113
The enhancement of footwear impressions in urine on the surface of tiles was studied using p-dimethylaminocinnamaldehyde (DMAC), which reacts with urea, and ninhydrin, 1,8-diazafluoren-9-one (DFO) and 1,2-indanedione/zinc (1,2-IND/Zn), which react with amino acids. Comparing the application methods, ninhydrin and 1,2-IND/Zn were best applied by spraying directly onto the footwear impression, whereas DFO and DMAC were best applied by the dry contact method, in which reagent-impregnated paper is pressed onto the impression with heat. DMAC applied with the dry contact method showed the best contrast and enhancement on both white and black tiles when comparing sensitivity across different urine dilution ratios and aging times of the impressions. When DMAC (dry contact method) was applied to floor tiles collected from various places in a building's men's and women's bathrooms, the footwear impressions in urine were successfully enhanced, so it is believed that the method can be used to recover footwear impressions in urine from real crime scenes.
|
CommonCrawl
|
doi: 10.3934/jimo.2019129
Note on $ Z $-eigenvalue inclusion theorems for tensors
Chaoqian Li , Yajun Liu and Yaotang Li
School of Mathematics and Statistics, Yunnan University, Kunming 650091, China
Received January 2019 Revised April 2019 Published October 2019
Wang et al. gave four $ Z $-eigenvalue inclusion intervals for tensors in [Discrete and Continuous Dynamical Systems Series B, 22 (2017), 187-198]. However, these intervals always include zero, and hence cannot be used to identify the positive definiteness of a homogeneous polynomial form. In this note, we present a new $ Z $-eigenvalue inclusion interval with parameters for even-order tensors, which not only overcomes the above shortcomings under certain conditions, but also provides a checkable sufficient condition for the positive definiteness of homogeneous polynomial forms, as well as for the asymptotic stability of time-invariant polynomial systems.
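For reference, a $ Z $-eigenpair $(\lambda, x)$ of an $m$-th order $n$-dimensional real tensor $\mathcal{A}$ is usually defined (following the spectral theory of Qi cited in the reference list) by
$$ \mathcal{A}x^{m-1} = \lambda x, \qquad x^{\top}x = 1, \qquad \lambda\in\mathbb{R},\; x\in\mathbb{R}^{n}, $$
where $(\mathcal{A}x^{m-1})_i = \sum_{i_2,\ldots,i_m=1}^{n} a_{i i_2 \cdots i_m} x_{i_2}\cdots x_{i_m}$.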
Keywords: Z-eigenvalue, inclusion interval, asymptotically stable, polynomial systems.
Mathematics Subject Classification: Primary: 15A18, 15A69, 65F15, 65F10.
Citation: Chaoqian Li, Yajun Liu, Yaotang Li. Note on $ Z $-eigenvalue inclusion theorems for tensors. Journal of Industrial & Management Optimization, doi: 10.3934/jimo.2019129
K. C. Chang, K. J. Pearson and T. Zhang, Some variational principles for Z-eigenvalues of nonnegative tensors, Linear Algebra Appl., 438 (2013), 4166-4182. doi: 10.1016/j.laa.2013.02.013. Google Scholar
C. Deng, H. Li and C. Bu, Brauer-type eigenvalue inclusion sets of stochastic/irreducible tensors and positive definiteness of tensors, Linear Algebra Appl., 556 (2018), 55-69. doi: 10.1016/j.laa.2018.06.032. Google Scholar
P. V. D. Driessche, Reproduction numbers of infectious disease models., Infectious Disease Model., 2 (2017), 288-303. doi: 10.1016/j.idm.2017.06.002. Google Scholar
O. Duchenne, F. Bach and I. S. Kweon, et al, A tensor-based algorithm for high-order graph matching, IEEE Transactions on Pattern Analysis and Machine Intelligence, 33 (2011), 2383-2395. doi: 10.1109/CVPR.2009.5206619. Google Scholar
J. He, Bounds for the largest eigenvalue of nonnegative tensors, J. Comput. Anal. Appl., 20 (2016), 1290-1301. Google Scholar
J. He and T. Huang, Upper bound for the largest Z-eigenvalue of positive tensors, Appl. Math. Lett., 38 (2014), 110-114. doi: 10.1016/j.aml.2014.07.012. Google Scholar
J. He, Y. Liu and H. Ke, et al, Bounds for the Z-spectral radius of nonnegative tensors, SpringerPlus, 5 (2016). doi: 10.1186/s40064-016-3338-3. Google Scholar
J. He, Y. Liu, J. Tian and Z. Zhang, New sufficient condition for the positive definiteness of fourth order tensors, Mathematics, 303 (2018), 1-10. doi: 10.3390/math6120303. Google Scholar
E. Kofidis and P. Regalia, On the best rank-1 approximation of higher-order supersymmetric tensors, SIAM J. Matrix Anal. Appl., 23 (2002), 863-884. doi: 10.1137/S0895479801387413. Google Scholar
T. Kolda and J. Mayo, Shifted power method for computing tensor eigenpairs, SIAM J. Matrix Anal. Appl., 32 (2011), 1095-1124. doi: 10.1137/100801482. Google Scholar
C. Li, Y. Li and X. Kong, New eigenvalue inclusion sets for tensors, Numer. Linear Algebra Appl., 21 (2014), 39-50. doi: 10.1002/nla.1858. Google Scholar
C. Li, F. Wang, J. Zhao, Y. Zhu and Y. Li, Criterions for the positive definiteness of real supersymmetric tensors, J. Comput. Appl. Math., 255 (2014), 1-14. doi: 10.1016/j.cam.2013.04.022. Google Scholar
G. Li, L. Qi and G. Yu, The Z-eigenvalues of a symmetric tensor and its application to spectral hypergraph theory, Numer. Linear Algebra Appl., 20 (2013), 1001-1029. doi: 10.1002/nla.1877. Google Scholar
W. Li, D. Liu and S. W. Vong, Z-eigenpair bounds for an irreducible nonnegative tensor, Linear Algebra Appl., 483 (2015), 182-199. doi: 10.1016/j.laa.2015.05.033. Google Scholar
M. Ng, L. Qi and G. Zhou, Finding the largest eigenvalue of a nonnegative tensor, SIAM J. Matrix Anal. Appl., 31 (2009), 1090-1099. doi: 10.1137/09074838X. Google Scholar
Q. Ni, L. Qi and F. Wang, An eigenvalue method for testing positive definiteness of a multivariate form, IEEE Trans. Automat. Control, 53 (2008), 1096-1107. doi: 10.1109/TAC.2008.923679. Google Scholar
L. Qi, Eigenvalues of a real supersymmetric tensor, J. Symbolic Comput., 40 (2005), 1302-1324. doi: 10.1016/j.jsc.2005.05.007. Google Scholar
L. Qi, Rank and eigenvalues of a supersymmetric tensor, the multivariate homogeneous polynomial and the algebraic hypersurface it defines, J. Symbolic Comput., 41 (2006), 1309-1327. doi: 10.1016/j.jsc.2006.02.011. Google Scholar
L. Qi and Z. Luo, Tensor Analysis: Spectral Theory and Special Tensors, Society for Industrial and Applied Mathematics, Philadelphia, 2017. doi: 10.1137/1.9781611974751.ch1. Google Scholar
L. Qi, F. Wang and Y. Wang, Z-eigenvalue methods for a global polynomial optimization problem., Math. Program., 118 (2009), 301-316. doi: 10.1007/s10107-007-0193-6. Google Scholar
C. Sang, A new Brauer-type Z-eigenvalue inclusion set for tensors, Numer. Algorithms, 80 (2019), 781-794. doi: 10.1007/s11075-018-0506-2. Google Scholar
Y. Song and L. Qi, Spectral properties of positively homogeneous operators induced by higher order tensors, SIAM J. Matrix Anal. Appl., 34 (2013), 1581-1595. doi: 10.1137/130909135. Google Scholar
G. Wang, G. Zhou and L. Caccetta, Z-eigenvalue inclusion theorems for tensors, Discrete Contin. Dyn. Syst. Ser. B, 22 (2017), 187-198. doi: 10.3934/dcdsb.2017009. Google Scholar
Gang Wang, Guanglu Zhou, Louis Caccetta. Z-Eigenvalue Inclusion Theorems for Tensors. Discrete & Continuous Dynamical Systems - B, 2017, 22 (1) : 187-198. doi: 10.3934/dcdsb.2017009
Yaotang Li, Suhua Li. Exclusion sets in the Δ-type eigenvalue inclusion set for tensors. Journal of Industrial & Management Optimization, 2019, 15 (2) : 507-516. doi: 10.3934/jimo.2018054
M. W. Hirsch, Hal L. Smith. Asymptotically stable equilibria for monotone semiflows. Discrete & Continuous Dynamical Systems - A, 2006, 14 (3) : 385-398. doi: 10.3934/dcds.2006.14.385
Scipio Cuccagna. Orbitally but not asymptotically stable ground states for the discrete NLS. Discrete & Continuous Dynamical Systems - A, 2010, 26 (1) : 105-134. doi: 10.3934/dcds.2010.26.105
Gang Wang, Yuan Zhang. $ Z $-eigenvalue exclusion theorems for tensors. Journal of Industrial & Management Optimization, 2017, 13 (5) : 1-12. doi: 10.3934/jimo.2019039
François Genoud. Orbitally stable standing waves for the asymptotically linear one-dimensional NLS. Evolution Equations & Control Theory, 2013, 2 (1) : 81-100. doi: 10.3934/eect.2013.2.81
Kenneth R. Meyer, Jesús F. Palacián, Patricia Yanguas. Normally stable hamiltonian systems. Discrete & Continuous Dynamical Systems - A, 2013, 33 (3) : 1201-1214. doi: 10.3934/dcds.2013.33.1201
Romain Aimino, Huyi Hu, Matthew Nicol, Andrei Török, Sandro Vaienti. Polynomial loss of memory for maps of the interval with a neutral fixed point. Discrete & Continuous Dynamical Systems - A, 2015, 35 (3) : 793-806. doi: 10.3934/dcds.2015.35.793
Hooton Edward, Balanov Zalman, Krawcewicz Wieslaw, Rachinskii Dmitrii. Sliding Hopf bifurcation in interval systems. Discrete & Continuous Dynamical Systems - A, 2017, 37 (7) : 3545-3566. doi: 10.3934/dcds.2017152
P.E. Kloeden, Desheng Li, Chengkui Zhong. Uniform attractors of periodic and asymptotically periodic dynamical systems. Discrete & Continuous Dynamical Systems - A, 2005, 12 (2) : 213-232. doi: 10.3934/dcds.2005.12.213
Ying Lv, Yan-Fang Xue, Chun-Lei Tang. Homoclinic orbits for a class of asymptotically quadratic Hamiltonian systems. Communications on Pure & Applied Analysis, 2019, 18 (5) : 2855-2878. doi: 10.3934/cpaa.2019128
Jinlong Bai, Xuewei Ju, Desheng Li, Xiulian Wang. On the eventual stability of asymptotically autonomous systems with constraints. Discrete & Continuous Dynamical Systems - B, 2019, 24 (8) : 4457-4473. doi: 10.3934/dcdsb.2019127
Alain Jacquemard, Weber Flávio Pereira. On periodic orbits of polynomial relay systems. Discrete & Continuous Dynamical Systems - A, 2007, 17 (2) : 331-347. doi: 10.3934/dcds.2007.17.331
Michael A. Jones, Diana M. Thomas. Nim-induced dynamical systems over Z2. Conference Publications, 2005, 2005 (Special) : 453-462. doi: 10.3934/proc.2005.2005.453
Alexandra Skripchenko. Symmetric interval identification systems of order three. Discrete & Continuous Dynamical Systems - A, 2012, 32 (2) : 643-656. doi: 10.3934/dcds.2012.32.643
Tayel Dabbous. Identification for systems governed by nonlinear interval differential equations. Journal of Industrial & Management Optimization, 2012, 8 (3) : 765-780. doi: 10.3934/jimo.2012.8.765
Shiwang Ma. Nontrivial periodic solutions for asymptotically linear hamiltonian systems at resonance. Communications on Pure & Applied Analysis, 2013, 12 (6) : 2361-2380. doi: 10.3934/cpaa.2013.12.2361
Jun Wang, Junxiang Xu, Fubao Zhang. Homoclinic orbits for a class of Hamiltonian systems with superquadratic or asymptotically quadratic potentials. Communications on Pure & Applied Analysis, 2011, 10 (1) : 269-286. doi: 10.3934/cpaa.2011.10.269
Paolo Gidoni, Alessandro Margheri. Lower bound on the number of periodic solutions for asymptotically linear planar Hamiltonian systems. Discrete & Continuous Dynamical Systems - A, 2019, 39 (1) : 585-606. doi: 10.3934/dcds.2019024
Michiko Yuri. Polynomial decay of correlations for intermittent sofic systems. Discrete & Continuous Dynamical Systems - A, 2008, 22 (1&2) : 445-464. doi: 10.3934/dcds.2008.22.445
|
CommonCrawl
|
Burke says he definitely got the glow. "The first time I took it, I was working on a business plan. I had to juggle multiple contingencies in my head, and for some reason a tree with branches jumped into my head. I was able to place each contingency on a branch, retract and go back to the trunk, and in this visual way I was able to juggle more information."
It looks like the overall picture is that nicotine is absorbed well in the intestines and the colon, but not so well in the stomach; this might be the explanation for the lack of effect, except on the other hand, the specific estimates I see are that 10-20% of the nicotine will be bioavailable in the stomach (as compared to 50%+ for mouth or lungs)… so any of my doses of >5ml should have overcome the poorer bioavailability! But on the gripping hand, these papers are mentioning something about the liver metabolizing nicotine when absorbed through the stomach, so…
Take at 10 AM; seem a bit more active but that could just be the pressure of the holiday season combined with my nice clean desk. I do the chores without too much issue and make progress on other things, but nothing major; I survive going to The Sitter without too much tiredness, so ultimately I decide to give the palm to it being active, but only with 60% confidence. I check the next day, and it was placebo. Oops.
A television advertisement goes: "It's time to let Focus Factor be your memory-fog lifter." But is this supplement up to the task? Focus Factor wastes no time, whether paid airtime or free online presence: it claims to be America's #1 selling brain health supplement with more than 4 million bottles sold and millions across the country actively caring for their brain health. It deems itself instrumental in helping anyone stay focused and on top of his game at home, work, or school.
Adderall is a mix of 4 amphetamine salts (FDA adverse events), and not much better than the others (but perhaps less addictive); as such, like caffeine or methamphetamine, it is not strictly a nootropic but a cognitive enhancer and can be tricky to use right (for how one should use stimulants, see How To Take Ritalin Correctly). I ordered 10x10mg Adderall IR off Silk Road (Wikipedia). On the 4th day after confirmation from seller, the package arrived. It was a harmless looking little padded mailer. Adderall as promised: 10 blue pills with markings, in a double ziplock baggy (reasonable, it's not cocaine or anything). They matched pretty much exactly the descriptions of the generic I had found online. (Surprisingly, apparently both the brand name and the generic are manufactured by the same pharmacorp.)
American employers are already squeezing more productivity out of fewer workers, so one wonders whether we might feel pressure to enhance our brainpower pharmaceutically, should the state of the art develop so far. Already, workers may be tempted to seek prescriptions for Provigil, a drug that treats daytime sleepiness. Provigil was originally approved as a treatment for narcolepsy and was subsequently approved for use by people who work swing shifts and suffer from excessive daytime sleepiness.
First half at 6 AM; second half at noon. Wrote a short essay I'd been putting off and napped for 1:40 from 9 AM to 10:40. This approach seems to work a little better as far as the aboulia goes. (I also bother to smell my urine this time around - there's a definite off smell to it.) Nights: 10:02; 8:50; 10:40; 7:38 (2 bad nights of nasal infections); 8:28; 8:20; 8:43 (▆▃█▁▂▂▃).
Similarly, Mehta et al 2000 noted that the positive effects of methylphenidate (40 mg) on spatial working memory performance were greatest in those volunteers with lower baseline working memory capacity. In a study of the effects of ginkgo biloba in healthy young adults, Stough et al 2001 found improved performance in the Trail-Making Test A only in the half with the lower verbal IQ.
The abuse of drugs is something that can lead to large negative outcomes. If you take Ritalin (Methylphenidate) or Adderall (mixed amphetamine salts) but don't have ADHD, you may experience more focus. But what many people don't know is that the drug is very similar to amphetamines. And the use of Ritalin is associated with serious adverse events of drug dependence, overdose and suicide attempts [80]. Taking a drug for another reason than originally intended is stupid, irresponsible and very dangerous.
12:18 PM. (There are/were just 2 Adderall left now.) I manage to spend almost the entire afternoon single-mindedly concentrating on transcribing two parts of a 1996 Toshio Okada interview (it was very long, and the formatting more challenging than expected), which is strong evidence for Adderall, although I did feel fairly hungry while doing it. I don't go to bed until midnight and sleep very poorly - despite taking triple my usual melatonin! Inasmuch as I'm already fairly sure that Adderall damages my sleep, this makes me even more confident (>80%). When I grumpily crawl out of bed and check: it's Adderall. (One Adderall left.)
Do note that this isn't an extensive list by any means, there are plenty more 'smart drugs' out there purported to help focus and concentration. Most (if not all) are restricted under the Psychoactive Substances Act, meaning they're largely illegal to sell. We strongly recommend against using these products off-label, as they can be dangerous both due to side effects and their lack of regulation on the grey/black market.
Cytisine is not known as a stimulant and I'm not addicted to nicotine, so why give it a try? Nicotine is one of the more effective stimulants available, and it's odd how few nicotine analogues or nicotinic agonists there are available; nicotine has a few flaws like short half-life and increasing blood pressure, so I would be interested in a replacement. The nicotine metabolite cotinine, in the human studies available, looks intriguing and potentially better, but I have been unable to find a source for it. One of the few relevant drugs which I can obtain is cytisine, from Ceretropic, at 2x1.5mg doses. There are not many anecdotal reports on cytisine, but at least a few suggest somewhat comparable effects with nicotine, so I gave it a try.
Supplements, medications, and coffee certainly might play a role in keeping our brains running smoothly at work or when we're trying to remember where we left our keys. But the long-term effects of basic lifestyle practices can't be ignored. "For good brain health across the life span, you should keep your brain active," Sahakian says. "There is good evidence for 'use it or lose it.'" She suggests brain-training apps to improve memory, as well as physical exercise. "You should ensure you have a healthy diet and not overeat. It is also important to have good-quality sleep. Finally, having a good work-life balance is important for well-being."
Neuroplasticity, or the brain's ability to change and reorganize itself in response to intrinsic and extrinsic factors, indicates great potential for us to enhance brain function by medical or other interventions. Psychotherapy has been shown to induce structural changes in the brain. Other interventions that positively influence neuroplasticity include meditation, mindfulness, and compassion.
In most cases, cognitive enhancers have been used to treat people with neurological or mental disorders, but there is a growing number of healthy, "normal" people who use these substances in hopes of getting smarter. Although there are many companies that make "smart" drinks, smart power bars and diet supplements containing certain "smart" chemicals, there is little evidence to suggest that these products really work. Results from different laboratories show mixed results; some labs show positive effects on memory and learning; other labs show no effects. There are very few well-designed studies using normal healthy people.
When it comes to coping with exam stress or meeting that looming deadline, the prospect of a "smart drug" that could help you focus, learn and think faster is very seductive. At least this is what current trends on university campuses suggest. Just as you might drink a cup of coffee to help you stay alert, an increasing number of students and academics are turning to prescription drugs to boost academic performance.
This would be a very time-consuming experiment. Any attempt to combine this with other experiments by ANOVA would probably push the end-date out by months, and one would start to be seriously concerned that changes caused by aging or environmental factors would contaminate the results. A 5-year experiment with 7-month intervals will probably eat up 5+ hours to prepare <12,000 pills (active & placebo); each switch and test of mental functioning will probably eat up another hour for 32 hours. (And what test maintains validity with no practice effects over 5 years? Dual n-back would be unusable because of improvements to WM over that period.) Add in an hour for analysis & writeup, that suggests >38 hours of work, and 38 × $7.25/hour = $275.50. 12,000 pills is roughly $12.80 per thousand or $154; 120 potassium iodide pills is ~$9, so (365.25/120) × $9 × 5 ≈ $137.
Historically used to help people with epilepsy, piracetam is used in some cases of myoclonus, or muscle twitching. Its actual mechanism of action is unclear: It doesn't act exactly as a sedative or stimulant, but still influences cognitive function, and is believed to act on receptors for acetylcholine in the brain. Piracetam is used off-label as a 'smart drug' to help focus and concentration or sometimes as a way to allegedly boost your mood. Again, piracetam is a prescription-only drug - any supply to people without a prescription is illegal, and supplying it may result in a fine or prison sentence.
Since my experiment had a number of flaws (non-blind, varying doses at varying times of day), I wound up doing a second better experiment using blind standardized smaller doses in the morning. The negative effect was much smaller, but there was still no mood/productivity benefit. Having used up my first batch of potassium citrate in these 2 experiments, I will not be ordering again since it clearly doesn't work for me.
Serotonin, or 5-hydroxytryptamine (5-HT), is another primary neurotransmitter and controls major features of the mental landscape including mood, sleep and appetite. Serotonin is produced within the body upon exposure to sunlight, which is one reason that the folk-remedy of "getting some sun" to fight depression is scientifically credible. Many foods contain natural serotonergic (serotonin-promoting or releasing) compounds, including the well-known chemical L-Tryptophan found in turkey, which can promote sleep after big Thanksgiving dinners.
The fish oil can be considered a free sunk cost: I would take it in the absence of an experiment. The empty pill capsules could be used for something else, so we'll put the 500 at $5. Filling 500 capsules with fish and olive oil will be messy and take an hour. Taking them regularly can be added to my habitual morning routine for vitamin D and the lithium experiment, so that is close to free but we'll call it an hour over the 250 days. Recording mood/productivity is also a free sunk cost as it's necessary for the other experiments; but recording dual n-back scores is more expensive: each round is ~2 minutes and one wants >=5, so each block will cost >10 minutes, so 18 tests will be >180 minutes or >3 hours. So >5 hours. Total: $5 + (>5 hours × $7.25/hour), i.e. >$41.
When I spoke with Jesse Lawler, who hosts the podcast Smart Drugs Smarts, about breakthroughs in brain health and neuroscience, he was unsurprised to hear of my disappointing experience. Many nootropics are supposed to take time to build up in the body before users begin to feel their impact. But even then, says Barry Gordon, a neurology professor at the Johns Hopkins Medical Center, positive results wouldn't necessarily constitute evidence of a pharmacological benefit.
With this experiment, I broke from the previous methodology, taking the remaining and final half Nuvigil at midnight. I am behind on work and could use a full night to catch up. By 8 AM, I am as usual impressed by the Nuvigil - with Modalert or something, I generally start to feel down by mid-morning, but with Nuvigil, I feel pretty much as I did at 1 AM. Sleep: 9:51/9:15/8:27
Amphetamines have a long track record as smart drugs, from the workaholic mathematician Paul Erdös, who relied on them to get through 19-hour maths binges, to the writer Graham Greene, who used them to write two books at once. More recently, there are plenty of anecdotal accounts in magazines about their widespread use in certain industries, such as journalism, the arts and finance.
Nor am I sure how important the results are - partway through, I haven't noticed anything bad, at least, from taking Noopept. And any effect is going to be subtle: people seem to think that 10mg is too small for an ingested rather than sublingual dose and I should be taking twice as much, and Noopept's claimed to be a chronic gradual sort of thing, with less of an acute effect. If the effect size is positive, regardless of statistical-significance, I'll probably think about doing a bigger real self-experiment (more days blocked into weeks or months & 20mg dose)
When comparing supplements, consider products with a score above 90% to get the greatest benefit from smart pills to improve memory. Additionally, we consider the reviews that users send to us when scoring supplements, so you can determine how well products work for others and use this information to make an informed decision. Every month, our editor puts her name on that month's best smart pill, in terms of results and value offered to users.
One of the most popular legal stimulants in the world, nicotine is often conflated with the harmful effects of tobacco; considered on its own, it has performance & possibly health benefits. Nicotine is widely available at moderate prices as long-acting nicotine patches, gums, lozenges, and suspended in water for vaping. While intended for smoking cessation, there is no reason one cannot use a nicotine patch or nicotine gum for its stimulant effects.
The experiment then is straightforward: cut up a fresh piece of gum, randomly select from it and an equivalent dry piece of gum, and do 5 rounds of dual n-back to test attention/energy & WM. (If it turns out to be placebo, I'll immediately use the remaining active dose: no sense in wasting gum, and this will test whether nigh-daily use renders nicotine gum useless, similar to how caffeine may be useless if taken daily. If there's 3 pieces of active gum left, then I wrap it very tightly in Saran wrap which is sticky and air-tight.) The dose will be 1mg or 1/4 a gum. I cut up a dozen pieces into 4 pieces for 48 doses and set them out to dry. Per the previous power analyses, 48 groups of DNB rounds likely will be enough for detecting small-medium effects (partly since we will be only looking at one metric - average % right per 5 rounds - with no need for multiple correction). Analysis will be one-tailed, since we're looking for whether there is a clear performance improvement and hence a reason to keep using nicotine gum (rather than whether nicotine gum might be harmful).
Pharmaceutical, substance used in the diagnosis, treatment, or prevention of disease and for restoring, correcting, or modifying organic functions. (See also pharmaceutical industry.) Records of medicinal plants and minerals date to ancient Chinese, Hindu, and Mediterranean civilizations. Ancient Greek physicians such as Galen used a variety of drugs in their profession.…
Even though smart drugs come with a long list of benefits, their misuse can cause negative side effects. Excess use can cause anxiety, fear, headaches, increased blood pressure, and more. Considering this, it is imperative to study usage instructions: how often can you take the pill, the correct dosage and interaction with other medication/supplements.
One of the other suggested benefits is for boosting serotonin levels; low levels of serotonin are implicated in a number of issues like depression. I'm not yet sure whether tryptophan has helped with motivation or happiness. Trial and error has taught me that it's a bad idea to take tryptophan in the morning or afternoon, however, even smaller quantities like 0.25g. Like melatonin, the dose-response curve is a U: ~1g is great and induces multiple vivid dreams for me, but ~1.5g leads to an awful night and a headache the next day that was worse, if anything, than melatonin. (One morning I woke up with traces of at least 7 dreams, although I managed to write down only 2. No lucid dreams, though.)
A key ingredient of Noehr's chemical "stack" is a stronger racetam called Phenylpiracetam. He adds a handful of other compounds considered to be mild cognitive enhancers. One supplement, L-theanine, a natural constituent in green tea, is claimed to neutralise the jittery side-effects of caffeine. Another supplement, choline, is said to be important for experiencing the full effects of racetams. Each nootropic is distinct and there can be a lot of variation in effect from person to person, says Lawler. Users semi-anonymously compare stacks and get advice from forums on sites such as Reddit. Noehr, who buys his powder in bulk and makes his own capsules, has been tweaking chemicals and quantities for about five years, accumulating more than two dozen jars of substances along the way. He says he meticulously researches anything he tries, buys only from trusted suppliers and even blind-tests the effects (he gets his fiancée to hand him either a real or inactive capsule).
I have a needle phobia, so injections are right out; but from the images I have found, it looks like testosterone enanthate gels using DMSO resemble other gels like Vaseline. This suggests an easy experimental procedure: spoon an appropriate dose of testosterone gel into one opaque jar, spoon some Vaseline gel into another, and pick one randomly to apply while not looking. If one gel evaporates but the other doesn't, or they have some other difference in behavior, the procedure can be expanded to something like "apply the gel while not looking, and then half an hour later, take a shower to remove all visible traces of the gel". Testosterone itself has a fairly short half-life of 2-4 hours, but the gel or effects might linger. (Injections apparently operate on a time-scale of weeks; I'm not clear on whether this is because the oil takes that long to be absorbed by surrounding materials or something else.) Experimental design will depend on the specifics of the obtained substance. As a controlled substance (Schedule III in the US), supplies will be hard to obtain; I may have to resort to the Silk Road.
The above are all reasons to expect that even if I do excellent single-subject design self-experiments, there will still be the old problem of internal validity versus external validity: an experiment may be wrong or erroneous or unlucky in some way (lack of internal validity) or be right but not matter to anyone else (lack of external validity). For example, alcohol makes me sad & depressed; I could run the perfect blind randomized experiment for hundreds of trials and be extremely sure that alcohol makes me less happy, but would that prove that alcohol makes everyone sad or unhappy? Of course not, and as far as I know, for a lot of people alcohol has the opposite effect. So my hypothetical alcohol experiment might have tremendous internal validity (it does prove that I am sadder after inebriating), and zero external validity (someone who has never tried alcohol learns nothing about whether they will be depressed after imbibing). Keep this in mind if you are minded to take the experiments too seriously.
While the mechanism is largely unknown, one commonly proposed mechanism is that light of the relevant wavelengths is preferentially absorbed by the protein cytochrome c oxidase, which is a key protein in mitochondrial metabolism and production of ATP, substantially increasing output, and this extra output presumably can be useful for cellular activities like healing or higher performance.
So the chi-squared believes there is a statistically-significant difference, the two-sample test disagrees, and the binomial also disagrees. Since I regarded it as a dubious theory, can't see a difference, and the binomial seems like the most appropriate test, I conclude that several months of 1mg iodine did not change my eye color. (As a final test, when I posted the results on the Longecity forum where people were claiming the eye color change, I swapped the labels on the photos to see if anyone would claim something along the lines of "when I look at the photos, I can see a difference!". I thought someone might do that, which would be a damning demonstration of their biases & wishful thinking, but no one did.)
In terms of legal status, Adrafinil is legal in the United States but is unregulated. You need to purchase this supplement online, as it is not a prescription drug at this time. Modafinil on the other hand, is heavily regulated throughout the United States. It is being used as a narcolepsy drug, but isn't available over the counter. You will need to obtain a prescription from your doctor, which is why many turn to Adrafinil use instead.
The price is not as good as multivitamins or melatonin. The studies showing effects generally use pretty high dosages, 1-4g daily. I took 4 capsules a day for roughly 4g of omega acids. The jar of 400 is 100 days' worth, and costs ~$17, or around 17¢ a day. The general health benefits push me over the edge of favoring its indefinite use, but looking to economize. Usually, small amounts of packaged substances are more expensive than bulk unprocessed, so I looked at fish oil fluid products; and unsurprisingly, liquid is more cost-effective than pills (but like with the powders, straight fish oil isn't very appetizing) in lieu of membership somewhere or some other price-break. I bought 4 bottles (16 fluid ounces each) for $53.31 total (thanks to coupons & sales), and each bottle lasts around a month and a half for perhaps half a year, or ~$100 for a year's supply. (As it turned out, the 4 bottles lasted from 4 December 2010 to 17 June 2011, or 195 days.) My next batch lasted 19 August 2011-20 February 2012, and cost $58.27. Since I needed to buy empty 00 capsules (for my lithium experiment) and a book (Stanovich 2010, for SIAI work) from Amazon, I bought 4 more bottles of 16fl oz Nature's Answer (lemon-lime) at $48.44, which I began using 27 February 2012. So call it ~$70 a year.
I almost resigned myself to buying patches to cut (and let the nicotine evaporate) and hope they would still stick on well enough afterwards to be indistinguishable from a fresh patch, when late one sleepless night I realized that a piece of nicotine gum hanging around on my desktop for a week proved useless when I tried it, and that was the answer: if nicotine evaporates from patches, then it must evaporate from gum as well, and if gum does evaporate, then to make a perfect placebo all I had to do was cut some gum into proper sizes and let the pieces sit out for a while. (A while later, I lost a piece of gum overnight and consumed the full 4mg to no subjective effect.) Google searches led to nothing indicating I might be fooling myself, and suggested that evaporation started within minutes in patches and a patch was useless within a day. Just a day is pushing it (who knows how much is left in a useless patch?), so I decided to build in a very large safety factor and let the gum sit for around a month rather than a single day.
A LessWronger found that it worked well for him as far as motivation and getting things done went, as did another LessWronger who sells it online (terming it a reasonable productivity enhancer) as did one of his customers, a pickup artist oddly enough. The former was curious whether it would work for me too and sent me Speciosa Pro's Starter Pack: Test Drive (a sampler of 14 packets of powder and a cute little wooden spoon). In SE Asia, kratom's apparently chewed, but the powders are brewed as a tea.
There is no clear answer to this question. Many of the smart drugs have decades of medical research and widespread use behind them, as well as only minor, manageable, or nonexistent side effects, but are still used primarily as a crutch for people already experiencing cognitive decline, rather than as a booster-rocket for people with healthy brains. Unfortunately, there is a bias in Western medicine in favor of prescribing drugs once something bad has already begun, rather than for up-front prevention. There's also the principle of "leave well enough alone" – in this case, extended to mean, don't add unnecessary or unnatural drugs to the human body in place of a normal diet. [Smart Drug Smarts would argue that the average human diet has strayed so far from what is physiologically "normal" that leaving well enough alone is already a failed proposition.]
(We already saw that too much iodine could poison both adults and children, and of course too little does not help much - iodine would seem to follow a U-curve like most supplements.) The listed doses at iherb.com often are ridiculously large: 10-50mg! These are doses that seems to actually be dangerous for long-term consumption, and I believe these are doses that are designed to completely suffocate the thyroid gland and prevent it from absorbing any more iodine - which is useful as a short-term radioactive fallout prophylactic, but quite useless from a supplementation standpoint. Fortunately, there are available doses at Fitzgerald 2012's exact dose, which is roughly the daily RDA: 0.15mg. Even the contrarian materials seem to focus on a modest doubling or tripling of the existing RDA, so the range seems relatively narrow. I'm fairly confident I won't overshoot if I go with 0.15-1mg, so let's call this 90%.
Capsule Connection sells 1000 00 pills (the largest pills) for $9. I already have a pill machine, so that doesn't count (a sunk cost). If we sum the grams per day column from the first table, we get 9.75 grams a day. Each 00 pill can take around 0.75 grams, so we need 13 pills. (Creatine is very bulky, alas.) 13 pills per day for 1000 days is 13,000 pills, and 1,000 pills is $9 so we need 13 units and 13 times 9 is $117.
"I love this book! As someone that deals with an autoimmune condition, I deal with sever brain fog. I'm currently in school and this has had a very negative impact on my learning. I have been looking for something like this to help my brain function better. This book has me thinking clearer, and my memory has improved. I'm eating healthier and overall feeling much better. This book is very easy to follow and also has some great recipes included."
These are quite abstract concepts, though. There is a large gap, a grey area in between these concepts and our knowledge of how the brain functions physiologically – and it's in this grey area that cognitive enhancer development has to operate. Amy Arnsten, Professor of Neurobiology at Yale Medical School, is investigating how the cells in the brain work together to produce our higher cognition and executive function, which she describes as "being able to think about things that aren't currently stimulating your senses, the fundamentals of abstraction. This involves mental representations of our goals for the future, even if it's the future in just a few seconds."
Modafinil, sold under the name Provigil, is a stimulant that some have dubbed the "genius pill." It is a wakefulness-promoting agent; glutamate activators (ampakines) are another proposed class of cognitive enhancers. Originally developed as a treatment for narcolepsy and other sleep disorders, physicians are now prescribing it "off-label" to cellists, judges, airline pilots, and scientists to enhance attention, memory and learning. According to Scientific American, "scientific efforts over the past century [to boost intelligence] have revealed a few promising chemicals, but only modafinil has passed rigorous tests of cognitive enhancement." A stimulant, it is a controlled substance with limited availability in the U.S.
Four of the studies focused on middle and high school students, with varied results. Boyd, McCabe, Cranford, and Young (2006) found a 2.3% lifetime prevalence of nonmedical stimulant use in their sample, and McCabe, Teter, and Boyd (2004) found a 4.1% lifetime prevalence in public school students from a single American public school district. Poulin (2001) found an 8.5% past-year prevalence in public school students from four provinces in the Atlantic region of Canada. A more recent study of the same provinces found a 6.6% and 8.7% past-year prevalence for MPH and AMP use, respectively (Poulin, 2007).
Attention-deficit/hyperactivity disorder (ADHD), a behavioral syndrome characterized by inattention and distractibility, restlessness, inability to sit still, and difficulty concentrating on one thing for any period of time. ADHD most commonly occurs in children, though an increasing number of adults are being diagnosed with the disorder. ADHD is three times more…
Amongst the brain focus supplements that are currently available in the nootropic drug market, Modafinil is probably the most common focus drug or one of the best focus pills used by people, and it's praised to be the best nootropic available today. It is a powerful cognitive enhancer that is great for boosting your overall alertness with least side effects. However, to get your hands on this drug, you would require a prescription.
"As a neuro-optometrist who cares for many brain-injured patients experiencing visual challenges that negatively impact the progress of many of their other therapies, Cavin's book is a god-send! The very basic concept of good nutrition among all the conflicting advertisements and various "new" food plans and diets can be enough to put anyone into a brain fog much less a brain injured survivor! Cavin's book is straightforward and written from not only personal experience but the validation of so many well-respected contemporary health care researchers and practitioners! I will certainly be recommending this book as a "Survival/Recovery 101" resource for all my patients including those without brain injuries because we all need optimum health and well-being and it starts with proper nourishment! Kudos to Cavin Balaster!"
A total of 330 randomly selected Saudi adolescents were included. Anthropometrics were recorded and fasting blood samples were analyzed for routine analysis of fasting glucose, lipid levels, calcium, albumin and phosphorous. Frequency of coffee and tea intake was noted. 25-hydroxyvitamin D levels were measured using enzyme-linked immunosorbent assays…Vitamin D levels were significantly highest among those consuming 9-12 cups of tea/week in all subjects (p-value 0.009) independent of age, gender, BMI, physical activity and sun exposure.
Spaced repetition at midnight: 3.68. (Graphing preceding and following days: ▅▄▆▆▁▅▆▃▆▄█ ▄ ▂▄▄▅) DNB starting 12:55 AM: 30/34/41. Transcribed Sawaragi 2005, then took a walk. DNB starting 6:45 AM: 45/44/33. Decided to take a nap and then take half the armodafinil on awakening, before breakfast. I wound up oversleeping until noon (4:28); since it was so late, I took only half the armodafinil sublingually. I spent the afternoon learning how to do value of information calculations, and then carefully working through 8 or 9 examples for my various pages, which I published on Lesswrong. That was a useful little project. DNB starting 12:09 AM: 30/38/48. (To graph the preceding day and this night: ▇▂█▆▅▃▃▇▇▇▁▂▄ ▅▅▁▁▃▆) Nights: 9:13; 7:24; 9:13; 8:20; 8:31.
These are the most highly studied ingredients and must be combined together to achieve effective results. If any one ingredient is missing in the formula, you may not get the full cognitive benefits of the pill. It is important to go with a company that has these critical ingredients as well as a complete array of supporting ingredients to improve their absorption and effectiveness. Anything less than the correct mix will not work effectively.
The use of cognitive enhancers by healthy individuals sparked debate about ethics and safety. Cognitive enhancement by pharmaceutical means was considered a form of illicit drug use in some places, even while other cognitive enhancers, such as caffeine and nicotine, were freely available. The conflict therein raised the possibility for further acceptance of smart drugs in the future. However, the long-term effects of smart drugs on otherwise healthy brains were unknown, delaying safety assessments.
Medication can be ineffective if the drug payload is not delivered at its intended place and time. Since an oral medication travels through a broad pH spectrum, the pill encapsulation could dissolve at the wrong time. However, a smart pill with environmental sensors, a feedback algorithm and a drug release mechanism can give rise to smart drug delivery systems. This can ensure optimal drug delivery and prevent accidental overdose.
Nature magazine conducted a poll asking its readers about their cognitive-enhancement practices and their attitudes toward cognitive enhancement. Hundreds of college faculty and other professionals responded, and approximately one fifth reported using drugs for cognitive enhancement, with Ritalin being the most frequently named (Maher, 2008). However, the nature of the sample—readers choosing to answer a poll on cognitive enhancement—is not representative of the academic or general population, making the results of the poll difficult to interpret. By analogy, a poll on Vermont vacations, asking whether people vacation in Vermont, what they think about Vermont, and what they do if and when they visit, would undoubtedly not yield an accurate estimate of the fraction of the population that takes its vacations in Vermont.
Two variants of the Towers of London task were used by Elliott et al. (1997) to study the effects of MPH on planning. The object of this task is for subjects to move game pieces from one position to another while adhering to rules that constrain the ways in which they can move the pieces, thus requiring subjects to plan their moves several steps ahead. Neither version of the task revealed overall effects of the drug, but one version showed impairment for the group that received the drug first, and the other version showed enhancement for the group that received the placebo first.
One of the most obscure -racetams around, coluracetam (Smarter Nootropics, Ceretropic, Isochroma) acts in a different way from piracetam - piracetam apparently attacks the breakdown of acetylcholine while coluracetam instead increases how much choline can be turned into useful acetylcholine. This apparently is a unique mechanism. A crazy Longecity user, ScienceGuy ponied up $16,000 (!) for a custom synthesis of 500g; he was experimenting with 10-80mg sublingual doses (the ranges in the original anti-depressive trials) and reported a laundry list of effects (as does Isochroma): primarily that it was anxiolytic and increased work stamina. Unfortunately for my stack, he claims it combines poorly with piracetam. He offered free 2g samples for regulars to test his claims. I asked & received some.
So I eventually got around to ordering another thing of nicotine gum, Habitrol Nicotine Gum, 4mg MINT flavor COATED gum. 96 pieces per box. Gum should be easier to double-blind myself with than nicotine patches - just buy some mint gum. If 4mg is too much, cut the gum in half or whatever. When it arrived, my hopes were borne out: the gum was rectangular and soft, which made it easy to cut into fourths.
Other drugs, like cocaine, are used by bankers to manage their 18-hour workdays [81]. Unlike nootropics, dependency is very likely and not only mentally but also physically. Bankers and other professionals who take drugs to improve their productivity will become dependent. Almost always, the negative consequences outweigh any positive outcomes from using drugs.
QUALITY : They use pure and high quality Ingredients and are the ONLY ones we found that had a comprehensive formula including the top 5 most proven ingredients: DHA Omega 3, Huperzine A, Phosphatidylserine, Bacopin and N-Acetyl L-Tyrosine. Thrive Natural's Super Brain Renew is fortified with just the right ingredients to help your body fully digest the active ingredients. No other brand came close to their comprehensive formula of 39 proven ingredients. The "essential 5" are the most important elements to help improve your memory, concentration, focus, energy, and mental clarity. But, what also makes them stand out above all the rest was that they have several supporting vitamins and nutrients to help optimize brain and memory function. A critical factor for us is that this company does not use fillers, binders or synthetics in their product. We love the fact that their capsules are vegetarian, which is a nice bonus for health conscious consumers.
(People aged <=18 shouldn't be using any of this except harmless stuff - where one may have nutritional deficits - like fish oil & vitamin D; melatonin may be especially useful, thanks to the effects of screwed-up school schedules & electronics use on teenagers' sleep. Changes in effects with age are real - amphetamines' stimulant effects and modafinil's histamine-like side-effects come to mind as examples.)
Certain pharmaceuticals could also qualify as nootropics. For at least the past 20 years, a lot of people—students, especially—have turned to attention deficit hyperactivity disorder (ADHD) drugs like Ritalin and Adderall for their supposed concentration-strengthening effects. While there's some evidence that these stimulants can improve focus in people without ADHD, they have also been linked, in both people with and without an ADHD diagnosis, to insomnia, hallucinations, seizures, heart trouble and sudden death, according to a 2012 review of the research in the journal Brain and Behavior. They're also addictive.
"We stumbled upon fasting as a way to optimize cognition and make yourself into a more efficient human being," says Manuel Lam, an internal medicine physician who advises Nootrobox on clinical issues. He and members of the company's executive team have implanted glucose monitors in their arms — not because they fear diabetes but because they wish to track the real-time effect of the foods they eat.
My predictions were substantially better than random chance7, so my default belief - that Adderall does affect me and (mostly) for the better - is borne out. I usually sleep very well and 3 separate incidents of horrible sleep in a few weeks seems rather unlikely (though I didn't keep track of dates carefully enough to link the Zeo data with the Adderall data). Between the price and the sleep disturbances, I don't think Adderall is personally worthwhile.
As mentioned earlier, cognitive control is needed not only for inhibiting actions, but also for shifting from one kind of action or mental set to another. The WCST taxes cognitive control by requiring the subject to shift from sorting cards by one dimension (e.g., shape) to another (e.g., color); failures of cognitive control in this task are manifest as perseverative errors in which subjects continue sorting by the previously successful dimension. Three studies included the WCST in their investigations of the effects of d-AMP on cognition (Fleming et al., 1995; Mattay et al., 1996, 2003), and none revealed overall effects of facilitation. However, Mattay et al. (2003) subdivided their subjects according to COMT genotype and found differences in both placebo performance and effects of the drug. Subjects who were homozygous for the val allele (associated with lower prefrontal dopamine activity) made more perseverative errors on placebo than other subjects and improved significantly with d-AMP. Subjects who were homozygous for the met allele performed best on placebo and made more errors on d-AMP.
The majority of studies seem to be done on types of people who are NOT buying nootropics. Like the elderly, people with blatant cognitive deficits, etc. This is analogous to some of the muscle-building research but more extreme. Like there are studies on some compound increasing muscle growth in elderly patients or patients with wasting, and supplement companies use some of those studies to back their supplements.
COMP 409, Wednesday, October 16, 2013
Semantics of condition variables, Java's implementation of locking and the Readers and Writers Problem
1 Expressiveness
1.1 Consensus
1.1.1 Proof
1.1.1.1 Critical State
1.1.2 With Test-and-set
2 Splitter
1 Expressiveness
Let's look at the expressiveness of some of the primary operations we have been looking at, in a way to try and understand how some of our concurrency primitives for locking behaviors might actually be better or worse than other ones.
We have already seen a lot of concurrency primitives such as atomic variables/registers as well as special instructions like test-and-set, test-and-test-and-set, fetch-and-add and compare-and-swap.
Based on these, we have some locking approaches. The special instructions do slightly different things, some of which are better in certain circumstances. It's easy to build a test-and-set lock, and even easier with compare-and-swap, since the if is embedded inside of it; fetch-and-add is good for atomic increments and decrements.
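For concreteness, here is a minimal test-and-set spin lock sketched in C++ (an added illustration, not part of the original notes; std::atomic<bool>::exchange plays the role of the abstract TS instruction used above):

    #include <atomic>

    // Minimal test-and-set spin lock; exchange(true) acts as TS(lock, 1).
    class TASLock {
        std::atomic<bool> locked{false};   // false == unlocked
    public:
        void lock() {
            // exchange atomically sets locked to true and returns the previous value;
            // spin until the previous value was false, i.e. until we acquired the lock.
            while (locked.exchange(true, std::memory_order_acquire)) { /* busy-wait */ }
        }
        void unlock() {
            locked.store(false, std::memory_order_release);   // release the lock
        }
    };

A test-and-test-and-set variant would first re-read locked inside the loop and only retry the exchange once it reads false, which cuts down on cache-coherence traffic.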
Given only these concurrency primitives, we can build anything. For example, the filter lock used only atomic variables. With such a lock, a test-and-set operation can be built by wrapping the test-and-set logic in a critical section.
Compare-and-swap can be built with test-and-set. The process is to create a basic spin lock and use that to lock some global piece of data, making sure that what is done in the critical section is mutually exclusive to anyone else.
int CAS(int &x, int a, int b) {
    bool rC;
    while (TS(cas_lock, 1) == 1);  // Spin until the lock is acquired
    if (x == a) {
        rC = true;
        x = b;
    } else {
        rC = false;
    }
    cas_lock = 0;                  // Release the lock
    return rC;
}
From one perspective, there is no difference between the special instructions. Once a lock is built, anything can be done, so expressiveness is not much of a concern in this respect. However, there is one unsatisfying property: that of the spin lock.
The spin lock means that a thread may spin for an arbitrarily long amount of time, yet that is not a property that compare-and-swap had originally. Compare-and-swap is supposed to be an atomic operation which finishes in a finite amount of time, irrespective of any other thread. That is, even if another thread crashed, it would still finish. It is fault-tolerant.
That is not true of our makeshift solution. If another thread fails for whatever reason, then a thread can be stuck forever. Can we build Compare-and-swap in a wait-free way?
Wait-free operation
Finite number of steps
Fault-tolerant
Simply using a lock cannot create a wait-free operation.
1.1 Consensus
We have n processes/threads, each starting with a different value, and we want them to all agree on a value, that is, to achieve consensus.
Consistent: At the end of the protocol, all should agree.
Valid: The agreed value is one of the input values. It cannot be an arbitrary value.
Wait-free
The consensus problem can be solved for a certain number of processes or threads, but not necessarily for an arbitrary one. As such, concurrency operations can be distinguished by their consensus number.
Consensus number
Maximum number of processes for which a concurrency primitive can solve consensus.
For simple atomic read and write operations on atomic variables/registers, the consensus number is 1.
1.1.1 Proof
We can think of a protocol with a binary consensus. That is, the number is 0 or 1, with 2 threads. In any protocol that we have, we start out with some state where the agreed value could either be 0 or 1. From this state, maybe Thread0 or Thread1 might make an action.
We can start thinking about the state-space evolution of the program. After Thread0 does something, there is still no agreement: it could be 0 or 1. Maybe, after Thread1 does something, Thread0 might do something to finish the protocol, but at a certain point, if this is solving consensus, there should be a state where a value is committed. At any state after that, the same value will still be committed.
Bivalent state
A state where there are two possible values
Univalent state
A state where there is only one possible value
The idea of a wait-free algorithm is that after some sequence of actions by Thread0 and Thread1, we should reach a point, starting from a bivalent state, where we end up in a univalent state.
1.1.1.1 Critical State
A critical state is the lowest bivalent state in a protocol (last to happen). That is, all subsequent states are univalent. We reach some point where the state is bivalent, but no matter the choice that is made, we end up in a tree of univalent states.
Back to the proof, suppose we have a bivalent critical state.
If we only have atomic variables/registers, the only thing that could be done in that critical state is read or write some data. To change to a different state, then either Thread0 did a Read-or-write first, or Thread1 did one first.
Case A) Thread0 reads first, and Thread1 does something (read or write)
If Thread0 is doing a read, then that does not matter to Thread1, as no data is changed: the action is not visible to Thread1. As such, if Thread0 reads first, then we should end up in, say, the 0-valent subtree. However, since the read is invisible to Thread1, Thread1's subsequent action should lead to the same decided value as performing that action directly at the critical state, which lies in the other, 1-valent subtree. This is a contradiction, so Thread0's action cannot be a read.
Case B) Thread0 writes x, and Thread1 writes x as well.
If they write to the same variable, then we have the same problem as in case A. Since Thread1's action does not need x's value, Thread0's action is again invisible. Thread1 could simply overwrite what Thread0 did. Just like in Case A, we end up with a contradiction.
Case C) Thread0 writes x, and Thread1 writes y.
This is left as an exercise.
With all things considered, an atomic read-or-write cannot solve consensus for 2 or more threads. As such, it can only reach consensus with a single thread.
1.1.2 With Test-and-set
We can solve consensus with 2 threads using test-and-set.
// Return agreed value; each thread passes its own unique input value
int decide(int input) {
    int x = TS(decider, input);  // decider is initialized to SPECIAL_VALUE
    if (x == SPECIAL_VALUE)
        return input;            // this thread ran TS first: its input wins
    else
        return x;                // another thread ran TS first: return its input
}
With test-and-set, we can make up one SPECIAL_VALUE that we initialize the variable decider to. We can then do test-and-set and see if it's that special value. If a thread does not get that special value, then another did it before. If it does get it, then it was the first to do test-and-set.
Test-and-set, Fetch-and-add and a lot of others have a consensus number of 2. Compare-and-swap, however, has a consensus number of theoretically $\infty$ (practically, a very big number). It is significantly better than anything else out there.
int decide(int input) {
    CAS(decider, SPECIAL_VALUE, input);  // decider is initialized to SPECIAL_VALUE
    return decider;                      // only the first CAS succeeds; everyone reads the same value
}
What goes on is that if the decider variable is equal to SPECIAL_VALUE, then the first thread can set it equal to its input. After that, any other thread will fail CAS's condition, and so will return decider, just like the first thread. That is, all the threads will return the same value.
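As an added, hedged illustration (not from the lecture notes): the same decide protocol written as runnable C++, using std::atomic<int>::compare_exchange_strong as the compare-and-swap; the sentinel value -1 is an assumption, chosen only so that it cannot collide with the thread inputs used below.

    #include <atomic>
    #include <cstdio>
    #include <thread>
    #include <vector>

    constexpr int SPECIAL_VALUE = -1;          // assumed sentinel: no thread uses -1 as its input
    std::atomic<int> decider{SPECIAL_VALUE};

    int decide(int input) {
        int expected = SPECIAL_VALUE;
        // Only the first compare-and-swap can succeed; decider never changes afterwards,
        // so every thread reads back the same (first) input value.
        decider.compare_exchange_strong(expected, input);
        return decider.load();
    }

    int main() {
        std::vector<std::thread> threads;
        for (int id = 1; id <= 4; ++id)
            threads.emplace_back([id] { std::printf("thread %d agrees on %d\n", id, decide(id)); });
        for (auto& t : threads) t.join();
    }

Every thread prints the same agreed value, namely the input of whichever thread's compare-and-swap ran first, and each call finishes in a bounded number of steps regardless of what the other threads do.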
2 Splitter
The splitter is a building block used in renaming algorithms (see The Art of Multiprocessor Programming, p. 44).
Imagine a grid of squares, each square being one splitter; a thread executes the following at each splitter it enters:
x = id;
if (y != 0) return RIGHT;   // someone else already set y
else y = id;
if (x != id) return DOWN;   // a later thread overwrote x
else return STOP;           // this thread captures the splitter
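For completeness, here is a sketch of a single splitter in C++ (an added example, not from the notes; it follows the textbook splitter referenced above and assumes every thread calls visit with a distinct non-zero id):

    #include <atomic>

    enum class Move { STOP, RIGHT, DOWN };

    struct Splitter {
        std::atomic<int>  x{0};       // last id written; ids are assumed distinct and non-zero
        std::atomic<bool> y{false};   // "door" flag

        Move visit(int id) {
            x.store(id);
            if (y.load()) return Move::RIGHT;        // someone already closed the door
            y.store(true);                           // close the door
            if (x.load() != id) return Move::DOWN;   // a later thread overwrote x
            return Move::STOP;                       // this thread captures the splitter
        }
    };

The useful property is that if k threads enter a splitter, at most one of them stops there, at most k-1 leave to the right, and at most k-1 leave downward, which is exactly what the renaming grid relies on.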
Proof of the Dirichlet–Dini Criterion for Pointwise convergence of Fourier series
I have tried and failed to prove the Dirichlet–Dini criterion for pointwise convergence of Fourier series, which is as follows (and is described here: http://en.wikipedia.org/wiki/Convergence_of_Fourier_series#Pointwise_convergence): if $f$ is $2\pi$-periodic and integrable and, for some point $x_0$ and some number $\ell$, $$\int_0^{2\pi}\left|\frac{f(x_0+t)+f(x_0-t)}{2}-\ell\right|\,\frac{dt}{t}<\infty,$$ then the partial sums of the Fourier series of $f$ satisfy $S_N(f;x_0)\to\ell$ as $N\to\infty$.
I would appreciate a proof of this theorem or a reference to one - I couldn't find either.
fourier-analysis proof-writing fourier-series
jon Prime
I'm gonna simplify the problem by assuming that $\ell=x_0=0$: notice that $$ \dfrac{f(x_0+t)+f(x_0-t)}{2}-\ell = \dfrac{(f(x_0+t)-\ell)+(f(x_0-t)-\ell)}{2}. $$ Hence the Dini hypothesis is the same as saying that the function $f_1(t) = f(x_0+t)-\ell$ satisfies $$ \int^{2\pi}_0\left|\dfrac{f_1(t)+f_1(-t)}{2}\right|\dfrac{dt}{t}<\infty. $$ As it's proved later in the lemma, translating functions by a complex number $\ell$ and translating the axis $[0,2\pi]$ by $x_0$ acts nicely on the corresponding Fourier series so, if I can find the limit of $S_N(f_1;0)$, I can find the limit of $S_N(f;x_0)$ as well. Thus I'll prove the case $\ell= x_0 =0$ and then use this "nice action" for the general case. Consider $$ g(t)=\dfrac{f(t)+f(-t)}{1-e^{i t}}. $$ Let us see that $g$ is integrable. Notice that $h(t) = \dfrac{t}{1-e^{it}}$ is continuous on $[0,2\pi]$ (using l'Hôpital's rule, for instance) and take $$ K=\max_{[0,2\pi]} |h(t)|<\infty. $$ Now $$ \int_0^{2\pi}\left|g(t)\right|\,dt= \int^{2\pi}_0\left|\dfrac{f(t)+f(-t)}{t}\right|\left |h(t)\right| \,dt\leq 2K\int^{2\pi}_0\left|\dfrac{f(t)+f(-t)}{2}\right| \,\dfrac{dt}{t}, $$ which is finite by hypothesis, thus it's integrable.
Compute now $\hat f(n)+\hat f(-n)$ for every $n$. Notice that \begin{align*} \hat f(n)+\hat f(-n)&=\int ^{2\pi}_0 f(t)e^{-itn}\,dt + \int ^{2\pi}_0 f(t)e^{itn}\,dt\\ &\overset{(*)}{=}\int ^{2\pi}_0 f(t)e^{-itn}\,dt+\int ^{2\pi}_0 f(-t)e^{-itn}\,dt\\ &=\int ^{2\pi}_0 (f(t)+ f(-t))e^{-itn}\,dt\\ &=\int ^{2\pi}_0 g(t)(1-e^{it})e^{-itn}\,dt\\ &=\hat g(n)-\hat g(n-1). \end{align*} I used in $(*)$ the change of coordinates $t\rightarrow t-2\pi$ for the second integral (recall that $e^{it}$ and $f(t)$ are $2\pi$-periodic). In particular $$ 2S_N(f;0)= 2 \sum_{|n|\leq N} \hat f (n)=\sum_{|n|\leq N}\left(\hat f(n)+\hat f(-n)\right)=\hat g(N)-\hat g(-N-1), $$ since the sum telescopes. The Riemann-Lebesgue lemma applies to $g(t)$, hence $$ \lim_{|N|\rightarrow \infty}\hat g(N) = 0 $$ and $$ \lim_{N\rightarrow \infty} 2S_N(f;0) = 0=\ell. $$ For the general case reduce to the first one by considering $f_1(t)=f(t+x_0)-\ell$. Indeed
$$ \int^{2\pi}_0\left|\dfrac{f_1(t)+f_1(-t)}{2}\right| \,\dfrac{dt}{t}=\int^{2\pi}_0\left|\dfrac{f(t+x_0)+f(-t+x_0)-2\ell}{2}\right| \,\dfrac{dt}{t} =\int^{2\pi}_0\left|\dfrac{f(t+x_0)+f(-t+x_0)}{2}-\ell\right| \,\dfrac{dt}{t} $$ which is finite by hypothesis. By the discussion above $$ \lim_{N\rightarrow \infty}S_N(f_1;0) = 0. $$ It remains to point out that
$S_N(f_1;0) = S_N(f;x_0) - \ell$ for every $N\geq 0$.
\begin{align*} \hat f_1(n)&=\dfrac{1}{2\pi}\int^{2\pi}_0 f_1(t) e^{-itn}\,dt\\ &=\dfrac{1}{2\pi}\int^{2\pi}_0 (f(t+x_0)-\ell) e^{-itn}\,dt\\ &\overset{(*)}{=}\dfrac{1}{2\pi}\int^{2\pi}_0 (f(t)-\ell) e^{-itn+ix_0n}\,dt\\ &=\dfrac{e^{ix_0n}}{2\pi}\int^{2\pi}_0 (f(t)-\ell) e^{-itn}\,dt\\ &=e^{ix_0n} \hat f(n) - \dfrac{e^{ix_0n}\ell}{2\pi}\int^{2\pi}_0e^{-int}\,dt \end{align*} I use at (*) the change of variables $\,t\rightarrow t-x_0$ and that $f$ is $2\pi$-periodic. Thus $$ \hat f_1(n)= \begin{cases} e^{ix_0n}\hat f(n),\quad \text{if $n\neq0$}\\ \hat f(0) -\ell,\quad \text{if $n= 0$} \end{cases} $$ In particular
\begin{align*} S_N(f_1;0)&=\sum_{|n|\leq N} \hat f_1(n) e^{in 0}\\ &=\sum_{|n|\leq N} \hat f_1(n)\\ &=\hat f(0)-\ell+\sum_{0<|n|\leq N} \hat f(n)e^{inx_0}\\ &=S_N(f; x_0)-\ell \end{align*} QED
Hence, $$ \lim_{N\rightarrow \infty}S_N(f;x_0) -\ell=\lim_{N\rightarrow \infty}S_N(f_1;0)=0. $$
eduard
$\begingroup$ Can you please elaborate on how this proof actually proves the theorem? And why could you say in the beginning that you assume WLOG that $x_0 = \ell = 0$? Thanks $\endgroup$ – jon Prime Jan 19 '15 at 0:56
$\begingroup$ Edited. I've added the details for the general case. I hope it's clearer now. $\endgroup$ – eduard Jan 19 '15 at 9:48
$\begingroup$ Can you please explain why the inequality $\int_0^{2\pi}|g(t)|\,dt \leq \int_0^{2\pi}\left|\frac{f(t)+f(-t)}{t}\right||h(t)|\,dt$ is correct? $\endgroup$ – jon Prime Jan 19 '15 at 23:27
$\begingroup$ In fact it is an equality. Multiply the right Hand Side and simplify $t$. $\endgroup$ – eduard Jan 20 '15 at 12:16
$\begingroup$ Can you please elaborate on how you solve it for the general case when l and x_0 aren't 0? $\endgroup$ – jon Prime Jan 20 '15 at 21:57
See A. Zygmund, Trigonometric Series, Third edition, Volumes I & II combined, Cambridge Mathematical Library, Cambridge University Press, 2002 on page 52.
Clemens Heuberger
I think this article might help you. Pointwise Convergence of Fourier Series, Charles Fefferman, Annals of Mathematics, Second Series, Vol. 98, No. 3 (Nov., 1973), pp. 551-571:
http://www.jstor.org/discover/10.2307/1970917?sid=21105651264483&uid=4&uid=2&uid=3737760
Loreno Heer
Start with convergence criterion: A necessary and sufficient condition for the Fourier series T(x) of f to converge pointwise to c(x) on E is that there exists a fixed $\delta$ such that $ 0 < \delta < \pi$ and $\int_0^\delta {{g_{c(x)}}(u)\frac{{\sin \left( {nu} \right)}}{u}du \to 0} $ pointwise on E.
Here ${g_{c(x)}}(u) = \frac{1}{2}\left( {f(x + u) + f(x - u) - 2c(x)} \right)$.
If $\frac{g_{c}(u)}{u}$ is integrable, which is your given condition, then by the Riemann-Lebesgue theorem, $\int_0^\delta g_{c(x)}(u)\frac{\sin\left(nu\right)}{u}\,du \to 0$. This proves pointwise convergence. Here you can take any $\delta > 0$.
For details see convergence criterion and Theorem 25 in
Convergence of Fourier Series
https://037598a680dc5e00a4d1feafd699642badaa7a8c.googledrive.com/host/0B4HffVs7117IbmZ2OTdKSVBZLVk/Fourier%20Series/Convergence%20of%20Fourier%20Series.pdf
Ha Huy Bang
Full Professor, Doctor of Science
Research interests: Fourier Analysis, Inequalities, Function Spaces
Tel: +84 24 38361121 /505
Email: hhbang AT math.ac.vn
1982: BS, Rostov-on-Don National Univ., Russia
1988: Ph.D, Institute of Math, Vietnam
1995: DSc, Steklov Institute of Math. Russia
2003: Full Professor
1 Ha Huy Bang, Vu Nhat Huy, Q-primitives and explicit solutions of polynomial differential equations in L^p (T), Memoirs on Differential Equations and Mathematical Physics, 85 (2022), 91-102, (ESCI).
2 Ha Huy Bang, Vu Nhat Huy, Paley-Wiener type theorem for functions with values in Banach spaces, Ukrainian Mathematical Journal, 75 (2022), 743-754, (SCI-E, Scopus).
3 Ha Huy Bang, Vu Nhat Huy, An improvement of Bernstein's inequality for functions in Orlicz spaces with smooth fourier image, Rocky Mountain Journal of Mathematics, Volume 52 (2022), No. 1, 29–42, (SCI-E, Scopus).
4 Ha Huy Bang, Vu Nhat Huy, A Bernstein inequality for differential and integral operators on Orlicz spaces, Jaen Journal on Approximation, 12 (2021), 69-88, (ESCI).
5 Ha Huy Bang, Vu Nhat Huy, An extension of Bernstein inequality, Journal of Mathematical Analysis and Applications, 503 (2021), 125289, (SCI-E, Scopus).
6 Ha Huy Bang, Vũ Nhật Huy, Some Spectral Formulas for Functions Generated by Differential and Integral Operators, Acta Mathematica Vietnamica volume 46 (2021), 163–177, Scopus.
7 Ha Huy Bang, Vu Nhat Huy, New Paley–Wiener Theorems, Complex Analysis and Operator Theory (2020) 14:47 (SCI(-E), Scopus).
8 Ha Huy Bang, Vu Nhat Huy, A Bernstein - Nikolskii inequality for weighted Lebesgue spaces, Vladikavkaz Mathematical Journal, 22 (2020), 18-29, https://doi.org/10.46698/h8083-6917-3687-w.
9 Ha Huy Bang, Vu Nhat Huy, Kyung Soo Rim, Multivariate Bernstein inequalities for entire functions of exponential type in Lp(Rn), Journal of Inequalities and Applications, 215 (2019), https://doi.org/10.1186/s13660-019-2167-7, (SCI(-E), Scopus).
10 Ha Huy Bang, Vu Nhat Huy, A Bohr-Nikol'skii Inequality for Weighted Lebesgue Spaces, Acta Mathematica Vietnamica, 44 (2019), pp 701–710, Scopus.
11 Sa Thi Lan Anh, Phan Thi Ha Trang, Trieu Quynh Trang, Ha Huy Bang, Unparticle Effects on Axion-Like Particles Production in e^+e^− Collisions, International Journal of Theoretical Physics, 57 (2018), pp 2015–2021, SCI(-E); Scopus.
12 Ha Huy Bang, Vu Nhat Huy, Local Spectral Formula for Integral Operators on \(L_{p}({\mathbb T})\), Vietnam Journal of Mathematics, 45 (2017), 737–746, Scopus.
13 Ha Huy Bang, On a theorem of F. Riesz, Acta Mathematica Hungarica, 148 (2016), 360–369, SCI(-E); Scopus.
14 Ha Huy Bang, Vu Nhat Huy, Paley-Wiener theorem for functions in L_p(R^n). Integral Transforms and Special Functions 27 (2016), 715–730, SCI(-E); Scopus.
15 Ha Huy Bang, Vu Nhat Huy, A Study of the Sequence of Norm of Derivatives (or Primitives) of Functions Depending on Their Beurling Spectrum, Vietnam Journal of Mathematics, 44 (2016), 419–429,Scopus.
16 Ha Huy Bang, Vu Nhat Huy, A Bohr-Nikolskii inequality, Integral transforms and special functions, 27 (2016), 55 – 63, SCI(-E); Scopus.
17 Ha Huy Bang, Vu Nhat Huy, A Study of Behavior of the Sequence of Norm of Primitives of Functions in Orlicz Spaces Depending on Their Spectrum, Tokyo Journal of Mathematics, 38 (2015), 283-308, SCI(-E), Scopus.
18 Ha Huy Bang, Vu Nhat Huy, Some Extensions of the Kolmogorov–Stein Inequality, Vietnam Journal of Mathematics, 43 (2015), 173 -179,Scopus.
19 Ha Huy Bang, Vu Nhat Huy, The Paley–Wiener Theorem in the Language of Taylor Expansion Coefficients, Doklady Mathematics, Vol. 86 (2012), 677 -- 680, SCI(-E); Scopus.
20 Ha Huy Bang, V. N. Huy, Studying behavior for sequence of norms of primitives of functions depending on their spectrum (in Russian), Doklady Mathematics 440 (2011), 456 -- 458.
21 Ha Huy Bang, V. N. Huy, Behavior of the sequence of norms of primitives of a function in Orlicz spaces, East Journal on Approximations 17 (2011), 127 -- 136.
22 Ha Huy Bang, V. N. Huy, New results concerning the Bernstein-Nikol'skii inequality, In: Advances in Math. Research 16 (2011), 177 -- 191.
23 Ha Huy Bang, and V. N. Huy, Some properties of Orlicz-Lorentz spaces, Acta Mathematica Vietnamica 36 (2011), 145 -- 167, Scopus.
24 Ha Huy Bang, and V. N. Huy, Best constants for the inequalities between equivalent norms in Orlicz spaces, Bulletin of the Polish Academy of Sciences, Mathematics 59 (2011), 165 -- 174.
25 Ha Huy Bang, B. V. Huong, Behavior of the sequence of norms of primitives of a function in Lorentz spaces, Vietnam Journal of Mathematics 38 (2010), 425 -- 433, Scopus.
26 Ha Huy Bang, V. N. Huy, Behavior of the sequence of norms of primitives of a function, J. Approx. Theory, 162 (2010), 1178- 1186.
27 Ha Huy Bang, Mai Thi Thu, A Gagliardo-Nirenberg inequality for Orlicz and Lorentz spaces on $\Bbb R^n_+$, Vietnam J. Math. 35 (2007), 415 - 427.
28 Ha Huy Bang, N. M. Cong, Bernstein-Nikolskii type inequality in Lorentz spaces and related topics. Vladikavkazskii Mat. J. 7 (2005), 17 - 27.
29 Ha Huy Bang, N. M. Cong, Generalizations of the Riesz convergence theorem for Lorentz spaces. Acta Math. Hungar. 106 (2005), 331 - 341.
30 Ha Huy Bang, Mai Thi Thu, A Gagliardo-Nirenberg inequality for Orlicz spaces, East J. Approx. 10 (2004), N03, 371 - 377.
31 Ha Huy Bang, Mai Thi Thu, A property of entire functions of exponential type for Lorentz spaces, Vietnam. J. Math. 32 (2004), 219 - 225.
32 Ha Huy Bang, Mai Thi Thu, A Landau-Kolmogorov inequality for Lorentz spaces, Tokyo J. Math. 27 (2004), N01, 13 - 19.
33 Ha Huy Bang, Theory of Orlicz spaces (in Vietnamese: Lý thuyết không gian Orlicz), Vietnam National University Press, Hanoi (NXB Đại học Quốc gia Hà Nội), 2003, 385 pages.
34 Ha Huy Bang, Mai Thi Thu, A Landau-Kolmogorov inequality for Orlicz spaces, J. Inequal. Appl. 7 (2002), 663 - 672.
35 Ha Huy Bang, H. M. Giao, On the Kolmogorov Inequality for M Φ -Norm, Appl. Anal. 81 (2002), 1 - 11.
36 Ha Huy Bang, An inequality of Bohr and Favard for Orlicz spaces. Bull. Polish Acad. Sci. Math. 49 (2001), 381 - 387.
37 Ha Huy Bang, The Riesz theorem for the spaces $N_{\phi}$ and its applications. Dokl. Akad. Nauk 377 (2001), 746 - 748 (in Russian).
38 Ha Huy Bang, Investigation of the properties of functions in the space N_{\phi}-depending on the geometry of their spectrum. (Russian) Dokl. Akad. Nauk 374 (2000), 590 - 593.
39 Ha Huy Bang, Absolutely representing systems of exponents in a class of analytic functions. In: Recent Problems in Mathematical Analysis, Gingo, Rostov-on-Don, 2000, 146 - 155.
40 Ha Huy Bang, Truong Van Thuong, Density of a collection of functions in N_{\phi}-spaces. J. Math. Sci. Univ. Tokyo 7 (2000), 311 - 324.
41 Ha Huy Bang, On an inequality of Bohr and Favard. East J. Approximations. 6 (2000), 385 - 395.
42 Ha Huy Bang, H. M. Le, An inequality of Kolmogorov and Stein, Bull. Austral. Math. Soc. 61 (2000), 153 - 159.
43 Ha Huy Bang, Nonconvex cases of the Paley-Wiener-Schwartz theorem. In: Proceedings of the 5th Conference for Vietnamese Mathematicians, Science and Technics Publishers, Hanoi 1999, 15 - 30.
44 Ha Huy Bang, Hoang Mai Le, On the Kolmogorov-Stein inequality. J. Inequal. Appl. 3 (1999), 153 - 160.
45 Ha Huy Bang, Hoang Mai Le, Note on the Kolmogorov-Stein inequality, Vietnam. J. Math. 26 (1998), 363 - 366.
46 Ha Huy Bang, The Paley-Wiener-Schwartz theorems for nonconvex domains. In: Proceedings of the Conference "Functional Analysis and Global Analysis'', Springer, 1997, 14 - 30.
47 Ha Huy Bang, Spectrum of functions in Orlicz spaces. J. Math. Sci. Univ. Tokyo 4 (1997), 341 - 349.
48 Ha Huy Bang, Separability of Sobolev-Orlicz spaces of infinite order. Mat. Zametki 61 (1997), 141 - 143. English transl.: Math. Notes 61 (1997), 118 - 120.
49 Ha Huy Bang, Properties of functions in Orlicz spaces in the connection with geometry of their spectrum. Russian Izvestija Akad. Nauk, 61 (1997), 133 - 168. English transl.: Izvestiya: Mathematics 61 (1997), 399 - 434.
50 Ha Huy Bang, A study of the properties of functions depending on the geometry of their spectrum. Russian Doklady Akad. Nauk 355 (1997), 740 - 743. English transl.: Doklady Mathematics 56 (1997), 610 - 613.
51 Ha Huy Bang, Embedding theorems for the Sobolev-Orlicz spaces of infinite order. Russian Doklady Akad. Nauk 354 (1997), 316 - 319. English transl.: Doklady Mathematics 55 (1997), 77 - 380.
52 Ha Huy Bang, Nonconvex cases of the Paley-Wiener-Schwartz theorems. Russian Doklady Akad. Nauk 354 (1997), 165 - 168. English transl.: Doklady Mathematics 55 (1997), 353 - 355.
53 Ha Huy Bang, The existence of a point spectral radius of pseudodifferential operators. Russian Doklady Akad. Nauk 348 (1996), N06, 740 - 742. English transl.: Doklady Mathematics 53 (1996), 420 - 422.
54 Ha Huy Bang, A remark on the Kolmogorov-Stein inequality. J. Math. Analysis Appl. 203 (1996), 861 - 867.
55 Ha Huy Bang, Theorems of the Paley-Wiener-Schwartz type. Trudy Mat. Inst. Steklov 214 (1996), 298 - 319. English transl.: Proc. Steklov Inst. Math. 214 (1996), 291 - 311.
56 Ha Huy Bang, A remark on differential operators of infinite order. Acta Math. Vietnam. 21 (1996), 289 - 294.
57 Ha Huy Bang, Change of variables in Sobolev-Orlicz spaces of infinite order. Mat. Zametki 57 (1995), N03, 331 - 337. English transl.: Math. Notes 57 (1995), N03, 235 - 239.
58 Ha Huy Bang, Asymptotic behavior of the sequence of norms of derivatives. J. Math. Sci. Univ. Tokyo 2 (1995), 611 - 620.
59 Ha Huy Bang, An algebra of pseudodifferential operators. Mat. Sbornik 186 (1995), N07, 3 - 14, English transl.: Sbornik: Mathematics 186 (1995), 929 - 940.
60 Ha Huy Bang, A property of entire functions of exponential type. Analysis 15 (1995), 17 - 23.
61 Ha Huy Bang, On the Bernstein - Nikolsky inequality II. Tokyo J. Math. 18 (1995), 123 - 131.
62 Ha Huy Bang, Functions with bounded spectrum. Trans. Amer. Math. Soc. 347 (1995), 1067 - 1080.
63 Ha Huy Bang, Inequalities of the Bernstein - Nikolsky type and their applications. Dr. Sc. Thesis, Steklov Inst. Math., Moscow, 1994, 269 p. (in Russian).
64 Ha Huy Bang, A remark on the Bernstein - Nikolsky inequality. Acta Math. Vietnam. 19 (1994), 71 - 78.
65 Ha Huy Bang, M. Morimoto, The sequence of Luxemburg norms of derivatives. Tokyo J. Math. 17 (1994), 141 - 147.
66 Ha Huy Bang, Remarks on a property of infinitely differentiable functions. Bull. Polish Akad. Sci. 40 (1993), 197 - 206.
67 Tran Duc Van, Ha Huy Bang, R. Gorenflo, On Sobolev - Orlicz spaces of infinite order for a full Euclidean space. Analysis 11 (1991), 67 - 81.
68 Ha Huy Bang, Mitsuo MORIMOTO, On the Bernstein - Nikolsky inequality. Tokyo J. Math. 14 (1991), 231 - 238.
69 Ha Huy Bang, Nontriviality of Sobolev spaces of infinite order for a full Euclidean space. Sibirskii Mat. J. 31 (1990), 208 - 213. English transl.: Siberian Math. J. 31 (1990), 176 - 180 (in Russian).
70 Ha Huy Bang, A property of infinitely differentiable functions. Proc. Amer. Math. Soc. 108 (1990), 73 - 76.
71 Tran Duc Van, Ha Huy Bang, On the solvability of nonlinear differential equations of infinite order in unbounded domains. Dokl. Akad. Nauk USSR 305 (1989), 48 - 51. English transl.: Soviet Math. Dokl. 39 (1989), 268 - 271.
72 Ha Huy Bang, Imbedding theorems for Sobolev spaces of infinite order. Acta Math. Vietnam. 14 (1989), 17 - 29.
73 Ha Huy Bang, On imbedding theorems for Sobolev spaces of infinite order. Mat. Sbornik 178 (1988), 115 - 127. English transl.: Math. USSR Sbornik 64 (1989), 115 - 127.
74 Ha Huy Bang, Certain imbedding theorems for the spaces of infinite order of periodic functions. Mat. Zametki 43 (4)(1988), 509 - 517. English transl.: Math. Notes 43 (1988), 293 - 298.
75 Ha Huy Bang, Some problems of the theory of functional spaces of infinite order. Ph. D. Thesis, Hanoi Inst. Math., 1987, 115 p. (in Vietnamese).
76 Ha Huy Bang, Ju. F. Korobeinik, On a generalization of the Polya theorem. Mat. Anal. i Prilozen, 19, Izdat. Rostov-on-Don, 1987, 37 - 46 (in Russian).
77 Ha Huy Bang, On the applicability for differential operators of infinite order, Acta Math. Vietnam. 12 (1987), 67 - 73 (in Russian).
78 Ha Huy Bang, Absolutely convergent sums of polynomials of exponents. Acta Math. Vietnam. 11 (1986), 253 - 267 (in Russian).
79 Ha Huy Bang, On nontriviality of Sobolev-Orlicz classes and spaces of infinite order on the line. Mat. Zametki 39 (1986), 453 - 459 (in Russian).
80 Ha Huy Bang, On nontriviality of the weighted Sobolev-Orlicz classes and spaces of infinite order on the line. In: Proceedings of 3th VMC, Hanoi, 2 (1985), 315 - 319 (in Vietnamese).
81 Ha Huy Bang, Ju. F. Korobeinik, The applicability of composite differential operators of infinite order to certain classes of exponential functions. Izvestija Vuzov, Ser. Mat. 7 (1982), 83 - 85 (in Russian).
82 Ha Huy Bang, Applicability of infinite-order composite differential operators with constant coefficients. Izvestija Severo - Kavkaz Nauchn Tsentra Vysshei Shkoly, Ser. Mat. 2 (1982), 20 - 23 (in Russian).
1 IMH20191105, Ha Huy Bang, Vu Nhat Huy, Bohr inequality and Paley-Wiener type theorem value in Banach spaces
|
CommonCrawl
|
Acyclic group
From Encyclopedia of Mathematics
2010 Mathematics Subject Classification: Primary: 20J05 [MSN][ZBL]
A group having the same constant coefficient homology as the trivial group (cf. also Homology). This means that its classifying space is an acyclic space. In the literature the earliest examples are Higman's four-generator four-relator group [Hi]
$$\langle x_0, x_1, x_2, x_3 : x_{i+1}x_ix_{i+1}^{-1} = x_i^2, i\in \mathbb{Z}/4\rangle$$
and others found in combinatorial group theory [BaGr], [BaDyHe], [BeMi]. Further examples arise in geometry ([Ep], [Ma], [Se], [SaVa], [GrSe]) or as automorphism groups of large objects ([HaMc]; for example, the group of all bijections of an infinite set). Algebraically closed groups are acyclic.
Many proofs of acyclicity of infinitely generated groups rely on the property that all binate groups are acyclic [Be3] (cf. also Binate group). An important result in the plus-construction approach to the higher algebraic $K$-theory of rings and operator algebras is that the infinite general linear group of the cone of a ring is acyclic [Wa], [Be]. Topologically, the plus-construction of a topological space is completely determined by a certain perfect, locally free, and hence acyclic, group [BeCa].
Ubiquity results for acyclic groups include the following:
Every perfect group is the homomorphic image of an acyclic group [He].
Every group is a normal subgroup of a normal subgroup of an acyclic group. This result has applications to algebraic topology [KaTh].
Every Abelian group is the centre of an acyclic group [BaDyHe], [Be2].
In contrast to the above are results indicating that acyclic groups have "few" normal subgroups. Thus, the following acyclic groups admit no non-trivial finite-dimensional linear representations over any field:
algebraically closed groups;
Higman's group [Hi];
torsion-generated acyclic groups [Be4];
binate groups [AlBe];
the automorphism groups of [HaMc], see [Be5], [Be6].
Moreover, many of the above groups are simple modulo the centre.
[AlBe] R.C. Alperin, A.J. Berrick, "Linear representations of binate groups" J. Pure Appl. Algebra, 94 (1994) pp. 17–23 MR1277521 Zbl 0813.20060
[BaDyHe] G. Baumslag, E. Dyer, A. Heller, "The topology of discrete groups" J. Pure Appl. Algebra, 16 (1980) pp. 1–47 MR0549702 Zbl 0419.20026
[BaGr] G. Baumslag, K.W. Gruenberg, "Some reflections on cohomological dimension and freeness" J. Algebra, 6 (1967) pp. 394–409 MR0232827
[Be] A.J. Berrick, "An approach to algebraic $K$-theory", Pitman (1982) MR649409
[Be2] A.J. Berrick, "Two functors from abelian groups to perfect groups" J. Pure Appl. Algebra, 44 (1987) pp. 35–43 MR0885094
[Be3] A.J. Berrick, "Universal groups, binate groups and acyclicity", Proc. 1987 Singapore Group Theory Conf., W. de Gruyter (1989) MR0981847 Zbl 0663.20053
[Be4] A.J. Berrick, "Remarks on the structure of acyclic groups" Bull. London Math. Soc., 22 (1990) pp. 227–232 MR1041135 Zbl 0749.20001
[Be5] A.J. Berrick, "Groups with no nontrivial linear representations" Bull. Austral. Math. Soc., 50 (1994) pp. 1–11 MR1285653 Zbl 0815.20026
[Be6] A.J. Berrick, "Corrigenda: Groups with no nontrivial linear representations" Bull. Austral. Math. Soc., 52 (1995) pp. 345–346 MR1348495
[BeCa] A.J. Berrick, C. Casacuberta, "A universal space for plus-constructions" Topology (to appear) MR1670384 Zbl 0933.55016
[BeMi] A.J. Berrick, C.F. Miller, III, "Strongly torsion generated groups" Proc. Cambridge Philos. Soc., 111 (1992) pp. 219–229 MR1142741 Zbl 0762.20017
[Ep] D.B.A. Epstein, "A group with zero homology" Proc. Cambridge Philos. Soc., 68 (1968) pp. 599–601 MR0229692 Zbl 0162.27502 Zbl 0157.30703
[GrSe] P. Greenberg, V. Sergiescu, "An acyclic extension of the braid group" Comment. Math. Helv., 66 (1991) pp. 109–138 MR1090167 Zbl 0736.20020
[HaMc] P. de la Harpe, D. McDuff, "Acyclic groups of automorphisms" Comment. Math. Helv., 58 (1983) pp. 48–71 Zbl 0522.20034
[He] A. Heller, "On the homotopy theory of topogenic groups and groupoids" Ill. Math. J., 24 (1980) pp. 576–605 MR0586797 Zbl 0458.18006
[Hi] G. Higman, "A finitely generated infinite simple group" J. London Math. Soc., 26 (1951) pp. 61–64 MR0038348 Zbl 0042.02201
[KaTh] D.M. Kan, W.P. Thurston, "Every connected space has the homology of a $K(\pi,1)$" Topology, 15 (1976) pp. 253–258 MR0413089 Zbl 0355.55004
[Ma] J.N. Mather, "The vanishing of the homology of certain groups of homeomorphisms" Topology, 10 (1971) pp. 297–298 MR0288777 Zbl 0207.21903
[SaVa] P. Sankaran, K. Varadarajan, "Acyclicity of certain homeomorphism groups" Canad. J. Math., 42 (1990) pp. 80–94 MR1043512 Zbl 0711.57022
[Se] G.B. Segal, "Classifying spaces related to foliations" Topology, 17 (1978) pp. 367–382 MR0516216 Zbl 0398.57018
[Wa] J.B. Wagoner, "Delooping classifying spaces in algebraic $K$-theory" Topology, 11 (1972) pp. 349–370
This article was adapted from an original article by A.J. Berrick (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
|
CommonCrawl
|
On the DNA Computer Binary Code
1. Boolean lattice of the four DNA bases
2. Boolean (logic) operations in the set of DNA bases
3. The Genetic code Boolean Algebras
In any finite set we can define a partial order or a binary operation in different ways. Here, however, a partial order is defined on the set of four DNA bases in such a manner that a Boolean lattice structure is obtained. A Boolean lattice is an algebraic structure that captures essential properties of both set operations and logic operations. This partial order is defined based on the physico-chemical properties of the DNA bases: the number of hydrogen bonds and the chemical type, purine {A, G} or pyrimidine {U, C}. This physico-mathematical description permits the study of the genetic information carried by DNA molecules as a computer binary code of zeros (0) and ones (1).
In any four-element Boolean lattice every element is comparable to every other, except two of them that are, nevertheless, complementary. Consequently, to build a four-base Boolean lattice it is necessary for the bases with the same number of hydrogen bonds in the DNA molecule and in different chemical types to be complementary elements in the lattice. In other words, the complementary bases in the DNA molecule (G≡C and A=T or A=U during the translation of mRNA) should be complementary elements in the Boolean lattice. Thus, there are four possible lattices, each one with a different base as the maximum element.
The Boolean algebra on the set of elements X will be denoted by $(B(X), \vee, \wedge)$. Here the operators $\vee$ and $\wedge$ represent the classical "OR" and "AND" logical operations term-by-term. From the Boolean algebra definition it follows that this structure is (among other things) a partially ordered set in which any two elements $\alpha$ and $\beta$ have upper and lower bounds. In particular, the greatest lower bound of the elements $\alpha$ and $\beta$ is the element $\alpha\wedge\beta$ and the least upper bound is the element $\alpha\vee\beta$. This equivalent partially ordered set is called a Boolean lattice.
In every Boolean algebra $(B(X), \vee, \wedge)$, for any two elements $\alpha,\beta \in X$ we have $\alpha \le \beta$ if and only if $\neg\alpha\vee\beta=1$, where the symbol "$\neg$" stands for the logic negation. If the last equality holds, then it is said that $\beta$ is deduced from $\alpha$. Furthermore, if $\alpha \le \beta$ or $\alpha \ge \beta$, the elements $\alpha$ and $\beta$ are said to be comparable. Otherwise, they are said not to be comparable.
In the set of four DNA bases, we can build twenty-four isomorphic Boolean lattices [1]. Herein, we focus our attention on the one described in reference [2], where the DNA bases G and C are taken as the minimum and maximum elements, respectively, in the Boolean lattice. The logic operations in this DNA computer code are given in the following table:
OR ($\vee$)              AND ($\wedge$)
$\vee$ | G A U C         $\wedge$ | G A U C
G      | G A U C         G        | G G G G
A      | A A C C         A        | G A G A
U      | U C U C         U        | G G U U
C      | C C C C         C        | G A U C
It is well known that all Boolean algebras with the same number of elements are isomorphic. Therefore, our algebra $(B(X), \vee, \wedge)$ is isomorphic to the Boolean algebra $(\mathbb{Z}_2^2, \vee, \wedge)$, where $\mathbb{Z}_2 = \{0,1\}$. Then, we can represent this DNA Boolean algebra by means of the correspondence: $G \leftrightarrow 00$; $A \leftrightarrow 01$; $U \leftrightarrow 10$; $C \leftrightarrow 11$. So, in accordance with the operation table:
$A \vee U = C \leftrightarrow 01 \vee 10 = 11$
$U \wedge G = G \leftrightarrow 10 \wedge 00 = 00$
$G \vee C = C \leftrightarrow 00 \vee 11 = 11$
The logic negation ($\neg$) of a base yields the DNA complementary base: $\neg A = U \leftrightarrow \neg 01 = 10$; $\neg G = C \leftrightarrow \neg 00 = 11$
A Boolean lattice has in correspondence a directed graph called Hasse diagram, where two nodes (elements) $\alpha$ and $\beta$ are connected with a directed edge from $\alpha$ to $\beta$ (or connected with a directed edge from $\beta$ to $\alpha$) if, and only if, $\alpha \le \beta$ ($\alpha \ge \beta$) and there is no other element between $\alpha$ and $\beta$.
The figure shows the Hasse diagram corresponding to the Boolean algebra $(B(X), \vee, \wedge)$. There are twenty-four possible Hasse diagrams of four DNA bases, and they form a group isomorphic to the symmetric group of degree four, $S_4$ [1].
Boolean algebras of codons are, explicitly, derived as the direct product $C(X) = B(X) \times B(X) \times B(X)$. These algebras are isomorphic to the dual Boolean algebras $(\mathbb{Z}_2^6, \vee, \wedge)$ and $(\mathbb{Z}_2^6, \wedge, \vee)$ induced by the isomorphism $B(X) \cong \mathbb{Z}_2^2$, where $X$ runs over the twenty-four possible ordered sets of four DNA bases [1]. For example:
CAG $\vee$ AUC = CCC $\leftrightarrow$ 110100 $\vee$ 011011 = 111111
ACG $\wedge$ UGA = GGG $\leftrightarrow$ 011100 $\wedge$ 100001 = 000000
$\neg$ (CAU) = GUA $\leftrightarrow$ $\neg$ (110110) = 001001
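To make the binary representation concrete, the following is a minimal Python sketch, assuming only the correspondence G↔00, A↔01, U↔10, C↔11 and the operation table given above; the function and variable names are illustrative rather than taken from the cited works. The assertions reproduce the base-level and codon-level examples from the text.

```python
# Minimal sketch of the DNA Boolean algebra with G=00, A=01, U=10, C=11.
ENCODE = {"G": 0b00, "A": 0b01, "U": 0b10, "C": 0b11}
DECODE = {value: base for base, value in ENCODE.items()}

def codon_to_int(codon):
    """Pack a three-base codon into a 6-bit integer, e.g. 'CAG' -> 0b110100."""
    value = 0
    for base in codon:
        value = (value << 2) | ENCODE[base]
    return value

def int_to_codon(value):
    """Unpack a 6-bit integer back into a three-base codon."""
    return "".join(DECODE[(value >> shift) & 0b11] for shift in (4, 2, 0))

def codon_or(a, b):   # term-by-term logical OR (join)
    return int_to_codon(codon_to_int(a) | codon_to_int(b))

def codon_and(a, b):  # term-by-term logical AND (meet)
    return int_to_codon(codon_to_int(a) & codon_to_int(b))

def codon_not(a):     # negation yields the complementary codon
    return int_to_codon(~codon_to_int(a) & 0b111111)

def is_deduced(a, b):
    """b is deduced from a (a <= b) iff (not a) OR b equals the maximum element CCC."""
    return codon_or(codon_not(a), b) == "CCC"

assert codon_or("CAG", "AUC") == "CCC"
assert codon_and("ACG", "UGA") == "GGG"
assert codon_not("CAU") == "GUA"
assert is_deduced("GAG", "AAG")  # GAG and AAG lie on the same maximal chain
```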
The Hasse diagram for the corresponding Boolean algebra derived from the direct product of the Boolean algebra of four DNA bases given in the above operation table is:
In the Hasse diagram, chains and anti-chains can be identified. A subset of a Boolean lattice is called a chain if any two of its elements are comparable; if, on the contrary, no two of its elements are comparable, the subset is called an anti-chain. In the Hasse diagram of codons shown in the figure, all chains with maximal length have the same minimum element GGG and the same maximum element CCC. It is evident that two codons are in the same chain with maximal length if and only if they are comparable, for example the chain: GGG $\leftrightarrow$ GAG $\leftrightarrow$ AAG $\leftrightarrow$ AAA $\leftrightarrow$ AAC $\leftrightarrow$ CAC $\leftrightarrow$ CCC
The Hasse diagram symmetry reflects the role of hydrophobicity in the distribution of codons assigned to each amino acid. In general, codons that code for amino acids with extreme hydrophobic differences are in different chains with maximal length. In particular, codons with U as a second base will appear in chains of maximal length whereas codons with A as a second base will not. For that reason, it will be impossible to obtain hydrophobic amino acids with codons having U in the second position through deductions from hydrophilic amino acids with codons having A in the second position.
There are twenty-four Hasse diagrams of codons, corresponding to the twenty-four genetic-code Boolean algebras. These algebras form a group isomorphic to the symmetric group of degree four, $S_4$ [1]. In summary, the DNA binary code is not arbitrary, but subject to logic operations with an underlying biophysical meaning.
Sanchez R. Symmetric Group of the Genetic-Code Cubes. Effect of the Genetic-Code Architecture on the Evolutionary Process. MATCH Commun Math Comput Chem, 2018, 79:527–60.
Sánchez R, Morgado E, Grau R. A genetic code Boolean structure. I. The meaning of Boolean deductions. Bull Math Biol, 2005, 67:1–14.
|
CommonCrawl
|
Energy Informatics
Evaluation of neural networks for residential load forecasting and the impact of systematic feature identification
Volume 5 Supplement 4
Proceedings of the Energy Informatics.Academy Conference 2022 (EI.A 2022)
Nicolai Bo Vanting,
Zheng Ma &
Bo Nørregaard Jørgensen
Energy Informatics volume 5, Article number: 63 (2022)
Energy systems face challenges due to climate change, distributed energy resources, and political agenda, especially distribution system operators (DSOs) responsible for ensuring grid stability. Accurate predictions of the electricity load can help DSOs better plan and maintain their grids. The study aims to test a systematic data identification and selection process to forecast the electricity load of Danish residential areas. The five-ecosystem CSTEP framework maps relevant independent variables on the cultural, societal, technological, economic, and political dimensions. Based on the literature, a recurrent neural network (RNN), long-short-term memory network (LSTM), gated recurrent unit (GRU), and feed-forward network (FFN) are evaluated and compared. The models are trained and tested using different data inputs and forecasting horizons to assess the impact of the systematic approach and the practical flexibility of the models. The findings show that the models achieve equal performances of around 0.96 adjusted R2 score and 4–5% absolute percentage error for the 1-h predictions. Forecasting 24 h gave an adjusted R2 of around 0.91 and increased the error slightly to 6–7% absolute percentage error. The impact of the systematic identification approach depended on the type of neural network, with the FFN showing the highest increase in error when removing the supporting variables. The GRU and LSTM did not rely on the identified variables, showing minimal changes in performance with or without them. The systematic approach to data identification can help researchers better understand the data inputs and their impact on the target variable. The results indicate that a focus on curating data inputs affects the performance more than choosing a specific type of neural network architecture.
Energy systems face challenges due to climate change, distributed energy resources, and political agenda. For instance, in Denmark, by 2030 carbon emissions should be reduced by 70%, with the goal by 2050 being carbon footprint neutrality (Danish Energy Agency 2022a; Ma and Jørgensen 2018). To achieve this goal, the Danish government has introduced initiatives to accelerate the energy system transition to a total reliance on renewable energy sources. Among the initiatives are state-of-the-art energy islands, investments in technologies such as Power-to-X and Carbon Capture, and a green transition of the industry (Danish Energy Agency 2022b). However, the changes to the energy system will lead to an increasing number of distributed energy resources (DERs), introducing new challenges, such as grid balancing (Ma et al. 2017, 2019a; Billanes et al. 2017). In addition, the electrification of vehicles and the heating of households through heat pumps increase the overall electricity consumption (Ma et al. 2021; Fatras et al. 2021). These challenges are significant to distribution system operators (DSOs), who are responsible for the electricity grids (Ma et al. 2016; Christensen et al. 2021). Furthermore, DSOs face many other challenges, e.g., the resilience of the grid after natural disasters (Hu et al. 2021), an increasing number of DERs (Sauter et al. 2017), the security of supply (Ma et al. 2019b), and the cost of grid maintenance and upgrades (Gören et al. 2022).
There are three types of electricity consumers: residential, commercial, and industrial consumers (Billanes et al. 2018), and in many cases, they are located in separate areas. Households make up around 12% of the total energy accounts and close to 13% of the emission accounts of Denmark (Statistics Denmark 2022). During peak consumption hours, households account for 35% of the total electricity load (Andersen et al. 2017). Furthermore, the adoption of DERs such as photovoltaics, electric vehicles, and heat pumps influences households' electricity consumption patterns, potentially resulting in grid overloads (Christensen et al. 2019).
Thus, it is important for DSOs to understand the state of their grid in the short and long term to ensure operational quality and maintenance and to identify areas of the grid for renovation or investment. Some research has experimented with accurate forecasts on short- to long-term horizons by applying machine learning (ML) and deep learning (DL) methods to the problem. Several types of neural networks, ML algorithms, and hybrids have been tested with excellent results. Furthermore, electricity load forecasts have been tested with various independent variables and applications (Vanting et al. 2021).
However, in the literature, the independent variables are not systematically identified beforehand, often leading to the questions: why were the variables chosen in the first place, and how do they relate to the target variable? Moreover, the argument for specific supporting data does not appear until the features are analyzed for selection criteria such as correlation analysis (Friedrich and Afshari 2015; Pindoriya et al. 2010; Vonk et al. 2012). Additionally, the related literature does not explain the composition of the electricity load, i.e., the sources of electricity consumption in the aggregated load data, which may lead to a better understanding of the performance of the proposed models. Based on the challenges the DSOs face regarding the distribution grid, this study seeks to improve the prediction accuracy of load forecasts using a systematic data identification approach.
To fill the research gap, this paper aims to identify variables related to residential area aggregated electricity load systematically. The identified variables will be used to forecast the aggregated electricity consumption of two residential areas in Denmark. The systematic identification and subsequent selection will be made using the CSTEP framework (Ma 2022), which maps data within an ecosystem in several dimensions. The identification ensures that any possible data is accounted for and a strong foundation for supporting data is available, which was missing in related works. The impact of the systematic identification on the model performance will be assessed by testing and evaluating multiple types of neural networks based on related works. Moreover, the data is analyzed using the K-Means clustering algorithm to investigate the composition of the electricity load before it is aggregated.
Furthermore, to determine the impact of different electricity consumption sources, such as heat pumps and electric heating, the performance of the selected neural networks will be compared on subsets of the data set containing households with and without electric-based heating. The types of neural networks are based on the applications in the literature. The most popular models included in this paper are feed-forward networks (FFN), recurrent neural networks (RNN), and Long Short-Term Memory (LSTM) networks. Additionally, because the related publications have rarely applied Gated Recurrent Units (GRU), it will also be used in this experiment. Finally, to test the flexibility of the neural networks, each tuned model will be used to predict a single-step (1 h) and 24-step (24 h) of the electricity load.
This paper is structured as follows. First, the literature related to electricity load forecasting is presented. Afterward, the data processing and analysis is described in the methodology section, including the systematic identification and selection using the CSTEP framework. Thirdly, the forecasting results of the models are presented, compared, and analyzed. Finally, the impact of the systematic identification approach is discussed based on the results of the forecasts.
Electricity load forecasting using machine learning algorithms and deep neural networks has been a major area of research in the last decade. The increasing amount of data available and rising interest in artificial intelligence research has led researchers to experiment with different types of networks, algorithms, and hybrids to achieve high accuracies or low errors for their forecasts (Vanting et al. 2021).
Based on the literature, electricity load forecasting can be placed into three horizons: short-, medium-, and long-term (Gebreyohans et al. 2018; Solyali 2020). Short-term forecasting is applied when predicting minutes, sometimes referred to as very short-term forecasting, and up to 1 week, as seen in Samuel et al. (2020); Houimli et al. 2020; Yong et al. 2020). Medium-term forecasts start from 1 week and go up several months to a year (Shirzadi et al. 2021; Salama et al. 2009; Gungor et al. 2020). Finally, long-term horizons are forecasts focused on predicting more than a year, sometimes several decades, depending on the data (Parlos and Patton 1993; Ekonomou 2010; Ghods and Kalantar 2008). Other than the length, each forecasting horizon is characterized by several parameters, including the independent variables, applications of the forecast, and models used for the prediction.
Long-term forecasts leverage socioeconomic data as independent variables and are usually applied to problems concerning larger areas, such as states, provinces, and countries (Elkamel et al. 2020; Tanoto et al. 2011). Furthermore, weather data are used on long-term forecasts for the electricity load of states and countries (Gao et al. 2019). In the literature, weather data includes outdoor temperature, humidity, wind speed and direction, precipitation, and solar irradiation. Moreover, electricity load forecasting on medium-term is applied to larger areas such as countries, states, and residential areas. Variables include weather, electricity prices, and socioeconomic data (Salama et al. 2009; Ilseven and Gol 2017). Short-term forecasts are applied to electricity grids and microgrids, power and substations, residential and office buildings, cities, provinces, and countries, using weather data and temporal features as independent variables (Li et al. 2021; Xu et al. 2019; Panapongpakorn and Banjerdpongchai 2019; Ahmad and Chen 2018; Ruiming 2008). Short-term forecasts are essential to determine if the load exceeds the capacity of a transformer, which can prevent power outages (Dung and Phuong 2019; Giamarelos et al. 2021; Al-Rashid and Paarmann 1996).
Additionally, the short-term forecast can indicate windows for flexibility to achieve sector coupling, leading to a more efficient energy system (Yan et al. 2012; Pramono et al. 2019; Xypolytou et al. 2017). The model selection varies within each forecasting horizon, meaning a single type of model cannot be identified. Instead, researchers have tested several statistical methods, machine learning algorithms, and different types and combinations of neural networks to reach accurate predictions, leading to a highly diverse research field with a wide range of applications and independent variables.
In the literature, several types of neural networks have been applied. One network type is the recurrent neural network (RNN), designed to work with sequential data. The strength of an RNN is that it can take information from prior inputs together with the input at a given timestamp to better decide on the output. Furthermore, one of the more popular networks is the Long Short-Term Memory (LSTM) network, a type of RNN specifically designed to deal with long data sequences. It was first introduced in 1997 by Schmidhuber and Hochreiter and improved upon the regular RNN by dealing with the vanishing gradients problem (Hochreiter and Schmidhuber 1997). Gated Recurrent Units (GRUs) (Cho et al. 2014), which are another type of specialized RNN similar to the LSTM network, have also been applied to short-term load forecasting (Ribeiro et al. 2020; Zhu et al. 2019). Finally, a fully connected feed-forward network has also been a popular choice to forecast electricity load in the literature. Researchers have experimented with different configurations and combinations of networks and algorithms to improve forecast accuracy. While many apply regular neural networks, some combine several into hybrid ones, as seen in Panapongpakorn and Banjerdpongchai (2019) and Pramono et al. (2019). Others transform the forecast into an image recognition problem and use state-of-the-art convolutional neural networks to predict the load (Li et al. 2017; Sadaei et al. 2019).
This paper systematically identifies and selects data relevant to forecasting the electricity load of residential areas to build a strong foundation of supporting data to improve the performance metrics of the forecasting model. To identify the possible features, the CSTEP framework proposed in Ma (2022) is used to analyze and evaluate an ecosystem by mapping the features to the five influential dimensions: Cultural, Societal, Technology, Economy and Finance, and Policies and Regulation. For this paper, the CSTEP framework is extended with different data variables dimensions to include supporting, embedded, exogenous variables and the impact of the variables on the electricity load. Supporting variables include sensor readings and statistical data, i.e., weather and climate measurements or electricity prices. Embedded variables are data that can be embedded in the target variable or other data sources, for example, temporal features or the sun's position. Exogenous variables are considered data that cannot be directly given as an input to a model but still impact the target or supporting variables. Finally, the impact on the target variable describes how each dimension and the different types of variables affect the increasing or decreasing electricity consumption of residential areas.
So far, no literature has systematically identified and selected the relevant data using the CSTEP framework. Researchers often rely on correlation analysis of features or tree-based methods for determining feature importance to decide on independent variables for multivariate forecasting. Before identifying the CSTEP variables, the electricity load is analyzed to examine the composition of the aggregated load. This step aims better to understand the performance of the model during inference.
Furthermore, this can help make the black box of neural networks more transparent by understanding the inputs better. The analysis of the electricity load will be done using descriptive statistics and by clustering the daily load profiles of each household in the area to investigate the different load patterns. The algorithm applied for the clustering is K-Means with dynamic time warping as the distance measure. Afterward, the identified CSTEP variables are examined for data availability and sourced for the subsequent data analysis. Next, the electricity load is used to conduct feature engineering of temporal features and lagged electricity load. Finally, all selected features undergo a feature selection process using correlation coefficients and tree-based methods for feature importance.
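As an illustration only, the clustering step could be implemented along the following lines with the tslearn library; the number of clusters, the reshaping of hourly readings into average daily profiles, and all names are assumptions rather than the study's actual code.

```python
import pandas as pd
from tslearn.clustering import TimeSeriesKMeans  # assumed library choice

def cluster_daily_profiles(hourly_load: pd.DataFrame, n_clusters: int = 6):
    """Cluster households' average daily load profiles with K-Means under dynamic time warping.

    hourly_load: DataFrame indexed by hourly timestamps, one column per household (kWh).
    """
    # Average 24-hour profile per household -> array of shape (n_households, 24, 1).
    profiles = (hourly_load
                .groupby(hourly_load.index.hour)
                .mean()            # 24 rows (hours of day) x n_households columns
                .T                 # n_households rows x 24 columns
                .to_numpy()
                .reshape(-1, 24, 1))
    model = TimeSeriesKMeans(n_clusters=n_clusters, metric="dtw", random_state=0)
    labels = model.fit_predict(profiles)
    return labels, model.cluster_centers_
```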
After the data processing and analysis section, the evaluation and selection of neural networks are conducted based on related works and the research gap. This paper tests the performance of four separate neural networks on the aggregated electricity load. Baseline models of a feed-forward network (FFN), recurrent neural network (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU) are established and used as the starting point to tune hyperparameters and select the optimal architecture. Each tuned model is trained on the aggregated load data with and without including the selected CSTEP variables and used to forecast a single hour and 24 h. Then, each model is also trained on aggregated electricity consumption data containing households exclusively with heat pumps or electric heating.
To assess the performance of the models in this paper, four different metrics will be used, presented in the equations below.
Mean Absolute Error
$$MAE= \frac{1}{n} \sum_{i=1}^{n}\left|{x}_{i}^{pred}-{x}_{i}^{true}\right|$$
Mean Absolute Percentage Error
$$MAPE=\frac{100\mathrm{\%}}{n}\sum_{i=1}^{n}\left|\frac{{x}_{i}^{true} - {x}_{i}^{pred}}{{x}_{i}^{true}}\right|$$
Root Mean Squared Error
$$RMSE={\left[\sum_{i=1}^{n}\frac{{\left({x}_{i}^{pred}-{x}_{i}^{true}\right)}^{2}}{n} \right]}^\frac{1}{2}$$
Adjusted R2 Score
$${R}^{2}=1-\frac{\sum_{i=1}^{n}{\left({x}_{i}^{true}-{x}_{i}^{pred}\right)}^{2}}{\sum_{i=1}^{n}{\left({x}_{i}^{true}-{\overline{x} }^{true}\right)}^{2}}$$
$$Adj{R}^{2}=1-\frac{\left(1-{R}^{2}\right)\times \left({n}_{samples}-1\right)}{\left({n}_{samples}-{p}_{variables}-1\right)}$$
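For reference, the four metrics can be computed directly from the prediction and target arrays; the sketch below simply mirrors the formulas above, with p_variables denoting the number of independent variables.

```python
import numpy as np

def evaluate(y_true, y_pred, p_variables):
    """Compute MAE, MAPE, RMSE, and the adjusted R^2 score as defined above."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    n = y_true.size
    mae = np.mean(np.abs(y_pred - y_true))
    mape = 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
    rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))
    r2 = 1.0 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p_variables - 1)
    return {"MAE": mae, "MAPE": mape, "RMSE": rmse, "AdjR2": adj_r2}
```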
The CSTEP framework
The CSTEP framework consists of five critical business ecosystems dimensions, which are: climate, environment, and geographic situation; Societal culture and demographic environment; Technology (Infrastructure, technological skills, technology readiness); Economy and finance; Policies and regulation. Each dimension has several sub-dimensions with specific explanations as defined in Table 1 in Ma (2022). Additionally, the dimensions can be viewed on a macro and micro level based on the focuses of the business ecosystems. For instance, the sub-dimensions of Climate, environmental and geographic situation can be divided into a macro level considering the general weather conditions and natural features of a place (climate and geographic situation). Meanwhile, the micro level considers the living, working and production environment or conditions (environmental situation). The macro and micro levels of a dimension differ depending on the perspective of either the ecosystem or the individual stakeholder, focusing on either the general or specific levels of the business ecosystem (Ma 2022).
Analysis of electricity load
The electricity load data used in this paper is collected from two residential areas in Denmark in connection with a national project called Flexible Energy Denmark (Flexible Energy Denmark 2019). The data ranges from January 1st, 2019, to May 15th, 2022, and includes 211 households after processing and cleaning the data. From the residential areas, the data set includes households without photovoltaic panels, electric heating, or heat pumps, and whose occupants do not charge an electric vehicle (EV) at home. Households with any of these characteristics are kept separate, so that the remaining data reflect the pure electricity consumption of households heated by central or district heating.
These data are sourced using the Danish building registry, which by law collects information about all buildings in Denmark (Bygnings- og Boligregistret 2022). For EV owners, a different method had to be used, as this information is not registered anywhere. Instead, each household's data was analyzed to detect possible EV owners by clustering the load to identify outliers using K-Means. Subsequently, the load was searched for minimum–maximum consumption ranges that exceed 7.2 kWh, which is a typical consumption pattern for EV charging. By separating the households that have adopted these DERs, the impact of their load on the ability to forecast accurately can be investigated.
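A sketch of how such a screen could look is given below; the daily resampling window and the strict use of the 7.2 kWh threshold are assumptions made for illustration, not the authors' exact procedure.

```python
import pandas as pd

def flag_possible_ev_owner(hourly_load: pd.Series, threshold_kwh: float = 7.2) -> bool:
    """Flag a household whose daily min-max consumption range ever exceeds the threshold."""
    daily_range = hourly_load.resample("D").max() - hourly_load.resample("D").min()
    return bool((daily_range > threshold_kwh).any())
```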
Figure 1 shows each household's average daily consumption profiles, where the red line indicates the average load within each cluster. The most typical consumption profiles can be seen in Cluster 2 and Cluster 5. Clusters 0 and 3 can be considered outlier profiles, while Clusters 4 and 1 are somewhere in between with equally many households, as seen in the distribution of clusters in Fig. 2.
Clusters of daily load profiles
Distribution of daily profile clusters
The average consumption pattern over a year for the two residential areas can be seen in Fig. 3. In Denmark, household consumption usually increases during winter and decreases when the summer nears. Many factors can influence the consumption pattern, such as the sun, the amount of light, temperature, rain, and wind. From the figure, a very distinct spike can also be seen around Christmas, a recurring pattern. These factors lead to several supporting data, for instance, the position of the sun, the weather, the length of days during the year, and special days, such as religious or national holidays.
Average yearly consumption pattern
Figure 4 shows the average aggregated daily load of the two residential areas. The pattern shows a slight increase during morning hours and a peak at 17:00. The period in the afternoon is essential to forecast correctly, as this is where the grid is challenged by high electricity loads that approach the grid's capacity. Each residential area is connected to a similar type of transformer with a capacity of 400 kWh.
Average daily load profile of the aggregated load
Identification of CSTEP variables
As described earlier, any supporting data for the electricity load will be identified and mapped using the CSTEP framework. Table 1 shows the relevant variables identified for this research experiment. The variables are based on applications in related literature and from domain experts. The supporting variables include sensor readings or statistical data, such as weather and electricity prices. The embedded variables include data such as holidays, day lengths, demographics, and building information. The exogenous variables are data that cannot directly be used as an input for a model but add additional information about the other variables. The variables in this dimension can help explain irregularities or unexpected results. The final column describes how each CSTEP dimension's identified data impacts the target variable, which in this case is the electricity consumption of households.
Table 1 Systematically identified CSTEP variables
After the systematic identification, each variable is investigated for availability and feasibility. Using openly available sources, the following CSTEP variables have been collected:
Holidays (Denmark)
Day lengths
Sun azimuth
Sun altitude
Electricity prices
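As one example of how these openly available variables can be turned into model inputs, the Danish holiday indicator can be built with the open-source holidays package; the time zone, date range, and column name below are assumptions for illustration.

```python
import pandas as pd
import holidays  # open-source package of national holiday calendars

idx = pd.date_range("2019-01-01", "2022-05-15 23:00", freq="H", tz="Europe/Copenhagen")
dk_holidays = holidays.Denmark(years=range(2019, 2023))

features = pd.DataFrame(index=idx)
features["is_holiday"] = [timestamp.date() in dk_holidays for timestamp in idx]
```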
While many researchers insist on the importance of weather data to support the electricity load forecast (Vanting et al. 2021; Friedrich and Afshari 2015), it is not necessarily meaningful to include it in this experiment. The aggregated electricity load data is collected from two residential areas with some distance between them, meaning local weather data is unavailable. There may be a correlation between some weather data and the electricity load. However, causation cannot directly be determined in such an instance.
Feature selection and analysis
After selecting CSTEP variables, the data is analyzed with the aggregated electricity load using correlation coefficients and feature importance. The coefficients are calculated using Pearson's R, and the feature importance is the gain from gradient boosted trees using the Python library XGBoost. Figure 5 shows a correlation heatmap of the coefficients of each variable. There are no strongly correlated features with the electricity load, but a slight negative relationship with day lengths and a slight positive relationship with the sun's azimuth.
Correlation heatmap of CSTEP variables
Looking at the relative feature importance of each variable in relation to the electricity load, the sun's azimuth is calculated to be the most important feature, as seen in Fig. 6. The gain signifies the relative contribution of the feature over all decision-trees in the gradient boosting model.
Gradient-boosted feature importance
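Both selection criteria can be reproduced with pandas and XGBoost along the lines below; the hyperparameters are placeholders rather than the settings used in the study.

```python
import pandas as pd
from xgboost import XGBRegressor

def rank_features(X: pd.DataFrame, y: pd.Series) -> pd.DataFrame:
    """Pearson correlation with the target and gain-based importance from gradient-boosted trees."""
    pearson_r = X.corrwith(y)  # Pearson's R by default
    model = XGBRegressor(n_estimators=200, importance_type="gain", random_state=0)
    model.fit(X, y)
    gain = pd.Series(model.feature_importances_, index=X.columns)
    ranking = pd.DataFrame({"pearson_r": pearson_r, "xgb_gain": gain})
    return ranking.sort_values("xgb_gain", ascending=False)
```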
At this point, each feature has also been analyzed individually for any irregularities. The analysis resulted in a decision to discard the electricity price variable due to a substantial increase in the price in 2022. This increase would only be visible in the test data set, potentially resulting in unexpected predictions, as the increase is not reflected in the electricity consumption. The variable is visualized in Fig. 7.
Historical electricity prices
In summary, the target variable of the electricity load is analyzed using K-Means clustering to identify different load profiles. The load profiles will give a better understanding of the input data to make the black box of neural networks more transparent. Furthermore, supporting independent variables have been systematically identified, selected, and analyzed using correlation coefficients and feature importance of gradient-boosted trees. Finally, each independent variable was analyzed for missing or broken data, potential irregularities, and seasonal patterns and trends, resulting in discarding the electricity price as an independent variable.
The model selection is based on neural networks from related works, which are a fully connected feed-forward neural network (FFN), a recurrent neural network (RNN), and a long short-term memory network (LSTM). Finally, to fill a gap in the literature, a gated recurrent unit (GRU) is also included in the experiments of this paper.
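The networks themselves are standard architectures; as an illustration, a GRU forecaster could be assembled in Keras as sketched below, where the layer sizes, optimizer, and loss are placeholders, not the tuned hyperparameters reported in Table 5.

```python
import tensorflow as tf

def build_gru(lookback: int, n_features: int, horizon: int = 1) -> tf.keras.Model:
    """Placeholder GRU forecaster: a window of past hours in, the next `horizon` hourly loads out."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(lookback, n_features)),
        tf.keras.layers.GRU(64),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(horizon),
    ])
    model.compile(optimizer="adam", loss="mae")
    return model

# For example, one week of hourly history in, the next 24 hours out.
model_24h = build_gru(lookback=168, n_features=8, horizon=24)
```

Replacing the GRU layer with a SimpleRNN or LSTM layer, or with dense layers on a flattened input, would give the other three candidate architectures in the same fashion.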
Baseline performance and models
First, a baseline performance for the forecasting problem is established using a simple multivariate linear regression model to predict the electricity load with the CSTEP variables as input. The baseline performance resulted in the metrics seen in Table 2. These baseline metrics are considered the minimum that the proposed models must beat.
Table 2 Baseline performance
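A minimal sketch of this baseline follows, assuming X_train and X_test hold the CSTEP inputs and y_train the aggregated load; the resulting predictions can then be scored with the metric helper sketched earlier.

```python
from sklearn.linear_model import LinearRegression

def linear_baseline(X_train, y_train, X_test):
    """Fit the multivariate linear-regression baseline on the CSTEP inputs and predict the test period."""
    model = LinearRegression().fit(X_train, y_train)
    return model.predict(X_test)
```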
Secondly, each selected model is trained and evaluated on the data once without any hyperparameter tuning or feature engineering to assess the base performance of each neural network. From here, the baselines will be iteratively improved by tuning training, data, and model parameters. Table 3 presents the baseline metrics of each model using the CSTEP variables as independent variables for the electricity load. At this point, all models perform equally without any feature engineering or hyperparameter tuning.
Table 3 Baseline neural networks
Model tuning
Each model from Table 3 will undergo a tuning process, where several parameters are tested in different combinations. To do this, the experiment tracking tool Weights and Biases is leveraged to find the best size and combination of the tunable parameters (Biewald 2020). An iterative random search process can be conducted by setting up a training loop that tests all four models, ending with a greedy search. The tunable parameters are seen in Table 4 below. Each tunable parameter has several values that are chosen uniformly and randomly. The feature engineering includes lags from 1 to 168 h, and the temporal features have been encoded cyclically using sine and cosine transformations.
Table 4 Tunable hyperparameters
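The feature engineering mentioned above could look roughly as follows; the specific lag set is illustrative, since the tuning searched lags of 1 to 168 h, and the target column name is an assumption.

```python
import numpy as np
import pandas as pd

def engineer_features(df: pd.DataFrame, target: str = "load_kwh", lags=(1, 24, 168)) -> pd.DataFrame:
    """Add cyclically encoded temporal features and lagged copies of the load."""
    out = df.copy()
    hour, dow = out.index.hour, out.index.dayofweek
    out["hour_sin"] = np.sin(2 * np.pi * hour / 24)
    out["hour_cos"] = np.cos(2 * np.pi * hour / 24)
    out["dow_sin"] = np.sin(2 * np.pi * dow / 7)
    out["dow_cos"] = np.cos(2 * np.pi * dow / 7)
    for lag in lags:
        out[f"{target}_lag_{lag}h"] = out[target].shift(lag)
    return out.dropna()
```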
After running several tests and calculating metrics for each model, the best parameters could be found. Table 5 summarizes the tuned parameters for each model. These four tuned models are subsequently trained on data ranging from January 1st, 2019, to May 15th, 2021, and evaluated on the test data from May 15th, 2021, to May 15th, 2022. Each model will be trained four times, resulting in 16 different prediction results: a 1-h forecast using CSTEP variables, a 1-h forecast without CSTEP variables, a 24-h forecast using CSTEP variables, and a 24-h forecast without CSTEP variables.
Table 5 Results of the hyperparameter tuning
One-hour forecast
The prediction results of the 1-h forecasts with and without the identified CSTEP variables are presented in Table 6. Overall, the metrics look similar for each model. For example, the lowest error was found using the feed-forward network with CSTEP variables at 3.9064 kWh mean absolute error and the highest adjusted R2 score of 0.9681. However, the same model without the CSTEP variables gives the highest error and lowest adjusted R2 score, while no substantial difference is seen in the recurrent neural networks. This change in performance can indicate that the FFN is more dependent on the CSTEP variables than the recurrent networks.
Table 6 One-hour forecast metrics
Figure 8 visualizes each model's first week of hourly predictions with the actual load during the period. The models mostly capture the peaks and valleys with some larger errors, especially between the midday and afternoon peaks. Because these predictions look similar, it may be more interesting to investigate the performances on specifically challenging days, such as Christmas, to better assess the models; Christmas usually sees very high peaks in the afternoon to evening hours and different consumption patterns throughout the day. Figure 9 presents the forecast during Christmas 2021, where there is a greater difference in the models' ability to forecast hourly. The actual load is shaped differently than on a regular day. December 23rd and 25th have much flatter peaks, where the morning and afternoon are similar, while the 24th has a high afternoon-to-evening peak. The FFN, RNN, and LSTM models cannot capture these peaks as well as on a regular day. However, the GRU predicts the high increase of the afternoon peak surprisingly well. This factor could be another performance criterion to consider when assessing different neural network architectures, as it cannot be seen from the error metrics and adjusted R2 scores.
First week of 1-h forecasts
One-hour forecasts during Christmas
24-hour forecast
Table 7 presents the prediction results of the 24-h forecasts with and without the CSTEP variables. Generally, the errors are higher than for the 1-h forecasts, which is expected because the multi-step predictions accumulate higher uncertainties at each timestep. However, the FFN is slightly more accurate than the other three models. Furthermore, the change in the FFN's performance when excluding the CSTEP variables is not as visible in the 24-h forecasts compared to the 1-h forecast.
Table 7 24-h forecast metrics
The forecasts for the first 24 h of the test data set are visualized in Fig. 10 below, where the point of the multi-step forecast starts on May 15th 23:00. There is no substantial difference in the first day of prediction for all four models. The ability to predict 24 h accurately using the same model architecture as for the 1-h forecasts means that the models are flexible in their application. To further assess the ability of the 24-h forecast models, they will also be investigated during Christmas 2021. Figure 11 presents the forecasts on Christmas day with the first prediction starting at midnight on the 24th of December 2021. The 24-h forecasts generally underestimate the actual load but follow the pattern correctly. The GRU neural network performs the best during this period, coming much closer to the peak load than the other models. Error metrics and R2 scores are critical indicators to assess the performance of models. However, they are not the only factor to base performances on for electricity load forecasting. Looking solely at the error metrics, one would choose the FFN model as it shows the lowest overall error. However, DSOs might think it necessary to predict as accurately as possible on specific days when the grid is nearing capacity, such as Christmas. Because of this, the GRU model might be the better model to use.
First 24-h forecast
24-h forecast during Christmas
Comparison with electric-based heating
All four models are tested on a data set of household electricity consumption containing heat pumps and electric heating to determine the importance of analyzing the composition of the aggregated electricity load and investigating the prediction performance of electrically heated households. It must be noted that the sample size has decreased compared to the original dataset, from 211 to 22. Because of the smaller sample size, the initial data set was sampled to have the same size, and all models were applied to the subset to compare them better.
Table 8 presents the error metrics of the models applied to electric-based heating household load and the sampled non-electric-based heating electricity consumption. The results give several insights. Firstly, the sample size of the aggregated load data affects the prediction ability of the models. For instance, the subset of the data set with a sample size of 22 has an adjusted R2 of 0.8273 for the FFN model, while the same model on the full data set reaches a score of 0.9681. This change is seen across all models, indicating the sample size of the aggregated load to be an essential factor. Secondly, multiple metrics are crucial to correctly assess neural networks' performance. Due to the increased average hourly load for electric-based heating households, the absolute and squared errors change relative to the load. For the sampled data set, the average hourly load is around 0.37 kWh, whereas the electric-based heating households have an average load of around 0.96 kWh. Thirdly, while there is a difference in absolute and squared errors, the adjusted R2 score does not substantially change when predicting electric-based heating and district heating households. Finally, the addition of CSTEP variables impacts the performance differently depending on the model.
Table 8 Comparison metrics with electric heating load
The FFN model sees a slight performance increase when removing the CSTEP variables. The error of the RNN model increases without the CSTEP variables. The LSTM model has the worst performance, but the error slightly decreases when removing the supporting variables. Finally, the GRU model sees almost no change in performance with or without the CSTEP variables.
This paper systematically identified and analyzed data to forecast the aggregated electricity load of residential areas using the CSTEP framework. The data were used as inputs with feature-engineered variables to predict the next hour and 24 h. Four different neural networks are tuned, trained, and evaluated on the data sets with and without the CSTEP variables to assess the impact of the systematic identification process. It is found that 1-h forecasts perform equally well when looking at the error metrics and the adjusted R2 score; however, further investigations into the predictions show the GRU model capturing the actual load better. An additional factor can be included in model performance assessment by examining the models on certain days such as Christmas, which usually sees very high consumption peaks. Finally, 24-h forecasts are also conducted to examine the flexibility of the models. Overall, the metrics show minimal variation across the models, but comparing the predictions through visualizations indicates where the models may differ.
Furthermore, to determine how the composition of the aggregated load data affects the forecast, a separate data set containing households with heat pumps or electric heating was used for prediction. It was found that the number of households in the aggregated load affects the forecast: a smaller sample size increases the forecast error. To validate this, 22 households were sampled from the initial data set to match the electric heating households, and the two sets were compared. Here, no substantial differences were found in the adjusted R2 scores; however, the MAE, MAPE, and RMSE metrics differed because of the higher average load of households with heat pumps or electric heating. The systematically identified CSTEP variables did not significantly improve the forecasts; however, they gave the authors an increased understanding and explainability of the target variable. The complexity of the consumption pattern can be better understood by considering as many factors as possible. This understanding can help explain why the electricity load increases or decreases during specific periods, how the behavioral patterns of residents change, and where peaks and valleys occur in the load pattern.
Furthermore, the FFN model saw an increase in error after removing the CSTEP variables, indicating that the recurrent architectures rely less on the supporting variables than the feed-forward network. The popular neural network architectures in the literature are LSTMs and FFNs; however, this paper has shown that GRUs perform very well both on the performance metrics and in the visual inspection of where the model predicts well. Furthermore, this paper demonstrated that choosing the optimal neural network architecture is not as important as curating good data inputs, which was shown by testing the models on different load profiles with and without electric heating or heat pumps. Moreover, it was found that the sample size of the aggregated load impacts the forecast accuracy, with smaller sample sizes giving more volatile consumption patterns.
The systematic identification and selection of supporting data were valuable for certain neural network types, such as the RNN and FFN. As described in the literature, the LSTM and GRU networks are specialized in handling long data sequences due to their ability to remember patterns, which could explain why they do not have to rely on the CSTEP variables as much. In terms of performance metrics, the results were less encouraging than expected, as the test of the systematic identification process did not significantly impact them. However, the process gave a better understanding of the complex electricity load forecasting problem. The data sets used for this study were cleaned and filtered to consist of households without DERs and only the households' pure electricity consumption, meaning no electricity-based heating installations. The results show that the sample size of the aggregation plays a large part in forecasting accuracy.
Moreover, the expectation that the consumption patterns of households with heat pumps or electric heating would be more challenging to predict was rejected, because the adjusted R2 was found to be nearly equal for both load patterns. However, this study achieved excellent results in forecasting the electricity load for the next hour and the next 24 h, which is underlined by the low errors and high adjusted R2 scores. Furthermore, visualizing the predictions showed that the models could get very close to the actual load.
The purpose of the study was to test a systematic data identification and selection process to forecast the aggregated electricity load of two Danish residential areas. In the literature, the data selection process often relies on correlation analysis of the supporting data; this paper added an initial step that builds a robust data foundation for forecasting using the CSTEP framework. Forecasting with neural networks is a major research field, and this paper tested and compared different types of neural networks from the literature. The research has shown that the systematic identification of variables has potential but does not substantially affect the models' performance metrics. However, the process did give a greater understanding of the target variable, which can help curate better data in the future. The results of testing multiple neural networks indicate that choosing the optimal architecture is not as impactful as having good data inputs. The findings of this study will be of interest to researchers who seek to make their data processing and analysis more systematic by applying the CSTEP framework.
Moreover, the findings underline the importance of curated data for researchers and the industry, e.g., DSOs. A limitation of this study is the data availability for the target variable and some of the supporting data. The target variable had a small sample size for electric-based heating households, which meant that the original data set had to be sampled to be of equal size. Larger sample sizes would give a clearer answer about the differences between load patterns. Furthermore, the CSTEP variables were limited by the sources of external data, such as the weather data. A majority of researchers use weather data for their forecasting models, but for this research it was not feasible due to the location of the weather stations. Finally, the results of this study are based on electricity consumption from Danish residential areas, meaning they are not directly generalizable to all parts of the world.
Despite these limitations, the study shows the models' flexibility across different consumption patterns, multiple types of independent variables, and forecast horizons from one hour to 24 h ahead. Further research should be conducted using the CSTEP framework to systematically identify independent variables to better assess the method's impact on the forecasting problem. Furthermore, the findings suggest that better performance metrics are needed to compare the predictions of neural networks, as the intricacies could only be seen by visually inspecting the forecasts. For future work, a broader selection of models and more complex data sets are planned to test the forecasting ability further. Finally, to improve on the limitations of this study, a larger sample size of residential houses should be used.
DSO:
Distribution system operator
DER:
Distributed energy resources
DL:
Deep learning
FFN:
Feed-forward network
RNN:
Recurrent neural network
LSTM:
Long short-term memory
GRU:
Gated recurrent unit
MAE:
Mean absolute error
MAPE:
Mean absolute percentage error
RMSE:
Root mean square error
This article has been published as part of Energy Informatics Volume 5 Supplement 4, 2022: Proceedings of the Energy Informatics. Academy Conference 2022 (EI.A 2022). The full contents of the supplement are available online at https://energyinformatics.springeropen.com/articles/supplements/volume-5-supplement-4.
This paper is part of the ANNEX 81 project (Project IEA EBC ANNEX 81 Data-Driven Smart Buildings, funded by EUDP Denmark, Case no. 64019-0539) by the Danish funding agency (the Danish Energy Technology Development and Demonstration (EUDP) program, Denmark) and part of the Lighthouse South project (project title: AI-based forecasting for sector coupling of the electricity grid and district heating grid) by the European Regional Development Fund.
SDU Center for Energy Informatics, The Maersk Mc-Kinney Moller Institute, University of Southern Denmark, 5230, Odense, Denmark
Nicolai Bo Vanting, Zheng Ma & Bo Nørregaard Jørgensen
NBV conducted the first draft, ZM and BNJ revised and edited the final manuscript. All authors read and approved the final manuscript.
Correspondence to Nicolai Bo Vanting.
Vanting, N.B., Ma, Z. & Jørgensen, B.N. Evaluation of neural networks for residential load forecasting and the impact of systematic feature identification. Energy Inform 5 (Suppl 4), 63 (2022). https://doi.org/10.1186/s42162-022-00224-5
Short-term load forecasting
Residential electricity consumption
Artificial neural network
Feature identification
Feature selection
Journal of Arid Land 2021, Vol. 13 Issue (8): 814-834 DOI: 10.1007/s40333-021-0079-0
Spatial-temporal variations of ecological vulnerability in the Tarim River Basin, Northwest China
BAI Jie1,2,3, LI Junli1,2,3,*( ), BAO Anmin1,2,3, CHANG Cun1,2,3
1 State Key Laboratory of Desert and Oasis Ecology, Xinjiang Institute of Ecology and Geography, Chinese Academy of Sciences, Urumqi 830011, China
2 Key Laboratory of GIS & RS Application Xinjiang Uygur Autonomous Region, Urumqi 830011, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
As the largest inland river basin of China, the Tarim River Basin (TRB), known for its various natural resources and fragile environment, has an increased risk of ecological crisis due to the intensive exploitation and utilization of water and land resources. Since the Ecological Water Diversion Project (EWDP), which was implemented in 2001 to save endangered desert vegetation, there has been growing evidence of ecological improvement in local regions, but few studies have performed a comprehensive ecological vulnerability assessment of the whole TRB. This study established an evaluation framework integrating the analytic hierarchy process (AHP) and entropy method to estimate the ecological vulnerability of the TRB covering climatic, ecological, and socioeconomic indicators during 2000-2017. Based on the geographical detector model, the importance of ten driving factors on the spatial-temporal variations of ecological vulnerability was explored. The results showed that the ecosystem of the TRB was fragile, with more than half of the area (57.27%) dominated by very heavy and heavy grades of ecological vulnerability, and 28.40% of the area had potential and light grades of ecological vulnerability. The light grade of ecological vulnerability was distributed in the northern regions (Aksu River and Weigan River catchments) and western regions (Kashgar River and Yarkant River catchments), while the heavy grade was located in the southern regions (Kunlun Mountains and Qarqan River catchments) and the Mainstream catchment. The ecosystems in the western and northern regions were less vulnerable than those in the southern and eastern regions. From 2000 to 2017, the overall improvement in ecological vulnerability in the whole TRB showed that the areas with great ecological improvement increased by 46.11%, while the areas with ecological degradation decreased by 9.64%. The vegetation cover and potential evapotranspiration (PET) were the obvious driving factors, explaining 57.56% and 21.55% of the changes in ecological vulnerability across the TRB, respectively. In terms of ecological vulnerability grade changes, obvious spatial differences were observed in the upper, middle, and lower reaches of the TRB due to the different vegetation and hydrothermal conditions. The alpine source region of the TRB showed obvious ecological improvement due to increased precipitation and temperature, but the alpine meadow of the Kaidu River catchment in the Middle Tianshan Mountains experienced degradation associated with overgrazing and local drought. The improved agricultural management technologies had positive effects on farmland ecological improvement, while the desert vegetation in oasis-desert ecotones showed a decreasing trend as a result of cropland reclamation and intensive drought. The desert riparian vegetation in the lower reaches of the Tarim River was greatly improved due to the implementation of the EWDP, which has been active for tens of years. These results provide comprehensive knowledge about ecological processes and mechanisms in the whole TRB and help to develop environmental restoration measures based on different ecological vulnerability grades in each sub-catchment.
Key words: ecological vulnerability ecological improvement ecological degradation AHP-entropy method climate change human activities Tarim River Basin
Received: 26 January 2021 Published: 10 August 2021
Corresponding Authors: LI Junli E-mail: [email protected]
BAI Jie, LI Junli, BAO Anmin, CHANG Cun. Spatial-temporal variations of ecological vulnerability in the Tarim River Basin, Northwest China. Journal of Arid Land, 2021, 13(8): 814-834.
http://jal.xjegi.com/10.1007/s40333-021-0079-0 OR http://jal.xjegi.com/Y2021/V13/I8/814
Fig. 1 Overview of the Tarim River Basin (TRB) as well as the area percentage of land use/land cover (LULC)
VIF values of the ten selected indicators in 2000: 1.3997, 1.3097, 1.2423, 1.1830, 1.2224, 1.4130, 1.3325, 1.3839, 1.2572, and 1.3570
Table 1 Variance inflation factors for the selected ecological vulnerability indicators
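Variance inflation factors of the kind reported in Table 1 are commonly obtained with statsmodels; the sketch below assumes the selected indicators are held in a pandas DataFrame and illustrates the general procedure rather than the exact computation performed in this study.

import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

# X: DataFrame whose columns are the selected indicators (placeholder names).
def vif_table(X: pd.DataFrame) -> pd.Series:
    Xc = add_constant(X)  # include an intercept so the VIFs are not artificially inflated
    vifs = {col: variance_inflation_factor(Xc.values, i)
            for i, col in enumerate(Xc.columns) if col != "const"}
    return pd.Series(vifs, name="VIF")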
Table 2 Weights of the selected ecological vulnerability indicators
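The entropy part of the AHP-entropy weighting summarized in Table 2 can be sketched as follows for a matrix of evaluation units by indicators; the min-max normalization, the small offset that avoids log(0), and the multiplicative combination with the AHP weights noted in the comment are common conventions assumed here, not necessarily the exact implementation of this study.

import numpy as np

def entropy_weights(X):
    # X: (units, indicators) matrix of positively oriented indicator values.
    X = np.asarray(X, float)
    P = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0)) + 1e-12
    P = P / P.sum(axis=0)
    m = X.shape[0]
    e = -(P * np.log(P)).sum(axis=0) / np.log(m)  # information entropy of each indicator
    d = 1.0 - e                                   # degree of divergence
    return d / d.sum()                            # entropy weights summing to 1

# One common way to combine with AHP weights:
# w = w_ahp * entropy_weights(X); w = w / w.sum()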
Fig. 2 Spatial distribution of ecological vulnerability grades in 2000 (a), 2010 (b), and 2017 (c), and the area proportion of ecological vulnerability grades distributed in 2000, 2010, and 2017 (d)
Fig. 3 Statistics of ecological vulnerability values at each catchment of the TRB in 2000 (a) and 2017 (b). Note that the pie charts show the proportion of different ecological vulnerability grades, and the ecological vulnerability values at each catchment represent the mean ecological vulnerability value of that catchment.
Fig. 4 Statistics of ecological vulnerability values in 2000, 2010, and 2017 based on LULC (a), elevation (b), and slope (c). Note that the black circles are outliers on each box; the central black line is the median; and the edges of the box are the upper and lower quartiles.
Fig. 5 Spatial-temporal distributions in the changes of ecological vulnerability in the TRB during 2000-2010 (a), 2010-2017 (b), and 2000-2017 (c), as well as the proportion of changed values for ecological vulnerability during 2000-2010, 2010-2017, and 2000-2017 (d).
Fig. 6 Spatial-temporal distributions in the changes of ecological vulnerability grade in the TRB during 2000-2010 (a), 2010-2017 (b), and 2000-2017 (c), as well as the proportion of changed grades for ecological vulnerability during 2000-2010, 2010-2017, and 2000-2017 (d)
Contribution (q-value) of each driving factor by catchment. PRE, AFT, and PET are climate indicators; LD, SE, VC, and WD are ecological indicators; and FOOD, POP, and LS are socioeconomic indicators.

Catchment            PRE      AFT      PET      LD       SE       VC       WD       FOOD     POP      LS
Total basin          0.1144   0.0505   0.2155   0.0950   0.0170   0.5756   0.1816   0.0235   0.0414   0.0095
Kaidu River          0.0003   0.0013   0.0098   0.2096   0.0511   0.5706   0.2217   0.1021   0.0562   0.1117
Weigan River         0.0399   0.0167   0.0615   0.1249   0.0086   0.7859   0.0370   0.0006   0.0326   0.0732
Aksu River           0.0395   0.0395   0.0409   0.1277   0.0178   0.6075   0.0465   0.0227   0.0380   0.0440
Kashgar River        0.0083   0.0214   0.0668   0.1625   0.0100   0.7152   0.1022   0.0675   0.0506   0.0567
Yarkant River        0.0409   0.0370   0.0606   0.0271   0.0113   0.6218   0.0814   0.0281   0.0247   0.0339
Hotan River          0.0356   0.2401   0.3084   0.0758   0.0189   0.6560   0.1830   0.0520   0.0809   0.1028
Mainstream           0.0229   0.0277   0.1020   0.1427   0.0025   0.6026   0.1103   0.0111   0.0079   0.0018
Kunlun Mountains     0.1546   0.1354   0.3304   0.0397   0.0101   0.6975   0.2687   0.0002   0.0084   0.0087
Qarqan River         0.0004   0.0360   0.5191   0.1587   0.0020   0.2598   0.0892   -        -        -
Table 3 Contribution (q-values) of ten driving factors influencing ecological vulnerability in the TRB
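The q-values in Table 3 come from the factor detector of the geographical detector model, q = 1 − Σ_h N_h σ_h² / (N σ²), where h indexes the strata of a driving factor. A minimal sketch is given below; it assumes each driving factor has already been discretized into strata, which is a preprocessing step not shown here.

import numpy as np

def geodetector_q(y, strata):
    # y: ecological vulnerability values; strata: integer class labels of one driving factor.
    y, strata = np.asarray(y, float), np.asarray(strata)
    n, var_total = len(y), np.var(y)  # population variance over the whole study area
    within = sum(len(y[strata == h]) * np.var(y[strata == h]) for h in np.unique(strata))
    return 1.0 - within / (n * var_total)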
Fig. 7 Spatial distribution of the annual maximum MODIS enhanced vegetation index (EVI) trend from 2000 to 2017. (a), spatial distribution of the changed rates of EVI in the whole TRB; (b)-(d), statistical results for the change trends of EVI in grassland, cropland, and forestland in different catchments, respectively. Note that the positive value indicates increasing trend of vegetation growth, and negative value means decreasing trend of vegetation growth.
Fig. 8 Aggregated total area of grassland (a), cropland (b), and forestland (c) in different catchments in 2000, 2010, and 2017, as well as aggregated total area of land transformation in different catchments (d). F-C, transformation of forestland to cropland; G-C, transformation of grassland to cropland; C-F, transformation of cropland to forestland; C-G, transformation of cropland to grassland.
Fig. 9 Crop yield and nitrogenous fertilizer utilization (a) as well as water consumption for different uses (b) in the TRB from 2000 to 2017
Fig. 10 Spatial changes in above-freezing temperature (AFT; a), precipitation (b), potential evapotranspiration (PET; c) and water density (WD; d) in the TRB from 2000 to 2017
Climate indicators:
- Above-freezing temperature (AFT; unit: °C): $\text{AFT}=\frac{\sum_{i=1}^{n}T_{i}}{n}$, where $T_{i}$ is the daily temperature above zero (°C) and $n$ is the number of corresponding days.
- Precipitation (PRE; unit: mm): $\text{PRE}=\sum_{i=1}^{n}\text{PRE}_{i}$, where $\text{PRE}_{i}$ is the daily precipitation on the $i$th day (mm) and $n$ is the number of days ($n$=365).
- Potential evapotranspiration (PET; unit: mm): $\text{PET}=\frac{0.408\Delta(R_{n}-G)+r\frac{900}{T+273}U_{2}(e_{s}-e_{a})}{\Delta+r(1+0.34U_{2})}$, where $R_{n}$ is the net radiation (MJ/(m2·d)); $G$ is the soil heat flux (MJ/(m2·d)); $r$ is the psychrometric constant (kPa/°C); $T$ is the temperature (°C); $U_{2}$ is the wind speed (m/s); $e_{s}$ is the saturation vapour pressure (kPa); $e_{a}$ is the actual vapour pressure (kPa); and $\Delta$ is the slope of the vapour pressure curve (kPa/°C).

Ecological indicators:
- Vegetation cover (VC): $\text{VC}=\frac{\text{EVI}-\text{EVI}_{0}}{\text{EVI}_{\text{s}}-\text{EVI}_{0}}$, where $\text{EVI}_{\text{s}}$ is the EVI value at a highly dense vegetation fraction and $\text{EVI}_{0}$ is the value for bare soil.
- Soil erosion (SE; unit: t/hm2): $\text{SE}=R\times K\times LS\times C\times P$, where $R$ is the rainfall erosivity factor (t/hm2); $K$ is the soil erodibility factor; $LS$ is the topographic factor; $C$ is the vegetation cover factor; and $P$ is the erosion control practices factor.
- Landscape diversity (LD): $\text{LD}=-\sum_{i=1}^{m}(P_{i}\ln P_{i})$, where $P_{i}$ is the proportion of the area of patch type $i$ in the landscape (%) and $m$ is the number of patch types in the landscape.
- Water density (WD; unit: mm): $\text{WD}=(L_{r}+S_{l}+Q_{w}\times W_{Q})/3$, where $L_{r}$ is the length of river (m); $S_{l}$ is the area of water body (m2); $Q_{w}$ is the available water resource (m3); and $W_{Q}$ is the weight of the available water resource.

Socioeconomic indicators:
- Population density (POP; unit: person/hm2): $\text{POP}=\frac{\text{POP}_{\text{total}}}{A}$, where $\text{POP}_{\text{total}}$ is the urban and rural population (person) and $A$ is the area of the county (hm2).
- Food supply (FOOD; unit: calorie/hm2): $\text{FOOD}=\frac{\sum_{i=1}^{n}(100\times M_{i}\times EP_{i}\times E_{i})}{A}$, where $i$ is the production category numbered from 1 to $n$; $M_{i}$ is the product yield of category $i$ (t); $EP_{i}$ is the edible percentage of the product by category (%); $E_{i}$ is the calories per 100 g of the product (calorie/g); and $A$ is the area of the county (hm2).
- Livestock density (LS; unit: head/hm2): $\text{LS}=\frac{\sum_{i=1}^{n}(\text{LS}_{i}\times L_{i})}{A}$, where $i$ is the livestock type numbered from 1 to $n$; $\text{LS}_{i}$ is the livestock of type $i$ (head); $L_{i}$ is the conversion ratio to standard sheep; and $A$ is the area of the county (hm2).
Table S1 Indicator system of ecological vulnerability in Tarim River Basin
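Two of the simpler indicators in Table S1 can be reproduced directly from their formulas; the sketch below implements vegetation cover (VC) from EVI and landscape diversity (LD) as the Shannon index. The bare-soil and dense-vegetation EVI reference values are treated as user-supplied inputs rather than the calibrated values used in the study.

import numpy as np

def vegetation_cover(evi, evi_soil, evi_dense):
    # VC = (EVI - EVI0) / (EVIs - EVI0), clipped to the physical range [0, 1].
    return np.clip((evi - evi_soil) / (evi_dense - evi_soil), 0.0, 1.0)

def landscape_diversity(patch_areas):
    # LD = -sum(p_i * ln p_i) over the area proportions of the patch types.
    p = np.asarray(patch_areas, float)
    p = p / p.sum()
    p = p[p > 0]  # absent patch types contribute nothing
    return float(-(p * np.log(p)).sum())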
International Journal of Advanced Structural Engineering
Effect of void former shapes on one-way flexural behaviour of biaxial hollow slabs
R. Sagadevan
B. N. Rao
The two-way hollow slab system (biaxial voided slab) is an innovative slab system that is being adopted all over the world as an alternative to the conventional solid slab. It reduces the self-weight by up to 50% in comparison with solid slabs without significant change in structural performance. The voided slab contains void formers in shapes such as sphere, donut, and cuboid. Experimental and analytical investigations were carried out to study the behaviour of the biaxial voided slab under one-way flexure. Voided slab specimens were prepared and tested with two different void shapes, sphere and cuboid, manufactured from recycled polypropylene. Comparison of the experimental and analytical studies showed that the ultimate load-carrying capacity of the voided slabs was higher than or similar to that of the solid slab. The analytical study was carried out using yield line analysis in conjunction with Indian Standards, and it was found that the capacity of the voided slab can be estimated by yield line analysis. At the yield stage, the flexural stiffness of the voided specimens is approximately 50% lower than that of a solid slab of identical dimensions and reinforcement. The reduction in flexural stiffness is mainly due to the presence of the void formers, and the maximum void ratio at a section governs the flexural stiffness of the voided slab. Nevertheless, the deflection of both specimens remains within the serviceability limit up to 75% of the ultimate load. Overall, it is found that the behaviour of voided slabs under one-way flexure can be predicted by the provisions of Indian Standards with the necessary correction for the loss of cross-section caused by the voids.
Biaxial voided slab Sphere-shaped void Cuboid-shaped void One-way flexure Yield line analysis
A biaxial voided slab is a reinforced concrete slab with void formers in shapes such as sphere, cuboid, and donut (BubbleDeck Technology 2008; Chung et al. 2010; Kim et al. 2011; Daliform Group 2014) placed in the middle of the slab, between the top and bottom reinforcing mesh. It reduces the self-weight of the slab by up to 50% in comparison with a conventional solid slab without any significant change in its structural performance (Björnson 2003; Harding 2004). For example, a biaxial voided slab reduces self-weight by 44% in comparison with a solid slab of the same flexural capacity (BubbleDeck Technology 2008). This system renders an overall economical and efficient floor system in construction; it is also eco-friendly, as the void formers are made of recycled plastic.
Experimental, analytical, and numerical studies have been conducted to evaluate the one-way flexural capacity of biaxial voided slabs with different shapes of void formers. These studies provided evidence that biaxial voided slabs have slightly lower stiffness and similar strength compared to solid slabs (BubbleDeck Technology 2008; Kim 2011; Ibrahim et al. 2013; Valivonis et al. 2014). The slab with donut-shaped voids showed almost the same flexural capacity as the solid slab, and the material properties and strength of the donut-shaped void strongly affect the flexural strength of the voided slab (Kim et al. 2011). The flexural stiffness of the spherical-shaped voided slab is 80–90% of that of the solid slab; however, the voided slab showed the same flexural strength as the solid slab (Midkiff 2013).
The present study focusses on the effect of void shapes on one-way flexural capacity of biaxial voided slabs. The chosen void shapes for this investigation were sphere and cuboid which were manufactured using recycled polypropylene. The four-point bending test was conducted to study the effect of void and its shape on one-way flexural capacity of the slab and then, the obtained experimental results were compared with results obtained by yield line analysis (YLA) in conjunction with provisions given in Indian Standard 456 (IS 456 2000). Furthermore, the behaviour of voided slab was compared with that of the solid slab of the same cross-sectional dimension and reinforcement ratio.
Configuration of voided slab specimens
The one-way flexural test helps to investigate the application of voided slab as an alternate to conventional solid slab. The structural behaviour of slab systems was studied in terms of load versus deflection behaviour, crack pattern, load-carrying capacity, flexural stiffness, deflection profile, load versus strain behaviour of bottom reinforcement and concrete surface along the depth of slab, and displacement–ductility ratio.
Details of void formers
Voids were created in the slab specimens using sphere- and cuboid-shaped void formers (Fig. 1). The sphere void former is a spherical hollow plastic ball of diameter 180 mm and wall thickness 3 mm, manufactured specially for this study. It is held in position by the top and bottom reinforcement mesh with a 25 mm clear cover at the bottom and is placed such that the centre-to-centre spacing of the void formers is 210 mm. The cuboid-shaped void former (U-Boot Beton®), which is commercially available in India, was used in this study; this type of void former has no sharp edges. The average plan dimension of the void former is 475 mm × 475 mm. Elevator feet of height 50 mm are provided at the four bottom corners to place the void former at the centre of the slab. The cuboid-shaped void formers were placed such that the centre-to-centre spacing is 600 mm, and their depth and clear cover (at top and bottom) were 160 mm and 50 mm, respectively.
Single unit sphere and cuboid void former
Details of test specimens
Two types of voided slab specimens, one with sphere-shaped voids and another with cuboid-shaped voids, were cast and tested. The dimensions of the test specimens were 3300 mm × 1500 mm × 260 mm. The flexural behaviour of the slab is largely influenced by the tensile reinforcement provided in the longitudinal and transverse directions (Matešan et al. 2012). To ensure that flexural failure dominates over shear failure, minimum reinforcement was provided as a mesh in the longitudinal and transverse directions. Fe 500D grade steel conforming to IS 1786 (2008) and M 20 grade concrete conforming to IS 456 (2000) were used. Figure 2 shows the detailed specifications of the test specimens, such as plan dimensions, cross-section, reinforcement details, and position of the void formers.
a Details of specimen with sphere-shaped void former. b Details of specimen with cuboid-shaped void former
Material properties of test specimens
Ready-mix concrete from the same batch (or mix) was used to cast the test specimens. The characteristic compressive strength (fck) specified for a 150 mm cube at 28 days is 25 N/mm2, which corresponds to the mix proportion specified in Table 1. Six concrete cube specimens of size 150 mm were cast and cured under exposure conditions similar to those of the slab specimens. The compressive strength tests of the cube specimens were performed along with the testing of the slab specimens, and the observed average strengths are given in Table 1. Tensile tests of the reinforcements were conducted and the observed properties are summarised in Table 2.
Mix proportion of concrete and cube test result
Weight ratio (kg/m3)
Strength (N/mm2)
With sphere void
With cuboid void
Mechanical properties of reinforcement
Diameter of reinforcement (mm)
Ductility ratio
Experimental test setup and instrumentation
Test setup
A four-point bending test was conducted to study the one-way flexural behaviour of the voided slab. Figure 3a, b shows the schematic and actual test setup, respectively. The load was applied through a steel plate of size 1500 mm × 80 mm × 16 mm as a patch load to avoid localised premature shear failure (Fig. 4a). Two 500 kN capacity pseudo-dynamic hydraulic actuators were used to apply the loads. The slab specimens were simply supported on a hinge at one end and a roller at the other end, each formed by a line-type reaction hinge of length 1500 mm located 150 mm from the specimen edge along the short-span direction.
a Schematic diagram of experimental test setup (four-point bending test). b Photograph of experimental test setup (four-point bending test)
Instrumentation of the test specimen
Applied loads, deflections, and strain in reinforcements and concrete surface were measured through appropriate instruments. Load cells with the capacity of 1000 kN were used to measure the applied loads. Three linear variable differential transformers (LVDTs) with measurement range of ± 100 mm were used to measure the deflections at mid-span and under point of application of loads. The concrete surface strain along the depth of slab was measured at front face of the slab in elevation using three LVDTs with measurement range of ± 20 mm. Figure 4b, c shows the schematic arrangement of LVDTs. Strain in the bottom reinforcements located at the centre of slab specimens was measured by strain gauges with 10 mm gauge length. Strain gauges were provided in longitudinal and transverse directions of bottom reinforcements as shown in Fig. 4b. A data acquisition system was used to obtain real-time experimental data which had the facility to record the load, deflection, and strain simultaneously.
Testing procedure
Displacement-controlled monotonic tests were performed with the two pseudo-dynamic hydraulic actuators. Equal load distribution between the actuators was ensured by synchronising them and operating with a single master control system. The rate of loading was 0.05 mm/s. To ensure the safety of the measuring and loading devices, the tests were terminated when the load dropped suddenly.
Analytical study
Estimation of load-carrying capacity of slab specimens
The yield line method can be used to estimate the ultimate load-carrying capacity of slab specimens under one-way flexure. It has great potential to predict failure load of reinforced concrete slabs based on the inelastic approach (Darwin et al. 2002; Pillai and Menon 2012). Hence, the yield line method was used to estimate the ultimate load-carrying capacity of test specimens.
The specimens were tested under four-point bending. Consequently, the yield line may form anywhere in between load positions or at load positions. The ultimate load-carrying capacity of specimen does not change with the location of yield line. Therefore, in this study, yield line was assumed to be formed at mid-span under one-way flexural action along the transverse direction. It results in dividing the slabs into two equal parts (Fig. 5).
Yield line pattern in a one-way simply supported slab
As per the principle of conservation of energy, external work done (WE) and internal work done (WI) should be equal and were given by:
$$W_{\text{E}} = W_{\text{I}} ,$$
$$\frac{2}{3}P_{\text{u}} \delta_{\text{u}} = \frac{{4m\delta_{\text{u}} }}{{l_{\text{e}} }},$$
where δu is the deflection at the centre of the slab under the ultimate load (Pu), m is the moment of resistance of the slab for width b and le is the effective length.
The ultimate load-carrying capacity (Pu) of the slab specimen was calculated by Eq. (2) and given by:
$$P_{\text{u}} = \frac{6m}{{l_{\text{e}} }}.$$
Similarly, for the self-weight of slab which is uniformly distributed over the span, the relation between self-weight (WDL) and in-plane moment (mDL) can be derived as:
$$W_{\text{DL}} = \frac{{8m_{\text{DL}} }}{{l_{\text{e}} }}.$$
Based on the stress–strain relationship of concrete given in IS 456, linear strain variation along the depth of slab, slab specimen dimensions (Fig. 2) and materials' properties (Tables 1, 2), the ultimate load-carrying capacity (Pu) of solid slab is estimated and given in Table 3.
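A minimal sketch of this yield-line estimate is given below; the numerical inputs are illustrative assumptions, not the specimen values, and the moment capacity m must be computed separately from the section and material properties per IS 456:

```python
# Yield-line estimate of ultimate load for the simply supported one-way slab.

def ultimate_point_load(m, l_e):
    """Total two-point load P_u from the virtual-work balance (2/3)*P_u*delta = 4*m*delta/l_e."""
    return 6.0 * m / l_e

def self_weight_capacity(m_dl, l_e):
    """Uniformly distributed self-weight W_DL balanced by the moment m_DL: W_DL = 8*m_DL/l_e."""
    return 8.0 * m_dl / l_e

if __name__ == "__main__":
    m = 150.0e6    # N*mm, assumed moment of resistance for width b (placeholder value)
    l_e = 3000.0   # mm, assumed effective span (placeholder value)
    P_u = ultimate_point_load(m, l_e)
    print(f"Estimated ultimate load P_u = {P_u / 1e3:.1f} kN")
```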
Table 3 Results based on experimental and theoretical studies — columns: Sl. no., Py (kN), δy (mm), Pu (kN), δu (mm), Ky (kN/mm), µ; rows: specimen with sphere-shaped void and specimen with cuboid-shaped void, each reported as void (exp.), void (theo.) and solid (theo.)
Py and Pu are the loads corresponding to yielding and ultimate stages, respectively; δy and δu are deflection at mid-span corresponding to yield and ultimate loads, respectively; Ky is secant stiffness corresponding to yield load; and µ is displacement–ductility ratio
Flexural stiffness
Flexural stiffness is defined as the ratio of load to the corresponding deflection. In this study, the secant stiffness of the voided slab specimens at yield load was calculated (Eq. 5) and compared with that of the solid slab.
$$K_{\text{y}} = \frac{{P_{\text{y}} }}{{\delta_{\text{y}} }}$$
The deflection at the centre of slab (δc) under two-point load of intensity P/2 each (Fig. 5) can be calculated using Eq. (6a).
$$\delta_{\text{c}} \approx \frac{{Pl_{\text{e}}^{3} }}{56EI},$$
where le is the effective length, E is the modulus of elasticity of material and I is the moment of inertia of a section.
Similarly, for the self-weight of slab which is uniformly distributed over the span, the deflection at the centre of slab (δc,DL) can be calculated using Eq. (6b).
$$\delta_{\text{c,DL}} = \frac{{5W_{\text{DL}} l_{\text{e}}^{3} }}{384EI}.$$
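The two deflection formulas can be scripted directly; a minimal sketch with illustrative argument names (not tied to the specimen values):

```python
def midspan_deflection_point_loads(P, l_e, E, I):
    """Two point loads of P/2 each at the third points: delta ~ P*l_e**3 / (56*E*I) (Eq. 6a)."""
    return P * l_e**3 / (56.0 * E * I)

def midspan_deflection_self_weight(W_dl, l_e, E, I):
    """Uniformly distributed self-weight W_DL: delta = 5*W_DL*l_e**3 / (384*E*I) (Eq. 6b)."""
    return 5.0 * W_dl * l_e**3 / (384.0 * E * I)
```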
As per IS 456, the short-term deflection was calculated using the short-term modulus of elasticity of concrete (Ec = 5000 √fck) and the effective moment of inertia (Ieff) (Eq. 7).
$$ I_{\text{eff}} = \frac{I_{\text{r}}}{1.2 - \frac{M_{\text{r}}}{M}\,\frac{z}{d}\left(1 - \frac{x}{d}\right)\frac{b_{\text{w}}}{b}}; \quad I_{\text{r}} \le I_{\text{eff}} \le I_{\text{gr}} $$
where Ir is the moment of inertia of cracked section, Mr is the cracking moment (Eq. 8), M is the maximum moment under service load, z is the lever arm distance, x is the depth of neutral axis, d is the effective depth, bw is the breadth of web and b is the breadth of compression face, and Igr is the moment of inertia of gross section about the centroidal axis ignoring reinforcement.
$$M_{\text{r}} = \frac{{f_{\text{cr}} I_{\text{gr}} }}{{y_{\text{t}} }},$$
where fcr (= 0.7 √fck) is the modulus of rupture of concrete, and yt is the distance from the centroidal axis of the gross section to the extreme fibre in tension, ignoring reinforcement.
The estimate of deflection based on IS 456 resulted in a larger value. Therefore, the cracking moment (Mr) may be reduced by approximately 30% (Pillai and Menon 2012) when estimating the deflection using Eqs. (6a, 6b). The effective moment of inertia of the voided slab is calculated based on the critical cross-section, which corresponds to the section located at the centre of a void as shown in Fig. 6a, b. The uncracked moment of inertia (Ig,V) was calculated using Eqs. (9) and (10) for the sphere-shaped voided slab and using Eqs. (9) and (11) for the cuboid-shaped voided slab, accounting for the loss of concrete due to the voids. The location of the centre of gravity from the base (Cy) was calculated for the sphere- and cuboid-shaped voids using Eqs. (12) and (13), respectively. Researchers have suggested that the cracked moment of inertia of a voided slab (Ir,V) may be taken as 90% of the cracked moment of inertia of the solid slab (Ir,Solid) (BubbleDeck Technology 2008; Midkiff 2013). However, the ratio of Ir,V to Ir,Solid should be determined based on the maximum void ratio at a section (α), as given in Eq. (14).
$$I_{\text{g,V}} = I_{\text{g,Solid}} - n\left( {I_{\text{V}} } \right),$$
$$I_{\text{V,S}} = \frac{{\pi d'^{4} }}{64} + \frac{{\pi d'^{2} }}{4}\left( {\frac{D}{2} - C_{\text{y,S}} } \right)^{2} ,$$
$$ I_{\text{V,C}} = \frac{h^{3}}{36}\left(\frac{a'^{2} + 4a'a'' + a''^{2}}{a' + a''}\right) + \frac{h\left(a' + a''\right)}{2}\left(\frac{D}{2} - C_{\text{y,C}}\right)^{2}, $$
$$C_{\text{y,S}} = c + \frac{d'}{2},$$
$$C_{\text{y,C}} = \left( {\frac{D - h}{2}} \right) + \frac{h}{3}\left( {\frac{{2a^{\prime} + a^{\prime\prime}}}{{a^{\prime} + a^{\prime\prime}}}} \right),$$
$$I_{\text{r,V}} = (1 - \alpha )I_{\text{r,Solid}} ,$$
where Ig,Solid is the uncracked moment of inertia of solid slab and n is the number of voids in a section. The theoretical results of mid-span deflection and moment of inertia for voided and solid slabs are summarised in Table 5.
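The inertia bookkeeping of Eqs. (7) and (9)–(14) can be scripted as below for the sphere-shaped void former; all numerical inputs are illustrative placeholders rather than the specimen values:

```python
import math

def gross_inertia_solid(b, D):
    """Uncracked moment of inertia of a solid rectangular section: b*D^3/12."""
    return b * D**3 / 12.0

def void_inertia_sphere(d_void, D, c):
    """Inertia deduction per sphere void about the section mid-depth, per Eqs. (10) and (12)."""
    C_y = c + d_void / 2.0                       # centroid of the void above the base
    area = math.pi * d_void**2 / 4.0
    return math.pi * d_void**4 / 64.0 + area * (D / 2.0 - C_y)**2

def effective_inertia_is456(I_r, I_gr, M_r, M, z, d, x, b_w, b):
    """Effective moment of inertia per IS 456 (Eq. 7), clamped to [I_r, I_gr]."""
    denom = 1.2 - (M_r / M) * (z / d) * (1.0 - x / d) * (b_w / b)
    return min(max(I_r / denom, I_r), I_gr)

# Illustrative (assumed) numbers only:
b, D = 1500.0, 260.0                             # mm
I_g_solid = gross_inertia_solid(b, D)
I_g_voided = I_g_solid - 7 * void_inertia_sphere(180.0, D, c=40.0)   # Eq. (9), n = 7 assumed
print(f"I_g,solid = {I_g_solid:.3e} mm^4, I_g,voided = {I_g_voided:.3e} mm^4")
```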
Voided slab sections used to calculate moment of inertia
Load deflection behaviour
The tested voided slab specimens showed typical flexural behaviour under one-way bending. Initially, specimens remained elastic until cracking followed by inelastic actions such as yielding of bottom reinforcements and ultimate failure by crushing of concrete at the top of the slab. Load versus mid-span deflection of specimens with the sphere and cuboid void showed ductile behaviour (Fig. 7).
Load versus mid-span deflection behaviour of test specimens
Crack pattern
Figures 8 and 9 show the observed crack pattern on the front elevation of slab specimens. The cracks were formed between loading positions along the width of slab.
Observed crack pattern of slab specimen with sphere-shaped void
Observed crack pattern of slab specimen with cuboid-shaped void
Load-carrying capacity
The load-carrying capacity of the voided slabs was similar to that of the solid slab. Loads and mid-span deflections corresponding to the yield and ultimate stages are summarised in Table 3. The ultimate loads of the test specimens are compared with the theoretically estimated ultimate load of the solid slab (Tables 3, 4). A self-weight correction based on the initial stiffness was applied to the load–deflection plot (Fig. 7) and the tabulated values (Table 3). The ultimate load-carrying capacity of the specimens with sphere- and cuboid-shaped voids was equal to that of the solid slab. The theoretical load-carrying capacities of the solid and voided slabs estimated using the yield line theory were the same, as the contribution of concrete below the neutral axis is ignored. Thus, the yield line theory is applicable to voided slabs in the same way as to conventional solid slabs.
Table 4 Comparison between experimental and theoretical studies — flexural stiffness at yielding (Ky,exp./Ky,theo. and Ky,theo./Ky,solid) and load at ultimate failure (Pu,exp./Pu,solid), for the specimens with sphere- and cuboid-shaped voids
Table 5 shows the theoretical estimates of the moment of inertia of the solid and voided slabs. The ratio of the theoretical effective moment of inertia of the voided slab to that of the solid slab at yield load is 0.52 for the sphere-shaped void and 0.53 for the cuboid-shaped void. These effective moments of inertia show a trend similar to that observed experimentally in terms of secant stiffness (Table 4). Hence, the loss in cross-sectional area caused by the voids should be considered when estimating the flexural stiffness of a voided slab. In this study, the loss of cross-sectional area was calculated to be 33% and 39% for the sphere- and cuboid-shaped voids, respectively.
Table 5 Theoretical estimate of moment of inertia — columns: uncracked Ig (mm4), cracked Ir (mm4), effective (at yield load) Ieff (mm4); recovered values include 18.85 × 10^8 and 0.734 × 10^8 mm4
Deflection
The load–deflection behaviour of the voided slab specimens is shown in Figs. 10 and 11. The deflections measured at the loading positions (LVDTs 1 and 3) match each other, showing that the deflection was equal at any instant of the applied load. The measured deflections along the longitudinal direction of the slab (LVDTs 1, 2 and 3) were compared at five loading stages: 0.20 Pu, 0.40 Pu, 0.60 Pu, 0.80 Pu and 1.0 Pu. Typical deflection profiles of the specimens with sphere and cuboid void shapes in the longitudinal direction are shown in Figs. 12 and 13 at different loading stages. More than 75% of the ultimate load lies within the serviceable deflection limit of le/250 as per IS 456, i.e., 12 mm.
Load versus deflection behaviour of specimen with sphere-shaped void
Load versus deflection behaviour of specimen with cuboid-shaped void
Deflection profile of specimen with sphere-shaped void
Deflection profile of specimen with cuboid-shaped void
Strain of bottom reinforcement
Usually, the material and section properties of a reinforced concrete member decide its behaviour. In this section, the strain in the bottom reinforcement at the centre of the slab in both directions is examined. The strain in the reinforcement along the transverse direction was zero, indicating one-way flexural behaviour throughout the loading and no influence on the load-carrying capacity (Figs. 14, 15). The presence of void formers did not affect the reinforcement behaviour of the biaxial voided slab.
Load versus strain of bottom reinforcement of specimen with sphere-shaped void
Load versus strain of bottom reinforcement of specimen with cuboid-shaped void
Strain of concrete surface along depth of slab
The concrete surface strain was measured along the depth of the slab at three locations, at the centre and at the bottom and top reinforcement levels, using LVDTs 4–6. These measurements were taken at the mid-span of the slab, where the influence of shear due to the applied external load is zero. The load versus concrete surface strain along the depth of the slab for the specimens with sphere- and cuboid-shaped voids showed that the bottom and top reinforcements were in tension, which was also evident from the theoretical calculation. Hence, the neutral axis of the voided slab lies in the cover concrete above the top reinforcement (Figs. 16, 17). In Fig. 16, the results of LVDT 6 are not presented, as this LVDT malfunctioned during the test.
Load versus concrete surface strain along depth of slab specimen with sphere void
Load versus concrete surface strain along depth of slab specimen with cuboid void
Displacement ductility ratio
Load–deflection behaviour of the voided slab specimens shows predominantly ductile and flexural response. The displacement ductility ratio (µ) of voided slab specimens was calculated using Eq. (15) and found to be 3.97 and 4.33 for specimens with sphere- and cuboid-shaped voids, respectively. The results are summarised in Table 3.
$$\mu = \frac{{\delta_{\text{u}} }}{{\delta_{\text{y}} }}$$
Structural behaviour of the voided slab specimens was studied considering parameters such as load versus deflection behaviour, crack pattern, load-carrying capacity, flexural stiffness, deflection profile, load versus strain behaviour of bottom reinforcement and concrete surface along the depth of slab, and displacement–ductility ratio. The applicability of existing IS 456 code provisions for design and/or analysis of biaxial voided slab is verified. The following observations are drawn based on experimental and analytical investigations of biaxial voided slab (with sphere- and cuboid-shaped voids):
The voided slabs show typical flexure behaviour similar to that of the solid slab. The cracks were observed in the region of pure bending and distributed along the longitudinal direction. The voided slab specimen exhibited a well-defined failure mechanism with the yield of bottom reinforcements and crushing of concrete at the top of slab surface.
The ultimate load-carrying capacity of specimens with sphere- and cuboid-shaped voids was equal to that of the solid slab. The theoretical load-carrying capacity of voided and solid slabs using the yield line theory was the same. Thus, the yield line theory can be adopted for estimation of the load-carrying capacity of voided slabs.
The effective moment of inertia at yield load of voided specimens with sphere- and cuboid-shaped voids was obtained as 52 and 53% of solid slab, respectively. It shows that the loss of cross-section due to voids should be considered for calculating flexural stiffness of voided slab.
The presence of void formers did not influence the reinforcement behaviour in longitudinal and transverse directions.
The concrete surface strain along depth of slab evidenced that the neutral axis of voided slab lies in the cover concrete to top reinforcement.
The one-way flexural behaviour of voided slabs is well predicted by the yield line theory and the provisions of IS 456, once the necessary corrections for the loss of cross-sectional area due to voids are accounted for.
This work was supported by Science and Engineering Research Board, Department of Science and Technology, India (SR/S3/MERC/0040/2012) and M/s Post Tension Services India Pvt. Ltd. (PTSI), Vadodara, Gujarat, India (WO/GEN/0001/16-17). The authors wish to acknowledge the assistance and facilities offered by Technical Staff, Structural Engineering Laboratory, IIT Madras.
Björnson G (2003) BubbleDeck—two-way hollow slab. www.bubbledeck-uk.com
BubbleDeck Technology (2008) BubbleDeck voided flat slab solutions—technical manual and documents. www.bubbledeck-uk.com
Chung JH, Park JH, Choi HK et al (2010) An analytical study on the impact of hollow shapes in bi-axial hollow slabs. In: FraMCoS-7. Korea Concrete Institute, pp 1729–1736
Daliform Group (2014) U-Boot Beton® system study: lightened concrete slab by using U-Boot Beton®. www.daliform.com
Darwin D, Dolan C, Nilson A (2002) Design of concrete structures, 15th edn. McGraw-Hill, New York
Harding P (2004) BubbleDeck™—advanced structure engineering. BubbleDeck Artic. pp 15–16. www.bubbledeck.com
Ibrahim AM, Ali NK, Salman WD (2013) Flexural capacities of reinforced concrete two-way bubbledeck slabs of plastic spherical voids. Diyala J Eng Sci 06:9–20
IS 1786 (2008) High strength deformed steel bars and wires for concrete reinforcement—specification. Bureau of Indian Standards, New Delhi
IS 456 (2000) Plain and reinforced concrete—code of practice. Bureau of Indian Standards, New Delhi
Kim SH (2011) Flexural behavior of void RC and PC slab with polystyrene forms. Key Eng Mater 452–453:61–64. https://doi.org/10.4028/www.scientific.net/KEM.452-453.61
Kim BH, Chung JH, Choi HK et al (2011) Flexural capacities of one-way hollow slab with donut type hollow sphere. Key Eng Mater 452–453:773–776. https://doi.org/10.4028/www.scientific.net/KEM.452-453.773
Matešan D, Radnić J, Grgić N, Čamber V (2012) Strength capacity of square reinforced concrete slabs. Mater Sci Eng 43:399–404. https://doi.org/10.1002/mawe.201200972
Midkiff CJ (2013) Plastic voided slab systems: applications and design. MS thesis, Kansas State University, Manhattan
Pillai SU, Menon D (2012) Reinforced concrete design, 3rd edn. Tata McGraw Hill, New Delhi
Valivonis J, Jonaitis B, Zavalis R et al (2014) Flexural capacity and stiffness of monolithic biaxial hollow slabs. J Civ Eng Manag 20:693–701. https://doi.org/10.3846/13923730.2014.917122
1. Structural Engineering Laboratory, Department of Civil Engineering, Indian Institute of Technology Madras, Chennai, India
Sagadevan, R. & Rao, B.N. Int J Adv Struct Eng (2019). https://doi.org/10.1007/s40091-019-0231-7
|
CommonCrawl
|
How is there energy transferred when net work done is zero? [duplicate]
Net work done on the body when we lift it and put it on the table is zero? (4 answers)
Consider a system on level ground (let this be the datum line). Only the gravitational force (force 1) and a vertically upward force (force 2) act on it, and there is no heat transfer. When the upward force (force 2) is greater than the gravitational force (force 1), there is a net upward force which causes the system to accelerate upwards. After a certain distance, force 1 and force 2 become equal and the system starts moving at a constant velocity. The net upward force multiplied by the distance travelled gives the work done on the system by the net force, which equals the increase in kinetic and potential energy of the system during the acceleration.
Now, when it is moving at constant velocity, there is no net force acting on the system, so the net work is zero. But the potential energy is still increasing while the kinetic energy remains constant. How is the energy of the system increasing without any work being done on it?
forces energy work potential-energy
GRANZER
The effect of (Newtonian) gravity can be included in a description by means of either
a potential energy $U_g$, or
a force $F_g$,
and you should not include them both at the same time when applying the principle of energy conservation.
That means that, at constant speed, you consider that either
the net force $F_{\mathrm{res}} = F+F_g$ is zero, and then its work, $W_{F_{\mathrm{res}}}$, is also zero and that's consistent with constant energy; or
there's a force $F=-F_g$ acting on the body, whose work is increasing the body's energy (that happens to be of potential, not of kinetic type): $\Delta E = W_F = \int F \mathrm{d}s = - \int F_g \mathrm{d}s = -W_{F_g} \equiv \Delta U_g$.
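A short numeric check of this accounting (the mass, height and g used here are arbitrary values) confirms that, at constant velocity, the net work is zero while the work of the applied force alone equals the potential-energy gain:

```python
# Constant-velocity lift: F = m*g upward, F_g = -m*g downward, so the net work is zero,
# while the work of F alone accounts for the potential-energy gain.
m, g, h = 2.0, 9.81, 3.0          # kg, m/s^2, m (arbitrary values)

W_F   = m * g * h                 # work done by the applied upward force
W_Fg  = -m * g * h                # work done by gravity
W_net = W_F + W_Fg                # zero -> kinetic energy unchanged
dU    = m * g * h                 # change in gravitational potential energy

assert abs(W_net) < 1e-12
assert abs(W_F - dU) < 1e-12
print(W_F, W_Fg, W_net, dU)       # 58.86 -58.86 0.0 58.86
```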
stafusa
$\begingroup$ Thank you @stafusa. So in the energy equation, if we consider the work done by the upward force and the gravitational force, then there is no potential energy to be considered. This helped me get a better understanding of gradients of scalar potentials and of potential energy as a whole, that is, the work done by the potential forces. $\endgroup$ – GRANZER Dec 19 '17 at 14:07
|
CommonCrawl
|
A bibliometric analysis on tobacco regulation investigators
Dingcheng Li1,
Janet Okamoto2,
Hongfang Liu1 &
Scott Leischow2
BioData Mining volume 8, Article number: 11 (2015)
To facilitate the implementation of the Family Smoking Prevention and Tobacco Control Act of 2009, the Food and Drug Administration (FDA) Center for Tobacco Products (CTP) has identified research priorities under the umbrella of tobacco regulatory science (TRS). As a newly integrated field, the current boundaries and landscape of TRS research are in need of definition. In this work, we conducted a bibliometric study of TRS research by applying author topic modeling (ATM) to MEDLINE citations published by currently funded TRS principal investigators (PIs).
We compared topics generated with ATM on the dataset collected from TRS PIs with topics generated with ATM on the dataset collected using a TRS keyword list. All of these topics show good alignment with the FDA's funding protocols. More interestingly, we can see clear interactive relationships among PIs and between PIs and topics. Based on these interactions, we can discover how diverse each PI is, how productive they are, which topics are more popular and what main components each topic involves. Temporal trend analysis of key words shows significant evolution in four prime TRS areas.
The results show that ATM can efficiently group articles into discriminative categories without any supervision. This indicates that ATM could be incorporated into author identification systems to infer the identity of an author of an article using the topics generated by the model. It can also be useful to grantees and funding administrators in suggesting potential collaborators or identifying those who share common research interests for data harmonization or other purposes. The incorporation of temporal analysis can be used to assess how TRS changes over time as new projects are funded, and the extent to which new research reflects the funding priorities of the FDA.
To facilitate the implementation of the Family Smoking Prevention and Tobacco Control Act (FSPTCA) of 2009, the Food and Drug Administration (FDA) Center for Tobacco Products (CTP) was formed to oversee tobacco regulatory activities. Its responsibilities include setting performance standards, reviewing premarket applications for new and modified-risk tobacco products, requiring new warning labels, and establishing and enforcing advertising and promotion restrictions. In order to meet these responsibilities, the CTP has identified research priorities for tobacco regulatory science (TRS) to inform and guide the CTP's regulatory decision-making. While tobacco researchers have been examining some of the CTP's TRS research priorities for many years, they have not necessarily been doing so under the umbrella or specific title of 'tobacco regulatory science'. Therefore, examining and identifying research topics from the corpus of TRS work could help to more clearly define this growing research area. In this paper, we applied author topic modeling (ATM) [1], a variation of Latent Dirichlet Allocation (LDA) [2], to simultaneously model the content of documents and the interests of authors. Namely, given the broader TRS research field, we attempted to discover topics as well as general research interests utilizing MEDLINE citations for currently funded TRS investigators.
LDA is known for its ability to model document contents as a mixture of topics (which comprise words describing similar things). This results in improvements in the study of the hidden semantics of documents compared with previous models such as Latent Semantic Indexing (LSI) [3], probabilistic LSI [4], vector semantics [5] and so on. Modeling the interests of authors is in fact not new in bibliometric research. As early as 1999, McCallum proposed a mixture author model with the mixture weights for different topics fixed [6]. Then, in 2004, Rosen-Zvi proposed author topic modeling [1], an integration of LDA and the author model. It aims at extracting information about authors and topics from large text collections simultaneously. Since then, author topic modeling has been widely used in applications such as bibliometric analysis [7], information extraction [8], social network analysis [9], named entity recognition [10] and MeSH indexing interpretation [11].
However, modeling author-topic-word relations in TRS has not been attempted. Given the large increase in tobacco-related research since the FDA gained regulatory authority over tobacco, author topic modeling can help the field better understand the nature and scope of research already underway, and serve as a means of fostering the interdisciplinary science needed to inform tobacco policy [12]. Moreover, our work aims to fill this gap in order to extend author topic models to the analysis of medical corpora.
Author topic modeling (ATM)
ATM aims at extracting information about authors and topics from a large text collection simultaneously. It is a class of Bayesian graphical models for text document collections represented as bags of words. In standard LDA, each document in a collection of D documents is modeled as a multinomial distribution over T topics, where each topic is a multinomial distribution over W words, and both sets of multinomials are sampled from Dirichlet distributions.
Unlike LDA, ATM incorporates authors by adding one more variable, which is sampled uniformly from the observed set of authors of the document. As in LDA, a topic is then chosen from the distribution over topics specific to that author, and the word is generated from the chosen topic.
To learn the model parameters, we use Gibbs sampling where the equation for author topic modeling is,
$$ P\left(z_{id}=t, y_{id}=a \mid x_{id}=w, \mathbf{z}^{\neg id}, \mathbf{y}^{\neg id}, \mathbf{A}, \alpha, \beta\right) \propto \frac{N_{wt,\neg id}^{WT}+\beta}{\sum_{w'} N_{w't,\neg id}^{WT}+W\beta} \cdot \frac{N_{ta,\neg id}^{TA}+\alpha}{\sum_{t'} N_{t'a,\neg id}^{TA}+T\alpha} $$
where α and β are Dirichlet priors for the topic distributions, z_id = t and y_id = a are the assignments of the ith word in document d to topic t and author a, respectively, and x_id = w indicates that the currently observed word is word w. N^TA is the topic–author count matrix, where N^TA_(ta,¬id) is the number of words assigned to topic t for author a, excluding the topic assignment to word w_id. Similarly, N^WT is the word–topic count matrix, where N^WT_(wt,¬id) is the number of words from the wth entry in the vocabulary assigned to topic t, excluding the topic assignment to word w_id. Finally, z^¬id and y^¬id represent the vectors of topic assignments and author assignments in the whole corpus, excluding the ith word of the dth document.
Following the same convention, the posterior estimates of θ_ta, the topic distribution of each author, and ϕ_wt, the word distribution of each topic, can be obtained with the following equations, where D refers to the corpus.
$$ \theta_{ta} = p\left(t \mid a\right) = E\left[\theta_{ta} \mid \mathbf{z}, D, \alpha\right] = \frac{N_{ta}^{TA}+\alpha}{\sum_{t'} N_{t'a}^{TA}+T\alpha}, \qquad \phi_{wt} = p\left(w \mid t\right) = E\left[\phi_{wt} \mid \mathbf{z}, D, \beta\right] = \frac{N_{wt}^{WT}+\beta}{\sum_{w'} N_{w't}^{WT}+W\beta} $$
This model can be understood as a two-stage stochastic process. An author is represented by a probability distribution over topics, and each topic is represented as a probability distribution over words.
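A compact, deliberately unoptimised sketch of the collapsed Gibbs updates above is given below; the corpus format, default hyperparameters and fixed iteration count are simplifying assumptions rather than the settings used in the paper:

```python
import numpy as np

def atm_gibbs(docs, doc_authors, W, A, T, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Collapsed Gibbs sampling for the author-topic model.

    docs: list of documents, each a list of word ids in [0, W).
    doc_authors: list of author-id lists, one list per document.
    """
    rng = np.random.default_rng(seed)
    N_wt = np.full((W, T), beta)        # word-topic counts, smoothed by beta
    N_ta = np.full((T, A), alpha)       # topic-author counts, smoothed by alpha
    z, y = [], []
    for d, words in enumerate(docs):    # random initialisation
        zd = rng.integers(T, size=len(words))
        yd = rng.choice(doc_authors[d], size=len(words))
        for w, t, a in zip(words, zd, yd):
            N_wt[w, t] += 1
            N_ta[t, a] += 1
        z.append(zd)
        y.append(yd)
    for _ in range(iters):
        for d, words in enumerate(docs):
            authors = np.asarray(doc_authors[d])
            for i, w in enumerate(words):
                t, a = z[d][i], y[d][i]
                N_wt[w, t] -= 1                 # remove the current assignment
                N_ta[t, a] -= 1
                # joint conditional over (topic, author) for this token
                p_wt = N_wt[w] / N_wt.sum(axis=0)                        # shape (T,)
                p_ta = N_ta[:, authors] / N_ta[:, authors].sum(axis=0)   # shape (T, |A_d|)
                p = (p_wt[:, None] * p_ta).ravel()
                p /= p.sum()
                idx = rng.choice(p.size, p=p)
                t, ai = divmod(idx, len(authors))
                a = authors[ai]
                N_wt[w, t] += 1
                N_ta[t, a] += 1
                z[d][i], y[d][i] = t, a
    phi = N_wt / N_wt.sum(axis=0)       # p(word | topic)
    theta = N_ta / N_ta.sum(axis=0)     # p(topic | author)
    return phi, theta
```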
Data gathering and preprocessing
In order to obtain a comprehensive collection of all tobacco-related research, we collected publications from two sources. The first source is a list of 300 tobacco-related keywords developed from the FDA CTP's key research priority and interest areas as outlined in the various TRS Funding Opportunity Announcements (FOAs) released in partnership with NIH since the passage of the FSPTCA in 2009. The final search term list was reviewed and refined by FDA CTP and NIH ODP staff and by bibliometric and tobacco research experts. The second source is the publications of the 131 principal investigators of TRS grants funded by the CTP through the NIH's Tobacco Regulatory Science Research Program (TRSP) (http://prevention.nih.gov/tobacco/portfolio.aspx). Among the TRS PIs are 65 investigators who are part of the Tobacco Centers of Regulatory Science (TCORS), a large 14-center initiative that serves as the flagship of the TRSP. Since each article can have multiple authors, the author set considered in this work includes the PIs plus the last author of each paper. The final author set includes 2,740 authors. The document set includes those MEDLINE citations with abstracts available, resulting in 167,196 and 8,800 abstracts, respectively. We refer to the first dataset, pulled using TRS keywords, as the KWSet, and to the second dataset, based on publications from TRS grantees, as the TRSAwardeeSet.
For each document, we removed stop words using the stop word list available in the Mallet software package [13]. We then stemmed the words by applying the Porter stemmer [14], and words occurring fewer than twice were discarded. We further filtered out words based on Term Frequency–Inverse Document Frequency (TF-IDF), removing words with high document frequency that were relatively insignificant for any single document.
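A sketch of this preprocessing pipeline is shown below, assuming NLTK and scikit-learn are available; the stop-word list and thresholds are placeholders (the original pipeline used Mallet's stop list):

```python
from collections import Counter

import nltk                                   # requires the 'punkt' and 'stopwords' data
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer

STOPWORDS = set(nltk.corpus.stopwords.words("english"))   # stand-in for the Mallet list
stemmer = PorterStemmer()

def tokenize(abstract):
    """Lowercase, drop non-alphabetic tokens and stop words, then Porter-stem."""
    tokens = [t.lower() for t in nltk.word_tokenize(abstract) if t.isalpha()]
    return [stemmer.stem(t) for t in tokens if t not in STOPWORDS]

def build_corpus(abstracts, min_count=2, max_df=0.5):
    """Drop rare stems and, via TF-IDF document-frequency pruning, overly common ones."""
    docs = [tokenize(a) for a in abstracts]
    counts = Counter(t for d in docs for t in d)
    docs = [[t for t in d if counts[t] >= min_count] for d in docs]
    tfidf = TfidfVectorizer(analyzer=lambda d: d, max_df=max_df)
    tfidf.fit(docs)
    vocab = set(tfidf.vocabulary_)
    return [[t for t in d if t in vocab] for d in docs]
```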
The evaluation of ATM, as of other topic models, can be conducted from two perspectives: topic interpretability and topic coherence. Interpretability refers to the degree to which humans can understand the topics generated by a topic model, and is often regarded as an important qualitative measure of how good an unsupervised model is. In this paper, we give a detailed analysis of what each topic represents, whether the topics match the areas TRS focuses on and, if they do not, what rationale we can find. For topic coherence, on the other hand, we employ quantitative measures: both perplexity and pointwise mutual information (PMI). Perplexity measures how well the topic model fits the data, while PMI measures topic coherence by calculating confirmation measures over the top N words used to represent each topic. Perplexity is defined, integrating out all latent variables, as
$$ \mathrm{perplexity}\left(D_{test}\right) = \exp\left\{-\frac{\sum_{d=1}^{M} \log p\left(\mathbf{w}_d\right)}{\sum_{d=1}^{M} N_d}\right\}. $$
The lower the score, the better the model fit. The PMI-based coherence measure is calculated by
$$ C = \frac{2}{N\left(N-1\right)} \sum_{i=1}^{N-1}\sum_{j=i+1}^{N} \mathrm{PMI}\left(w_i, w_j\right), \qquad \mathrm{PMI}\left(w_i, w_j\right) = \log\frac{P\left(w_i, w_j\right)+\epsilon}{P\left(w_i\right)P\left(w_j\right)} $$
where P(w_i, w_j) is the probability that w_i and w_j co-occur in the whole corpus, and ϵ is added to avoid taking the logarithm of zero.
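A small sketch of this PMI-based coherence score for one topic's top-N words is given below; document-level co-occurrence is used to estimate the probabilities, and the smoothing constant and helper names are illustrative assumptions:

```python
import math
from itertools import combinations

def topic_pmi_coherence(top_words, docs, eps=1e-12):
    """C = 2/(N(N-1)) * sum over word pairs of log((P(wi,wj)+eps)/(P(wi)P(wj)))."""
    D = len(docs)
    doc_sets = [set(d) for d in docs]

    def p(*words):
        return sum(all(w in s for w in words) for s in doc_sets) / D

    pairs = list(combinations(top_words, 2))
    score = sum(math.log((p(a, b) + eps) / max(p(a) * p(b), eps)) for a, b in pairs)
    return 2.0 * score / (len(top_words) * (len(top_words) - 1))
```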
Temporal trend analysis on key words
Temporal analyses of research topics can reveal interesting trends and provide guidance for future endeavors. According to the FDA, there are four key areas in TRS: cigars, smokeless tobacco, e-cigarettes and tobacco product characteristics. To study them, we extracted publications corresponding to the four key areas, published from 2000 to 2013, from the larger KWSet. We then divided the abstracts by year and ran author topic modeling on each year separately. Next, we calculated the proportion of each key word across all topics as $\sum_{k=1}^{K} p(k)\, p(w \mid k)$, where K is the total number of topics (K = 400, as determined by the perplexity comparison described below), w is the key word, p(k) is the proportion of topic k and p(w|k) is the probability of the key word in topic k.
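A minimal sketch of this weighted key-word proportion, assuming `phi` is the fitted word-by-topic matrix and `topic_weights` the estimated topic proportions for a given year's model (both names are placeholders):

```python
import numpy as np

def keyword_proportion(word_id, phi, topic_weights):
    """sum_k p(k) * p(w|k): phi has shape (W, T), topic_weights has shape (T,)."""
    return float(np.dot(phi[word_id, :], topic_weights))

def yearly_trend(word_id, models_by_year):
    """models_by_year: {year: (phi, topic_weights)} fitted on that year's abstracts."""
    return {yr: keyword_proportion(word_id, phi, w)
            for yr, (phi, w) in sorted(models_by_year.items())}
```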
Articles' yearly distributions
Figures 1 and 2 show the yearly distributions of all tobacco-related publications. Figure 1 shows that, from the year 2000 onward, about 10,000 newly published articles related to tobacco regulatory science appeared each year. Among them, TRS awardees contributed about 10%, and TCORS awardee publications made up half of the TRS awardee contributions. The total number of articles increased slightly from year to year over the past 10 years, with the exception of 2007 to 2009, when there was a large jump. After 2009, the number stabilised. This may be related to the short-term grants funded during economic stimulus efforts (ARRA grants) [16], which had somewhat different publication and research dissemination stipulations than more traditional grants.
Diversity of TRS publications against annual counts X-axis is the year and Y-axis is the number of publications.
Mesh diversity of TRS publications annually X-axis is the year and Y-axis is the number of mesh headings involved.
We also investigated the distribution based on MeSH headings [17], a comprehensive controlled vocabulary used for indexing journal articles and books in the life sciences (illustrated in Figure 2). The general trend looks quite similar to that in Figure 1, with slight increases from year to year. However, a few differences can be observed as well. Among the literature retrieved using the keyword search queries, based on CTP research interest areas, a more diverse range of research topics can be observed over time. Because of this diversity, only 10–14% of publications are from TRS-funded researchers, with TCORS awardee publications again constituting half of the TRS researcher contributions.
Articles' journal distributions
As the first step of the bibliometric analysis, we made a simple count of the journals in which TRS researchers usually publish their articles. This step can be regarded as a complement to author topic modeling, because we assume that a journal's scope reflects the content of its articles to a large degree.
Top TRS journals from PUBMED keyword data set
The PubMed data set includes 167,196 publications from 7,134 journals. However, 1,824 journals contributed only one article to the TRS keyword data set, and 5,146 journals contributed fewer than ten articles. This indicates that these journals likely do not traditionally cover topics related to TRS research. Interestingly, the number of journals from which more than 100 articles were selected is 306, a much smaller and more manageable pool of potential publication outlets for TRS research. Together, those 306 journals published 155,512 of the articles in the TRS keyword data set, or 93% of all publications in the data set; almost all TRS publications in the past have thus occurred in these 300 or so journals. Inspecting the top 30 journals publishing TRS articles (see Table 1, where we list the numbers of publications and their ratios for the KWSet, TRSAwardeeSet and TCORSAwardeeSet), we find that most of the journals' topic areas are related to toxicity, biochemistry, nicotine, environment, pharmacology and health.
Table 1 Top journals for the KWSet, TRSAwardeeSet and TCORSAwardeeSet
Top journals covered by TRS investigators
The top journals covered by all TRS investigators are almost identical to those of the PubMed keyword data set (see Table 1), but differences are also evident. The journal coverage of TRS investigators is more focused on journals related specifically to tobacco. For example, Tobacco Control is one of the main journals (top 8) in the top journal list of TRS investigators, although the top journal in the two lists is the same. In addition, Addictive Behaviors is also in the top 10. From this, we can see from a different perspective what CTP-funded researchers concentrate on.
Top journals covered by TCORS investigators
The top journals covered by TCORS investigator publications differ from those of the larger TRS investigator group mainly in the ordering and prominence of a few journals. For example, Brain Research is much more prominent (top 16) in the TCORS journal list, while it ranks 27th for the larger TRS group.
Author topic modeling experiment and topic coherence evaluation
We ran the author topic modeling implementation developed by Steyvers et al. [15] on both the KWSet and the TRSAwardeeSet. The topic number T was determined by a grid search and comparison of the perplexity defined in the previous section. As in LDA, the estimation of the topic distributions of words was evaluated with the log-likelihood of the posterior distribution of words given topics, one of the standard criteria for evaluating generative models. Perplexity was lowest at T = 400 for the KWSet and at T = 20 for the TRSAwardeeSet. The hyperparameters α and β were fixed at 50/T and 0.01, respectively, and, according to the data sizes, the numbers of iterations for the two datasets were 1000 and 50, respectively. The PMI evaluation of the coherence of the resulting topics yields 65% and 70% on average. These results are broadly consistent with those reported for news, social media and computer science domains, which shows that ATM can be adapted to the medical field. To complement this quantitative evaluation, a more qualitative analysis is presented in the following sections.
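The grid search over T described above can be organised as in the following sketch, where `fit_atm` and `perplexity` are hypothetical stand-ins for whichever ATM implementation and held-out perplexity routine are used:

```python
def select_num_topics(fit_atm, perplexity, train, heldout,
                      candidates=(10, 20, 50, 100, 200, 400)):
    """Grid search over the number of topics T.

    fit_atm and perplexity are placeholder callables; the priors follow the
    paper's settings (alpha = 50/T, beta = 0.01).
    """
    scores = {}
    for T in candidates:
        model = fit_atm(train, num_topics=T, alpha=50.0 / T, beta=0.01)
        scores[T] = perplexity(model, heldout)   # lower is better
    best_T = min(scores, key=scores.get)
    return best_T, scores
```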
Topic interpretations
Figure 3 shows the ordered proportions of the 20 topics for the TRSAwardeeSet, and Figure 4 shows word clouds of the top 20 words for each topic. In order to identify what each topic focuses on, we assign each topic a name based on its top 20 words. The naming in this paper was done manually by domain experts; we are implementing a semi-automatic labeling algorithm, whose methodology and results will be reported in future work.
Topic proportions for the 20 topics of the TRSAwardeeSet. The X-axis gives the topic number and the annotated topic name, and the Y-axis gives the topic proportion.
Word clouds for the 20 topics of the TRSAwardeeSet. For the TRSAwardeeSet, we used Wordle to generate a word cloud of the top 20 words of each topic and combined all the word clouds into one figure for visualization. The 20 topics are ordered from left to right and from top to bottom. The size of each word reflects its proportion in that topic. Note that each "word" is in fact a stem.
We can see that the 20 topics have comparatively balanced proportions, ranging from 0.034 to 0.071. One thing worth noting is that some of the topics may be somewhat irrelevant to TRS. Those relevant to tobacco research show a broad diversity, as discussed in the following. The top words in the most prominent topic (T1) are smoke, cigarette, cessation, abstinence, control and measure. T2, the second most prominent topic, contains words like intervention, health, program, network, base, train, social, prevent, address, support and community. T3 focuses on adolescent-related topics, including alcohol, family relationships and behavior. T4 is similar to T3 but emphasizes social elements, including school, law and industry. These topics suggest that policy-making and social studies, such as research on preventing teenage smoking, are among the major trends in TRS research. Treatment is the most dominant word in T5, and most of the surrounding words are closely related to it, suggesting that research on the clinical treatment of smoking-related diseases is also tackled by TRS researchers. T6 evidently clusters research on ethnicity, gender, age and surveys of smoking, revealed by the words American, African, white, group, population, ethnic, woman and age. In contrast, T7 concerns temporal studies of smoking-related diseases, since temporal words like time, year and month and clinical words like assess, measure, average, quantity, disease and datum appear there with a good proportion.
Topics T8 to T20 all relate to more direct clinical studies: from here on we see quite a few domain-specific terms in each topic, and understandably these topics have smaller proportions because of their domain constraints. The smaller proportions do not mean that they are less important for the modeling; on the contrary, they show how discriminative author topic modeling can be. For example, the word cell is dominant in T8, surrounded by mouse, receptor, express, airway pressure, vitro, response, inhibition, epithelial and mediation. This topic therefore mainly concerns experiments on the influence of smoking on cells. T9 evidently discusses relationships between smoking and cancers, where cancer, risk, association, control, cohort, air, lung and genotype are the prime terms. Furthermore, pollution, exposure, woman and breast suggest that the indirect influence of smoking is also included in this topic.
As seen, the core of T10 is child, with smoking-related terms asthma, screen, vaccine and HPV. As mentioned above, these topics are more specialized; without domain knowledge it can be hard to understand why HPV is related to smoking. In fact, Troy et al. [18] report a case–control study of childhood passive smoke exposure (CPSE) in relation to human papillomavirus (HPV) infection. Nicotine has the highest proportion in T11, as much as 5%, so it is not hard to imagine that this topic mainly discusses nicotine and its effects; words such as cocaine, brain, response, behavior, kg, mg, reinforce and nach support this. T12 appears to mainly study the disorders associated with smoking and their correlations, being composed of words including disorder, function, schizophrenia, depression, correlation and discrimination. T13 comprises a number of rarely seen terms, such as the abbreviations DNA, NNAL (urinary total 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanol, whose level can be affected by smoking) and NNK (4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone, one of the most prevalent procarcinogenic compounds in tobacco), the organic chemistry terms pyridyl and enzyme, the cancer terms carcinogen and adduct, and the body and function terms lung, liver, urinary and metabolic. Among them, metabolic is the leading term unifying the topic. The majority of these topics are related to the harmful and potentially harmful constituents of tobacco products, one of the ten TRS interest areas highlighted by the FDA. Gene, genetic, genome, sequence, individual, variant and identify in T14 show that this topic approaches tobacco research from a genetic perspective, while protein, mouse, regulation, binding and express in T15 approach it from the perspective of protein regulation and binding mechanisms.
T16 is also about cancer but, unlike T9, it focuses on lung cancer and treatment; the corresponding cell apoptosis is indicated by words like survival, anti, treat and apoptosis. T17 seems mainly related to absorption and metabolite levels, with words like intake, concentration, ratio, oral, serum, urinary and waterpipe, to name a few. T18 also concerns the lung, but not lung cancer; it is more about general aspects of lung injury, with ventilation, plasma, injury, acute and edema. Although smoking affects the lung so strongly, T19 tells us that heart diseases are closely related as well, with heart, cardiovascular, cardiac, vascular, endothelial, phosphoric, artery and coronary as high-frequency terms. According to Wheat et al. [19], inhalation of tobacco increases apoptosis and suppresses the VEGF-induced phosphorylation of Akt and endothelial nitric oxide synthases in the aorta. The last topic, T20, appears to associate smoking and diabetes through a mechanism similar to that of T19: acrolein, a compound abundant in tobacco smoke, interferes with nitric oxide and thereby contributes to smoking-caused diseases. Overall, most of the 20 topics align well with TRS, which shows that author topic modeling is capable of modeling the topic distributions of the collection.
Author-topic relations
Figure 5 is a network with each topic as a hub (red octagons) and authors as the nearest neighbors of a topic if their research involves it (green nodes). Figure 5 shows the top authors and their associated topics: an author was counted as a top author for a topic if more than 0.01 of his or her articles fell in that topic. This threshold was chosen because 0.01 of the more than 8,800 articles, namely about 88 articles for one PI, can be regarded as quite productive. For better visualization, authors are represented by their initials (see Additional file 1 for the corresponding full names). Based on this network, we find that the top 5 authors in each topic are the prime principal investigators in the corresponding areas. For example, Hatsukami D, Cummings K and Eissenberg T, the top three ranked in T1, are all senior tobacco researchers who mainly focus on tobacco addiction characterization, reduction and/or treatment. Meanwhile, as shown in the network, there are connections between topics, meaning that many authors' research areas cover more than one topic.
Author topic network for the 20 topics of TRSAwardeeSet For 20 topics, we build a network against its top 20 authors so that we can see clearly the productivity and diversity of authors and the closeness between authors (if two author nodes are linked to the same topic node, we may say that they have common interests).
At first glance, Figure 6 is similar to Figure 5, in that it relates authors to topics. But the main goal of Figure 5 is to illustrate who the top authors in a topic are, while Figure 6 illustrates for how many authors a given topic is their most studied one. For example, the count of 3 for T7 in Figure 6 indicates that three authors have their highest portion of work in T7, while the 14 nearest neighbors around T7 in Figure 5 show that 14 authors devote a portion larger than 0.01 of their research to T7. Figure 6 shows that T2, T12, T15 and T17 are the most intensively studied topics, since for each of them 10 authors published a large number of articles. This trend does not align with that of the topic proportions. To a large degree, we can say that the topic proportions show how many researchers are studying which topics, while the author counts in Figure 6 show which topics have been intensively studied by a few researchers. If we also count T5 and T9 (with 9 authors each), the data suggest that tobacco prevention and treatment are popular topics among these researchers.
Author counts in topic maximum. The X-axis is the topic while the Y-axis is the number of authors who work on some topic.
Another interesting analysis is to look at the co-occurrence of authors among multiple topics (for simplicity, we only consider pairs). Co-occurrence can reflect two aspects: the closeness of two topics (they may be subtopics of a larger topic) and the interaction of two topics (they may not be closely related but depend on each other).
We found that T15 and T8 co-occur 10 times, ranking highest; that is, 10 authors study both topics, which both involve genetic expression, cells and proteins. The combination of T16 and T8 follows closely, where T16 concerns lung tumor studies at the gene and cell level. The topic dependence relation is illustrated by the large number of topics co-occurring with T2 (intervention). This topic is not really funded by the FDA's CTP, so why does it have such a high proportion of research (0.065)? If we look at the other topics that investigators focus on in addition to T2, we can find clues. The three topics occurring most commonly with T2 are T1, T3 and T4 (4 times each). These four topics concern smoking cessation, vulnerable populations, and youth initiation and access, all of which are TRS priority areas. The link between smoking cessation and intervention is interesting, as interventions focusing on cessation are specifically mentioned as not a fundable TRS area. Investigators with this topic pair, which is common in tobacco control research in general, may be looking at other related topics that do fall under the TRS scope, such as nicotine reduction, consumer perception (of certain products as a cessation aid) and effective communication strategies. In addition, T7 (temporal study) co-occurs with T2 three times as well. This connection between temporal studies and intervention is a natural one, as intervention research requires studies across time.
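The author-level topic co-occurrence counts discussed here can be computed directly from the estimated author–topic matrix; a minimal sketch, assuming `theta` holds p(t|a) and reusing the 0.01 threshold as a stand-in for the article-portion criterion:

```python
from collections import Counter
from itertools import combinations

def topic_cooccurrence(theta, threshold=0.01):
    """theta: (T, A) matrix of p(t|a); counts, over authors, co-assignments of topic pairs."""
    counts = Counter()
    T, A = theta.shape
    for a in range(A):
        active = sorted(t for t in range(T) if theta[t, a] > threshold)
        counts.update(combinations(active, 2))
    return counts   # e.g. counts[(7, 14)] = number of authors active in both topics
```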
Topic clusters based on authors
If we look at the authors and the topics they are assigned, we see two extremes in terms of topic diversity. Figure 7 shows that 83 top authors in fact focus on only one topic, while a small number of authors are involved in many topics. The most diverse is Williams D, who studies 7 topics. The next four are Srivastava S (6 topics), Glantz S (5 topics), Baker T (5 topics) and Elashoff D (5 topics). Williams D, the most diverse researcher, is in fact a leading social and behavioral scientist focusing on public health [20]. His research has enhanced the understanding of the complex ways in which race, racial discrimination, socioeconomic status and religious involvement can affect physical and mental health. His topics in tobacco regulation range from intervention studies, health and race, gender and age, to functional disorders and genetic analysis. Glantz S is the American Legacy Foundation Distinguished Professor of Tobacco Control at the University of California, San Francisco; his research focuses on the health effects of tobacco smoking, and he is active in the nonsmokers' rights movement and has advocated for public health policies to reduce smoking. His research topics include T1, T2, T4, T7 and T10, which match his research focus well.
Author topic involvements. The X-axis is the number of topics involved, while the Y-axis is the number of authors working on that many topics.
Baker T is involved in T7, T10, T12, T14 and T15, while Elashoff D is involved in T5, T7, T9, T12 and T16; they overlap in T7 and T12. Both seem to study topics related to the treatment of smoking-related diseases, with Elashoff D's work being more cancer related. The topics they share are more general aspects such as temporal studies, functional disorders and genetic tests. Baker T's remaining topics, T10, T14 and T15, involve smoking cessation, intervention, influences on children, and protein binding and regulation, whereas Elashoff D's remaining topics, T5, T9 and T16, are all either cancer-related or organ-injury relevant. Baker T's webpage [21] states that he concentrates on tobacco-dependence treatment and outcomes; he and his team not only look at smoking cessation but also determine how quitting affects a person's physical health, mental health, quality of life and social interactions. Elashoff D's research includes statistical analysis of high-throughput microarrays and biomarker discovery and validation studies, and he has worked extensively on cancer-related projects, with collaborations in oral, lung, prostate, breast and skin cancers [22]. These descriptions confirm what we found from the topics.
As mentioned before, the topics discovered are not necessarily all primarily about tobacco and nicotine; instead, this work focuses on finding the interactions between authors, topics and words and the trends that can be traced within the frame of TRS. Following this line of thought, we found connections between tobacco and other related topics unique to TRS research. For instance, Srivastava S, a project lead on a TRS center grant, is not primarily a tobacco researcher; he is faculty in an environmental cardiology department. His topic profile includes T6, T8, T13, T15 and T16. From his webpage, we found that his research priority is toxicity, which explains the connections among his topics: all of them are more or less related to this priority. Toxicity is also a topic area prominently featured in the FDA's TRS priority and interest areas.
At the other extreme, a few PIs are assigned only one topic. One of them is Delnevo C, whose topic is T6, the ethnicity-, gender- and age-related study of smoking; the corresponding website states that the research interests are clinical prevention services, tobacco control and survey research methods. Another is Donny E, whose topic is T11, on nicotine effects; his webpage states that nicotine reinforcement, the regulation of tobacco and implications for health are his primary research interests. Likewise, Farrelly M is a leading expert in tobacco control and policy interventions, for youth in particular, and the only topic assigned to him is T4, exactly matching his interests.
Top topic clusters
Figure 8 highlights three top topics for which the proportions of TCORS PIs and non-TCORS PIs show a clear contrast. The top topic clusters for all TRS investigators are metabolism, pharmacology, and legal & statistics; these three topics are depicted in red. In the author topic network, topics are connected to the authors whose publications are linked to those terms. The size of the author nodes and of the edges connecting them to the three topics reflects the importance of the authors and their contributions to the three topics, respectively. For example, Matthay, Michael is a prominent author in the network, since the node representing him has the largest size; his main research is on metabolism, since that edge is the thickest, and he is not linked to either pharmacology or legal & statistics. Picciotto, Marina, by contrast, is linked to all three topics, although she is comparatively less prominent. Both of these authors are TCORS investigators (blue nodes).
A sample of author topic relation network (3 topics). This figure aims at highlighting three top topics where the proportions of TCOR PIs and non-TCOR PIs show clear contrast.
Also depicted in this figure are other TRS researchers (green nodes). One key take-home point from Figure 8 (Figure 9 complements it) is that, while there are common research interests among the TCORS investigators, there is also a large body of expertise among the other TRS grantees, and both groups of investigators should be finding ways to link with others around common or shared TRS research interests.
Author topic modeling for TRS PIs. The network of topics against TCORS PIs and non-TCORS PIs, showing the research interests of TCORS and non-TCORS PIs and how much the two groups of researchers overlap. It complements Figure 8.
Temporal trend of key words
The above analyses are based on the pool of MedLINE abstracts of TRS PIs without distinguishing publication time or area. The temporal trends can be seen in Figure 10, Figure 11, Figure 12 and Figure 13, where the x-axis is the list of years and the y-axis is the proportion of key words in the data.
Smokeless tobacco temporal trend. The X-axis is the year while the Y-axis is yearly proportion of key words of smokeless tobacco.
TPC temporal trend. The X-axis is the year while the Y-axis is yearly proportion of key words of TPC.
Cigar products temporal trend. The X-axis is the year while the Y-axis is yearly proportion of key words of cigar products.
E-cigarettes temporal trend. The X-axis is the year while the Y-axis is yearly proportion of key words of E-cigarettes.
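As an illustration only (not the authors' pipeline), yearly key-word proportions of the kind plotted in Figures 10-13 can be computed from a table of abstracts with publication years roughly as follows; the column names, keyword list and normalization (fraction of abstracts per year mentioning a term) are our assumptions.

```python
import pandas as pd

# Hypothetical input: one row per MedLINE abstract with its publication year.
abstracts = pd.DataFrame({
    "year": [2003, 2003, 2010, 2010, 2015],
    "text": [
        "snus use and smokeless tobacco prevalence ...",
        "cotinine levels in adult smokers ...",
        "menthol cigarette flavoring and regulation ...",
        "e-cigarette and electronic nicotine delivery systems ...",
        "snuff and chew use among rural adolescents ...",
    ],
})

keywords = ["snus", "snuff", "chew", "smokeless"]  # e.g., the smokeless-tobacco terms

def yearly_proportion(df, term):
    """Fraction of abstracts in each year whose text mentions the term."""
    hits = df["text"].str.contains(term, case=False)
    return hits.groupby(df["year"]).mean()

trends = pd.DataFrame({term: yearly_proportion(abstracts, term) for term in keywords})
print(trends)  # one column per key word, one row per year
```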
For the study of smokeless tobacco in Figure 10, the trends are not overwhelmingly informative. However, two key terms, snus and smokeless, both steadily increase over the time period with similar trends, while another term, snuff, decreases relatively sharply over the same period. Chew shows a gradual decreasing trend. This shows an increased research interest over the past 10 years in the alternative smokeless tobacco product, snus.
Figure 11 shows the trends for the top terms related to tobacco product characteristics. The trend for the term menthol is the most prominent one in this figure. Menthol shows a clearly increasing ratio among the tobacco product characteristics terms, whereas the overall trend for all terms appears to be a decreasing one. The popularity of menthol demonstrates the increasing focus on flavorings as a research area for tobacco product characteristics [23,24] in recent years.
For cigar products, the top seven key terms are displayed in Figure 12. Interestingly, cotinine, the most prominent term, decreases continuously from around 16% to 9% over the 13 years. Meanwhile, the less prominent term metabolite increases steadily over the same time period from 3% to 7%. This seems somewhat counterintuitive, as cotinine is a metabolite of nicotine. This could simply indicate a change in the preference of terms from the specific to the more general. It could also indicate a decline in the use or study of cotinine as a measure of nicotine use. Other types of cigar products, such as little cigars and cigarillos, are not yet prominent enough in the literature to be among the top cigar-related terms.
Electronic cigarette terms are shown in Figure 13 and likely because of the recent emergence of these products, this figure doesn't show any consistent or clear trends. One key point for this topic, though, is a bit different from the others. This analysis highlights the need for some consistency and consensus on what to call new and emerging tobacco products, like electronic cigarettes, in the literature. There are several different terms used and because of this diversity in terminology referring to basically the same product, there are larger implications for the research. For example, if different investigators are using different key terms or measures for the same products, it becomes hard to look across a given topic or field, develop standards, and conduct consistent reviews and meta-analyses of the literature.
Discussion and conclusions
In this work, we employed author topic modeling to conduct a bibliometric analysis of the publications of principal investigators on tobacco. We only reported topic interpretations and observations for the TRSAwardeeSet, as our primary interest was the TRS investigators. The KWset was diverse in both authors and topics, and thus a more in-depth exploration is needed to understand this dataset. Nonetheless, we performed a temporal trend analysis based on the ATM results for the KWset, which showed how the significance of key words in different topics evolved over time.
Author topic modeling has been shown to be an effective approach for modeling corpora in computer science as well as more general ones, such as publicly available emails and collections of diverse research articles. No research had yet been done on modeling a constrained domain like tobacco regulation. Our results show that this approach can efficiently cluster collections of articles into discriminative categories without any supervision. More interestingly, it can associate topics to authors with high accuracy. This indicates that we may incorporate author topic modeling into author identification systems to infer the identity of an author of articles using topics generated by the model.
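Purely as a hedged illustration of the author-topic model itself (not the authors' actual pipeline or toolchain), a comparable model can be fit with gensim's AuthorTopicModel; the toy corpus and the author-to-document mapping below are invented for the example.

```python
from gensim.corpora import Dictionary
from gensim.models import AuthorTopicModel

# Toy data: tokenized abstracts and an author -> document-index mapping (both invented).
docs = [
    ["smoking", "cessation", "treatment", "outcome"],
    ["menthol", "cigarette", "flavor", "regulation"],
    ["lung", "cancer", "biomarker", "cotinine"],
]
author2doc = {"Baker_T": [0], "Elashoff_D": [2], "Other_PI": [1, 2]}

dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

model = AuthorTopicModel(corpus=corpus, num_topics=2, id2word=dictionary,
                         author2doc=author2doc, passes=10, random_state=0)

# Per-author topic distributions, analogous to the author topic profiles discussed above.
for author in author2doc:
    print(author, model.get_author_topics(author))
```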
The relevance of this analysis to TRS is multifold. First, this analysis is a 'proof of concept' showing that it can be beneficial to assess the change over time in TRS as new projects are funded and collaborative science in this area changes. This is particularly important because the FDA must use the data from funded research to inform their regulatory decision-making, so if there are 'holes' in the types of research being conducted or published, a bibliometric analysis with ATM could help the FDA make decisions. The results can be used to assess the extent to which new research reflects the funding priorities of the FDA.
Second, ATM outcomes can be used by investigators to assess who is conducting research in a particular research domain in order to foster collaborative science [25,26]. Again, this is very important for the FDA to know given their need to make regulatory decisions. For example, if the FDA is contemplating a regulation that would lead to reductions in nicotine within cigarettes, assessing who is conducting research that can inform that regulatory process is important. Similarly, if the FDA needs to conduct rapid research to address an emerging issue, they can use this type of data to identify likely research teams to carry out that research. Since many issues in tobacco regulatory science require trans-disciplinary science, which cannot be addressed through the research of a single discipline, the ability to assess who is doing relevant research can lead to the development of unique teams that have the best potential to address those complex problems rapidly.
Third, these analyses begin to demonstrate the evolving research productivity of investigators, which we anticipate will occur to a greater extent as publications increase due to FDA funding. For example, we found that 'cessation' and 'treatment' clustered even though that topic is not really included in tobacco regulatory science. This clustering seems to reflect that some leading scientists who conducted research on tobacco treatment have successfully either shifted or expanded their research focus on tobacco regulatory science. Future analyses can further delineate how scientists transition into tobacco regulatory science research, particularly as a result of new funding, to better understand both the scientific expertise relevant to the TRS field, and also to understand the impact in other fields of scientists following the increased funding in TRS.
By fostering collaborative science in TRS, it becomes possible to speed advances in that science by fostering communication between scientists, which can avoid unneeded duplication and inform decision-making on new science that can benefit regulatory decision-making.
Limitations and future work
One limitation of this approach is that author topic modeling assumes that the topic distribution of each word in a document is associated with only one of the known authors. As a result, correlations between authors cannot be reflected by words of the same document and instead must be found across multiple documents that share the same authors. For a large corpus, this may not be a big problem. Nonetheless, this limitation can be overcome if we make the topic-author relationship many-to-many. Namely, instead of sampling one author each time, we allow sampling more than one; the topic distribution is then generated by the joint distribution of more than one author. This way, each word will be associated with more than one author and thus a many-to-many word-author interaction will be constructed. This will lead to more complicated inference algorithms, so more efficient optimization algorithms are needed in our future work.
The other limitation of our work is the one to one author-word correspondence. Hence, in our future study, we will extend author topic modeling into group author topic modeling. In addition, considering that research topics may change every few years even for the same investigators, it would therefore be reasonable to model temporal changes. One more extension can be that we may build a predictive model based on author topic modeling so that we can assign authors to unknown articles or we can predict what main topics an unknown article is about. Yet another limitation is the lag between publication date and current research activity. Given the rapidly changing nature of research and funding in the area of tobacco regulatory science, it is very possible that investigators have moved into different research domains relative to their publication record. This is particularly relevant in the tobacco regulatory science area because it is a relatively new research domain that has caused some scientists to shift their research focus in order to obtain funding that is specifically relevant to the needs of the FDA. Thus, data from the author topic modeling could provide a misleading perspective on current research activities of scientists.
Besides addressing those limitations, we plan to experiment with author topic modeling using domain-specific ontologies or information models instead of only the bag of words. One such ontology is the MeSH indexing widely used in PubMed MedLINE. For articles indexed by PubMed, usually about 10 MeSH terms are assigned so that the reader can easily find the theme. Therefore, those MeSH terms can be utilized as key words in author topic modeling or, alternatively, MeSH terms can be employed as the author variable so that we can construct a mapping from MeSH terms to texts. Beyond that, since we have collected a large amount of data and explored concepts and relations with the help of ATM, we may build our own domain ontology for tobacco related research independently of MeSH indexing and then align or merge it with MeSH indexing for future research on data mining.
Another possible extension is that we will attempt to access full texts rather than only abstracts and, meanwhile, construct citation links from the reference sections. Enriched by full text and citation links, we believe that the correlations of research topics in tobacco regulatory science can be more fully revealed.
Rosen-Zvi M, Griffiths T, Steyvers M, and Smyth P. "The author-topic model for authors and documents," in Proceedings of the 20th conference on Uncertainty in artificial intelligence, 2004, pp. 487–494.
Blei DM, Ng AY, Jordan MI. Latent Dirichlet allocation. J Mach Learn Res. 2003;3:993–1022.
Dumais ST. Latent semantic analysis. Annual Review of Information Science and Technology. 2005;38:188–230.
Hofmann T. "Probabilistic latent semantic indexing," in Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval, ed, 1999, pp. 50–57.
Turney PD, Pantel P. From frequency to meaning: Vector space models of semantics. J Artif Intell Res. 2010;37:141–88.
McCallum A. "Multi-label text classification with a mixture model trained by EM," in AAAI'99 Workshop on Text Learning, 1999, pp. 1–7.
McCallum A, Mann G, and Mimno D. "Bibliometric impact measures leveraging topic analysis," in Digital Libraries, 2006. JCDL'06. Proceedings of the 6th ACM/IEEE-CS Joint Conference on, 2006, pp. 65–74.
Steyvers M, Smyth P, Rosen-Zvi M , and Griffiths T. "Probabilistic author-topic models for information discovery," in Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, 2004, pp. 306–315.
McCallum A, Corrada-Emmanuel A, Wang X. The author-recipient-topic model for topic and role discovery in social networks: Experiments with enron and academic email. 2005.
Bhattacharya I, Getoor L. A latent dirichlet model for unsupervised entity resolution. 2005.
Newman D, Karimi S, and Cavedon L. "Topic models to interpret MeSH - MEDLINE's Medical Subject Headings."
Leischow SJ, Zeller M, Backinger CL. Research priorities and infrastructure needs of the family smoking prevention and tobacco control Act: science to inform FDA policy. Nicotine & Tobacco Research. 2012;14:1–6.
McCallum AK. Mallet: A machine learning for language toolkit. 2002.
Porter M. "Snowball: A language for stemming algorithms," ed, 2001.
Mark Steyvers TG. (2014, Oct. 7). Matlab Topic Modeling Toolbox 1.4. Available: http://psiexp.ss.uci.edu/research/programs_data/toolbox.htm.
Taylor JB. "An empirical analysis of the revival of fiscal activism in the 2000s," Journal of Economic Literature, pp. 686–702, 2011.
Lipscomb CE. Medical subject headings (MeSH). Bull Med Libr Assoc. 2000;88:265.
Troy JD, Grandis JR, Youk AO, Diergaarde B, Romkes M, and Weissfeld JL. "Childhood passive smoke exposure is associated with adult head and neck cancer," Cancer epidemiology, 2013.
Wheat LA, Haberzettl P, Hellmann J, Baba SP, Bertke M, Lee J, et al. Acrolein inhalation prevents vascular endothelial growth factor–induced mobilization of Flk-1+/Sca-1+ cells in mice. Arterioscler Thromb Vasc Biol. 2011;31:1598–606.
Harvard School of Public Health. (2013, Oct. 7). David R. Williams. Available: http://www.hsph.harvard.edu/david-williams/.
University of Wisconsin Center for Tobacco Research and Intervention. (Oct. 1). Available: http://www.ctri.wisc.edu/News.Center/News.Center_bio_tim_baker.html.
UCLA Department of Biostatistics. (Oct. 1). Available: http://www.biostat.ucla.edu/Directory/Delashoff.
J. Aldworth, Results from the 2007 national survey on drug use and health: National findings: DIANE Publishing, 2009.
Blot WJ, Cohen SS, Aldrich M, McLaughlin JK, Hargreaves MK, Signorello LB. Lung cancer risk among smokers of menthol cigarettes. J Natl Cancer Inst. 2011;103:810–6.
Sonnenwald DH. Scientific collaboration. Annual review of information science and technology. 2007;41:643–81.
Hall KL, Stokols D, Stipelman BA, Vogel AL, Feng A, Masimore B, et al. Assessing the value of team science: a study comparing center-and investigator-initiated grants. Am J Prev Med. 2012;42:157–63.
This study was made possible by National Science Foundation grant ABI:0845523 and National Institutes of Health grants R01LM009959A1 and R01GM102283A1.
Department of Biomedical Statistics and Informatics, Mayo Clinic, Rochester, MN, USA
Dingcheng Li
& Hongfang Liu
Department of Hematology/Oncology, Mayo Clinic, Scottsdale, AZ, USA
Janet Okamoto
& Scott Leischow
Correspondence to Dingcheng Li.
DL, JO, SL and HL are employees of Mayo Clinic and do not own any shares of the hospital. The authors have no other competing interests to declare.
DL collected the dataset with key word search as well as PI-based search from MedLINE, designed the methodologies and ran the author-topic modeling on the dataset. He also made the first-round analysis of the topics generated as well as of the interactive relations between topics, authors and key words, and drafted the manuscript. JO provided the PI list and the key word list. Meanwhile, she also analyzed the topics and the interactions between topics, authors and the key words.
Both SL and HL gave guidance and supervisions on the study design, results analysis and paper revisions. All authors have read and approved the submitted version of the manuscript.
Hongfang Liu and Scott Leischow contributed equally to this work.
Correspondence of author full names and short names. The following file is a supplement to Figure 5, where short names are used for authors to keep the visualization readable. From this table, the full name for each short name can be found.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Li, D., Okamoto, J., Liu, H. et al. A bibliometric analysis on tobacco regulation investigators. BioData Mining 8, 11 (2015) doi:10.1186/s13040-015-0043-7
Author topic modeling
Tobacco regulation science
Principal investigators
Data Mining in Biomedical informatics and Healthcare
Machine Learning Study on Nuclear $\alpha$ Decays
Minsu KWON1, Yongseok OH1*, Young-Ho SONG2
2Rare Isotope Science Project, Institute for Basic Science, Daejeon 34047, Korea
Correspondence to:[email protected]
Received: January 7, 2021; Revised: April 23, 2021; Accepted: May 19, 2021
The regression process of machine learning is applied to investigate the pattern of alpha decay half-lives of heavy nuclei. By making use of the available experimental data for 164 nuclides, we scrutinize the predictive power of machine learning in the study of nuclear alpha decays within two approaches. In Model (I), we train neural networks on the experimental data of nuclear alpha-decay half-lives directly, while in Model (II) they are trained on the gap between the experimental data and the predictions of the Viola-Seaborg formula as a theoretical model. The purpose of Model (I) is to verify the applicability of machine learning to nuclear alpha decays, and the motivation of Model (II) is to apply the technique to estimate the uncertainties in the predictions of theoretical models. Our results show that there is room for improving the predictions of empirical models by using machine learning techniques. We also present predictions on unmeasured nuclear alpha decays.
Keywords: Nuclear alpha decays, Machine learning
The research on nuclear α decay has a long history and has been one of the most important tools to study nuclear forces and nuclear structure [1]. Even today, its role cannot be overemphasized in the investigation of nuclear properties, in particular, in identifying new heavy nuclides. The most widely used theoretical models are based on the effective potential that the preformed α particle feels in nuclei. Once the potential form is determined, the half-life is calculated using the WKB approximation [2]. (See also, for example, Refs. [3,4])
In the present work, we adopt a different approach to investigate nuclear α decay. Namely, we make use of recently developed machine learning (ML) methods to predict half-lives of nuclear α decays. Nowadays, machine learning techniques are widely applied to various fields of physics [5]. In nuclear physics, to our knowledge, the first application of ML techniques was to nuclear mass systematics [6-8]. This idea has been further developed for various nuclear physics problems, for example, nuclear masses [9-12], deuteron properties [13], extrapolation of ab initio calculations such as the no-core shell model [12, 14, 15], nuclear alpha decays [16-18], and nuclear β decays [19].
Many existing applications of ML to nuclear α decay use the artificial neural network (ANN) method. In Refs. [17, 18], an ANN is trained to predict the Q value (energy released) of the α-decay channel of nuclei. Decay rates are then obtained by using the WKB approximation with a semi-classical effective potential [17] or modified empirical α-decay formulas [18]. On the other hand, an ANN is trained directly on experimental α-decay half-lives in Ref. [16]. In the present exploratory study, taking a similar strategy to Ref. [16], we apply the ANN method to α-decay half-lives in two different approaches. One is the unbiased approach, where we directly use experimental data to train the machine learning process and make predictions for test data. This approach is Model (I) in the present work. The other is a theoretically biased approach, where we rely on phenomenological formulas for globally understanding nuclear α decays and the gap with the experimental data is learned by machine learning. This approach, named Model (II), can test the impact of machine learning when combined with theoretical models.
This paper is organized as follows. In the next section, we briefly introduce the concepts of machine learning, and we construct our models as well. In Sec.III, the results from machine learning are compared with available data and predictions for unobserved nuclear α decays are presented. Section IV contains summary and conclusion.
Though Artificial Intelligence (AI) has developed in various ways since the 1950s, when modern computers were developed, the recent popularity of ML mostly stems from the success of ANNs. We can understand an ANN as a mapping function y = f(x; θ) with parameters θ. An ANN consists of an input layer, an output layer, and hidden layers, where each layer contains a number of "neurons." The output of the j-th neuron in a layer, hj, can be expressed as
(1) $h_j=\sigma\left(b_j+\sum_i \omega_{ji}\,x_i\right),$
where xi are the outputs from the i-th neurons of the previous layer, θj = (ωji, bj) are the parameters of the neuron, and σ is a non-linear activation function. With a large number of hidden layers of neurons, an ANN can cover a wide range of function space. An advantage of ANNs over conventional fitting methods lies in their flexible model space and efficient training algorithms that adjust the parameters to minimize the "loss" function. The loss function quantifies the difference between the training data and the corresponding results of the ANN.
In this work, we adopt the Rectified Linear Unit (ReLU) function as the activation function [21], which is defined as
(2) $\mathrm{ReLU}(x)=\begin{cases} x, & \text{if } x>0 \\ 0, & \text{if } x\le 0 \end{cases}$
for the hidden layers. ReLU is widely used to avoid the vanishing gradient problem in training. We adopt the Adam optimizer [22], which is an algorithm for first-order gradient-based optimization of stochastic objective functions. More explanations of the structure of the ANN and the loss function are given in the next section. All numerical calculations in this work are done by using the TensorFlow library [20]. The details of the TensorFlow algorithm can be found elsewhere 1 and will not be repeated here.
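As a minimal numpy illustration of Eqs. (1) and (2) (the layer size, weights and inputs below are arbitrary placeholders, not values used in this work):

```python
import numpy as np

def relu(x):
    # Eq. (2): ReLU(x) = x for x > 0, else 0
    return np.maximum(x, 0.0)

def layer_forward(x, W, b):
    # Eq. (1): h_j = sigma(b_j + sum_i omega_ji * x_i), with sigma = ReLU here
    return relu(b + W @ x)

# Toy example: 3 inputs (e.g., scaled Z, A, Q_alpha) feeding 8 hidden neurons.
rng = np.random.default_rng(0)
x = np.array([0.46, 0.59, 0.33])   # hypothetical scaled inputs
W = rng.normal(size=(8, 3))        # weights omega_ji
b = np.zeros(8)                    # biases b_j
print(layer_forward(x, W, b))
```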
2. Models for nuclear α decays
It is well known that the half-lives of nuclear α decays depend heavily on the Q value of the decay. Therefore, a successful description of nuclear α decays requires accurate information on the nuclear α potential and a reasonable reproduction of nuclear masses. The half-lives of nuclear α decays were found to obey a simple relation known as the Geiger-Nuttall law [23], which describes the half-life T1/2 of a nuclear α decay as
(3) $\log_{10} T_{1/2}= a\,\frac{Z}{\sqrt{Q_\alpha}}+b,$
where Z is the atomic number and Qα is the Q value of α decay. The constants a and b are to be fitted to experimental data. This phenomenological formula was improved through the Viola-Seaborg (VS) empirical formula [24], which is widely used to estimate the α decay lifetimes. This formula is written as
(4) $\log_{10} T_{1/2}=\frac{aZ+b}{\sqrt{Q_\alpha}}+cZ+d,$
which has also led to several derived versions [3,18]. Normally, to improve the predictive power of the VS formula, the parameters are determined separately for even-even, even-odd, odd-even, and odd-odd nuclei, which implies a mass-number dependence of the parameters. When the half-lives are given in units of seconds and Qα in units of MeV, the parameters obtained by this procedure are given in Table 1. We refer to Ref. [3] for details.
Table 1 Fitted coefficients of the VS formula. The values are from Ref. [3] and are given for even-even, even-odd, odd-even, and odd-odd nuclei.
       a         b          c          d
e-e  1.48503   5.26806   -0.18879   -33.89407
e-o  1.55427   1.23165   -0.18749   -34.29805
o-e  1.64654  -3.14939   -0.22053   -32.74153
o-o  1.34355  13.92103   -0.12867   -37.19944
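For illustration, Eq. (4) with the Table 1 coefficients can be evaluated as in the sketch below; the helper function is ours and assumes the usual convention that the e-e/e-o/o-e/o-o labels refer to the parities of Z and N.

```python
import math

# Coefficients (a, b, c, d) from Table 1, keyed by (Z parity, N parity).
VS_COEFF = {
    ("e", "e"): (1.48503, 5.26806, -0.18879, -33.89407),
    ("e", "o"): (1.55427, 1.23165, -0.18749, -34.29805),
    ("o", "e"): (1.64654, -3.14939, -0.22053, -32.74153),
    ("o", "o"): (1.34355, 13.92103, -0.12867, -37.19944),
}

def log10_half_life_vs(Z, A, Q_alpha):
    """Viola-Seaborg estimate of log10(T_1/2 / s), Eq. (4)."""
    N = A - Z
    key = ("e" if Z % 2 == 0 else "o", "e" if N % 2 == 0 else "o")
    a, b, c, d = VS_COEFF[key]
    return (a * Z + b) / math.sqrt(Q_alpha) + c * Z + d

# Example: the (118, 294) nuclide with Q_alpha = 11.81 MeV from Table 3.
print(log10_half_life_vs(118, 294, 11.81))
```

For the even-even nuclide (118, 294) with Qα = 11.81 MeV this reproduces the VS value of about −3.65 quoted in Table 3.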
In the present work, we make use of the ANN in two different ways. In the first approach, which is an unbiased approach, we directly apply machine learning to nuclear α decays. The inputs are the atomic number Z, mass number A, and the Qα value of a nucleus. 2 Then the experimental data for log10T1/2Expt are used to train the ANN. Therefore, this does not include any physical intuition or prejudice and, as a result, it is unbiased toward any theoretical models for nuclear α decay. This is our Model (I). In other words, we use the loss function of mean squared error (MSE) defined as $L(\theta)=\frac{1}{N_T}\sum_{i=1}^{N_T}\left[y_i-f(x_i;\theta)\right]^2$ with $y_i=\log_{10}T_{1/2}^{i}$ for the i-th training data, where NT is the total number of training data.
On the other hand, it would be interesting to see whether a machine learning algorithm can fill the gap between experimental data and theoretical model predictions. This approach is therefore biased toward a particular theoretical or phenomenological model. In the present work, we adopt the VS formula as the reference theoretical model, and the ANN is trained on the difference between experiment and the theoretical model predictions, while the inputs are Z, A, and Qα as before. This constitutes our Model (II).
Though it is desirable to survey the model space of ANNs with various numbers of hidden layers or neurons, to simplify the analysis we chose to use a fixed number of hidden layers and neurons. The ANN for Model (I) has 3 hidden layers with 8, 9, and 8 neurons, respectively, and the ANN for Model (II) has 3 hidden layers with 7, 5, and 8 neurons, respectively. We use the data for the α decay half-lives of 164 nuclei compiled in Refs. [25,26].
With 164 data points in hand, we randomly separate the data into a training set and a test set with a ratio of 80:20. The training set is used to train the ANN and the test set is used to check the predictive power, or credibility, of the process. Furthermore, part of the training set is randomly selected for validation. To avoid overfitting, we use early stopping, in which the training process stops when the validation error starts to increase. In order to estimate the accuracy of the calculation, we use the mean square error defined as
(5) $\mathrm{MSE}=\frac{1}{N}\sum_{i=1}^{N}\left[\log_{10}\!\left(\frac{T_{1/2}^{\mathrm{Data}}}{T_{1/2}^{\mathrm{Cal}}}\right)\right]^2,$
where T1/2Cal are the values calculated by phenomenological models or by machine learning.
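As a minimal sketch of the training setup described above (the 80:20 split, validation-based early stopping, Adam optimizer, MSE loss, and the 8-9-8 hidden-layer architecture of Model (I)), the following TensorFlow/Keras code illustrates the idea; the data arrays are random placeholders standing in for the 164 compiled (Z, A, Qα, log10 T1/2) values, and hyperparameters such as the learning rate, epoch count and patience are our own assumptions.

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

# Placeholder arrays standing in for the 164 nuclides: inputs (Z, A, Q_alpha), target log10(T_1/2).
X = np.random.rand(164, 3).astype("float32")
y = np.random.rand(164, 1).astype("float32")

# 80:20 split into training and test sets.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Model (I): 3 hidden layers with 8, 9, 8 neurons and ReLU activations.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dense(9, activation="relu"),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),   # linear output for the regression target
])
model.compile(optimizer="adam", loss="mse")  # MSE loss, Adam optimizer

# Early stopping: halt training when the validation error starts to increase.
stop = tf.keras.callbacks.EarlyStopping(patience=20, restore_best_weights=True)
model.fit(X_tr, y_tr, validation_split=0.2, epochs=2000, callbacks=[stop], verbose=0)

print("test MSE:", model.evaluate(X_te, y_te, verbose=0))
```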
Table 2 shows the MSEs of our models. The small MSEs and the similarity between the training-set and test-set values indicate that the ANNs are well trained. Slightly larger MSEs for the test sets are understandable, as they are not used for training. The unbiased ANN of Model (I) achieves an accuracy comparable to the phenomenological VS formula. Our results also show that the MSE of Model (II) is smaller than that of Model (I), so that the accuracy is improved by about 15%. In other words, training the ANN with a theoretical guide is better than the naive approach. However, the improvement is not so impressive compared to the VS formula. This observation is in agreement with that of Ref. [16]. There may be several explanations for this observation. Probably the most significant factor is the limited amount of available experimental data on α decays. Unlike the nuclear mass case, which has on the order of a thousand available data points, the available data sets for α decay contain only a few hundred.
Table 2 Obtained mean square errors.
             Model (I)   VS formula   Model (II)
Training Set   0.337        0.265        0.258
Test Set       0.368        0.370        0.355
In Table 3 we compare our calculations with several observed data among the 164 nuclides used in the present work. For this calculation we use the central values of the measured Qα values. For comparison, the results of the VS formula are also presented. Our results show that the overall agreement is improved for Model (II) compared to Model (I), although not impressively. Again, this is partly due to the small number of samples except for the even-even nuclei. The number of data points is not large enough to expect a substantial improvement. We also observe that the gaps with the experimental data for several nuclides are larger than for the other nuclei. This may indicate nuclear structure effects which cannot be easily captured by machine learning. Nevertheless, our results show that the machine learning algorithm can give a reasonable description of the observed data as a whole.
Table 3 Observed α decay half-lives of heavy nuclei and the results of the present work. The half-life T1/2 is in the units of second. The experimental data are from Refs. [25,26].
((Z,A)*: data belonging to the training set.)
(Z,A)    QαExpt (MeV)    log10T1/2Expt    log10T1/2Model (I)    log10T1/2VS    log10T1/2Model (II)
(118, 294)* 11.81 -2.8539 -2.9245 -3.6475 -3.6192
(116, 291) 10.89 -1.5528 -1.1299 -1.3990 -1.3728
(114, 289)* 9.97 0.3802 0.7361 0.5676 0.5021
(114, 288)* 10.07 -0.1249 0.4314 -0.4126 -0.4553
(114, 287)* 10.16 -0.2840 0.1587 0.0185 -0.0025
(113, 284)* 10.11 -0.02548 0.0065 0.3821 0.3743
(110, 279)* 9.84 0.3010 0.1283 -0.2651 -0.3057
(109, 276)* 9.814 -0.14267 -0.0650 -0.0229 -0.0492
(108, 275)* 9.44 -0.5376 0.6992 0.2937 0.2275
(107, 272) 9.14 0.9128 1.2396 1.1891 1.1196
In this work, we have applied machine learning techniques to investigate nuclear α decays. To this end, we applied the widely used artificial neural network to 164 data points, of which 80% were used for training the ANN. We employed two approaches: one is the unbiased approach and the other makes use of the empirical VS formula as a reference. Our results show that the theory-guided approach gives a better description of the data. However, the improvement over the empirical VS formula is not noticeable. We ascribe this partly to the limited number of data points, and it also indicates that the effects of nuclear structure may be important for some nuclides. Encouraged by this observation, we extended our study to make predictions on unobserved nuclear α decays. Our results are in fair agreement with the previous estimates reported in Ref. [4].
Table 4 Predictions on the decay lifetimes for unobserved superheavy elements in the units of second. We refer to Ref. [4] for details on T1/2SLy4, T1/2D1S, and T1/2DD−ME2.
(Z, A)    Q (MeV)    T1/2SLy4 [4]    T1/2D1S [4]    T1/2DD−ME2 [4]    T1/2Model (I)    T1/2Model (II)
(122, 307) 12.289 4.340 × 10-4 4.514 × 10-4 3.194 × 10-4 1.257 × 10-3 5.964 × 10-4
(117, 293) 11.293 3.885 × 10-3 4.752 × 10-3 3.244 × 10-3 1.377 × 10-2 1.484 × 10-2
In summary, we confirm that machine learning can give a global description of nuclear α decays. To achieve more reliable results, however, we may need more theoretical guidance. This includes more sophisticated machine learning algorithms to overcome the limited number of data, as well as theoretical studies of nuclear structure to estimate nuclear structure effects which cannot be captured by machine learning. More rigorous studies of various approaches, such as choosing different inputs and hyperparameter scans, are therefore needed and will be reported elsewhere.
The work of M.K. and Y.O. was supported by National Research Foundation (NRF) under Grants No. NRF-2020R1A2C1007597 and No. NRF-2018R1A6A1A06024970 (Basic Science Research Program). The work of Y.-H.S. was supported by the Rare Isotope Science Project of Institute for Basic Science, funded by Ministry of Science and ICT (MSICT) and by NRF of Korea (2013M7A1A1075764) and by the National Supercomputing Center with supercomputing resources including technical support (KSC-2020-CRE-0027).
1 https://www.tensorflow.org
2 These inputs are scaled to be in the range of (0,1) in our Model (I) calculations.
H. J. Mang, Ann. Rev. Nucl. Sci. 14, 1 (1964).
B. R. Holstein, Am. J. Phys. 64, 1061 (1996).
E. Shin, Y. Lim, C. H. Hyun and Y. Oh, Phys. Rev. C 94, 024320 (2016).
Y. Lim and Y. Oh, Phys. Rev. C 95, 034311 (2017).
G. Carleo, I. Cirac, K. Cranmer, L. Daudet, M. Schuld, N. Tishby, L. Vogt-Maranto and L. Zdeborová, Rev. Mod. Phys. 91, 045002 (2019).
S. Gazula, J. W. Clark and H. Bohr, Nucl. Phys. A 540, 1 (1992).
K. A. Gernoth, J. W. Clark, J. S. Prater and H. Bohr, Phys. Lett. B 300, 1 (1993).
S. Athanassopoulos, E. Mavrommatis, K. A. Gernoth and J. W. Clark, Nucl. Phys. A 743, 222 (2004).
R. Utama, J. Piekarewicz and H. B. Prosper, Phys. Rev. C 93, 014311 (2016).
R. Utama and J. Piekarewicz, Phys. Rev. C 96, 044308 (2017).
R. Lasseri, D. Regnier, J.-P. Ebran and A. Penon, Phys. Rev. Lett. 124, 162502 (2020).
J. W. T. Keeble and A. Rios, Phys. Lett. B 809, 135743 (2020).
G. A. Negoita, J. P. Vary, G. R. Luecke, P. Maris, A. M. Shirokov, I. J. Shin, Y. Kim, E. G. Ng, C. Yang, M. Lockner and G. M. Prabhu, Phys. Rev. C 99, 054308 (2019).
W. G. Jiang, G. Hagen and T. Papenbrock, Phys. Rev. C 100, 054326 (2019).
P. S. A. Freitas and J. W. Clark, arXiv:1910.12345.
U. B. Rodríguez, C. Z. Vargas, M. Gonçalves, S. B. Duarte and F. Guzmán, J. Phys. G 46, 115109 (2019).
G. Saxena, P. K. Sharma and P. Saxena, J. Phys. G 48, 055103 (2021).
Z. M. Niu, H. Z. Liang, B. H. Sun, W. H. Long and Y. F. Niu, Phys. Rev. C 99, 064307 (2019).
M. Abadi et al, arXiv:1603.04467.
R. H. R. Hahnloser, R. Sarpeshkar, M. A. Mahowald, R. J. Douglas and H. S. Seung, Nature 405, 947 (2000).
D. P. Kingma and J. L. Ba, arXiv:1412.6980.
H. Geiger and J. M. Nuttall, Phil. Mag. Ser. 6 22, 613 (1911).
V. E. Viola Jr and G. T. Seaborg, J. Inorg. Nucl. Chem. 28, 741 (1966).
J. P. Cui, Y. L. Zhang, S. Zhang and Y. Z. Wang, Phys. Rev. C 97, 014316 (2018).
J. P. Cui, Y. Xiao, Y. H. Gao and Y. Z. Wang, Nucl. Phys. A 987, 99 (2019).
DSTS: A hybrid optimal and deep learning for dynamic scalable task scheduling on container cloud environment
Saravanan Muniswamy1 &
Radhakrishnan Vignesh1
Journal of Cloud Computing volume 11, Article number: 33 (2022) Cite this article
Containers have grown into the most dependable and lightweight virtualization platform for delivering cloud services, offering flexible sorting, portability, and scalability. In cloud container services, the scheduler component plays a critical role: it improves cloud resource utilization under diverse workloads while lowering costs. In this research, we present a hybrid optimal and deep learning approach for dynamic scalable task scheduling (DSTS) in a container cloud environment. To scale container virtual resources, we first offer a modified multi-swarm coyote optimization (MMCO) method, which improves customer service level agreements. Then, to assure priority-based scheduling, we create a modified pigeon-inspired optimization (MPIO) method for task clustering and a fast adaptive feedback recurrent neural network (FARNN) for pre-virtual CPU allocation. Meanwhile, the task load monitoring system is built on a deep convolutional neural network (DCNN), which allows for dynamic priority-based scheduling. Finally, the performance of the proposed DSTS methodology is evaluated using various test vectors, and the results are compared with existing state-of-the-art techniques.
Cloud computing, which provides the computing services required for the Internet, has become one of the most popular technologies for the economy, society, and individuals in recent years [1]. Due to the recent growth in the load of diverse and sophisticated cloud clients such as Internet of Things (IoT) devices, machine learning programmes, streaming A/V services, and cloud storage, demand for cloud services has risen substantially [2]. With the introduction of numerous virtualization technologies such as VMware, Citrix, KVM, and Xen [3], the cloud computing business has evolved rapidly in recent years. Despite their widespread use, virtualization technologies have a number of drawbacks, including high time consumption, long start-up and shutdown times, and difficult planning and migration procedures [4]. The hardware is virtualized in the conventional setup, and each virtual machine running a whole operating system supervises the computer's application activities [5]. The application process in a container communicates directly with the host kernel, but the container does not have its own kernel or hardware virtualization. Containers are therefore far lighter than typical virtual machines [6, 7].
Furthermore, the spread of microservices, self-driving vehicles, and smart infrastructure is predicted to boost cloud service growth [8]. The backbone of cloud computing is virtualization technology, which enables applications to be decoupled from the underlying infrastructure by sharing resources and executing various programmes independently [9]. Containers have grown in popularity as a novel virtualization approach in recent years, adding to conventional virtual machines (VMs) numerous promising characteristics including a shared host operating system, quicker boot times, portability, scalability, and faster deployment [10]. Containers allow apps to store all of their dependencies in a sandbox, allowing them to run independently of the platform while also increasing productivity and portability [11]. Docker, LXC, and Kubernetes are just a few of the container technologies available. Furthermore, several cloud service providers run containers on virtual machines (VMs) to increase container isolation, performance, and system management [12, 13]. Container technology is gaining traction among developers, and it is now being used to deploy a wide range of microservices and applications, including smart devices, IoT, and fog/edge computing [14]. As a consequence, to fulfil the increased demand, numerous cloud service providers have begun to provide container-based cloud services; Google Container Engine, Amazon Elastic Container Service, and Azure Container Service are examples. The cloud computing paradigm is being revolutionised by container technology [15]. From the cloud service provider's point of view, running containerized applications introduces an abstraction layer that deals with cluster management. The primary container orchestration platforms for automating, measuring, and controlling container-based infrastructure are Docker Swarm and Google Kubernetes [16, 17]. A container cluster's overall structure comprises management nodes and worker nodes. The management nodes are responsible for the cluster and for the container worker nodes [18]. In addition, the manager keeps track of the cluster's state by verifying the nodes' status on a regular basis. The scheduling components, which are responsible for spreading loads among cluster nodes and controlling the container life cycle [19], play a critical part in container orchestration. Depending on the technology, container scheduling may take many different shapes. As a result, the primary goal of container scheduling is to get the containers started on the ideal host and link them together [20].
To improve matters further, a dynamic scalable task scheduling (DSTS) approach is offered for cloud container environments. The main contributions of our proposed DSTS approach are given as follows:
To provide a dynamic scalable task scheduling system for container cloud environments in order to reduce the makespan while using fewer computing resources and containers than current algorithms.
To offer a unique clustered priority-based task scheduling technique that improves the scheduling system's adaptability to the cloud environment while also speeding convergence.
Create a task load monitoring system that allows for dynamic scheduling depending on priority.
Using various test scenarios and metrics, assess the performance of the suggested dynamic scalable task scheduling.
The remainder of the paper is organized as follows: the next section summarises recent work on job scheduling for cloud containers. We go through the problem methodology and system design in Problem methodology and system design section. The working of the proposed dynamic scalable task scheduling (DSTS) model is described in Proposed methodology section. Simulation results and analysis section discusses the simulation findings and comparison analyses. Finally, Conclusion section brings the paper to a close.
Many studies for scalable task scheduling for cloud containers have been suggested in recent years all around the globe. Table 1 summarises and tabulates the literature with research gaps in many categories.
Table 1 Summary of research gaps
Zhao et al. [21] studied how to improve today's cloud services by reviewing the workings of next-generation container scheduling projects. In particular, this work creates and analyzes a new model that respects both workload balance and performance. Unlike previous studies, the model uses statistical techniques to combine the trade-off between load balance and utility performance into a single optimization problem and solve it effectively. The difficult element is that certain sub-problems are more complicated, necessitating the use of heuristic guidance. Liu et al. [22] suggested a multi-objective container scheduling technique based on CPU consumption per node, memory usage across all nodes, time to transport images over the network, container-node connections, and container clustering, all of which impact container application performance. The authors provide the metric techniques for all the important components, set the relevant qualifying functions, and then combine them in order to pick suitable nodes for the placement of the containers to be allotted in the scheduling process. Lin et al. [23] suggested a multi-objective optimization model for container-based microservice scheduling that uses an ant colony method to tackle the problem. The method takes into account not only the physical nodes' utilization of computing and storage resources, but also the number of multi-objective requirements and the failure rate of physical nodes. These approaches make use of the candidate algorithms' quality assessment skills to assure the correctness of pheromone updates and to increase the likelihood of utilising multi-objective heuristic information to choose the optimum path. Adhikari et al. [24] suggested an energy-efficient container-based scheduling (EECS) technique for the fast scheduling of various IoT and non-IoT tasks. An accelerated particle swarm optimization (APSO) method with minimum latency is applied to determine the optimum container for each task. Another significant duty in the cloud environment is resource planning in order to make the greatest use of resources on cloud servers. Ranjan et al. [25] showed how to design energy-efficient operations in resource-constrained data centres using container-based virtualization. Container policies give users the freedom to get vital resources suited to their own needs.
Chen et al. [26] suggested a functional restructuring system to control the operating sequence of each container in order to achieve maximum performance gain, as well as an adaptive fair-sharing system to effectively share the container-based virtualized environment. They also suggested a checkpoint-based system, which is particularly useful for load balancing. Hu et al. [27] suggested ECSched, an improved container scheduler for scheduling concurrent requests over several clusters with varied resource restrictions. They define the container scheduling problem as a minimum-cost flow problem (MCFP) and represent container requirements using a specialised graph data structure. ECSched makes it possible to design a flow network based on a set of requests while allowing MCFP algorithms to schedule the batched requests online. They evaluate ECSched in a variety of test clusters and run large-scale scheduling-overhead simulations to see how it performs. Experiments demonstrate that ECSched is superior at container scheduling in terms of container performance and resource efficiency, and that large clusters only introduce minor and acceptable scheduling overheads.
Rajasekar et al. [28] provided a scheduling and resource strategy for the VAS operating system. Infrastructure-as-a-Service (IaaS) suppliers provide computing, networking, and storage services. As a result, the VAS design may effectively schedule this burden at important periods utilising a range of features and quality of service (QoS). The method is scalable and dynamic, altering the load and base as needed. KCSS, a Kubernetes Container Scheduling Strategy, was introduced by Menouer et al. [29]. KCSS aims to optimise the scheduling of the many containers that users submit online, in order to improve performance for customers and cloud providers in terms of makespan and energy usage. Single-criterion-based scheduling is less efficient because of its limited view of the cloud infrastructure and of user demands, so KCSS introduces multi-criteria node selection. A cache-aware scheduling approach based on neighbourhood search was suggested by Li et al. [30]. Job categorization, node resource allocation, node clustering, and cache target scheduling are the four sub-problems of this paradigm. Jobs are separated into three sorts, and then various resources are allocated to the nodes depending on how well they perform; finally, jobs are placed once nodes with comparable functions have been clustered. Ahmad et al. [31] looked at a variety of current container scheduling approaches in order to continue their study in this hot topic. The research is based on mathematical modelling, heuristics, meta-heuristics, and machine learning, and it divides scheduling approaches into four groups depending on the optimization algorithm used to construct the mapping. Then, based on performance measurements, they examine and identify important benefits and difficulties for each class of scheduling approach, as well as the main hardware issues. Finally, this study discusses how successful research might improve the future potential of innovative container technologies. The container scheduling strategy provided by Rausch et al. [32] helps to make good use of the edge infrastructure on these sites. They also illustrate how to modify the weights of scheduling constraints automatically to optimise high-level performance objectives such as task execution time, connection use, and cloud performance costs. They implement a prototype on the Kubernetes container orchestration system and install bridges on the edges where it was constructed. Utilizing hints given by the test's frequent loads, they evaluate the system using micro-organized simulations in different infrastructure situations.
Problem methodology and system design
Learning automata are used to propose a self-adaptive task scheduling algorithm (ADATSA) [33]. In conjunction with the idle state of resources and the running state of tasks in the current environment, the algorithm efficiently leverages the reinforcement-learning capacity of learning automata and achieves an effective reward-penalty scheme for scheduling tasks. A task load monitoring framework is used for real-time monitoring of the environment and for scheduling decision feedback, and a buffer queue is established for priority scheduling. Using the Kubernetes platform, the researchers simulated various scheduling scenarios to compare ADATSA with the non-automata-based algorithm PSOS, the learning-automata-based algorithm LAEAS, and the K8S scheduling engine with respect to resource imbalance, resource residual degree, and QoS.
In general, cloud computing environments require great portability, and containerisation ensures environment compatibility by encapsulating applications together with their libraries, configuration files, and other requirements, allowing consumers [34] to quickly migrate and set up programmes across clusters.
However, there are still certain obstacles to be solved in this project. Furthermore, the research literature [21,22,23,24,25,26,27,28,29,30,31,32,33, 35] lacks methods and models that enable dynamic scalability, in which consumers get QoS and good performance [36] while using the fewest cloud resources possible, particularly for containerized services hosted on the cloud.
Cloud computing services benefit from dynamic scalability, which provides on-demand, timely, and dynamically changeable computing resources.
However, since the container cloud environment is highly changeable and unpredictable, the environment model derived from static reward-penalty components might not be optimal. The ADATSA algorithm does not take into account the diversity of cloud resources. Users' demands for cloud resources are often diverse, and user tasks are typically completed by a combination of heterogeneous cloud services.
The research gaps gathered above motivate the proposed methodology. A hybrid optimal and deep learning approach is proposed for dynamic scalable task scheduling (DSTS). The main contributions are listed as follows:
A modified multi-swarm coyote optimization (MMCO) algorithm is used for scaling the containers virtual resources which enhance customer service level agreements.
A modified pigeon-inspired optimization (MPIO) algorithm is proposed for task clustering and the fast adaptive feedback recurrent neural network (FARNN) is used for pre-virtual CPU allocation to ensure priority based scheduling.
The task load monitoring mechanism is designed based on deep convolutional neural network (DCNN) which achieves dynamic scheduling based on priority.
System design of proposed methodology
Before being deployed to the cloud, programmes must be imaged and encapsulated in the container cloud platform. The purpose of task scheduling is to assign container instances to the most appropriate nodes in order to make the most effective use of available resources. The problem of mapping relationships between containers and nodes may be represented as task scheduling in the container cloud. Figure 1 depicts the system architecture of the proposed dynamic scalable task scheduling (DSTS) paradigm. The DSTS model includes a number of processes, including container virtual resource scaling, task clustering, pre-virtual CPU allocation, and task load monitoring.
Dynamic scalable task scheduling (DSTS) model
Proposed methodology
In this section, we describe the following process such as containers virtual resources scaling, task clustering, pre-virtual CPU allocation and task load monitoring mechanism.
Container virtual resources scaling using MMCO algorithm
The goal of cloud service level agreements (SLAs) is for the service provider and the customer to have a common understanding of priority areas, duties, warranties, and services. An SLA specifies the responsibilities of the parties participating in the cloud setup, as well as the timeframe for reporting or resolving system problems. As more firms depend on external suppliers for their vital systems, programmes, and data, service level agreements are becoming more important. A cloud SLA assures that cloud providers satisfy specific enterprise-level criteria and give clients a clearly defined deliverable. If the provider fails to satisfy the requirements of the guarantee, it may be subject to financial penalties such as service time credits. The modified multi-swarm coyote optimization (MMCO) method is used to scale virtual resources in containers, improving customer service level agreements. In MMCO, the coyote population is split into packs, each consisting of the same, constant number of coyotes; the algorithm's entire population is therefore Fd × Fq, with Fd ∈ F∗ and Fq ∈ F∗. Furthermore, the social condition of the qth coyote of the dth pack at the ath instant of time is defined as
$${SOC}_q^{d.a}=\overrightarrow{b}=\left({b}_1,{b}_2,..{b}_h\right)$$
where C denotes the number of decision variables. The adaptation of the coyote to its environment (its fitness) is \({FIT}_q^{d.a}\in J\). The social condition of the qth coyote of the dth pack in the pth dimension is initialized as
$${SOC}_{q.p}^{d.a}= {Ua}_p+{j}_p\cdot\left({na}_p-{Ua}_p\right)$$
where Uap and nap stand for the lower and upper limits of the pth decision variable, respectively, and jp is a real random number generated within the bounds [0, 1] using a uniform probability distribution.
The fitness of each of the Fq × Fd coyotes in the environment is evaluated according to its social condition:
$${FIT}_q^{d.t}=m\left({SOC}_q^{d.a}\right)$$
In the case of a minimization problem, the alpha of the dth pack at the ath instant in time is
$${Alpha}^{d.A}=\left\{{SOC}_q^{d.A}\left|{\arg}_{q=\left\{1,2.\dots {f}_d\right\}}\min l\left({SOC}_q^{d.A}\right)\right.\right\}$$
MMCO integrates all of the coyote's information and calculates the cultural propensity of each pack:
$${Cul}_p^{d.A}=\left\{\begin{array}{l}{z}_{\frac{\left({F}_T+1\right)}{2}.i}^{d.A}\kern2.52em {F}_d\; is\; odd\\ {}\frac{z_{\frac{Ft}{2}.i}^{d.A}+{z}_{\left(\frac{F_t}{2}+1\right).p}^{d.A}}{2}. otherwise\end{array}\right.$$
where Z^{d.A} denotes the ranked social conditions of all coyotes of the dth pack at the Ath instant of time, for p in the range [1, C]. At the same time, coyotes are influenced by the alpha (δ1) and by the other coyotes in the pack (δ2),
$${\delta}_1={Alpha}^{d.A}-{SOC}_{qj_1}^{d.A}$$
$${\delta}_2={Cult}^{d.A}-{SOC}_{qj_2}^{d.A}$$
The alpha influence δ1 represents the cultural difference between a random coyote of the pack, qj1, and the pack leader, whereas the pack influence δ2 represents the cultural difference between a random coyote qj2 and the cultural tendency of the pack. In the MMCO algorithm, during initialization, the swarm members are randomly seeded into the search space.
$${a}_{s.p}={U}_p+{j}_{s.p}\times \left({X}_p-{U}_p\right)$$
where as.p represents the sth swarm member in the pth dimension, Up and Xp are the lower and upper bounds of the solution space, respectively, and js.p is a uniformly generated random number in [0, 1].
$$T=\arg \min \left\{l\left(\overrightarrow{a}\right)\right\}$$
To generate the multi-swarm from this point, two different equations may be used.
$${K}_{A.p}={a}_{s.p}+\alpha \times \left({T}_p-{a}_{o.p}\right)$$
$${K}_{A.p}={a}_{s.p}+\alpha \times \left({a}_{s.p}-{a}_{o.p}\right)$$
where the indices s and o must not be identical and α is a scaling factor. The equation used to update each dimension of a swarm member is an important part of the process. The working of the container virtual resource scaling process is given in Algorithm 1.
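Algorithm 1 itself is not reproduced in this excerpt; the following is only a rough sketch, under our own simplifying assumptions, of how the update equations above (pack alpha, cultural tendency, and the δ1/δ2 influences) might be coded for a minimization problem.

```python
import numpy as np

rng = np.random.default_rng(0)

def mmco_step(packs, fitness, lb, ub):
    """One MMCO-style update of the coyotes' social conditions.

    packs: array of shape (n_packs, n_coyotes, C) -- C decision variables per coyote.
    """
    n_packs, n_coyotes, C = packs.shape
    for d in range(n_packs):
        fits = np.array([fitness(c) for c in packs[d]])
        alpha = packs[d, fits.argmin()]          # best coyote of the pack (minimization)
        cult = np.median(packs[d], axis=0)       # cultural tendency: per-dimension median
        for q in range(n_coyotes):
            qj1, qj2 = rng.integers(0, n_coyotes, size=2)
            delta1 = alpha - packs[d, qj1]       # influence of the alpha
            delta2 = cult - packs[d, qj2]        # influence of the pack culture
            cand = packs[d, q] + rng.random() * delta1 + rng.random() * delta2
            cand = np.clip(cand, lb, ub)
            if fitness(cand) < fits[q]:          # greedy acceptance of the new condition
                packs[d, q] = cand
    return packs

# Toy usage: 2 packs of 5 coyotes in 3 dimensions, minimizing the sphere function.
packs = rng.random((2, 5, 3))
packs = mmco_step(packs, lambda x: float(np.sum(x**2)), lb=0.0, ub=1.0)
print(packs.shape)
```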
Task clustering using modified pigeon-inspired optimization (MPIO) algorithm
Clustering is a procedure that divides tasks into different categories depending on application demand, such as load-balancing clusters, high-availability clusters, and compute clusters. The primary emphasis of load-balancing clusters is resource use on the host system, particularly the virtual machine. These clusters are utilised to balance constant and dynamic loads, as well as to move an application from one cloud provider to another. The second kind is fault-tolerant high-availability clusters, which are built to survive point failures. For task clustering, we use a modified pigeon-inspired optimization (MPIO) algorithm. The activation function ties the information of the hidden state from previous time steps to the input of the current time step and provides it to the input gate as follows:
$${H}_r=\upsilon \Big({X}_r{K}^H+{t}_{r-1}{v}^H+{b}_H\Big)$$
where Hr is the recall (forget) gate, Xr is the input at time step r, tr−1 represents the hidden state of the previous time step, KH is the input-layer weight, vH is the recurrent weight of the hidden state, and bH is the bias of the input layer. The following are the equations for the input gate and the candidate cell state:
$${i}_r=\upsilon \left({X}_r{K}^i+{t}_{r-1}{v}^i+{b}_i\right)$$
$${\overset{\sim }{E}}_s=\tanh \left({X}_r{Z}^e+{t}_{r-1}{v}^e+{b}_e\right)$$
$${E}_r={E_{r-1}}^{\ast }{H}_r+{i_r}^{\ast }{\overset{\sim }{E}}_s$$
The output gate determines the hidden state using the sigmoid activation function. To create the output, the updated cell state is passed through a tanh function and multiplied by the output gate as follows.
$${Z}_r=\upsilon \Big({X}_r{X}^Z+{t}_{r-1}{v}^Z+{b}_Z\Big)$$
$${t}_r={Z_r}^{\ast}\tanh \left({E}_r\right)$$
The update gate plays a role similar to the combined forget and input gates of the LSTM. The current input is multiplied by its weight, the hidden state at the previous time step is multiplied by its recurrent weight, and the two contributions are merged through a sigmoid function that maps the result to a value between zero and one:
$${L}_r=\upsilon \left({X}_r{X}^L+{d}_{r-1}{v}^l+{b}_l\right)$$
where L_r denotes the update gate, X_r is the input vector at the given time step, and d_{r−1} is the previous output of the unit. X^L is the weight of the input layer, v^l is the recurrent weight, and b_l is the bias of the input layer. The output of the reset gate is as follows:
$${s}_r=\upsilon \left({X}_r{K}^s+{t}_{r-1}{v}^S+{b}_S\right)$$
The reset gate is employed in the new memory cell to accumulate information from the preceding step, which allows the network to retain only the relevant earlier events in the sequence. The current memory content is as follows:
$${\overset{\sim }{E}}_r=\tanh \left({X}_rK+v\left({s}_r\Theta {d}_{r-1}\right)\right)$$
$${d}_r={L}_r\Theta {d}_{r-1}+\left(1-{L}_r\right)\Theta \upsilon \left(\overset{\sim }{E_r}\right)+{b}_d$$
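Read together, the update gate L_r, reset gate s_r, candidate memory Ẽ_r, and hidden state d_r above form a GRU-style recurrent cell. The following numpy sketch shows one such step under that reading; the weight shapes, random initialization, and the use of the standard GRU blend (without applying the extra sigmoid of the last equation to the candidate) are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
n_in, n_hid = 8, 16                                     # assumed layer sizes
XL, vl, bl = rng.normal(size=(n_in, n_hid)), rng.normal(size=(n_hid, n_hid)), np.zeros(n_hid)
Ks, vS, bS = rng.normal(size=(n_in, n_hid)), rng.normal(size=(n_hid, n_hid)), np.zeros(n_hid)
K,  v      = rng.normal(size=(n_in, n_hid)), rng.normal(size=(n_hid, n_hid))
bd = np.zeros(n_hid)

def gru_step(x_r, d_prev):
    L_r = sigmoid(x_r @ XL + d_prev @ vl + bl)          # update gate
    s_r = sigmoid(x_r @ Ks + d_prev @ vS + bS)          # reset gate
    E_tilde = np.tanh(x_r @ K + (s_r * d_prev) @ v)     # candidate memory content
    d_r = L_r * d_prev + (1.0 - L_r) * E_tilde + bd     # new hidden state
    return d_r

d = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):                    # toy 5-step input sequence
    d = gru_step(x, d)
print(d.shape)
```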
Each pigeon has a specific scenario when it comes to the optimization challenge.
$${X}_i=\left[{x}_{i1},{x}_{i2},\dots {x}_{ic}\right]$$
where c is the dimension of the problem to be solved and i = 1, 2, …, M, with M the size of the pigeon population; each pigeon also has a velocity, which is stated as follows:
$${u}_i=\left[{U}_{i1},{U}_{i2},\dots {U}_{im}\right]$$
First, the position of each pigeon in the search region and its velocity are initialized. Then, as the number of iterations grows, the velocity u_i is updated by repeating the following step:
$${u}_i(r)={u}_i\left(r-1\right).{e}^{- sr}+ Rand.\left({X}_{FBest}-{X}_i\left(r-1\right)\right)$$
where r is the current iteration number and s is the map-and-compass factor. The next position x_i is then calculated as follows:
$${x}_i(r)={x}_i\left(r-1\right)+{u}_i(r)$$
As a result, in the landmark operator phase, the position at the rth iteration can be updated by
$${X}_i(r)={X}_i\left(r-1\right)+ Rand.\left({X}_{Center}\left(r-1\right)-{X}_i\left(r-1\right)\right)$$
$${X}_{Center}(r)=\frac{\sum \limits_{i=1}^m{X}_i(r)\cdot fitness\left({X}_i(r)\right)}{m_p\sum \limits_{i=1}^m fitness\left({X}_i(r)\right)}$$
$${m}_q(r)= ceil\left(\frac{m_p\left(r-1\right)}{2}\right)$$
where r is the current iteration number, r = 1, 2, …, R_Max, with R_Max the number of iterations in which the landmark operator is active. The fitness to be optimized is defined from the objective function as:
$$fitness\left({X}_j(r)\right)={H}_{Max}\left({X}_j(r)\right)$$
$$fitness\left({X}_i(r)\right)=\frac{1}{H_{Min}\left({X}_i(r)\right)+\varepsilon }$$
After each iteration the pigeons' positions move closer to the center point, until the final iteration R_Max is reached. Algorithm 2 describes the operation of the task clustering process utilising the MPIO algorithm.
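A compact Python sketch of the two MPIO phases described above is given below: the map-and-compass operator updates velocities and positions toward the current best pigeon, and the landmark operator halves the flock and pulls the survivors toward a fitness-weighted center. The toy objective, the parameter values, and the use of a plain fitness-weighted mean for the center (the equation above carries an extra 1/m_p factor) are assumptions for illustration.

```python
import numpy as np

def objective(x):                         # toy minimization problem
    return np.sum(x ** 2, axis=-1)

rng = np.random.default_rng(2)
M, C = 30, 5                              # flock size, problem dimension
s_factor = 0.2                            # map-and-compass factor
X = rng.uniform(-5, 5, size=(M, C))       # positions
U = rng.uniform(-1, 1, size=(M, C))       # velocities

# Phase 1: map-and-compass operator
for r in range(1, 51):
    best = X[np.argmin(objective(X))]
    U = U * np.exp(-s_factor * r) + rng.random((M, 1)) * (best - X)
    X = X + U

# Phase 2: landmark operator
m_p = M
for r in range(1, 21):
    m_p = int(np.ceil(m_p / 2))                       # keep the better half
    X = X[np.argsort(objective(X))][:m_p]
    fit = 1.0 / (objective(X) + 1e-12)                # fitness for a minimization task
    center = (X * fit[:, None]).sum(0) / fit.sum()    # fitness-weighted center
    X = X + rng.random((m_p, 1)) * (center - X)

print("best value:", objective(X).min())
```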
Pre-virtual CPU allocation using FARNN technique
In cloud computing, up-to-date virtual processor scheduling techniques are essential to hide physical resources from running programs and to reduce the performance overhead of virtualization. However, the different QoS requirements of cloud applications make it difficult to evaluate and predict the behavior of virtual processors. Based on the evaluation process, a specific scheduling scheme regulates virtual machine priorities when processing I/O requests for equitable distribution. Our scheme evaluates the CPU intensity and I/O intensity of virtual machines, making it effective across a wide range of tasks. Here we apply a fast adaptive feedback recurrent neural network (FARNN) in the pre-virtual CPU allocation phase to ensure priority-based scheduling.
The FARNN methodology is a set of computing techniques that learn models to predict outcomes by simulating the human brain's problem-solving process. A standard FARNN has three network layers: the input layer, the hidden layer, and the output layer. For occupancy forecasting systems, the input layer typically contains the MAC addresses recorded in the current time interval. The format of the MAC-address input vector at time T is as follows:
$$Y(T)=\left\{{y}_1,{y}_2,.\dots, {y}_j,\dots, {y}_l\right\}$$
At the current time, the collection of all MAC addresses is denoted as Y(T), l stands for the total number of MAC addresses observed in the period, and y_j represents the jth detected MAC address. The hidden-layer neurons are computed from the input and the network weights.
$$h(T)={Z_1^t}^{\ast }Y(T)+a$$
The output layer combines the results of the hidden layer and transforms them.
$$X(T)=f\left({Z_2^t}^{\ast }h(T)\right)=f\left({Z_2^t}^{\ast}\left({Z_1^t}^{\ast }Y(T)+a\right)\right)$$
The hidden-layer output is denoted as h(T) and the output-layer output is referred to as X(T), respectively. The weight from the input layer to the hidden layer is denoted as \({Z}_1^t\) and the weight from the hidden layer to the output layer is stated as \({Z}_2^t\), respectively. The activation function is indicated as f(.) and the random bias in the output layer is denoted as a. In the fast adaptive scheme, a feature layer is first inserted between the input layer and the hidden layer to determine the transfer probability of each MAC address. Because the present occupancy state depends on the past occupancy state, the transfer probability and the transfer probability matrix can be used to model such behaviour. Assuming that an occupant's state at a place is either "in" or "out", the transfer matrix may be stated as follows:
$$tpm\left|{}_{yK}=\left[\begin{array}{l}{y}_K^{j-0}\kern0.6em {y}_K^{j-j}\\ {}{y}_K^{0-0}\kern0.6em {y}_K^{0-j}\kern0.24em \end{array}\right]\right.$$
The transition probability matrix of one load is denoted as tpm|_{yK}. In the transfer matrix, \({y}_K^{j-0}\) and \({y}_K^{j-j}\) indicate the observed probabilities that a single occupant whose state is "in" at the present period will be "out" and "in" at the following period, respectively; likewise, \({y}_K^{0-0}\) and \({y}_K^{0-j}\) signify the observed probabilities that an occupant whose state is "out" in the present time interval will be "out" and "in" in the next time interval. These probabilities may be computed using Bayesian models and the observed conditional probability. For example,
$${y}_K^{j-j}=p\left( state\kern0.34em observed=j\left| state\kern0.34em observed=j\right.\right)$$
The one MAC address occupied probability is
$${y}_K^{j-j}=\frac{\sum {M}_{1-1}}{\sum {M}_{1-1}+\sum {M}_{1-0}}$$
$${y}_K^{0-0}=\frac{\sum {M}_{0-0}}{\sum {M}_{0-0}+\sum {M}_{0-1}}$$
where M_{1−1} is the frequency with which the occupancy state changed from "in" to "in" and M_{1−0} is the frequency with which it changed from "in" to "out", respectively. Similarly, M_{0−0} and M_{0−1} denote the frequencies with which the occupancy state changed from "out" to "out" and from "out" to "in", respectively. As the estimated frequencies change, the training database is automatically updated, and the transfer probabilities are adjusted at the next estimate as the training database is refreshed. Because each MAC address in the load is assigned these probabilities, each MAC address may be represented as follows:
$${y}_K=\left\{{y}_K^{mac},{y}_K^{0-j},{y}_K^{j-j}\right\}$$
Update the input vector in the following,
$$Y(T)=\left\{{y}_1^{mac},{y}_1^{0-j},{y}_1^{j-j},{y}_2^{mac},{y}_2^{0-j},{y}_2^{j-j},\dots {y}_K^{mac},{y}_K^{0-j},{y}_K^{j-j}\right\}$$
After that, the feature layer may be structured as follows:
$$f(T)=\left\{Y(T),Y\left(T-1\right),Y\left(T-2\right),.\dots Y\left(T-\Delta T\right)\right\}$$
The length of the time window is ΔT, and f(T) is the vector of the feature layer at time T. Assuming the number of MAC records in the time window is K, then
$$f(T)=\left\{{y}_1^{mac},{y}_1^{0-j},{y}_1^{j-j},{y}_2^{mac},{y}_2^{0-j},{y}_2^{j-j},\dots {y}_K^{mac},{y}_K^{0-j},{y}_K^{j-j}\right\}$$
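As an illustration of how the per-address probabilities and the feature vector above could be assembled, the following sketch counts the observed in/out transitions of one MAC address, forms y^{j-j} and y^{0-0} as frequency ratios, and packs one {y^mac, y^{0-j}, y^{j-j}} entry. The binary encoding of the occupancy history and the numeric encoding of the MAC address are assumptions made only for this example.

```python
import numpy as np

def transition_probs(states):
    s = np.asarray(states)
    prev, nxt = s[:-1], s[1:]
    M11 = np.sum((prev == 1) & (nxt == 1))
    M10 = np.sum((prev == 1) & (nxt == 0))
    M00 = np.sum((prev == 0) & (nxt == 0))
    M01 = np.sum((prev == 0) & (nxt == 1))
    y_jj = M11 / max(M11 + M10, 1)        # P(in -> in)
    y_00 = M00 / max(M00 + M01, 1)        # P(out -> out)
    return y_jj, y_00

observed = [1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0]    # toy occupancy history for one MAC (1 = "in")
y_jj, y_00 = transition_probs(observed)
y_0j = 1.0 - y_00                               # P(out -> in)

mac_hash = 0.42                                 # assumed numeric encoding of the MAC address
feature_entry = [mac_hash, y_0j, y_jj]          # {y^mac, y^{0-j}, y^{j-j}} for one address
print(feature_entry)
```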
At regular intervals, the context layer retains the feedback signal of the hidden layer, acting as a short-term memory that emphasizes temporal dependency. The output of the hidden layer may be structured as follows:
$$h(T)=g\left({\omega}^1D\left(T-1\right)+{\omega}^2\left(f(T)\right)\right)$$
The output of the context layer is
$$D\left(T-1\right)=\alpha D\left(T-2\right)+h\left(T-1\right)$$
where h(T) is referred to as the output vector of the hidden layer at time interval T, and D is the output vector of the context layer. ω1 is stated as the connection weight from the context layer to the hidden layer, and ω2 is the connection weight from the feature layer to the hidden layer. α is the self-connected feedback gain factor. g(·) represents the hidden layer's activation function, which is set to
$$g(y)=\frac{1}{1+{E}^{-y}}$$
The following is an example of a signal change from the Hidden film to the Output film:
$$x(T)={\omega}^3h(T)={\omega}^{3\ast }g\left({\omega}^1D\left(T-1\right)+{\omega}^2f(T)\right)$$
where x(T) is the output variable at period T, which in this case is the expected occupancy. ω3 is the connection weight from the hidden layer to the output layer. The following is the cost function for updating and learning the connection weights:
$$e=\sum \limits_{T=1}^M{\left[x(T)-c(T)\right]}^2$$
c(T) is the actual occupancy output, and M is the number of training time samples. Algorithm 3 describes the process of pre-virtual CPU allocation.
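The forward pass described by the feature, context, hidden, and output equations above can be read as an Elman-style recurrence. The numpy code below is a sketch of that reading under assumed layer sizes and randomly initialized weights; it is not the authors' implementation.

```python
import numpy as np

def g(y):                                   # logistic activation g(y) = 1/(1 + e^-y)
    return 1.0 / (1.0 + np.exp(-y))

rng = np.random.default_rng(3)
n_feat, n_hid = 12, 8                              # assumed layer sizes
w1 = rng.normal(scale=0.1, size=(n_hid, n_hid))    # context -> hidden
w2 = rng.normal(scale=0.1, size=(n_feat, n_hid))   # feature -> hidden
w3 = rng.normal(scale=0.1, size=(n_hid, 1))        # hidden  -> output
alpha = 0.5                                        # self-connected feedback gain

D = np.zeros(n_hid)                                # context state D
h_prev = np.zeros(n_hid)
for f_T in rng.random((10, n_feat)):               # toy stream of feature vectors f(T)
    D = alpha * D + h_prev                         # context layer: D(T-1) = alpha*D(T-2) + h(T-1)
    h = g(D @ w1 + f_T @ w2)                       # hidden layer h(T)
    x = h @ w3                                     # output x(T): predicted occupancy
    h_prev = h
print(float(x[0]))
```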
Task load monitoring using DCNN method
The task load monitoring function consists of five steps: 1) data collection, 2) data filtering, 3) data aggregation, 4) data analysis, and 5) issuing warnings and reports. Processing time, CPU speed from the CPU probe, memory use, memory retrieval delay, power consumption from the power analysis, frequency, latency, and delay are all examples of information or quantities that the monitoring system should gather through various queries. Essential features of the gathered data, such as structure, tactics, updating approaches, and kinds, are considered to classify it. We employ a deep convolutional neural network (DCNN) to measure the task load in this article. In a DCNN, the convolutional layer contains numerous filters that correspond to interesting local patterns. The result is forwarded to a non-linear activation function to generate a feature map, and the constructed feature map is then adjusted (down-sampled) to reduce the number of computed values. Stacking the convolutional layers at the DCNN's front end first extracts the local attributes from the source data and then gradually increases the level of abstraction as the next layers are added. A well-trained layer produces a new representation of the original input that can be classified most successfully; for this purpose, the convolutional layer is also called the feature sampling layer. A stack of several fully connected layers is attached at the end of the convolutional layers. For the training set samples,
$$n=\left\{\left({y}^{(j)},{x}^{(j)}\right)\right\},\kern0.48em j=1,2,.\dots, n$$
Each sample consists of a feature vector y^(j) and an associated label x^(j). By introducing the loss function, we may obtain the error. As demonstrated in the following equation, the loss function comprises an overall error term and a regularization term.
$$I\left(z,a\right)\approx \frac{1}{m}\sum \limits_{j=1}^mk\left({H}_{\left\{z,a\right\}}\left({y}^{(j)},{x}^{(j)}\right)\right)+\lambda \sum \limits_{j,i}{z}_{j,i}^2$$
Here, z represents the weights and a denotes the bias values, respectively. The batch size is represented as m. The hyperparameter λ regulates and controls the contribution of the regularization term. The dissimilarity between the generated estimate and the real value is measured by the mean squared error, which is worded like this:
$$D=\frac{1}{2M}\sum \limits_y{\left\Vert x(y)-b(y)\right\Vert}^2$$
When calculating the gradients, the coefficient 1/2 is a normalization factor that cancels out, so further derivatives can be simplified without causing side effects. The weights and offsets can then be modified to reduce the loss according to the slope of the gradient.
$$\Delta \omega =\left(b(y)-x(y)\right){\sigma}^{\hbox{'}}(w)y$$
$$\Delta a=\left(b(y)-x(y)\right){\sigma}^{\hbox{'}}(w)$$
In the neuron, the input is denoted as w; the activation function is represented as σ; the change in the weight is referred as Δω and the variation of the offset is stated as Δa respectively.
$${\omega}^{\left(m+1\right)}={\omega}^{(m)}-{\frac{\eta }{M}}^{\ast}\Delta \omega$$
$${a}^{\left(m+1\right)}={a}^{(m)}-{\frac{\eta }{M}}^{\ast}\Delta a$$
The learning rate is represented as η; the mth iteration weight and offset are denoted as ω(m) and a(m), respectively; and the total number of loads is represented as M. In Algorithm 4, we describe the working function of task load monitoring using the DCNN method.
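The weight and offset updates above reduce to an ordinary gradient-descent step on the squared-error loss. The single-neuron numpy sketch below mirrors those equations; the toy data, learning rate, and number of iterations are assumptions chosen only to show the update rule in action, not parameters from the paper.

```python
import numpy as np

def sigmoid(w):
    return 1.0 / (1.0 + np.exp(-w))

rng = np.random.default_rng(4)
M = 32                                     # batch size
y = rng.random(M)                          # inputs
x = 0.8 * y + 0.1                          # toy targets x(y)
w, a = 0.0, 0.0                            # weight and offset
eta = 0.5                                  # learning rate

for _ in range(200):
    z = w * y + a                          # neuron input
    b = sigmoid(z)                         # neuron output b(y)
    err = b - x
    loss = 0.5 / M * np.sum(err ** 2)      # D = 1/(2M) * sum ||x(y) - b(y)||^2
    dsig = b * (1.0 - b)                   # sigma'(z)
    delta_w = np.sum(err * dsig * y)       # (b(y) - x(y)) * sigma'(z) * y, summed over the batch
    delta_a = np.sum(err * dsig)
    w -= eta / M * delta_w                 # w^(m+1) = w^(m) - (eta/M) * delta_w
    a -= eta / M * delta_a                 # a^(m+1) = a^(m) - (eta/M) * delta_a

print(round(loss, 5))
```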
Simulation results and analysis
In this part, we conduct experiments to test and assess the proposed dynamic scalable task scheduling (DSTS) model, and the simulation results are compared with current state-of-the-art models including ADATSA, LAEAS, PSOS, and the K8S planning engine.
To overcome the repeated scheduling issue, a self-adapting task scheduling algorithm (ADATSA) is used [33]. The approach reduces the reliance of existing dynamic planning strategies on the container cloud architecture and improves the connection between jobs and their runtime environments.
In the cloud system, the Learning automata based energy-aware scheduling (LAEAS) algorithm [37] is employed for real-time job planning.
In a container cloud context, the performance-based service oriented scheduling (PSOS) [38] has been utilised to handle planning problems such as average latency of service instances, resource consumption, and balancing.
Unlike Borg and Omega, which were built as completely Google-internal systems, the Kubernetes (K8S) scheduling engine [39] is open source.
Kubernetes (v1.16.2) was used to create an experimental setup on 53 servers with the same specifications as the experimental platform, comprising 3 master and 50 slave nodes. Furthermore, we utilised Python 3.7 as the major programming language for the quality analysis implementation, with Anaconda Navigator integration and Spyder and Jupyter as execution environments. The tasks in this simulation have been separated into five categories: task 1, task 2, task 3, task 4, and task 5. In task 1, we use static scheduling with CPU-oriented master and slave resources of 128 cores and 64 cores, respectively. In task 2, we use memory-oriented master and slave resources of 256 GB and 128 GB, respectively, to create dynamic scheduling. In task 3, we use time-based static scheduling with disc-oriented master and slave resources of 1000 GB each. Task 4 allows us to configure time-based dynamic scheduling with bandwidth-oriented master and slave resources of 10 Gbps each. With resource-non-oriented master and slave counts of 3 and 50, we examine test quality in task 5, where resource-non-oriented applications are ones whose resource needs are composite and which have no preference for a particular resource. Table 2 summarises the task partitioning and resource requirements. We employed repeated distributions to mimic a large-scale application distribution due to a shortage of applications. The experiment began with a total of 100 applications, including 20 for each category of application. Table 3 describes the super-parameter settings of the proposed optimization algorithms.
Table 2 Dataset descriptions
Table 3 Optimization algorithm super-parameter settings
Performance evaluation metrics
In this section, the simulation results of the proposed DSTS model are compared with the existing state-of-the-art models, ADATSA, LAEAS, PSOS and the K8S planning engine, in terms of four service quality evaluation metrics: resource imbalance degree (DId), resource residual degree (DRd), response time (RT) and throughput (TH). The relevant metrics are defined as follows:
$${D}_{Id}=\sum \limits_{i=1}^N\frac{L_r\left({\alpha}_i\right)}{N}$$
$${D}_{Rd}=\sum \limits_{i=1}^N\frac{S_r\left({\beta}_i\right)}{N}$$
$${R}_T=\frac{1}{N_{app}}\sum \limits_{j=1}^{N_{app}}{R}_T\;{WS}_{app}$$
$${T}_H=\frac{N_{req}\;{WS}_{app}}{T_{end}\;{WS}_{app}-{T}_{start}\;{WS}_{app}}$$
where Lr(αi) and Sr(βi) represent the node resource imbalance degree (see Eq. (18)) and the node resource residual degree (see Eq. (19)), respectively, for N node resources. WSapp denotes the web application whose response delay is measured, and Tstart and Tend denote the start and end times of the test, respectively.
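For clarity, the four metrics can be computed from raw measurements as in the short sketch below. The per-node imbalance and residual values and the request counts are hypothetical inputs; obtaining L_r(α_i) and S_r(β_i) themselves follows the earlier equations referenced above.

```python
import numpy as np

def d_id(L_r):                  # resource imbalance degree: mean over N nodes
    return np.mean(L_r)

def d_rd(S_r):                  # resource residual degree: mean over N nodes
    return np.mean(S_r)

def response_time(rt_per_app):  # mean response delay over N_app applications
    return np.mean(rt_per_app)

def throughput(n_req, t_start, t_end):   # requests served per unit test time
    return n_req / (t_end - t_start)

L_r = np.array([0.12, 0.08, 0.15, 0.10])      # hypothetical node imbalance values
S_r = np.array([0.20, 0.18, 0.25, 0.22])      # hypothetical node residual values
print(d_id(L_r), d_rd(S_r),
      response_time([0.8, 1.1, 0.9]),          # seconds per web-app request (toy values)
      throughput(n_req=5000, t_start=0.0, t_end=600.0))
```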
Result comparison of Task-1
The influence of tasks on static scheduling performance of our new DSTS model is compared to that of the current ADATSA, LAEAS, PSOS, and K8S models in this scenario. The proposed and current task scheduling models are compared in terms of resource imbalance degree (DId) in Fig. 2. We can see from this graph that the DSTS model of static scheduling outperforms the ADATSA, LAEAS, PSOS, and K8S models. The suggested DSTS model has a resource imbalance degree (DId) of 12.698%, 10.000%, 7.895%, and 6.173%, respectively, lower than the current ADATSA, LAEAS, PSOS, and K8S models. Figure 3 shows the comparative analysis of resource residual degree (DRd) for the proposed and existing task scheduling models. We can see from this graph that the DSTS model of static scheduling outperforms the ADATSA, LAEAS, PSOS, and K8S models. The resource residual degree (DRd) of proposed DSTS model is 10.280%, 8.155%, 6.426% and 4.695% lower than the existing ADATSA, LAEAS, PSOS and K8S models respectively.
Comparative analysis of resource imbalance degree (DId) (Task-1)
Comparative analysis of resource residual degree (DRd) (Task-1)
The influence of tasks on the dynamic scheduling presentation of our suggested DSTS model is associated to that of the current ADATSA, LAEAS, PSOS, and K8S models in this scenario. Figure 4 shows the comparative analysis of resource imbalance degree (DId) for the proposed and existing task scheduling models. We can see from this graph that the DSTS dynamic scheduling model outperforms the ADATSA, LAEAS, PSOS, and K8S models. The resource imbalance degree (DId) of proposed DSTS model is 15.275%, 9.285%, 8.590% and 6.699% lower than the existing ADATSA, LAEAS, PSOS and K8S models respectively. Figure 5 shows the comparative analysis of resource residual degree (DRd) for the proposed and existing task scheduling models. We can see from this graph that the DSTS model of dynamic scheduling outperforms the ADATSA, LAEAS, PSOS, and K8S models. The resource residual degree (DRd) of proposed DSTS model is 11.710%, 8.555%, 6.740% and 5.462% lower than the existing ADATSA, LAEAS, PSOS and K8S models respectively.
In this scenario, the influence of tasks on our proposed DSTS model's time-based static scheduling performance is compared to the current ADATSA, LAEAS, PSOS, and K8S models. Figure 6 shows the comparative analysis of resource imbalance degree (DId) with respect to time for the proposed and existing task scheduling models. We can see from this graph that the DSTS model of static scheduling outperforms the ADATSA, LAEAS, PSOS, and K8S models. The resource imbalance degree (DId) of proposed DSTS model is 15.146%, 15.275%, 9.285% and 8.590% lower than the existing ADATSA, LAEAS, PSOS and K8S models respectively. Figure 7 shows the comparative analysis of resource residual degree (DRd) with respect to time for the proposed and existing task scheduling models. We can see from this graph that the DSTS model of static scheduling outperforms the ADATSA, LAEAS, PSOS, and K8S models in terms of performance. The resource residual degree (DRd) of proposed DSTS model is 6.796%, 11.710%, 8.555% and 6.740% lower than the existing ADATSA, LAEAS, PSOS and K8S models respectively.
Comparative analysis of resource imbalance degree (DId) with time (Task-3)
Comparative analysis of resource residual degree (DRd) with time (Task-3)
In this scenario, the influence of tasks on our proposed DSTS model's time-based dynamic scheduling performance is compared to the current ADATSA, LAEAS, PSOS, and K8S models. Figure 8 shows the comparative analysis of resource imbalance degree (DId) with respect to time for the proposed and existing task scheduling models. We can see from this graph that the DSTS model of dynamic scheduling outperforms the ADATSA, LAEAS, PSOS, and K8S models. The resource imbalance degree (DId) of proposed DSTS model is 13.763%, 15.146%, 12.878% and 11.781% lower than the existing ADATSA, LAEAS, PSOS and K8S models respectively. Figure 9 shows the comparative analysis of resource residual degree (DRd) with respect to time for the proposed and existing task scheduling models. We can see from this graph that the DSTS model of dynamic scheduling outperforms the ADATSA, LAEAS, PSOS, and K8S models. The resource residual degree (DRd) of proposed DSTS model is 6.703%, 6.796%, 11.710% and 8.555% lower than the existing ADATSA, LAEAS, PSOS and K8S models respectively.
In this scenario, the effect of our proposed DSTS model's quality validation is compared to the current ADATSA, LAEAS, PSOS, and K8S models. Figure 10 shows the comparative analysis of resource imbalance degree (DId) with respect to time for the proposed and existing task scheduling models. We can see from this graph that the DSTS model of static scheduling outperforms the ADATSA, LAEAS, PSOS, and K8S models. The resource imbalance degree (DId) of proposed DSTS model is 13.965%, 13.763%, 15.146% and 12.878% lower than the existing ADATSA, LAEAS, PSOS and K8S models respectively. Figure 11 shows the comparative analysis of resource residual degree (DRd) with respect to time for the proposed and existing task scheduling models. We can see from this graph that the DSTS model of static scheduling outperforms the ADATSA, LAEAS, PSOS, and K8S models in terms of performance. The resource residual degree (DRd) of proposed DSTS model is 13.445%, 6.703%, 6.796% and 11.710% lower than the existing ADATSA, LAEAS, PSOS and K8S models respectively.
Table 4 describes the performance comparison of proposed and existing task scheduling in terms of response time (RT) and throughput (TH) with varying simulation time. The average response time (RT) of proposed DSTS model is 25.448%, 32.616%, 37.814% and 40.502% higher than the existing ADATSA, LAEAS, PSOS and K8S models respectively. Figure 12 gives the graphical representation of proposed and existing task scheduling models. The average throughput (TH) of proposed DSTS model is 33.168%, 38.119%, 44.059% and 49.010% higher than the existing ADATSA, LAEAS, PSOS and K8S models respectively. Figure 13 gives graphical representation of proposed and existing task scheduling models. Figure 14 denotes the runtime overhead of the proposed and existing task scheduling models. The plot clearly depicts average runtime overhead of the proposed DSTS model is 12.356%, 15.09%, 18.367% and 21.578% lower than the existing ADATSA, LAEAS, PSOS and K8S models respectively.
Table 4 Comparative analysis of quality of service metrics
Comparative analysis of response time (RT) (Task-5)
Comparative analysis of Throughput (TH) (Task-5)
Comparative analysis of runtime overhead
In the past, Kaplan used the Amazon Elastic Compute Cloud to host its applications. Engineers were required to update applications manually, and on average there were four dedicated Amazon EC2 hosts. Rowan Drabo, head of Kaplan cloud operations, said an application update would take hours to take effect, and cost analysis showed that more than $500 per month was spent on the Amazon Elastic Compute Cloud. After switching to a micro-service-based architecture with Amazon's container service and containers, Kaplan saved significant costs. "We currently have more than 500 containers in production," Drabo said, adding that the number of Amazon Elastic Compute Cloud instances was reduced by 70%, resulting in 40% cost savings per application. Using our proposed dynamic scalable task scheduling (DSTS) for automated container deployment, Kaplan can further reduce deployment time, increase the frequency of updates and improve developer satisfaction.
For dynamic scalable task scheduling (DSTS) in a container cloud context, we proposed a hybrid optimal and deep learning approach. The following are the major contributions made in this paper:
A modified multi-swarm coyote optimization (MMCO) method for scaling virtual resources in containers to improve customer service level agreements.
A modified pigeon-inspired optimization (MPIO) algorithm for task clustering, and a fast adaptive feedback recurrent neural network (FARNN) for pre-virtual CPU allocation to ensure priority-based scheduling.
Task load monitoring mechanism is designed based on deep convolutional neural network (DCNN) which achieves dynamic scheduling based on priority.
From the simulation outcomes, we conclude that the proposed DSTS model is very effective compared with the existing task scheduling models in terms of the quality-of-service metrics, namely resource imbalance degree (DId), resource residual degree (DRd), response time (RT) and throughput (TH). In the future, we will extend our DSTS model by combining it with an optimization algorithm to optimize the joint problem of resource allocation and task scheduling in the container cloud environment.
Wang B, Qi Z, Ma R, Guan H, Vasilakos AV (2015) A survey on data center networking for cloud computing. Comput Netw 91:528–547
González-Martínez JA, Bote-Lorenzo ML, Gómez-Sánchez E, Cano-Parra R (2015) Cloud computing and education: a state-of-the-art survey. Comput Educ 80:132–151
Khan AN, Kiah MM, Khan SU, Madani SA (2013) Towards secure mobile cloud computing: a survey. Futur Gener Comput Syst 29(5):1278–1299
Xie XM, Zhao YX (2013) Analysis on the risk of personal cloud computing based on the cloud industry chain. J China Univ Posts Telecommun 20:105–112
Han Y, Luo X (2013) Hierarchical scheduling mechanisms for multilingual information resources in cloud computing. AASRI Proc 5:268–273
Bose R, Luo XR, Liu Y (2013) The roles of security and trust: comparing cloud computing and banking. Procedia Soc Behav Sci 73:30–34
Elamir AM, Jailani N, Bakar MA (2013) Framework and architecture for programming education environment as a cloud computing service. Proc Technol 11:1299–1308
Tsertou A, Amditis A, Latsa E, Kanellopoulos I, Kotras M (2016) Dynamic and synchromodal container consolidation: the cloud computing enabler. Transp Res Proc 14:2805–2813
Kong W, Lei Y, Ma J (2016) Virtual machine resource scheduling algorithm for cloud computing based on auction mechanism. Optik 127(12):5099–5104
Moschakis IA, Karatza HD (2015) A meta-heuristic optimization approach to the scheduling of bag-of-tasks applications on heterogeneous clouds with multi-level arrivals and critical jobs. Simul Model Pract Theory 57:1–25
Singh S, Chana I (2015) QRSF: QoS-aware resource scheduling framework in cloud computing. J Supercomput 71(1):241–292
Lin J, Zha L, Xu Z (2013) Consolidated cluster systems for data centers in the cloud age: a survey and analysis. Front Comput Sci 7(1):1–19
Kertész A, Dombi JD, Benyi A (2016) A pliant-based virtual machine scheduling solution to improve the energy efficiency of iaas clouds. J Grid Comput 14(1):41–53
Musa IK, Walker SD, Owen AM, Harrison AP (2014) Self-service infrastructure container for data intensive application. J Cloud Comput 3(1):1–21
Choe R, Cho H, Park T, Ryu KR (2012) Queue-based local scheduling and global coordination for real-time operation control in a container terminal. J Intell Manuf 23(6):2179–2192
Nam H, Lee T (2013) A scheduling problem for a novel container transport system: a case of mobile harbor operation schedule. Flex Serv Manuf J 25(4):576–608
Bian Z, Li N, Li XJ, Jin ZH (2014) Operations scheduling for rail mounted gantry cranes in a container terminal yard. J Shanghai Jiaotong Univ Sci 19(3):337–345
Zhang R, Yun WY, Kopfer H (2010) Heuristic-based truck scheduling for inland container transportation. OR Spectr 32(3):787–808
Briskorn D, Fliedner M (2012) Packing chained items in aligned bins with applications to container transshipment and project scheduling. Mathem Methods Oper Res 75(3):305–326
Briskorn D, Angeloudis P (2016) Scheduling co-operating stacking cranes with predetermined container sequences. Discret Appl Math 201:70–85
Zhao D, Mohamed M, Ludwig H (2018) Locality-aware scheduling for containers in cloud computing. IEEE Trans Cloud Comput 8(2):635–646
Liu B, Li P, Lin W, Shu N, Li Y, Chang V (2018) A new container scheduling algorithm based on multi-objective optimization. Soft Comput 22(23):7741–7752
Lin M, Xi J, Bai W, Wu J (2019) Ant colony algorithm for multi-objective optimization of container-based microservice scheduling in cloud. IEEE Access 7:83088–83100
Adhikari M, Srirama SN (2019) Multi-objective accelerated particle swarm optimization with a container-based scheduling for Internet-of-Things in cloud environment. J Netw Comput Appl 137:35–61
Ranjan R, Thakur IS, Aujla GS, Kumar N, Zomaya AY (2020) Energy-efficient workflow scheduling using container-based virtualization in software-defined data centers. IEEE Trans Industr Inform 16(12):7646–7657
Chen Q, Oh J, Kim S, Kim Y (2020) Design of an adaptive GPU sharing and scheduling scheme in container-based cluster. Clust Comput 23(3):2179–2191
Hu Y, Zhou H, de Laat C, Zhao Z (2020) Concurrent container scheduling on heterogeneous clusters with multi-resource constraints. Futur Gener Comput Syst 102:562–573
Rajasekar P, Palanichamy Y (2021) Scheduling multiple scientific workflows using containers on IaaS cloud. J Ambient Intell Humaniz Comput 12:7621–7636
Menouer T (2021) KCSS: Kubernetes container scheduling strategy. J Supercomput 77(5):4267–4293
Li C, Zhang Y, Luo Y (2021) Neighborhood search-based job scheduling for IoT big data real-time processing in distributed edge-cloud computing environment. J Supercomput 77:1853–1878
Ahmad I, AlFailakawi MG, AlMutawa A, Alsalman L (2021) Container scheduling techniques: a survey and assessment. Journal of King Saud University-Computer and Information Sciences 34(2022):3934-3947
Rausch T, Rashed A, Dustdar S (2021) Optimized container scheduling for data-intensive serverless edge computing. Futur Gener Comput Syst 114:259–271
Zhu L, Huang K, Hu Y, Tai X (2021) A self-adapting task scheduling algorithm for container cloud using learning automata. IEEE Access 9:81236–81252
Armbrust M, Fox A, Griffith R, Joseph AD, Katz R, Konwinski A, Lee G, Patterson D, Rabkin A, Stoica I et al (2010) A view of cloud computing. Commun ACM 53(4):50–58
Gawali MB, Shinde SK (2018) Task scheduling and resource allocation in cloud computing using a heuristic approach. J Cloud Comp 7:4
Gawali MB, Gawali SS (2021) Optimized skill knowledge transfer model using hybrid Chicken Swarm plus Deer Hunting Optimization for human to robot interaction. Knowl-Based Syst 220:106945
Sahoo S, Sahoo B, Turuk AK (2018) An energy-efficient scheduling framework for cloud using learning automata. In: 2018 9th International Conference on Computing, Communication and Networking Technologies (ICCCNT). IEEE, Bangalore, India. pp 1–5
Li H, Wang X, Gao S, Tong N (2020) A service performance aware scheduling approach in containerized cloud. In: 2020 IEEE 3rd International Conference on Computer and Communication Engineering Technology (CCET). IEEE, Beijing, China. pp 194–198
Burns B, Grant B, Oppenheimer D, Brewer E, Wilkes J (2016) Borg, omega, and kubernetes. Commun ACM 59(5):50–57
The authors declare that no funds, grants, or other support were received during the preparation of this manuscript.
Department of Computer Science and Engineering, School of Engineering, Presidency University, Bengaluru, Karnataka, 560064, India
Saravanan Muniswamy & Radhakrishnan Vignesh
Saravanan Muniswamy
Radhakrishnan Vignesh
Mr. Saravanan Muniswamy has made substantial contributions to the design and in drafting the manuscript. Mr. Vignesh Radhakrishnan has made his contributions in the acquisition and interpretation of data. The author(s) read and approved the final manuscript.
Correspondence to Saravanan Muniswamy.
The authors have no relevant financial or non-financial interests to disclose.
Muniswamy, S., Vignesh, R. DSTS: A hybrid optimal and deep learning for dynamic scalable task scheduling on container cloud environment. J Cloud Comp 11, 33 (2022). https://doi.org/10.1186/s13677-022-00304-7
Cloud container
Task clustering
Priority based scheduling
Load monitoring
User Manual for Calculator Customized for Quantum Computation Equations
Go to a Standalone Calculator
It's all built on math.js
Why Didn't I Use [your favorite package]?
Dirac notation: ket bra braket ketbra
Gates with No Angles: I X Y Z H S Sdg T Tdg SX SXdg SWAP
Gates with Angles: P Rx Ry R U
Gate Generation: qcc, qc, and perm
Random things: randomAngle and randomState
Use eq() instead of ==
The @ operator
Rounding and Pretty Printing
Among the ways to use the calculator are 1) as a stand-alone (e.g. this one), and 2) as annotation for an html document (e.g. the manual you are now reading, and regression tests for the calculator).
There are sets of equations throughout this manual, and clicking on the equations makes the input equations visible. You can then edit them and see the results. Like in a jupyter notebook, this encourages learning through experimenting.
Here's how this got started. When I read IBM's textbook "Learn Quantum Computation using Qiskit", there was a simulator for experimenting with quantum computing, but there was no calculator to experiment with equations. This is the calculator I wanted to be there. For example, if you look at this discussion of phase kickback, you'll see these equations (I copied just the latex of the equations from the textbook): $$ X|{-}\rangle = -|{-}\rangle $$ $$ \begin{aligned} \text{CNOT}|{-}0\rangle & = |{-}\rangle \otimes |0\rangle \\ & = |{-}0\rangle \\ \quad & \\ \text{CNOT}|{-}1\rangle & = X|{-}\rangle \otimes |1\rangle \\ & = -|{-}\rangle \otimes |1\rangle \\ & = -|{-}1\rangle \\ \end{aligned} $$ $$ \begin{aligned} \text{CNOT}|{-}{+}\rangle & = \tfrac{1}{\sqrt{2}}(\text{CNOT}|{-}0\rangle + \text{CNOT}|{-}1\rangle) \\ & = \tfrac{1}{\sqrt{2}}(|{-}0\rangle + X|{-}1\rangle) \\ & = \tfrac{1}{\sqrt{2}}(|{-}0\rangle -|{-}1\rangle) \\ \end{aligned} $$ $$ \begin{aligned} \text{CNOT}|{-}{+}\rangle & = |{-}\rangle \otimes \tfrac{1}{\sqrt{2}}(|{0}\rangle - |1\rangle )\\ & = |{-}{-}\rangle \\ \end{aligned} $$
Now compare them to mine:
Other than the second equation, where I defined CNOT, my equations look quite similar to the ones from the textbook. But mine are the output of my calculator. Click on the equations and you'll see the input. This allows you to experiment with the equations. Plus there's a subtle but important difference—where the textbook has \(X|{-}1\rangle\), my associated equation has \((X \otimes I) |{-}1\rangle \). I don't know if that was a typo in the textbook, or if it's a case of "you know what we mean". What they meant was to apply the X to qubit 1 and not qubit 0. The calculator demands that you make that explicit, because it doesn't "know what you mean". Executable equations keep you honest.
Continuing my sales pitch, below are identities from section 2.4, More Circuit Identities of the qiskit textbook. This shows how well the calculator handles the equations.
section 1. Making a Controlled-Z from a CNOT
section 2. Swapping Qubits
section 3. Controlled Rotations
section 4. The Toffoli
The last four equations demonstrate how the almost-CCX gate applies an X to the target when the controls are 11, does nothing when they're 00 or 01, and applies a Z when 10, which, as the textbook says, "only induces a relative phase".
There are two sections to each calculator. First the textarea for editing equations, then the results rendered with latex. If only the latex results are visible, click on the results to make the textarea visible.
When the textarea is visible, there are buttons between the textarea and the results. Here's what they do:
The "parenthesis" radio button selects "keep", "auto", or "all". See the "parenthesis" section of the math.js manual for an explanation of the options.
The "implicit" radio button selects "hide" or "show". See the "implicit" section of the math.js manual for an explanation of the options.
Or maybe you prefer this explanation of the parenthesis and implicit options:
Click on the equations to get the textarea and buttons. Note that the default for parenthesis is "keep", and it's leaving the equation unchanged from how it's entered in the textarea. Click on "auto" and it removes the unnecessary parens. Click on "all" and it puts in parentheses everywhere it can. Moving on to implicit, note that the default is hide, and it's hiding the implicit multiplication. Click on "show" and it shows the multiplication explicitly.
The "Re-evaluate" button does just that, re-evaluates the equations. It's unnecessary unless you've used the randomQubit or randomState function, in which case you get new random values.
The "User Manual" button opens the user manual in a new tab. Nice for checking on some function detail when you're in the middle of entering an equation. Perhaps silly if you're already in the user manual, like you are now.
A note on not bricking your browser: The calculator can handle computations with matrices large enough that their latex rendering gets painfully slow. The example below shows such matrices. You probably don't want to display one of those 6-qubit results. Click on the equations to see the original textarea and you'll see how I avoid that. One way is to use the matrices inside an expression for which the result is not a huge matrix, like the first and last lines in the example. If you want to assign a huge matrix to a variable, you can put a semicolon at the end of an assignment, and that tells the calculator not to display the assignment. The combination of "a=expr;eq(a,expr)" lets you effectively see the assignment in the form of an eq(), which shows how the matrix is built without displaying it.
Click here to go to a standalone calculator
There's no difference between the standalone calculator linked to above, and all the calculators in this manual, other than there's just one in the standalone, and it has no initial equations in the textarea. These files can be templates for how to setup calculators for yourself.
math.js is a basic calculation engine with hooks for customizing. So I customized it, mostly by adding the functions, and one operator, listed in the table of contents, above. Other than for my customization, see math.js for all documentation.
While I'm doling out credit, mathjax.org renders all the LaTeX. Also, qiskit was the inspiration for this calculator, and I use a modified version of the qiskit number pretty-printer.
Maybe I didn't know about it, or I tried it and it was too slow, or it required sacrificing some functionality that I didn't want to lose, or it wasn't open source. I really wanted to use Jupyter because I love the Jupyter notebook approach, but using Jupyter meant (you!) having to install stuff, and it doesn't seem to have a way to hide the inputs (just the outputs), which would mean this manual would be messier. JupyterLite has hide all inputs, although it replaces them with ellipses. Not too bad. But JupyterLite has some problems, including that it's not an official project at the time of this writing. Yikes! Anyway, that's the kind of agonizing I went through when choosing what packages to use.
Dirac Notation
Probably best just to show examples:
The args are strings (e.g. '01' or "01"). The qubits of the arg are 0's, 1's, +'s, and -'s.
Gates with No Angles
Probably best just to show them all:
The gates without angles are actually constants rather than functions. If they were functions you'd have to add "()" to their names. Nobody wants that.
Gates with Angles
If you're puzzled about what's going on with the angle names, it turns out math.js turns variables that are names of Greek letters into a latex rendering of the letter. How cool is that? I haven't found any mention of it in their manual. I only discovered it because I was going to add a wrapper to make that happen, and typed in a test case before coding it, and discovered it was already working.
Gate Generation: qcc
qcc() can generate a gate similarly to how one does with the simulator in qiskit. You're building a result gate with one or more internal gates. qcc() also lets you specify any number of control qubits for the internal gates. For example:
The first arg of qcc is the number of qubits. The rest of the args come in pairs, the first defining the internal gate and the second defining the control and target qubits for the internal gate. The control and target qubits are specified with a string (e.g. '01' or "01"). The qubits are either all target qubits, or control qubits and target qubits separated by a '>'. The number of target qubits must match the number of qubits in the internal gate, and they must be sequential and ascending. (Yes, I considered allowing non-consecutive target qubits, and in any order, but I couldn't come up with a clean way to represent it in the display for qcc(). Use the perm() gate.) The target qubits are required. There can be 0 or more control qubits. This limits you to a ten-qubit result gate. I don't think you want to have more than that.
The gate isn't limited to internal gates without angles, like X. It could be Rx(pi/2), as you see in the examples. You could also build a gate and put it in a variable and use that as an internal gate, as you see in the examples.
The display shows each internal gate in a column, with the name of the internal gate in its target row(s), 'c' in its control rows, if there are any, and the rest are '-'. The qcc display is bracketed by curly brackets. Square brackets make it look too much like a matrix, and straight brackets (pipes) make it look like a magnitude.
The first two examples show CNOT1 and CNOT2 defined, the two forms of a controlled-X, one with the control qubit on 0, and the other with the control qubit on 1. The next two examples show how you can surround each with H's to get the other. The 3-qubit example compares \(Z \otimes Y \otimes X \) to building it with qcc(). Next, there's an example of using an internal gate with an angle. Finally, there's surrounding \(Y \otimes X \) with SWAPs to get \(X \otimes Y \).
Gate Generation: qc
I messed up. I made qc() multiply the gates from left to right. To make it look like the circuits in qiskit, it has to multiply from right to left. I left qc() unchanged for backward compatibility, and added qcc(), which stands for qc-corrected. Take a look:
The first two show the ordering. Note that it draws qc() and qcc() the same, so you have to look at the source equations to see that the first line is qc() and the second is qcc().
Gate Generation: perm
The perm gate permutes the qubits. The arg is a string listing qubits in the new order, so '012' is no change, and '021' leaves one unchanged and swaps the other two. Here are the equivalences for all the 2-qubit perms:
Here are the equivalences for all the 3-qubit perms:
Here we see that not changing the order of the qubits is the identity, as is reversing the order twice and rotating three times:
The "randomAngle()" function generates a random angle between \(-2\pi\) and \(2\pi\), and the "randomQubit()" function generates a random qubit, normalized so the sum of the squares of the states is 1. I would like to include symbolic evaluation and simplification, but it's not trivial, so in the meantime a trivial way to test an equivalence with some level of assurance is with random values, for example:
Click on the results to make the textarea and buttons visible, then click in the Re-evaluate button to evaluate with new values for \(\alpha\), \(\omega\), and \(\Psi\), and therefore another test of the equivalences. Not the formal verification you can get with symbolic evaluation, but, as I said, some level of assurance.
When comparing matrices, the == operator in math.js returns a matrix with a 1 where they're equal, and 0 where not. That's no fun if all you want is "true" or "false". So the eq() function turns that into true iff all elements are 1. Further, math.js has some behavior that I didn't think we want in our context, so I changed that. Finally, math.js may not do what you want when you say "a==b==c". Use eq(a,b,c). Examples:
In the examples you see how == returns a matrix of element-by-element comparison results. And you see a comparison of three values. Plus you see the line breaks added to make a comparison of several elements more readable, like in the introduction at the beginning of this manual.
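For readers who think in numpy rather than math.js, here's the same distinction in that world. This is a numpy analogy, not the calculator's syntax: == compares element by element, while an eq()-style check collapses the comparison to a single true/false within a small tolerance.

```python
import numpy as np

# Element-wise comparison vs. an "all elements equal within tolerance" check.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

print(H @ H == I)                          # element-wise; floating point spoils the diagonal
print(np.allclose(H @ H, I, atol=1e-14))   # one boolean, analogous to eq(H*H, I)
```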
I got tired of typing in the kron() function. I wanted an operator. I repurposed the .* operator, which had the right precedence, felt kinda like \( \otimes \), and I hoped it wouldn't be missed too much. So a.*b turns into kron(a,b). Except I didn't like typing two letters, particularly those two. So you type @ into the textarea equations, that gets turned into .* before parsing, then I intercept the .* function at evaluation time and substitute kron(). And of course display it as \( \otimes \). It may seem like a lot of bother, but \( \otimes \) is such an important and frequently-used operator that I figured it was worth it.
Quantum computing gates often look messy when their values are printed because it's not unusual to have something like \(\frac{1}{\sqrt{2}}\) as a factor. So results are "pretty-printed" by looking for common factors like \(\frac{1}{\sqrt{2}}\). Calculations are not performed symbolically, so results that "should" be integers can be slightly off. The pretty-printer rounds to integers when they're within \(10^{-14}\). (eq() also returns true when numbers are within \(10^{-14}\).) The pretty-printer also looks for a fraction whose numerator and denominator add to less than twenty. When the printer fails to find something "nice", it prints a floating point number with five digits of precision.
On the performance of position-domain sidereal filter for 30-s kinematic GPS to mitigate multipath errors
Yuji Itoh ORCID: orcid.org/0000-0002-7848-13991 &
Yosuke Aoki ORCID: orcid.org/0000-0002-2539-41441
The noise level of kinematic Global Positioning System (GPS) coordinates is much higher than static daily coordinates. Therefore, it needs to be improved to capture details of small sub-daily tectonic deformation. Multipath is one of the dominant error sources of kinematic GPS, which the sidereal filter can mitigate. With increasing interest in applying kinematic GPS to early postseismic deformation studies, we investigate the characteristics of multipath errors and the performance of the position-domain sidereal filter using 30-s kinematic coordinates with a length of nearly 5 days. Experiments using three very short baselines mostly free from atmospheric disturbances show that multipath signature in position-domain has better repeatability at longer periods, and sidereal filtering without low-pass filtering yields a lift of power spectral density (PSD) at periods shorter than 200 s. These results recommend an empirical practice of low-pass filtering to a sidereal filter. However, a moderate cut-off period maximizes the performance of the sidereal filter because of the smaller multipath signature at longer periods. The amplitude of post-sidereal-filtered fluctuation is less than 6 mm in standard deviation, which demonstrates the nearly lowest noise level of kinematic GPS used for postseismic and other tectonic deformation studies. Our sidereal filter is proven to mitigate several peaks of power spectral density at periods up to 100,000 s, but the period dependency of PSD is not fully alleviated by sidereal filtering, which needs future investigation.
Global Navigation Satellite System (GNSS) observation is a powerful tool to study the surface deformation of the Earth. While studies on tectonics-related phenomena often employ daily static solutions of GNSS, raw observations, namely, carrier-phase of microwave transmitted from satellites, are usually recorded at a much shorter interval, for instance, 30, 15, or 1 s (e.g., Bock et al. 2000; Genrich and Bock 1992; Larson et al. 2003), or sometimes even shorter (e.g., Galetzka et al. 2015; Genrich and Bock 2006). Contrary to daily static solutions, kinematic analysis determines coordinates of antenna position at every observation epoch. Kinematic GNSS has been exploited to capture rapid ground motion during the passage of seismic waves (e.g., Bock et al. 2004; Jiang et al. 2021; Larson et al. 2003; Miyazaki et al. 2004) and postseismic deformation following great earthquakes (e.g., Jiang et al. 2021; Kato et al. 2016; Milliner et al. 2020; Miyazaki and Larson 2008; Morikami and Mitsui 2020; Munekane 2012; Twardzik et al. 2019). Daily static coordinates miss postseismic deformation less than 24 h after the mainshock as well as fast deformation rate change soon after the mainshock ("early" postseismic deformation), but kinematic GNSS coordinates can capture such early postseismic deformation in detail. Because postseismic deformation is a relaxation process of coseismic stress change, involving aseismic fault creep (afterslip) and viscoelastic relaxation of the upper mantle (e.g., Wang et al. 2012), this initial postseismic deformation contains critical information of the frictional characteristics of megathrust fault and the rheological characteristics of the upper mantle. Improvement of kinematic coordinates, hence, is crucial to gain more insights into postseismic deformation.
The nominal error of kinematic GNSS coordinates is usually on the order of centimeters or more (e.g., Bock et al. 2000; Jiang et al. 2021; Twardzik et al. 2019), considerably larger than daily static coordinates (~ a few mm; e.g., Jiang et al. 2021; Milliner et al. 2020). This larger error with kinematic GNSS coordinates needs mitigating to capture postseismic displacements, which are as small as a few millimeters at sites far from the mainshock. Many previous studies have been devoted to mitigating error sources in the GNSS time series, including atmospheric and ionospheric disturbances, multipath, satellite orbit, and Earth tides (e.g., Bock and Melgar 2016 and references therein). Among these error sources, multipath is the delay of carrier-phase of microwave arrival caused by reflection at any objects around an antenna. Therefore, multipath is inherent and has been usually mitigated by data-driven approaches. Previous studies have demonstrated that the sidereal filter is useful to mitigate multipath errors in position coordinates by taking advantage of the repeatability of satellite constellation (e.g., Genrich and Bock 1992). As long as the multipath environment (i.e., arrangement of objects around each antenna) is kept unchanged over time, the geometrical relation responsible for multipath errors is also kept unchanged. Therefore, we can largely mitigate multipath errors by taking a difference of two periods apart by the repeat time of satellite constellation. A key parameter for sidereal filtering is the repeat time of satellite constellation, which has been of primary interest in previous studies. Different system of GNSS has a different repeat time. For example, the repeat time of Global Positioning System (GPS) satellites has been known to be nearly a sidereal day (i.e., 86,164 s = 1 day minus 236 s) (e.g., Bock et al. 2004; Genrich & Bock 1992; Nikolaidis et al. 2001). However, later studies have revealed that the repeat time is different among satellites and variable with time (e.g., Larson et al. 2007; Ragheb et al. 2007).
Various methods have been established for sidereal filtering, but they are primarily classified into two approaches, an observation-domain and a position-domain filtering. The observation-domain filtering eliminates the multipath signal from raw carrier-phase observations before the positioning analysis (e.g., Atkins and Ziebart 2016; Iwabuchi et al. 2004; Ragheb et al. 2007). Two methods to construct the filter have been known in this approach. In one method, a sidereal filter is constructed from carrier-phase residuals of another day with a priori position from the static analysis and the constructed filter is subtracted from data during periods of interest with the appropriate time shift. This method allows for consideration of different orbit periods of each GNSS satellite (e.g., Atkins and Ziebart 2016; Ragheb et al. 2007; Wang et al. 2018), so it is applicable to mitigate multipath in both single-GNSS and multi-GNSS kinematic positioning analysis (e.g., Geng et al. 2017, 2018) even though different GNSS has largely different repeat periods of their satellites. The other method of sidereal filter construction in the observation-domain is the hemispherical mapping (e.g., Dong et al. 2016; Fuhrmann et al. 2015, 2021; Iwabuchi et al. 2004; Moore et al. 2014; Zheng et al. 2019). Here, the sidereal filter is constructed by mapping the carrier-phase residuals on a topocentric hemisphere above each site as a function of the elevation angle and the azimuth of satellites. As this approach is free from various orbit repeat period of each satellite, it can be easily implemented to real time GNSS positioning (Dong et al. 2016; Fuhrmann et al. 2015) combined in the antenna PCV file (Moore et al. 2014). In either method, the features raised so far are essential advantages over the position-domain filtering approach in which we are allowed to assume only one representative repeat time of multipath signature (e.g., Bock et al. 2000; Dai et al. 2014; Genrich and Bock 1992; Ragheb et al. 2007). This is because, in the position-domain filtering approach, the filter is constructed from the position time series. Yet, previous studies show that the use of one representative repeat time is a reasonable approximation in their GPS data analysis (Choi et al. 2004; Ragheb et al. 2007). The position-domain approach cannot handle multiple GNSS with largely different repeat time appropriately, so sidereal filtering for multi-GNSS positioning has been limited to the observation-domain approach (e.g., Geng et al. 2018; Zheng et al. 2019). In both observation- and position-domain approaches, the representative orbit or constellation repeat period of GPS is ~ 10 s shorter than a sidereal day, with its deviation typically within 10 s. This sub-sidereal-day repeat time yields better performance of sidereal filtering than using the repeating time of exactly one sidereal day (e.g., Agnew and Larson 2007; Choi et al. 2004; Larson et al. 2007; Ragheb et al. 2007).
Another practice in improving the performance of sidereal filtering is the removal of short-period fluctuations from a sidereal filter regardless of the observation- or the position-domain filtering. Here, previous studies empirically suggest that sidereal filtering does not work well to get rid of shorter period fluctuations and hence they are removed prior to the sidereal filter application (e.g., Atkins and Ziebart 2016; Dai et al. 2014; Geng et al. 2017, 2018; Larson et al. 2007). Dong et al. (2016) pointed out that the hemispherical mapping method, a class of the observation-domain approaches, functionally removed the shorter period fluctuations because of the discretization of hemisphere. Most previous experiments, however, are more or less biased by other error sources other than multipath, such as atmospheric disturbances, because they have employed Precise Point Positioning (PPP) or double difference analysis with long baselines, in which situation they do not cancel (e.g., Atkins and Ziebart 2016; Bock et al. 2000). Only a few previous studies have conducted experiments with baselines short enough to cancel the atmospheric disturbances out (e.g., Bock et al. 2000; Dai et al. 2014; Dong et al. 2016; Geng et al. 2017), but they have not carried out systematic tests on the role of removal of the shorter period fluctuations in mitigating multipath errors. Wang et al. (2018) carried out a systematic experiment on the role of shorter period removal in sidereal filtering using a very short baseline (~ 12.5 m), but they employed only one baseline prepared for this purpose. Hence, there is still a need to investigate the behavior of multipath errors and the performance of sidereal filtering at different periods using multiple baselines in an environment free from atmospheric disturbances and with other primary error sources eliminated as much as we can.
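To make the position-domain procedure discussed above concrete, the following Python sketch shifts a previous day's 30-s coordinate series by an assumed constellation repeat time of one sidereal day minus about 10 s, optionally low-pass filters the shifted series with a running mean, and subtracts it from the day of interest. The repeat time, the cut-off window, and the synthetic multipath signal are illustrative assumptions rather than the values adopted in this study.

```python
import numpy as np

dt = 30.0                               # sampling interval [s]
repeat = 86164.0 - 10.0                 # assumed constellation repeat time [s]

def lowpass(series, window_s):
    n = max(int(window_s / dt), 1)      # running-mean low-pass filter
    return np.convolve(series, np.ones(n) / n, mode="same")

def sidereal_filter(day1, day2, cutoff_s=None):
    """Subtract day1, advanced by (86400 - repeat) s, from day2."""
    t = np.arange(day2.size) * dt
    filt = np.interp(t + (86400.0 - repeat), np.arange(day1.size) * dt, day1)
    if cutoff_s is not None:
        filt = lowpass(filt, cutoff_s)
    return day2 - filt

def fake_multipath(t_abs):              # synthetic signal repeating every `repeat` seconds
    return 0.004 * np.sin(2.0 * np.pi * (t_abs % repeat) / 3000.0)

rng = np.random.default_rng(7)
t = np.arange(2880) * dt                # one day of 30-s epochs
day1 = fake_multipath(t) + 0.002 * rng.normal(size=t.size)
day2 = fake_multipath(t + 86400.0) + 0.002 * rng.normal(size=t.size)
print(day2.std(), sidereal_filter(day1, day2, cutoff_s=600.0).std())
```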
Most previous studies on multipath and sidereal filtering have used 1-s kinematic coordinates by explicitly or implicitly assuming an application to seismic wave measurements (e.g., Atkins and Ziebart 2016; Geng et al. 2017, 2018; Larson et al. 2007; Wang et al. 2018), and a few studies have used 30-s kinematic coordinates (e.g., Bock et al. 2000; Fuhrmann et al. 2015). As kinematic GNSS positioning can capture sub-daily postseismic deformation in detail, it is vital to investigate multipath errors and the optimum sidereal filtering for 30-s kinematic coordinates with a length of several days, which are yet to be fully understood. Investigation of multipath errors at longer periods than achieved by previous studies would also provide useful information on the use of kinematic GNSS positioning to monitor anomalous crustal activities such as earthquakes (e.g., Kawamoto et al. 2017; Melgar et al. 2020).
In this study, we investigate multipath errors and the performance of position-domain sidereal filtering to mitigate them in 30-s kinematic GPS coordinates. The novelty of this study arises from the combination of (1) investigation of multipath errors up to the time scale of early postseismic deformation; (2) systematic experiments on the removal of relatively short-period fluctuations from the sidereal filter series, which has so far been done only empirically; and (3) use of an environment mostly free from a substantial portion of atmospheric and ionospheric disturbances as well as orbit errors, achieved by constructing three very short baselines ranging from 3 to 65 m. The second and third features overlap to some extent with the focus of Wang et al. (2018), but our study covers a wider period range using a longer data length of nearly 5 days. Our study can thus be interpreted as a reexamination of their findings using different baselines, which provides an opportunity to further improve our understanding of the noise characteristics of kinematic GPS.
GPS analysis
30-s kinematic analysis
We employ differential positioning rather than precise point positioning (PPP) to focus on the multipath effect as the error source. We construct three baselines using preexisting GNSS sites at various places in the world (Table 1) to explore common characteristics of the multipath and the sidereal filter. The approximate lengths of the baselines used, POTM-POTS in Germany, GODE-GODN in the USA, and TSK2-TSKB in Japan, are 3 m, 65 m, and 36 m, respectively, which are short enough to cancel most atmospheric and ionospheric disturbances, the primary sources of positioning uncertainties, as well as satellite orbit errors (e.g., Bock et al. 2000). Throughout this study, the codes of the two sites in each baseline are connected with "-" in the order of the rover (kinematic) and reference sites (Table 1). These sites are located in governmental research institutes, so unexpected short-term changes of the multipath environment, especially rearrangement of objects around the antennas, are less likely than at sites in public places such as parks and schools. The antennas and receivers of the two sites of each baseline are not the same in most cases (Table 1); this factor could be an additional source of positioning uncertainty (e.g., Park et al. 2004). As different GNSS sites are commonly equipped with different antennas and receivers, we do not attempt to prepare new sites with the same equipment for this study. Yet, we attempt to mitigate effects inherent to different equipment types between the two sites by implementing the phase center variation correction and the receiver correlation type information.
Table 1 Observation information
We employ the TRACK package (version 1.41) of the GAMIT/GLOBK program (Herring et al. 2015, 2018a, b) to carry out the kinematic analysis of the three baselines. We retrieve daily Receiver Independent Exchange Format (RINEX) files between 01 January 2019 and 31 December 2019. RINEX files on the days of year (DOY) 169 and 176 are not available at TSK2 and TSKB, and the RINEX file at TSK2 is not available on DOY 339. The sampling interval of the RINEX files, and hence of the obtained kinematic GNSS coordinates, is 30 s, except for the 1-Hz analysis shown in sections "1-Hz kinematic analysis" and "Noise characteristics at shorter periods than 200–500 s". We process a one-day session for each TRACK run. As our baselines are considered short enough to cancel the ionospheric delay out, we do not take the linear combination of L1 and L2 carrier-phase observations, but use the two observation types independently to fix the integer ambiguities (e.g., Schaffrin and Bock 1988) by employing the "short" mode of TRACK. We only use observations from GPS satellites above an elevation angle of 15 degrees. We always exclude PRN4 because the assigned space vehicle changed a few times during the analysis period. We do not use observables from other satellite systems (e.g., GLONASS) because they have quite different recurrence periods (e.g., Geng et al. 2018). We use the precise orbit product of the International GNSS Service (i.e., IGS final). As all the baselines are short, we neither estimate the atmospheric delay nor model the tidal response. We implement differential code bias information and phase center models of satellites and stations (i.e., the ANTEX information) aligned to ITRF2014 (Altamimi et al. 2016). The radome information at TSKB and GODE happens to be unavailable in the ANTEX table, so only the antenna information is implemented for these two sites. A priori coordinates of these sites are constrained to daily static coordinates in ITRF2014 processed by the Nevada Geodetic Laboratory, University of Nevada, Reno (Blewitt et al. 2018). When the daily coordinates are missing, we interpolate the daily coordinates of their neighboring days. The uncertainty of the a priori coordinates is set to 10 cm. The process noise of the kinematic coordinate estimates is set to \(5\mathrm{ mm}/\sqrt{30\mathrm{s}}\).
1-Hz kinematic analysis
We carry out a 1-Hz kinematic analysis for GODE-GODN to examine the effect of the sampling interval on the performance of sidereal filtering, as will be discussed in section "Noise characteristics at shorter periods than 200–500 s". The analysis setting is mostly the same as in the 30-s analysis except for the following points. First, the data length processed by one TRACK run is 6 h (cf. 24 h in the 30-s analysis). Second, the number of 6-h-long data sets we process is only six, to save computation time (Additional file 2: Table S1) (cf. all the days in 2019 in the 30-s analysis). Finally, the process noise of the kinematic coordinate estimates is set to \(0.91\mathrm{ mm}/\sqrt{1\text{s}}\), equivalent to that of the 30-s analysis (\(5\mathrm{ mm}/\sqrt{30\mathrm{s}}\)).
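The equivalence of the two process-noise settings follows from scaling by the square root of the sampling interval; a quick check (illustrative only):

```python
# process noise scaled from a 30-s to a 1-s sampling interval
print(5.0 / 30 ** 0.5)   # ≈ 0.913 mm/sqrt(s), i.e., the 0.91 mm/sqrt(1 s) used here
```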
Postprocessing to remove outliers
In this study, we analyze the change in the relative position of the rover site with respect to the reference site. Given the very short baseline lengths, it is reasonable to assume no changes of the baseline lengths throughout the analysis period. Hence, any variation of the obtained coordinates should predominantly represent effects of systematic errors inherent in the positioning technique, such as the multipath. The obtained kinematic coordinates include epochs that deviate largely from the average position and do not repeat over time (Fig. 1 and Additional file 1: Fig. S1); such epochs should be removed as outliers because they are not the phenomenon of our interest. We design the following outlier removal procedure consisting of five steps (a schematic implementation is sketched after the list):
Remove epochs estimated from a small number of double differences (DDs) and/or with a large root mean square (RMS) of post-fit DD residuals. The minimum number of DDs for each epoch to retain is set to 8 for all the baselines, while the maximum allowable RMS of post-fit DD residuals varies with the baselines (Table 2; Additional file 1: Fig. S2).
Remove epochs deviating from the average of the coordinates of all the remaining epochs by more than 4 times their standard deviation (Table 2). An epoch is retained at this and the next step only when the coordinates of all three (north, east, and up) components are below the threshold.
Remove epochs deviating from the average of coordinates within a block of specified length by more than 4 or 5 times their standard deviation (Table 2). The data length of each block is 430,770 s, the least common multiple of the sampling interval (30 s) and the representative repeat period of multipath (86,154 s; Ragheb et al. 2007). We explain the reason for this choice later in this section. This step removes epochs with local, relatively smaller deviations, which cannot be removed at Step 2.
Linearly interpolate over the epochs removed at Steps 1 to 3.
Split the year-long time series obtained at Step 4 into 430,770-s-long blocks.
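A minimal sketch of this procedure for one coordinate component is given below (Python/NumPy). The array names, thresholds, and exact removal conditions are illustrative assumptions; in the actual processing, the Step 2 and Step 3 criteria are applied jointly to the three components, and the thresholds are tuned per baseline (Table 2).

```python
import numpy as np

BLOCK_EPOCHS = 430770 // 30  # 14,359 epochs of 30 s per 430,770-s block

def remove_outliers(x, n_dd, rms_dd, min_dd=8, max_rms=15.0,
                    k_year=4.0, k_block=4.0):
    """Sketch of Steps 1-5 for a single component (values in mm).

    x      : year-long 30-s kinematic coordinate series
    n_dd   : number of double differences (DDs) per epoch
    rms_dd : RMS of post-fit DD residuals per epoch (max_rms is a placeholder)
    """
    x = x.astype(float).copy()

    # Step 1: few DDs and/or large post-fit DD residual RMS
    x[(n_dd < min_dd) | (rms_dd > max_rms)] = np.nan

    # Step 2: whole-year criterion (4 sigma)
    x[np.abs(x - np.nanmean(x)) > k_year * np.nanstd(x)] = np.nan

    # Step 3: block-wise criterion (4 or 5 sigma within each 430,770-s block)
    for s in range(0, x.size, BLOCK_EPOCHS):
        blk = x[s:s + BLOCK_EPOCHS]
        blk[np.abs(blk - np.nanmean(blk)) > k_block * np.nanstd(blk)] = np.nan

    # Step 4: linear interpolation over the removed epochs
    t = np.arange(x.size)
    kept = ~np.isnan(x)
    x = np.interp(t, t[kept], x[kept])

    # Step 5: split into 430,770-s-long blocks (drop the incomplete tail)
    n_blocks = x.size // BLOCK_EPOCHS
    blocks = x[:n_blocks * BLOCK_EPOCHS].reshape(n_blocks, BLOCK_EPOCHS)
    kept_blocks = kept[:n_blocks * BLOCK_EPOCHS].reshape(n_blocks, BLOCK_EPOCHS)
    return blocks, kept_blocks  # kept_blocks marks non-interpolated epochs
```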
Outlier removal process for the north component of the three baselines as labeled. Right panels show a closer look at the fluctuations shown in the left panel of each row. Red dots indicate outputs of the kinematic analysis of TRACK. Green dots indicate coordinates after removing epochs with large post-fit residuals of double differences (DDs) and/or determined from a small number of DDs (Additional file 1: Fig. S2; see section "Postprocessing to remove outliers"). Blue dots indicate coordinates with 30-s intervals after the interpolation of neighboring epochs for outliers removed from the green dots with the whole-year and the local standard deviation criteria (see section "Postprocessing to remove outliers"). The same plots but for the east and up components are provided in Additional file 1: Fig. S1
Table 2 Summary for data cleaning process after kinematic analysis
The quality of observations and resultant positions differs among the baselines, so there is no reasonable common threshold at Steps 1 and 3; the performance of a threshold designed for a noisier baseline would obviously be worse at a less noisy one. Hence, we determine the thresholds used at these steps by trial and error for each baseline so as not to remove too many epochs (Table 2). Eventually, 1.0, 1.4, and 4.2% of the total epochs are generated by interpolation for baselines POTM-POTS, GODE-GODN, and TSK2-TSKB, respectively. Although we do not include these interpolated epochs when assessing the performance of the sidereal filter, we still need to interpolate the removed epochs because low-pass filtering and power spectral density computation require equally sampled time series. We present details of these analyses in section "Evaluation of performance of position-domain sidereal filter".
We obtain seventy-three 430,770-s blocks and use them as the minimum unit to carry out all the examinations with 30-s kinematic coordinates. The assumed repeat time, 86,154 s (Ragheb et al. 2007), is slightly different in other studies and varies with time and with each GPS satellite, but the deviation from 86,154 s is known to be much smaller than the sampling interval of 30 s (e.g., Agnew and Larson 2007; Choi et al. 2004; Larson et al. 2007; Ragheb et al. 2007). A repeat period of 86,160 s (Bock et al. 2000), the closest multiple of the sampling interval (i.e., 30 s) to 86,154 or 86,164 s, could also define the block length as a reasonable approximation of a repeat period. Yet, we need to prepare time series longer than the sidereal day to investigate multipath characteristics at the time scale of early postseismic deformation (e.g., Tsang et al. 2019; Twardzik et al. 2019). The block length of 430,770 s is then the best option because it naturally comprises the sampling interval, the representative repeat period, and the time scale of early postseismic deformation (i.e., ~ 5 days). This constellation repeat time is shorter than a sidereal day (~ 86,164 s) and our filter length is about 5 times that of the standard sidereal filter, but we keep using the word "sidereal filter" for our multipath-removal filter for simplicity, in contrast to previous studies which defined new terms (e.g., Choi et al. 2004).
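The arithmetic behind the block length can be checked directly (Python 3.9 or later for math.lcm):

```python
import math
print(math.lcm(30, 86154))   # 430770 s: least common multiple of the sampling
                             # interval and the representative repeat period
print(430770 / 86154)        # 5.0: about five constellation repeats (~5 days)
```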
Prior to Step 1, we concatenate the daily sessions of kinematic GPS coordinates obtained by the TRACK runs to construct the year-long coordinate series. Analyzing another 24-h session from 12:00:00 on 1 January 2019 to 11:59:30 on 2 January 2019 using concatenated RINEX files (Additional file 1: Fig. S3) of POTM and POTS confirms that the concatenation in the position domain does not introduce artificial gaps or trends; the difference between the observation-domain and the position-domain concatenation is less than a few mm within ~ 30 min of the day boundary.
Evaluation of performance of position-domain sidereal filter
Repeatability of coordinate fluctuations
We first quantitatively assess the repeatability of the coordinate fluctuations, which is fundamental to applying the sidereal filter, by computing block-by-block correlation coefficients without the epochs filled by interpolation (section "Postprocessing to remove outliers"). We define the correlation coefficient (CC) as follows:
$${\mathrm{CC}}_{ij}=\frac{\sum_{k=1}^{{N}_{ij}}({d}_{k}^{(i)}-\overline{{d }^{(i)}})\times ({d}_{k}^{(j)}-\overline{{d }^{(j)}})}{\sqrt{\sum_{k=1}^{{N}_{ij}}{({d}_{k}^{(i)}-\overline{{d }^{(i)}})}^{2}}\sqrt{\sum_{k=1}^{{N}_{ij}}{({d}_{k}^{(j)}-\overline{{d }^{(j)}})}^{2}}},$$
where \({d}_{k}^{(i)}\) and \(\overline{{d }^{(i)}}\) are a coordinate at the kth epoch and the average of the coordinates of the ith block, respectively. \({N}_{ij}\) is the total number of epochs used for the computation of \({\mathrm{CC}}_{ij}\), the correlation coefficient between the ith and jth blocks. As stated above, coordinates filled by interpolation are excluded from these terms. We then also compute the correlation coefficients after low-pass filtering to examine whether longer-period fluctuations have better temporal repeatability than shorter-period ones. We first demean and detrend the time series and then apply low-pass filtering with the lp command of the Seismic Analysis Code (SAC) (Goldstein and Snoke 2005; Helffrich et al. 2013). The low-pass filter is a second-order Butterworth filter with various cut-off periods. We test cut-off periods of 100, 200, 500, 1000, 2000, 5000, 10,000, and 20,000 s.
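The correlation analysis can be sketched as follows (NumPy/SciPy), using the block arrays from the earlier outlier-removal sketch. The study itself uses the SAC lp command for the Butterworth low-pass filter; the zero-phase filtfilt call and the masking details below are our assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, detrend

def lowpass(x, cutoff_period, dt=30.0, order=2):
    """Second-order Butterworth low-pass after demeaning and detrending."""
    wn = (1.0 / cutoff_period) / (0.5 / dt)   # cut-off normalized by Nyquist
    b, a = butter(order, wn, btype="low")
    return filtfilt(b, a, detrend(x - x.mean()))

def correlation(d_i, d_j, kept_i, kept_j):
    """CC_ij between blocks i and j, skipping interpolated epochs."""
    m = kept_i & kept_j
    a = d_i[m] - d_i[m].mean()
    b = d_j[m] - d_j[m].mean()
    return np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2))

# example: CC of low-pass filtered (500-s cut-off) neighboring blocks
# cc = correlation(lowpass(blocks[i], 500.0), lowpass(blocks[i + 1], 500.0),
#                  kept_blocks[i], kept_blocks[i + 1])
```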
Sidereal filtering
Next, we apply our 430,770-s-long sidereal filtering to evaluate its performance. We make pairs of two neighboring blocks and take the difference between the two blocks in each pair. We do not use pairs in which the two blocks are not neighbors, because in practice it is common to generate a sidereal filter from coordinates of a period as close to the period of interest as possible. We compare the performance of unfiltered and low-pass-filtered sidereal filters, which is evaluated by the variance reduction (VR) defined as:
$${\mathrm{VR}}_{i,j}=\left(1-\frac{{\sigma }_{A, i,j}^{2}}{{\sigma }_{B,i}^{2}}\right)\times 100,$$
where \({\sigma }_{B,i}\) is a standard deviation of coordinates of the block i before sidereal filtering, and \({\sigma }_{A,i,j}\) is that after sidereal filtering using the block j as a sidereal filter. We exclude the epochs filled by the interpolation (section "Postprocessing to remove outliers") when computing \({\sigma }_{A,i,j}\) and \({\sigma }_{B,i}\). We obtain 144 VR values from 72 pairs of neighboring blocks because \({\mathrm{VR}}_{i,j}\) is asymmetric with respect to i and j.
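In code, the variance reduction for one pair of neighboring blocks may look like the following sketch (the exact treatment of the interpolated epochs in \({\sigma }_{B,i}\) is our assumption):

```python
import numpy as np

def variance_reduction(block_i, filter_j, kept_i, kept_j):
    """VR_ij in percent; filter_j is the (optionally low-pass filtered)
    sidereal filter built from the neighboring block j."""
    m = kept_i & kept_j                          # exclude interpolated epochs
    sigma_b = np.std(block_i[m])                 # before sidereal filtering
    sigma_a = np.std(block_i[m] - filter_j[m])   # after subtracting the filter
    return (1.0 - sigma_a**2 / sigma_b**2) * 100.0
```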
Power spectral density
A comparison of the power spectral density (PSD) of kinematic coordinates before and after sidereal filtering is useful for assessing the characteristic periods of multipath errors and whether the sidereal filter can mitigate them. First, we compute the Fourier spectrum of each block's coordinates by Fast Fourier Transform (FFT) using the fft command of SAC and then convert it to PSD by taking the squared amplitude of the Fourier spectrum. Prior to applying the FFT, we demean and detrend the time series and taper it by applying a Hanning window to the coordinates within 5% of each end. We obtain 72 PSDs from the coordinates before the sidereal filtering. The number of PSDs obtained from coordinates after the sidereal filtering depends on whether low-pass filtering is applied to the sidereal filter; we derive 72 and 144 PSDs from the time series after sidereal filtering without and with the low-pass filtering, respectively. We take the average of these PSDs when we discuss their characteristics; however, for the averaging, we exclude PSDs derived from blocks containing 600 or more consecutive interpolated epochs because the linearly interpolated epochs would slightly lift the average PSD at long periods.
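A schematic NumPy equivalent of this PSD computation is shown below; the study uses the SAC fft command, and the normalization and taper construction here are only illustrative.

```python
import numpy as np
from scipy.signal import detrend

def block_psd(x, dt=30.0, taper_frac=0.05):
    """PSD of one block: demean, detrend, 5% Hanning taper at each end, FFT."""
    y = detrend(x - x.mean())
    n = y.size
    m = int(taper_frac * n)
    w = np.ones(n)
    edge = np.hanning(2 * m)
    w[:m], w[n - m:] = edge[:m], edge[m:]
    spec = np.fft.rfft(y * w)
    freq = np.fft.rfftfreq(n, d=dt)
    psd = np.abs(spec) ** 2 * dt / n          # illustrative normalization
    return freq[1:], psd[1:]                  # drop the zero-frequency term
```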
Kinematic GPS coordinates after the outlier removal show a notable periodic fluctuation pattern over time, whose period is slightly shorter than the solar day, seen as slanted stripe patterns when the fluctuation amplitude is drawn in color (Fig. 2). When the plot is rearranged by the 430,770-s blocks, vertical stripes of fluctuation amplitude emerge (Fig. 3), strongly suggesting that these fluctuations are predominantly caused by multipath. It also suggests that the accumulation of the deviation of the "true" repeat time from the representative value assumed in this study (86,154 s) is not significant, at least with our sampling interval (30 s). On the other hand, the stripe pattern is not uniform over time, showing that the amplitude of multipath evolves over time. This can be interpreted as a temporal change of the multipath environment. It is not always easy to identify the origins of such changes because they are highly case dependent, but they are possibly due to, for example, changes in the number of visible satellites (e.g., Larson et al. 2007), the recurrence position of satellites due to maneuvers (e.g., Choi et al. 2004), and/or the arrangement of reflective bodies around the antenna and their surface reflectivity (e.g., Elósegui et al. 1995). Typical nominal errors of the coordinates obtained in this study are 3–5 mm and 4–6 mm for the horizontal and vertical components, respectively. The standard deviation of the fluctuation of each block ranges over 1.6–3.7 mm, 1.0–4.7 mm, and 2.4–7.6 mm for the north, east, and up components, respectively.
Coordinate fluctuation of three baselines as labeled in each day of year (DOY). Left (a, d, and g), center (b, e, and h), and right (c, f, and i) columns indicate north, east, and up components, respectively. Neither low-pass filtering (LPF) nor sidereal filtering (SRF) is applied. Green color indicates epochs filled by interpolation of neighboring epochs during the data cleaning process
Coordinate fluctuation of three baselines as labeled in each block. Left (a, d, and g), center (b, e, and h), and right (c, f, and i) columns indicate north, east, and up components, respectively. Neither low-pass filtering (LPF) nor sidereal filtering (SRF) is applied. Green color represents epochs filled by interpolation of neighboring epochs during the data cleaning process. The same plots but those after LPF are provided in Additional file 1: Figs. S5, S6, and S7
Correlation coefficients of the 72 pairs of neighboring blocks are mostly between 0.5–1, 0.3–0.8, and 0.5–1 for the north, east, and up components, respectively, showing good repeatability in the 430,770-s window (Fig. 4). As a rule of thumb, the repeatability of GODE-GODN is better than that of POTM-POTS, which is in turn better than that of TSK2-TSKB. By extracting longer-period components with low-pass filtering, the correlation coefficient of these pairs is improved (Fig. 4), consistent with the practice of empirically applying low-pass filtering in constructing a sidereal filter (e.g., Atkins and Ziebart 2016; Geng et al. 2017, 2018; Larson et al. 2007). However, the correlation between two blocks becomes worse when the cut-off period is too long, e.g., 20,000 s (Fig. 4). Low-pass filtering with a cut-off period of 20,000 s does not yield the maximum correlation coefficient for most pairs (Additional file 1: Fig. S4) and yields a much smaller amplitude of fluctuation than that with a shorter cut-off period (Additional file 1: Figs. S5, S6, and S7). Hence, it is reasonable to conclude that random fluctuation has bigger effects than the periodic multipath signature at longer periods, which degrades the correlation coefficient values.
Histograms of correlation coefficients of 72 pairs of neighboring blocks. a–c Correlation coefficients of coordinates without (green) and with (others) low-pass filtering (LPF) for north (a), east (b), and up (c) components of a baseline POTM-POTS. A cut-off period of LPF is 500 (red), 5000 (blue), and 20,000 (brown) seconds. d–f Same as a–c but for a baseline GODE-GODN. g–i Same as a–c but for a baseline TSK2-TSKB
This period dependency holds when we compute correlation coefficients of all the possible pairs of blocks (Additional file 1: Fig. S8). Up to a certain cut-off period of low-pass filtering, the correlation coefficient improves even for pairs of temporally separated blocks. This suggests that the static multipath environment (e.g., antenna height and nearby buildings) causes the relatively longer-period component of the multipath signature. The correlation of temporally more distant blocks is worse in the same period band, and it is worse at relatively shorter periods (Additional file 1: Fig. S8). As already discussed above, the accumulation of the deviation of the "true" repeat time from the assumed value should emerge in all the baselines because the orbit repeat time changes over time for all satellites, which is not consistent with our correlation coefficient diagrams (Additional file 1: Fig. S8). Temporal change of the local multipath environment would thus have a larger impact on the multipath signature in the position domain, because the correlation coefficients of the three baselines decrease with time differently.
Performance of position-domain sidereal filtering
Our sidereal filtering reduces the coordinate fluctuation efficiently when low-pass filtering with a cut-off period of 500 s is applied to the sidereal filter (Fig. 5 and Additional file 2: Fig. S9). The standard deviation of the fluctuation after the filtering is in a range of 1.0–2.6 mm, 0.8–4.6 mm (with only four examples exceeding 3.5 mm), and 1.5–5.8 mm for the north, east, and up components, respectively. Even without low-pass filtering of the sidereal filter, the post-sidereal-filtered coordinates look similar (time series C and D in Fig. 6, Additional file 2: Figs. S10 and S11). The difference between these two cases is not clearly discernible, apparently suggesting that low-pass filtering of a sidereal filter is unnecessary. However, the difference between them is clearly found in the variance reduction (Fig. 7). The most frequent VR range of the horizontal components improves by up to 20% when low-pass filtering with a cut-off period of 500 s is applied to the sidereal filter (Fig. 7a, b, d, e, g, h). The range of the most frequent VR of the vertical component does not improve, but it is clear that more pairs have larger VR values with the low-pass-filtered sidereal filter (Fig. 7c, f, i). On the other hand, applying a low-pass-filtered sidereal filter with a cut-off period of 5000 or 20,000 s clearly leaves periodic fluctuations in the post-sidereal-filtered coordinates (Fig. 6, Additional file 2: Figs. S10, S12, S13, S14, and S15). The reason for this observation is obvious; with a longer cut-off period, the amplitude of the sidereal filter becomes smaller (Fig. 6, Additional file 1: Figs. S5, S6, S7, and Additional file 2: Fig. S10), so moderately correlated multipath errors cannot be mitigated because they have been removed from the filter. Accordingly, VR decreases to 30% or worse in most cases when only fluctuations at periods longer than 20,000 s are used for the sidereal filter (Fig. 7). This trade-off results in a cut-off period of 500 s yielding the maximum VR in most cases for POTM-POTS and GODE-GODN (except for the east component) after testing the eight cut-off periods (Fig. 8a–d and f). The cut-off period giving the maximum VR for TSK2-TSKB and for the east component of GODE-GODN has a broader distribution between 500 and 2000 s (Fig. 8e and g–i). These results imply that there is no magic number for the cut-off period of low-pass filtering to improve the performance of sidereal filtering, although 200 or 500 s might indicate the lower limit of candidate cut-off periods for the 30-s data. This conclusion is consistent with a simulation showing that the period of the multipath signature depends on the site configuration and the satellite elevation angle (e.g., Larson et al. 2007). Trial and error on the cut-off period, with checks of the correlation coefficient and variance reduction, would be important to improve the performance of the position-domain sidereal filter.
Coordinate fluctuation of three baselines as labeled in each block after sidereal filtering (SRF). The sidereal filter is made of each succeeding block. Low-pass filtering (LPF) with a cut-off period of 500 s is applied to coordinates used as sidereal filter. Left (a, d, and g), center (b, e, and h), and right (c, f, and i) columns indicate north, east, and up components, respectively. Green color represents epochs filled by interpolation of neighboring epochs during the data cleaning process. The same plots but using a different sidereal filter are provided in Additional file 2: Figs. S9, S11, S12, S13, S14, and S15
An example of the north component of coordinates before and after sidereal filtering (SRF). a Coordinates of baseline POTM-POTS before (A) and after sidereal filtering by a sidereal filter without (C) and with low-pass filtering using a cut-off period of 500 (D), 5000 (E), and 20,000 (F) seconds. Sidereal filters used are drawn with the corresponding color for each low-pass filter setting at row (B). b Same as a but for GODE-GODN. c Same as a but for TSK2-TSKB. The same plots but for east and up components are provided in Additional file 2: Fig. S10
Histograms of variance reduction by sidereal filtering. a–c Variance reduction using sidereal filter without (green) and with (others) low-pass filtering (LPF) for north (a), east (b), and up (c) components of a baseline POTM-POTS. A cut-off period for LPF is 500 (red), 5000 (blue), and 20,000 (brown) seconds. d–f Same as a–c but for a baseline GODE-GODN. g–i Same as a–c but for a baseline TSK2-TSKB
Histograms of cut-off periods of low-pass filtering (LPF) which provides the maximum variance reduction for each pair of neighboring blocks. a–c North (a), east (b), and up (c) components of a baseline POTM-POTS. d–f Same as a–c but for GODE-GODN. g–i Same as a–c but for TSK2-TSKB
The PSD of coordinate fluctuations before sidereal filtering contains many peaks at periods of up to 100,000 s (Fig. 9). Previous studies have not clearly identified some of these PSD peaks at the longest periods, especially those longer than 50,000 s (e.g., Bock et al. 2000; Geng et al. 2017, 2018; Larson et al. 2007), likely because of their block lengths, data quality, or both. Sidereal filtering mitigates most of these peaks (Fig. 9). On the other hand, interestingly, the PSD at the shortest periods (< ~ 200 s) somewhat increases after sidereal filtering without low-pass filtering in all the baselines tested (Fig. 9). This lift of PSD at the shortest periods can easily be avoided by a low-pass-filtered sidereal filter because such a filter contains little fluctuation at these periods (Fig. 9). Not only the variance reduction but also our PSD analysis thus recommends the low-pass filtering practice to improve the position-domain sidereal filter. The same qualitative conclusion was drawn by Wang et al. (2018), who applied sidereal filtering to 1-Hz GPS data in the observation domain. We further discuss this phenomenon at the shortest periods in section "Noise characteristics at shorter periods than 200–500 s".
Average of power spectral density (PSD) of each block of kinematic coordinates. a–c North (a), east (b), and up (c) PSD before (black) and after sidereal filtering (SRF) without (green) and with (red) low-pass filtering (LPF) to the sidereal filter. A cut-off period of the low-pass filter is 500-s. Two vertical thick lines indicate a period of 200 and 500 s. Solid slant lines indicate PSD of random-walk (PSD ∝ f−2 where f is frequency) and flicker noises (PSD ∝ f−1). d–f Same as a–c but for GODE-GODN. g–i Same as a–c but for TSK2-TSKB
Discussion on the noise characteristics
The period (or frequency) dependency of PSD provides information on the colored noise contaminating the coordinate time series, which can bias the extracted crustal deformation signature (e.g., Mao et al. 1999; Zhang et al. 1997). Most of our nine PSDs have a kink at 200–500 s (Fig. 9), showing different noise characteristics on either side of it. Hence, in this section, we discuss the noise characteristics at periods shorter and longer than this kink period (i.e., 200–500 s) separately to gain insights into their origins.
Noise characteristics at longer periods than 200–500 s
Bock et al. (2000) reported a period dependency of PSDs after sidereal filtering for a 50-m baseline in California; at periods between 100 and 100,000 s, its spectral index (i.e., α in PSD ∝ T^α, where T is period) is 0.3. The origin of this remaining weak period dependency is yet to be understood, but it is much weaker than the colored noise caused by atmospheric disturbances, which has a spectral index of 1 (Bock et al. 2000) or 5/3 (Williams et al. 1998). On the longer-period side of our PSD kinks (i.e., > 200–500 s), the PSDs of POTM-POTS have a similarly weak period dependency (Fig. 9a–c), but the PSDs of the other two baselines indicate a stronger period dependency that gradually weakens toward longer periods. Note that all of them should contain little atmospheric disturbance.
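For reference, the spectral index over a chosen period band can be estimated from an averaged PSD by a straight-line fit in log-log space (our illustration, not the procedure of Bock et al. 2000; the band limits are placeholders):

```python
import numpy as np

def spectral_index(period, psd, band=(500.0, 50000.0)):
    """Slope alpha of PSD ∝ T^alpha within the given period band."""
    sel = (period >= band[0]) & (period <= band[1])
    alpha, _ = np.polyfit(np.log10(period[sel]), np.log10(psd[sel]), 1)
    return alpha
```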
Thermal expansion of GNSS monuments, mainly controlled by the configuration of the monuments (e.g., material and structure) and the local meteorological condition (e.g., temperature and insolation pattern), is known to impact the positioning analysis (e.g., Hatanaka et al. 2005; Munekane 2012; Yan et al. 2009). The local meteorological condition of the two sites of each baseline should be the same in our study. The antennas of both POTM and POTS are situated on brick pillars of the same architecture (Helmholtz Centre Potsdam GFZ German Research Centre for Geosciences 2020, 2021), while those of the other baselines are not (International GNSS Service 2021a, 2021b, 2021c, 2021d), so one might think that the configuration of monuments is a possible cause of the remaining period dependency of PSD in the GODE-GODN and TSK2-TSKB baselines. However, the pillars are not tall at GODE (0.5 m), GODN (1.5 m), and TSKB (1.5 m), so the un-differenced thermal expansion effect at each site is likely small because the thermal expansion is proportional to the pillar height (Yan et al. 2009). The pillar of TSK2 is covered by a concentric cover, which is known to mitigate the thermal expansion effect (Munekane 2012). Furthermore, Munekane (2012) reported a diurnal change in apparent position due to the thermal expansion, so it would only produce several peaks in the corresponding PSDs. Hence, we conclude that the thermal expansion of monuments is not significantly responsible for the period dependency of PSDs found in this study.
Another well-known GNSS error source is the common-mode noise appearing uniformly at all sites (e.g., Wdowinski et al. 1997). One of its origins is the instability of the reference site, but such an effect is inseparable in this study because each experiment uses only one reference and one kinematic site. However, if more baselines are simultaneously processed by the kinematic analysis with a common reference site, the monument effect of the reference site could be estimated by a common-mode filter (e.g., Wdowinski et al. 1997) or by statistical decomposition such as principal component analysis or independent component analysis (e.g., Zheng et al. 2021). In this way, the fluctuation of kinematic coordinates could be further alleviated (e.g., Larson et al. 2007). We should note that we cannot rule out the possibility that some local outliers remaining after the post-processing (Fig. 1 and Additional file 1: Fig. S1), as well as dynamic changes of the multipath environment including ground reflectivity changes, impact the PSD shape. Future investigations will be necessary to pursue the origin of the period dependency of PSD in the long-period band.
Noise characteristics at shorter periods than 200–500 s
The shorter-period side of the PSD kink, where the PSD lift occurs, is close to the sampling interval (30 s). Hence, there is a need to examine the possibility that the sampling interval impacts the repeatability of the multipath signature at the shorter periods. We carry out a small experiment using 1-Hz observations with the GODE-GODN baseline. We obtain six 1-Hz kinematic GPS time series with a length of 21,600 s (i.e., 6 h; section "1-Hz kinematic analysis") and subsequently apply the sidereal filtering to the three pairs without low-pass filtering. Here, the two blocks in each pair are 430,770 s apart for consistency with the 30-s data analysis (Additional file 2: Table S1). The sidereal filter again succeeds in mitigating coordinate fluctuations of all the components (Fig. 10a–f), but the PSDs at periods shorter than 100 or 200 s are lifted after sidereal filtering (Fig. 10g–i). Hence, the sampling interval has a negligible impact on the repeatability of the multipath signature in the position domain. Similarly, Wang et al. (2018), who employed 1-Hz GPS observations for a similar experiment using the observation-domain sidereal filtering approach, reported a similar PSD lift at periods shorter than 50 s. Hence, the multipath noise at the shortest periods for a given sampling interval might be overwhelmed by other types of noise.
Sidereal filtering results applied to the 1-Hz time series of the GODE-GODN baseline. a–c North (a), east (b), and up (c) coordinate fluctuations. Neither low-pass filtering (LPF) nor sidereal filtering (SRF) is applied. d–f North (d), east (e), and up (f) coordinate fluctuations (color) after sidereal filtering without low-pass filtering to the sidereal filter. g–i Average of power spectral density (PSD) of each block of 1-Hz kinematic coordinates. North (g), east (h), and up (i) PSD before (black) and after sidereal filtering without low-pass filtering to the sidereal filter (green). Vertical thick line indicates a period of 200 s
As mentioned in the Introduction, it is well known that each GPS satellite has a different orbital period with temporal variations (e.g., Agnew and Larson 2007; Choi et al. 2004; Larson et al. 2007; Ragheb et al. 2007). Larson et al. (2007) have already pointed out that frequent adjustment of the repeat period (i.e., the amount of temporal shift of the sidereal filter relative to a target segment) improves the performance of sidereal filtering in terms of post-sidereal-filtering position residuals. Another approach to mitigate the noise at the shortest periods is the observation-domain sidereal filtering, which allows us to account for the various repeat periods of each satellite and their temporal changes. Successful examples are Geng et al. (2018) and Wang et al. (2018), who mitigated the noise level down to periods of 16 s and 50 s, respectively. Hence, more precise handling of the repeat time would provide some improvement of the short-period multipath removal. These examples used 1-Hz GPS observations, however, so their approach would not be directly applicable to improving the sidereal filter constructed from kinematic GPS coordinates at a sampling interval of 30 s.
The spectral index of PSD at the shortest periods (i.e., 60–200 or 500 s) is ~ 2 in most of our PSDs derived from the 30-s and 1-Hz data, but the period dependency continuously weakens to zero near 2 s (Figs. 9 and 10g–i). A similar change in the spectral index has also been reported by Genrich and Bock (2006) and Wang et al. (2018). This shape of PSD indicates a combination of random-walk and random noise, which might overwhelm multipath noise in this short-period band. Here, we apply a sidereal filter made by stacking multiple blocks to the GODE-GODN time series to examine whether the PSD lift is alleviated, because the stacking mitigates the random noise contribution. A sidereal filter cannot be made of observations after the mainshock in a practical application to postseismic deformation. Therefore, we stack two or five blocks preceding a block of interest without low-pass filtering and then subtract the stack from the time series in the block of interest (see the sketch after this paragraph). Average PSDs after sidereal filtering show that the PSD lift is more alleviated when more blocks are stacked (Fig. 11), demonstrating the mitigation of random noise by stacking. The PSD at periods longer than the kink slightly decreases as well (Fig. 11a and c). However, compared to the one-block or two-block cases, the PSD happens to become worse in an intermediate band, 200–1000 s, when five blocks are stacked (Fig. 11d–f). The random-walk noise is not mitigated by the stacking approach, so it might contribute to the PSD lift. The random-walk noise in geodetic data is often interpreted as monument instability, but that effect is not dominant at these shortest periods (e.g., Langbein and Johnson 1997). Hence, the origin of the random-walk noise causing this PSD lift is still elusive. A practical prescription for postseismic deformation studies using kinematic GPS coordinates with a sampling of 30 s would be the application of low-pass filtering or smoothing to both the sidereal filter and the data of interest, because the shortest periods are usually not the focus of early postseismic deformation studies (e.g., Milliner et al. 2020; Tsang et al. 2019).
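A sketch of the stacked sidereal filter used in this experiment (simple element-wise averaging of the preceding blocks is our reading of the procedure):

```python
import numpy as np

def stacked_filter(blocks, i, n_stack):
    """Average of the n_stack blocks preceding block i (no low-pass filtering)."""
    assert i >= n_stack, "not enough preceding blocks to stack"
    return blocks[i - n_stack:i].mean(axis=0)

# usage sketch: residual_i = blocks[i] - stacked_filter(blocks, i, 5)
```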
Average of power spectral density (PSD) before and after sidereal filtering made by stacking. a North component. The black line indicates PSD before sidereal filtering while colored lines indicate PSD after sidereal filtering in which a sidereal filter is made by stacking one (green), two (pink), or five (light blue) blocks preceding a block of interest. Vertical thick line indicates a period of 200 s. b, c Same as a but for east and up components. d–f Same as a–c but PSDs at periods from 50 to 1000 s are magnified for clarity
Discussion on applicability to the practical postseismic deformation study
Practically, very short baselines like those employed in this study cannot capture the postseismic deformation signature because the motions due to postseismic deformation at the two sites forming each baseline are essentially identical. To capture the postseismic deformation, coordinates derived from a differential analysis with a much longer baseline or from a PPP analysis are necessary. Such a setting is not directly comparable to our experiments using the very short baselines because the atmospheric delay more or less affects the positioning result, even though its effect is accounted for by implementing a mapping function (e.g., Boehm et al. 2006). If the atmospheric delay exhibited the sidereal periodicity, the sidereal filtering technique could mitigate the atmospheric disturbances together with the multipath, but that is not the case in most situations. Nevertheless, with careful inspection of the temporal repeatability of fluctuations in the data, the sidereal filtering operation with low-pass filtering tested in this study can still be effective in reducing the multipath effect in practical applications to postseismic deformation studies. From this viewpoint, the experiments presented in this paper demonstrate the lowest currently achievable noise level of 30-s kinematic GPS coordinates used for postseismic deformation as well as other tectonic deformation studies. Our experiments would be useful when we design early postseismic deformation studies in the future.
We have investigated the properties of multipath errors and the performance of sidereal filtering of 30-s kinematic GPS coordinates at periods between 60 s and nearly 5 days. To focus predominantly on the multipath effects, we carried out all the experiments using three very short baselines ranging from 3 to 65 m, an environment nearly free from atmospheric and ionospheric disturbances and orbit errors. The obtained correlation coefficients, variance reductions, and power spectral densities before and after sidereal filtering led to the following points and suggestions as the main conclusions of this study.
The repeatability of coordinate fluctuations associated with the satellite constellation is, as a rule of thumb, better at longer periods. If the cut-off period is too long, however, the amplitude of the filtered fluctuation becomes small, and hence the low-pass filter deteriorates the performance of sidereal filtering.
Standard sidereal filtering can remove multipath noise at intermediate-to-long periods (i.e., 200 s to 100,000 s). However, the noise level at shorter periods worsens if fluctuations at these short periods are not discarded from the coordinates used as a sidereal filter. In this respect, the application of low-pass filtering with a moderate cut-off period in generating a sidereal filter plays a meaningful role in improving the performance of sidereal filtering.
The standard deviation of the post-sidereal-filtered coordinates is a few to 6 mm, which is the lowest noise level currently achievable by 30-s kinematic GPS coordinates used for postseismic and other tectonic deformation studies.
The shape of the post-sidereal-filtered PSDs exhibits random-walk noise at the shortest periods. This period (or frequency) dependency becomes weaker at longer periods, but its pattern varies among baselines and components. The origin of the remaining period dependency is still enigmatic and needs further study in the future.
We have retrieved the RINEX files via ftp://isdcftp.gfz-potsdam.de/gnss/data/daily (GFZ Data System; Ramatschi et al. 2019), https://cddis.nasa.gov/archive/gnss/data/daily, ftp://data-out.unavco.org/pub/rinex/obs, and https://cddis.nasa.gov/archive/gnss/data/highrate. GAMIT/GLOBK (Herring et al. 2015, 2018a, b) and Seismic Analysis Code (SAC) (Goldstein and Snoke 2005; Helffrich et al. 2013) are available upon request to the developers at http://geoweb.mit.edu/gg/license.php and http://ds.iris.edu/ds/nodes/dmc/forms/sac/, respectively. A software GFZRNX (Nischan 2016) to convert format of the RINEX files of POTM and POTS is available at https://dataservices.gfz-potsdam.de/panmetaworks/showshort.php?id=escidoc:1577894. A software TEQC to concatenate RINEX files of POTM and POTS is available at https://www.unavco.org/software/data-processing/teqc/teqc.html.
CC:
Correlation coefficient
DD:
Double difference
FFT:
Fast Fourier Transform
GNSS:
Global Navigation Satellite System
IGS:
International GNSS Service
ITRF:
International Terrestrial Reference Frame
LPF:
Low-pass filtering
RINEX:
Receiver Independent Exchange Format
PPP:
Precise Point Positioning
PSD:
Power spectral density
SRF:
Sidereal filtering
VR:
Variance reduction
Agnew DC, Larson KM (2007) Finding the repeat times of the GPS constellation. GPS Solut 11:71–76. https://doi.org/10.1007/s10291-006-0038-4
Altamimi Z, Rebishung P, Métivier L, Collilieux X (2016) ITRF2014: a new release of the International Terrestrial Reference Frame modeling nonlinear station motions. J Geophys Res Solid Earth 121:6109–6131. https://doi.org/10.1002/2016JB013098
Atkins C, Ziebart M (2016) Effectiveness of observation-domain sidereal filtering for GPS precise point positioning. GPS Solut 20:111–122. https://doi.org/10.1007/s10291-015-0473-1
Blewitt G, Hammond WC, Kreemer C (2018) Harnessing the GPS data explosion for interdisciplinary science. Eos. https://doi.org/10.1029/2018EO104623
Bock Y, Melgar D (2016) Physical applications of GPS geodesy: a review. Rep Prog Phys 79(10):106801. https://doi.org/10.1088/0034-4885/79/10/106801
Bock Y, Nikolaidis RM, de Jonge PJ, Bevis M (2000) Instantaneous geodetic positioning at medium distances with the Global Positioning System. J Geophys Res Solid Earth 105(B21):28223–28253. https://doi.org/10.1029/2000JB900268
Bock Y, Prawirodirdjo L, Melbourne TI (2004) Detection of arbitrarily large dynamic ground motions with a dense high-rate GPS network. Geophys Res Lett 31:L06604. https://doi.org/10.1029/2003GL019150
Boehm J, Werl B, Schuh H (2006) Troposphere mapping functions for GPS and very long baseline interferometry from European Centre for Medium-Range Weather Forecasts operational analysis data. J Geophys Res Solid Earth 111:B02406. https://doi.org/10.1029/2005JB003629
Choi K, Bilich A, Larson KM, Axelrad P (2004) Modified sidereal filtering: Implications for high-rate GPS positioning. Geophys Res Lett 31:L22608. https://doi.org/10.1029/2004GL021621
Dai W, Huang D, Cai C (2014) Multipath mitigation via component analysis methods for GPS dynamic deformation monitoring. GPS Solut 18:417–428. https://doi.org/10.1007/s10291-013-0341-9
Dong D, Wang M, Chen W, Zeng Z, Song L, Zhang Q et al (2016) Mitigation of multipath effect in GNSS short baseline positioning by the multipath hemispherical map. J Geod 90:255–262. https://doi.org/10.1007/s00190-015-0870-9
Elósegui P, Davis JL, Jaldehag RTK, Johansson JM, Niell AE, Shapiro II (1995) Geodesy using the Global Positioning System: the effects of signal scattering on estimates of site position. J Geophys Res Solid Earth 100:9921–9934. https://doi.org/10.1029/95JB00868
Fuhrmann T, Luo X, Knöpfler A, Mayer M (2015) Generating statistically robust multipath stacking maps using congruent cells. GPS Solut 19:83–92. https://doi.org/10.1007/s10291-014-0367-7
Fuhrmann T, Garthwaite MC, McClusky S (2021) Investigating GNSS multipath effects induced by co-located radar corner reflectors. J Appl Geod 15:207–224. https://doi.org/10.1515/jag-2020-0040
Galetzka J, Melgar D, Genrich JF, Geng J, Owen S, Lindsey EO et al (2015) Slip pulse and resonance of the Kathmandu basin during the 2015 Gorkha earthquake, Nepal. Science 349:1091–1095. https://doi.org/10.1126/science.aac6383
Geng J, Jiang P, Liu J (2017) Integrating GPS with GLONASS for high-rate seismogeodesy. Geophys Res Lett 44:3139–3146. https://doi.org/10.1002/2017GL072808
Geng J, Pan Y, Li X, Guo J, Liu J, Chen X et al (2018) Noise characteristics of high-rate multi-GNSS for subdaily crustal deformation monitoring. J Geophys Res Solid Earth 123:1987–2002. https://doi.org/10.1002/2018JB015527
Genrich JF, Bock Y (1992) Rapid resolution of crustal motion at short ranges with the global positioning system. J Geophys Res Solid Earth 97:3261–3269. https://doi.org/10.1029/91JB02997
Genrich JF, Bock Y (2006) Instantaneous geodetic positioning with 10–50 Hz GPS measurements: noise characteristics and implications for monitoring networks. J Geophys Res Solid Earth 111:B03403. https://doi.org/10.1029/2005JB003617
Goldstein P, Snoke A (2005) SAC Availability for the IRIS Community. Incorporated Research Institutions for Seismology Data Management Center Electronic Newsletter. https://ds.iris.edu/ds/newsletter/vol7/no1/193/sac-availability-for-the-iris-community/. Accessed 02 June 2021
Hatanaka Y, Yamagiwa A, Yutsudo T, Miyahara B (2005) Evaluation of precision of routine solutions of GEONET. J Geograph Surv Inst 108:49–56
Helffrich G, Wookey J, Bastow I (2013) The seismic analysis code. Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9781139547260
Helmholtz Centre Potsdam GFZ German Research Centre for Geosciences (2020) Semisys 4.1 POTM00DEU, https://semisys.gfz-potsdam.de/semisys/scripts/sites/site_view.php?site_id=1023. Accessed in 31 May 2021
Helmholtz Centre Potsdam GFZ German Research Centre for Geosciences (2021) Semisys 4.1 POTS00DEU, https://semisys.gfz-potsdam.de/semisys/scripts/sites/site_view.php?site_id=1024. Accessed 31 May 2021
Herring TA, Floyd MA, King RW, McClusky SC (2015) GLOBK Reference Manual Global Kalman filter VLBI and GPS analysis program Release 10.6. GAMIT/GLOBK. http://geoweb.mit.edu/gg/GLOBK_Ref.pdf. Accessed 02 Dec 2020
Herring TA, King RW, Floyd MA, McClusky SC (2018a) GAMIT Reference Manual GPS Analysis at MIT Release 10.7. GAMIT/GLOBK. http://geoweb.mit.edu/gg/GAMIT_Ref.pdf. Accessed 02 Dec 2020
Herring TA, King, RW, Floyd MA., McClusky SC (2018b) Introduction to GAMIT/GLOBK Release 10.7. GAMIT/GLOBK. http://geoweb.mit.edu/gg/Intro_GG.pdf. Accessed 02 Dec 2020
International GNSS Service (2021a) IGS Station – GODE00USA, https://www.igs.org/imaps/station.php?id=GODE00USA. Accessed 31 May 2021
International GNSS Service (2021b) IGS Station – GODN00USA, https://www.igs.org/imaps/station.php?id=GODN00USA. Accessed 31 May 2021
International GNSS Service (2021c) IGS Station – TSK200JPN, https://www.igs.org/imaps/station.php?id=TSK200JPN. Accessed in 31 May 2021
International GNSS Service (2021d) IGS Station—TSKB00JPN, https://www.igs.org/imaps/station.php?id=TSKB00JPN. Accessed 31 May 2021
Iwabuchi T, Shoji Y, Shimada S, Nakamura H (2004) Tsukuba GPS dense net campaign observations: comparison of the stacking maps of post-fit phase residuals estimated from three software packages. J Meteo Soc Jpn Ser II 82:315–330. https://doi.org/10.2151/jmsj.2004.315
Jiang J, Bock Y, Klein E (2021) Coevolving early afterslip and aftershock signatures of a San Andreas fault rupture. Sci Adv 7(15):eabc1606. https://doi.org/10.1126/sciadv.abc1606
Kato A, Fukuda J, Nakagawa S, Obara K (2016) Foreshock migration preceding the 2016 Mw 7.0 Kumamoto earthquake, Japan. Geophys Res Lett 43:8945–8953. https://doi.org/10.1002/2016GL070079
Kawamoto S, Ohta Y, Hiyama Y, Todoriki M, Nishimura T, Furuya T et al (2017) REGARD: a new GNSS-based real-time finite fault modeling system for GEONET. J Geophys Res Solid Earth 122:1324–1349. https://doi.org/10.1002/2016JB013485
Langbein J, Johnson H (1997) Correlated errors in geodetic time series: implications for time-dependent deformation. J Geophys Res Solid Earth 102:591–603. https://doi.org/10.1029/96JB02945
Larson KM, Bodin P, Gomberg J (2003) Using 1-Hz GPS data to measure deformations caused by the denali fault earthquake. Science 300:1421–1424. https://doi.org/10.1126/science.1084531
Larson KM, Bilich A, Axelrad P (2007) Improving the precision of high-rate GPS. J Geophys Res Solid Earth 112:B05422. https://doi.org/10.1029/2006JB004367
Mao A, Harrison CGA, Dixon TH (1999) Noise in GPS coordinate time series. J Geophys Res Solid Earth 104:2797–2816. https://doi.org/10.1029/1998JB900033
Melgar D, Crowell BW, Melbourne TI, Szeliga W, Santillman M, Scrivner C (2020) Noise characteristics of operational real-time high-rate GNSS positions in a large aperture network. J Geophys Res Solid Earth 125:e2019JB019197. https://doi.org/10.1029/2019JB019197
Milliner C, Bürgmann R, Inbal A, Wang T, Liang C (2020) Resolving the kinematics and moment release of early afterslip within the first hours following the 2016 Mw 7.1 Kumamoto Earthquake: implications for the shallow slip deficit and frictional behavior of aseismic creep. J Geophys Res Solid Earth 125:8928. https://doi.org/10.1029/2019JB018928
Miyazaki S, Larson KM (2008) Coseismic and early postseismic slip for the 2003 Tokachi-oki earthquake sequence inferred from GPS data. Geophys Res Lett 35:L04302. https://doi.org/10.1029/2007GL032309
Miyazaki S, Larson KM, Choi K, Hikima K, Koketsu K, Bodin P et al (2004) Modeling the rupture process of the 2003 September 25 Tokachi-Oki (Hokkaido) earthquake using 1-Hz GPS data. Geophys Res Lett 31:L21603. https://doi.org/10.1029/2004GL021457
Moore M, Watson C, King M, McClusky S, Tregoning P (2014) Empirical modelling of site-specific errors in continuous GPS data. J Geod 88:887–900. https://doi.org/10.1007/s00190-014-0729-5
Morikami S, Mitsui Y (2020) Omori-like slow decay (p < 1) of postseismic displacement rates following the 2011 Tohoku megathrust earthquake. Earth Planets Space 72:37. https://doi.org/10.1186/s40623-020-01162-w
Munekane H (2012) Coseismic and early postseismic slips associated with the 2011 off the Pacific coast of Tohoku Earthquake sequence: EOF analysis of GPS kinematic time series. Earth Planets Space 64:1077–1091. https://doi.org/10.5047/eps.2012.07.009
Nikolaidis RM, Bock Y, de Jonge PJ, Shearer P, Agnew DC, Van Domselaar M (2001) Seismic wave observations with the Global Positioning System. J Geophys Res Solid Earth 106:21897–21916. https://doi.org/10.1029/2001JB000329
Nischan T (2016) GFZRNX - RINEX GNSS Data Conversion and Manipulation Toolbox. V. 1.13. GFZ Data Services. https://doi.org/10.5880/GFZ.1.1.2016.002
Park KD, Nerem R, Schenewerk M, Davis JL (2004) Site-specific multipath characteristics of global IGS and CORS GPS sites. J Geod 77:799–803. https://doi.org/10.1007/s00190-003-0359-9
Ragheb AE, Clarke PJ, Edwards SJ (2007) GPS sidereal filtering: coordinate- and carrier-phase-level strategies. J Geod 81:325–335. https://doi.org/10.1007/s00190-006-0113-1
Ramatschi M, Bradke M, Nischan T, Männel B (2019) GNSS data of the global GFZ tracking network. V. 1. GFZ Data Services. https://doi.org/10.5880/GFZ.1.1.2020.001
Schaffrin B, Bock Y (1988) A unified scheme for processing GPS dual-band phase observations. Bull Geodesique 62:142–160. https://doi.org/10.1007/BF02519222
Tsang LLH, Vergnolle M, Twardzik C, Sladen A, Nocquet J-M, Rolandone F et al (2019) Imaging rapid early afterslip of the 2016 Pedernales earthquake, Ecuador. Earth Planet Sci Lett 524:115724. https://doi.org/10.1016/j.epsl.2019.115724
Twardzik C, Vergnolle M, Sladen A, Avallone A (2019) Unravelling the contribution of early postseismic deformation using sub-daily GNSS positioning. Sci Rep 9:1775. https://doi.org/10.1038/s41598-019-39038-z
Wang K, Hu Y, He J (2012) Deformation cycles of subduction earthquakes in a viscoelastic Earth. Nature 484:327–332. https://doi.org/10.1038/nature11032
Wang M, Wang J, Dong D, Chen W, Li H, Wang Z (2018) Advanced sidereal filtering for mitigating multipath effects in GNSS short baseline positioning. ISPRS Int J Geo-Inf 7:228. https://doi.org/10.3390/ijgi7060228
Wdowinski S, Bock Y, Zhang J, Fang P, Genrich J (1997) Southern California permanent GPS geodetic array: Spatial filtering of daily positions for estimating coseismic and postseismic displacements induced by the 1992 Landers earthquake. J Geophys Res Solid Earth 102:18057–18070. https://doi.org/10.1029/97JB01378
Wessel P, Smith WHF, Scharroo R, Luis J, Wobbe F (2013) Generic mapping tools: improved version released. EOS Trans Am Geophys Union 94:409–410. https://doi.org/10.1002/2013EO450001
Williams S, Bock Y, Fang P (1998) Integrated satellite interferometry: tropospheric noise, GPS estimates and implications for interferometric synthetic aperture radar products. J Geophys Res Solid Earth 103:27051–27067. https://doi.org/10.1029/98JB02794
Yan H, Chen W, Zhu Y, Zhang W, Zhong M (2009) Contributions of thermal expansion of monuments and nearby bedrock to observed GPS height changes. Geophys Res Lett 36:L13301. https://doi.org/10.1029/2009GL038152
Zhang J, Bock Y, Johnson H, Fang P, Williams S, Genrich J et al (1997) Southern California permanent GPS geodetic array: error analysis of daily position estimates and site velocities. J Geophys Res Solid Earth 102:18035–18055. https://doi.org/10.1029/97JB01380
Zheng K, Zhang X, Li P, Li X, Ge M, Guo F et al (2019) Multipath extraction and mitigation for high-rate multi-GNSS precise point positioning. J Geod 93:2037–2051. https://doi.org/10.1007/s00190-019-01300-7
Zheng K, Zhang X, Sang J, Zhao Y, Wen G, Guo F (2021) Common-mode error and multipath mitigation for subdaily crustal deformation monitoring with high-rate GPS observations. GPS Solut 25:67. https://doi.org/10.1007/s10291-021-01095-1
We acknowledge S. Shimada for the fruitful discussion and his help in using TRACK. T. Herring also helped us use TRACK. This manuscript has been improved by constructive comments by the editor Y. Ohta and two anonymous reviewers. We used Generic Mapping Tools (Wessel et al. 2013) to draft the figures.
We acknowledge support from the Director Discretionary Fund (Fiscal Year 2020) of the Earthquake Research Institute, the University of Tokyo (YI and YA) and support from the Japan Society for the Promotion of Science (JSPS) Grants-in-Aid for Scientific Research (KAKENHI) through Grants JP21K14007 (YI) and JP21K03694 (YI).
Earthquake Research Institute, The University of Tokyo, Tokyo, 113-0032, Japan
Yuji Itoh & Yosuke Aoki
Yuji Itoh
Yosuke Aoki
YI and YA designed the study and interpreted the results. YI carried out all the analysis and drafted the manuscript. YA helped draft the manuscript. Both authors read and approved the final manuscript.
Correspondence to Yuji Itoh.
Figures S1–S8.
Figures S9–S15 and Table S1.
Itoh, Y., Aoki, Y. On the performance of position-domain sidereal filter for 30-s kinematic GPS to mitigate multipath errors. Earth Planets Space 74, 23 (2022). https://doi.org/10.1186/s40623-022-01584-8
Kinematic analysis
Sidereal filter
Multipath
6. Geodesy
Recent Advances in Scientific Application of GNSS Array Data
Question about Gelfand-Naimark
Thread starter neworder1
neworder1
The Gelfand-Naimark theorem states that every commutative C*-algebra is isometric to $C(M)$, the ring of continuous functions on its spectrum. Is the theorem true for ordinary commutative semisimple Banach algebras, i.e. without the *? Every proof that the Gelfand transform is an isometry uses the fact that in a C*-algebra $\|xx^{*}\| = \|x\|^{2}$, so I wonder whether it is true when we don't have the * and that identity. If it is not isometric, is it isomorphic?
Hurkyl
Well, my first thought is just to think of interesting algebras. The simplest 'interesting' one I can think of is the ring of polynomials with complex coefficients. I'd expect that you could make this into a normed algebra -- how do things work out for this example?
No, it won't work for non-C* algebras.
It can't be isometric for anything else, because C(M) is always a C* algebra.
For example, consider the set of integrable functions f:R->C with convolution as the 'multiplication' operator.
This is a Banach algebra with the L1 norm, [itex]\Vert f\Vert = \int |f(x)|\,dx[/itex]. However, [itex]\Vert a^{*} a\Vert \leq \Vert a\Vert^{2}[/itex], and equality doesn't always hold, so it isn't a C*-algebra.
Also, in my example, you can see that the Gelfand transform is a Fourier transform, so C(M) is isomorphic to C(R) and the original algebra maps to those elements of C(R) which can be written as a Fourier transform of an integrable function.
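For concreteness, the identification in this example can be spelled out (a standard restatement, with one common Fourier normalization chosen for illustration): the characters of $L^1(\mathbb{R})$ under convolution are indexed by $t \in \mathbb{R}$ via

$$\chi_t(f) = \hat{f}(t) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i t x}\, dx,$$

and convolution goes to pointwise multiplication, $\widehat{f*g} = \hat{f}\,\hat{g}$, so the Gelfand transform is exactly the Fourier transform; by the Riemann-Lebesgue lemma its image lies in $C_0(\mathbb{R})$, the continuous functions vanishing at infinity.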
I think the way you phrased your question, you were only talking about unital algebras (otherwise you should replace C(M) by the continuous functions vanishing at infinity).
In that case, you could let A be the algebra of functions f:Z->C such that [itex]\sum_n |f(n)| < \infty[/itex], again with convolution as the multiplication operator.
Using Fourier series, C(M) is (isomorphic to) the set of continuous functions f:R->C of period 1 or, equivalently, [itex]C(S^1)[/itex]. However, the Gelfand transform maps A only onto those elements whose Fourier series is absolutely convergent.
Interesting to note that in both my examples, C(M) is isomorphic to the C*-algebra generated by the Banach algebra. Wouldn't be surprised if that is always the case.
Thanks. What do you mean by "C* algebra generated by a Banach algebra"?
If B is semisimple, then its Gelfand transform f(B) is dense in C(M) - so if Gelfand transform isn't "onto", f(B) isn't closed in C(M)?
neworder1 said:
I'm not sure if this is standard terminology. You could try taking the completion w.r.t. the largest continuous C*-norm, which should give a closed subspace of C(M), and maybe is isometric to C(M).
It isn't closed in my examples above. It is closed (and complete) under the original Banach norm, but not the C*-norm on C(M).
Do you need the semisimple condition?
What do you mean by a C*-norm? $C(M)$ is naturally equipped with the standard supremum norm.
I need semisimplicity only to ensure that the Gelfand transform is injective.
Given any Banach *-algebra A, you could consider the C* seminorms on A. That is, F is a seminorm satisfying F(a* a) = F(a*) F(a). If F is continuous then it follows that F(a) <= ||a||. So, the maximum of all such seminorms exists and gives a unique maximum C* seminorm. Then, A can be completed with respect to this, to give a C* algebra.
That's what I meant by the C* algebra generated by A.
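Written out, that construction is (a standard restatement of the above, phrased with the usual C*-identity $p(x^{*}x) = p(x)^{2}$): for a Banach *-algebra $A$, set

$$\|a\|_{C^*} = \sup\{\, p(a) : p \text{ is a C*-seminorm on } A \text{ with } p \leq \|\cdot\|_A \,\},$$

which is again a C*-seminorm dominated by the original norm; quotienting by its kernel and completing gives the enveloping C*-algebra $C^*(A)$, and every continuous *-homomorphism from $A$ into a C*-algebra factors through it.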
If this C* algebra is B, then the map A -> C(M) extends uniquely to a continuous homomorphism B->C(M), which I was suggesting gave an isometry.
However, you've already stated that semisimple => map is injective, and that the map is dense, so it seems that you know quite a bit already.
Are Nootropics Legit|Nootropics How They Work
roprotective effects, with the potential to slow or I'm just lost in my own thoughts during study time. How can I come out of that?
Abstract (text) Baramati Hols from £9.50 Become an Affiliate However, since there are a few mild side effects that one can experience, I wanted to give you an overview so you know what to expect.
December 8, 2017 at 4:29 pm Surgical Sciences You previously identified the sequence of steps needed to work on your task more efficiently. It's now time to break this sequence of steps down into chunks.
If you buy any medicines, check with a pharmacist that they are suitable to take with your other medicines. GS15-4 Panax Ginseng Capsules | 200mg $19.99 Choose Options
of Your Nootropics The stimulant now most popular in news articles as a legitimate "smart drug" is Modafinil, which came to market as an anti-narcolepsy drug, but gained a following within the military, doctors on long shifts, and college students pulling all-nighters who needed a drug to improve alertness without the "wired" feeling associated with caffeine. Modafinil is a relatively new smart drug, having gained widespread use only in the past 15 years. More research is needed before scientists understand this drug's function within the brain – but the increase in alertness it provides is uncontested.
Flicker L, Grimley Evans J D Yearly review Apart from all these allergic side effects, other problems that might get a catch of you include sever tingling, weakness of muscles, bruising, numbness, nose bleeding, gum bleeding, mouth sores, problem in swallowing food, chest pain, irregular heartbeats, depression, aggression, suicidal thoughts, headache, nausea, insomnia, and many more.
Ketogenic diet Tryptophan # sigma1 0.83778 0.81619 0.78042 0.5665 1.1559 NA Increases enkephalins (a natural opiate neurotransmitter), which helps with memory formation, consolidation, and reactivation [R, R].
D-Ribose Spirulina Kalviainen BMervaala ESivenius J Vigabatrin: clinical use. In: Mattson RH, Meldrum BS, eds. Antiepileptic Drugs. New York, NY: Raven Press; 1995:925-930. Google Scholar
50% of WHYY's funding comes from donations made by people just like you. Therapists: Log In | Sign Up 0.25 – 0.5g Pre-Debate Antioxidants [R]
Coluracetam – 20 mg – 60 Capsules… Threads From Jaguar Land Rover's future in the UK, to celebrating Englishness, readers respond to today's top stories ñ Inositol-1,4,5-triphosphate; P2Y → Pyritinol
Phillip Cite Great Deals on Subscribe Today! The US Marines use meditation to help troops deal with stressful situations they face on the job. (30)
Posted byu/eggfood -racetam Share this with WhatsApp Random: Piracetam impaired learning by parameters of procedural memory.
When Matzner's turn came, he plopped down in a folding chair. His eyelids fluttered shut, and as his brain jolted toward tranquility, he pursed his lips and breathed out. For a while, he and his opponent were neck and neck, brain to brain. But then Matzner pulled ahead.
19 MESSAGES Day Two 1200 mg and three whole eggs Roger J. Porter1, … Michael A. Rogawski2*, in Handbook of Clinical Neurology, 2012
Front Bench newsletter Which also means you can buy and ship Piracetam to U.S. addresses legally. The 5 Most Popular Smart Drugs – Which One is Best?
protecting against chronic stress Arch Neurol. 2001;58(5):781-786. doi:10.1001/archneur.58.5.781 What are nootropics? RELATED ARTICLES
Product Reviews Consciousness Meditation Mindfullness Omharmonics Thinking and Innovation Piracetam (2-oxo-1-pyrrolidine acetamide) is a nootropic drug in the racetams group which is a derivative of GABA. Piracetam was first synthesized in 1964 by Corneliu E. Giurgea and other scientists at the Belgian pharmaceutical company UCB. As a "smart drug", it is reported to enhance cognitive functions including memory, intelligence, and attention.[1]
48mg: 24 – 26 May: 1 Onnit Alpha Brain: Clinically Studied Nootropic for Memory, Focus, and Mental Clarity (90ct)
Live Cricket Score Coffee is the biggest source of antioxidants in the diet. It has many health benefits, such as improved brain function and a lower risk of serious…
Modulating Neurotransmitter Production and Activity Send Us a Message off2 1.3 GABA receptors 14. Soerensen J.B., Smith D.F.: J. Neural Transm. Protects against amyloid-beta-induced toxicity, which is one of the most important factors in the development of Alzheimer's disease [R].
skip to main content How to Exercise Your Concentration Muscle U.S./Canada: One thing I've noticed is that most people do not take a high enough dosage. If you use piracetam, it isn't going to be as strong as some of the other racetams so taking a hefty dose is in order. 3600 – 4800 mg of piracetam is not unheard of and it's important to give the drug weeks to work. It has a significant "build-up" effect in my experience.
Karolinska Institutet, Neurotec, Huddinge, University Hospital B 84, S-14186 Stockholm, Sweden. [email protected]
Mood/productivity: none (d=-0.18; p=0.86) The amphetamine mix branded Adderall is terribly expensive to obtain even compared to modafinil, due to its tight regulation (a lower schedule than modafinil), popularity in college as a study drug, and reportedly moves by its manufacture to exploit its privileged position as a licensed amphetamine maker to extract more consumer surplus. I paid roughly $4 a pill but could have paid up to $10. Good stimulant hygiene involves recovery periods to avoid one's body adapting to eliminate the stimulating effects, so even if Adderall was the answer to all my woes, I would not be using it more than 2 or 3 times a week. Assuming 50 uses a year (for specific projects, let's say, and not ordinary aimless usage), that's a cool $200 a year. My general belief was that Adderall would be too much of a stimulant for me, as I am amphetamine-naive and Adderall has a bad reputation for letting one waste time on unimportant things. We could say my prediction was 50% that Adderall would be useful and worth investigating further. The experiment was pretty simple: blind randomized pills, 10 placebo & 10 active. I took notes on how productive I was and the next day guessed whether it was placebo or Adderall before breaking the seal and finding out. I didn't do any formal statistics for it, much less a power calculation, so let's try to be conservative by penalizing the information quality heavily and assume it had 25%. So \frac{200 – 0}{\ln 1.05} \times 0.50 \times 0.25 = 512! The experiment probably used up no more than an hour or two total.
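A minimal sketch of the value-of-information arithmetic quoted at the end of the paragraph above, assuming the figures stated there (a $200/year benefit, a 5% annual discount rate handled via ln 1.05, a 50% prior, and a 25% information-quality penalty); the numbers are illustrative, not a re-analysis:

import math

# Figures taken from the paragraph above (illustrative assumptions only)
annual_value = 200          # dollars per year if the drug turned out to be worth using
discount = math.log(1.05)   # ln(1.05) discounting factor for a 5% annual rate
p_useful = 0.50             # prior probability that the experiment changes the decision
info_quality = 0.25         # penalty for the informal, low-powered blind experiment

# Expected value of running the experiment ("value of information")
voi = (annual_value - 0) / discount * p_useful * info_quality
print(round(voi))           # ~512, matching the figure quoted in the text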
Noopept – 20mg (afternoon only) Follow us: Study: Teaching Students Philosophy Will Improve Their Academic Performance
Send Feedback Besides all animal products, foods that promote dopamine formation include avocados, apples, bananas, beets, sea vegetables, green leafy vegetables, oatmeal, chocolate, green tea, and coffee. (18, 19)
Choline Bitartrate (VitaCholine®) Capsules 21-Day Veggie Challenge Disclaimer We have drugs to improve our mood (antidepressants) and to look better (weight loss drugs). We even have a drug to increase the height of children (growth factor). What is wrong with a drug that makes us smarter?
for (n in seq(from = 300, to = 600, by = 30)) { Petroleum, complex mixture of hydrocarbons that occur in Earth in liquid, gaseous, or solid form. The…
Contact us Submitted October 15, 2017 08:25PM Nootropics are a class of cognitive enhancing supplements that are used to improve concentration and boost memory power. Nootropics are often used to increase attention spans, help individuals focus and as studying aids.
Introduction to nootropics
How to Create the Best Nootropic Stack
Debate, An Education
Our Pain guide highlights over 280 products for Pain research.
While piracetam is not a stimulant, it is believed to also effect vascular and neuronal functions once it enters the central nervous system. Research has shown that piracetam also increases the permeability of neurons in the brain, making it easier for nutrients to enter and for waste to be eliminated.
8 – 9 July: 0
rat … Gria1(50592)
mcmcChain = BESTmcmc(before, after)
Yet it's estimated that only 25% of Americans get enough. (17)
I became more and more dehydrated and, once again, I wasn't hungry.
Thought connectivity
According to Dr Corneliu Giurgea, who created the first nootropic, piracetam, a nootropic must adhere to these rules:
What makes memories stronger?
-Dr. Datis Kharrazian, DHSc, DC, MS, author of Why Isn't My Brain Working? and Why Do I Still Have Thyroid Symptoms?
SDFT was founded in late 2010.
Your brain is essentially a network of billions of neurons connected by synapses. These neurons communicate and work together through chemicals known as neurotransmitters. When neurotransmitters are able to send signals more efficiently, you experience improved concentration, better memory, mood elevation, increased processing ability for mental work, and longer attention spans.
Common Nootropic Stacks to Support Keto
arbtt2 <- read.csv("~/selfexperiment/2013-2014-arbtt.txt")
Jump up ^ Kennedy DO, Wightman EL (January 2011). "Herbal extracts and phytochemicals: plant secondary metabolites and the enhancement of human brain function". Adv Nutr. 2 (1): 32–50. doi:10.3945/an.110.000117. PMC 3042794 . PMID 22211188.
8) Forskolin & artichoke extract – memory, focus, learning
than possessing any neurotransmitter-like activity of
2.1 Cognitive Enhancement
VitaMonk
40. ↑ Piracetam in acute stroke: a systematic review. (2000)
Some nootropic experts say the easiest place to begin is by combining Piracetam with a choline supplement. (Please make sure to talk to your primary care provider first to see if any of these drugs will negatively impact you or have bad reactions with current medications.)
Absorption of nicotine across biological membranes depends on pH. Nicotine is a weak base with a pKa of 8.0 (Fowler, 1954). In its ionized state, such as in acidic environments, nicotine does not rapidly cross membranes…About 80 to 90% of inhaled nicotine is absorbed during smoking as assessed using C14-nicotine (Armitage et al., 1975). The efficacy of absorption of nicotine from environmental smoke in nonsmoking women has been measured to be 60 to 80% (Iwase et al., 1991)…The various formulations of nicotine replacement therapy (NRT), such as nicotine gum, transdermal patch, nasal spray, inhaler, sublingual tablets, and lozenges, are buffered to alkaline pH to facilitate the absorption of nicotine through cell membranes. Absorption of nicotine from all NRTs is slower and the increase in nicotine blood levels more gradual than from smoking (Table 1). This slow increase in blood and especially brain levels results in low abuse liability of NRTs (Henningfield and Keenan, 1993; West et al., 2000). Only nasal spray provides a rapid delivery of nicotine that is closer to the rate of nicotine delivery achieved with smoking (Sutherland et al., 1992; Gourlay and Benowitz, 1997; Guthrie et al., 1999). The absolute dose of nicotine absorbed systemically from nicotine gum is much less than the nicotine content of the gum, in part, because considerable nicotine is swallowed with subsequent first-pass metabolism (Benowitz et al., 1987). Some nicotine is also retained in chewed gum. A portion of the nicotine dose is swallowed and subjected to first-pass metabolism when using other NRTs, inhaler, sublingual tablets, nasal spray, and lozenges (Johansson et al., 1991; Bergstrom et al., 1995; Lunell et al., 1996; Molander and Lunell, 2001; Choi et al., 2003). Bioavailability for these products with absorption mainly through the mucosa of the oral cavity and a considerable swallowed portion is about 50 to 80% (Table 1)…Nicotine is poorly absorbed from the stomach because it is protonated (ionized) in the acidic gastric fluid, but is well absorbed in the small intestine, which has a more alkaline pH and a large surface area. Following the administration of nicotine capsules or nicotine in solution, peak concentrations are reached in about 1 h (Benowitz et al., 1991; Zins et al., 1997; Dempsey et al., 2004). The oral bioavailability of nicotine is about 20 to 45% (Benowitz et al., 1991; Compton et al., 1997; Zins et al., 1997). Oral bioavailability is incomplete because of the hepatic first-pass metabolism. Also the bioavailability after colonic (enema) administration of nicotine (examined as a potential therapy for ulcerative colitis) is low, around 15 to 25%, presumably due to hepatic first-pass metabolism (Zins et al., 1997). Cotinine is much more polar than nicotine, is metabolized more slowly, and undergoes little, if any, first-pass metabolism after oral dosing (Benowitz et al., 1983b; De Schepper et al., 1987; Zevin et al., 1997).
↑ Piracetam: a review of pharmacological properties and clinical uses (NCBI) | https://www.ncbi.nlm.nih.gov/pubmed/16007238
Prevent the disruption of memory formation from conditions which tend to disrupt it.
85. Van Hout A, Giurgea D. The effects of piracetam in dyslexia. Approche Neuropsychol Apprent VEnfant (ANAE) 1990;3:145-152.
By: DAVE ASPREY
Students taking 'smart drugs' to boost their academic performance are putting their health at risk.
Tea contains several stimulant substances, including caffeine, theobromine, theophylline and L-theanine.
Pyritinol
# LLLT.randomTRUE 0.04099628 0.09108322 0.45010 0.65324
by Ned Dymoke
Epilepsy Society facts
This non-medical use raises concerns about safety: We do not know the long-term consequences of using these drugs, and it is especially worrying in the case of students, whose brains are not yet fully matured. It also raises concerns about fairness: If smart drug use becomes widespread, some may feel compelled to take them to stay competitive with their peers, but may not have access to them. [See: The Neuroethics of Smart Drugs and What Were You Thinking?! Understanding the Neurobiology of the Teen Brain]
You do have to be to be careful though, says Johnny. "It gives you this amazing concentration but you have to make sure you're actually in front of your books. I spent five hours in my room rearranging my iTunes library on it once."
Dementia Patients
3 Maher B. Poll results: Look who's doping. Nature 2008;452:674–5.
Alpha Brain by Onnit Labs: Click Here to Learn More!
…The Fate of Nicotine in the Body also describes Battelle's animal work on nicotine absorption. Using C14-labeled nicotine in rabbits, the Battelle scientists compared gastric absorption with pulmonary absorption. Gastric absorption was slow, and first pass removal of nicotine by the liver (which transforms nicotine into inactive metabolites) was demonstrated following gastric administration, with consequently low systemic nicotine levels. In contrast, absorption from the lungs was rapid and led to widespread distribution. These results show that nicotine absorbed from the stomach is largely metabolized by the liver before it has a chance to get to the brain. That is why tobacco products have to be puffed, smoked or sucked on, or absorbed directly into the bloodstream (i.e., via a nicotine patch). A nicotine pill would not work because the nicotine would be inactivated before it reached the brain.
Effects on blood coagulation
18: 0 (40%)
A popular blog called Slate Star Codex, which conducted a nootropics survey in 2016, warned readers this year that "the benefits [of nootropics] are usually subtle at best." The author further cautioned, "[I]f some stimulant product combines caffeine with something else, and you feel an effect, your first theory should be that the effect is 100% caffeine — unless the 'something else' is amphetamine." And even caffeine comes with downsides: It's addictive, has diminishing returns, and makes many people jittery or irritable.
There's this nifty book called The Power of Concentration, published in 1918 and written by Theron Q. Dumant. It's got some great exercises that can help to improve concentration, like:
For memory > https://nootropicsexpert.com/best-nootropics-for-learning-and-memory/
Constant distractions, and the low productivity that's associated with these distractions, have become so commonplace in today's offices that doctors have even given it a name: Attention Deficit Trait, or ADT. And, they say that entire organizations can suffer from it.
Dopaminergics
In animal studies, piracetam inhibited vasospasm and counteracted the effects of various spasmogenic agents. It lacked any vasodilatory action and did not induce "steal" phenomenon, nor low or no reflow, nor hypotensive effects.
Is the gravitational constant $G$ a fundamental universal constant?
Is the gravitational constant $G$ a fundamental universal constant like Planck constant $h$ and the speed of light $c$?
gravity physical-constants
Qmechanic♦
Farhâd
Real "fundamental" constants should be dimensionless, i.e. numbers that don't depend on units. The existence of $c$ is simply due to the Lorentzian nature of spacetime; it's value is only a matter of choice of unit. The existence of $\hbar$ is simply due to the path integral or canonical commutation relations, whose value is again a matter of choice of unit. Similar for Boltzmann constant etc.
On the other hand, the fine structure constant $\alpha\simeq 1/137$ is dimensionless, so this quantity actually means something other than a choice of unit. But the number itself is still not that "fundamental" (we will discuss whether the quantity itself is fundamental in the next paragraph), because the number changes under renormalization group flow - i.e. it changes if you define it at different energy scales. So it's the quantity, rather than the number, that has some actual physical meaning.
In the Standard Model of particle physics there are a bunch of such dimensionless quantities. Are these quantities "fundamental"? People tend to believe NO, because Kenneth Wilson let us realize that quantum field theories like the Standard Model are just low-energy effective theories with some high-energy cutoff (just as nuclear physics is an effective theory of the Standard Model); dimensionless quantities in an effective theory should depend on those in the higher-level theory (just like the dimensionless Reynolds number that characterizes the behavior of a fluid depends on the molecular constituents of the fluid). String theorists and others are trying to find a theory that has the least number of dimensionless quantities. Some people think an ultimate theory of everything, if it exists, should ideally have no such quantities at all but only numbers that have mathematical significance (like $1, 2, \pi$, or some number with certain analytical, algebraic or topological significance).
In terms of the gravitational constant itself, people generally believe Einstein's General Relativity is an effective theory whose cutoff is about (or lower than) the Planck scale ($\sim 10^{19}\,GeV$; our current experimental reach is $\sim 10^{4}\,GeV$ at the LHC), above which it needs to be replaced by a theory of quantum gravity. But the quantity $G$ might still be there (just as it was there from Newton and still there after Einstein); we are not sure.
Jing-Yuan Chen
There is an alternative to General Relativity known as Brans-Dicke theory that treats the constant $G$ as having a value derivable from a scalar field $\phi$ with its own dynamics. The coupling of $\phi$ to other matter is defined by a parameter $\omega$ in the theory, which was assumed to be of order unity. In the limit where $\omega \rightarrow\infty$, Brans-Dicke theory becomes General Relativity. Current experiments and observations tell us that if Brans-Dicke theory describes the universe, $\omega > 40,000$. Other theories with a varying $G$ would face similar constraints.
Jerry Schirmer
It is probably constant - at least we have no evidence of any change.
"Is it fundamental?" is the big question of theoretical physics. Nobody has yet managed to derive it in terms of more fundamental constant - but a lot of people have tried.
Waffle's Crazy Peanut
Martin Beckett
$\begingroup$ Yes. I agree with that. Just like Dirac. Pss.. You forgot the formatting in case of your speed-typing eh..? :-) $\endgroup$ – Waffle's Crazy Peanut Nov 9 '12 at 15:50
$\begingroup$ @CrazyBuddy - early morning pre-coffee typing! $\endgroup$ – Martin Beckett Nov 9 '12 at 16:50
$\begingroup$ Martin, can you give me some sources on those who have tried to derive G in terms of other more fundamental universal constants? $\endgroup$ – Farhâd Nov 9 '12 at 18:53
$\begingroup$ It's not possible to derive it from a more fundamental constant, because it has units. In the system of units normally used by relativists, G=1. $\endgroup$ – Ben Crowell Dec 15 '18 at 17:48
$\begingroup$ @BenCrowell - sorry I meant fundamental in, could a GUT give a reason for its value base on a, fine structure constant etc $\endgroup$ – Martin Beckett Dec 15 '18 at 20:33
For all intents and purposes, it is a fundamental constant. No one has been able to prove that it isn't fundamental, and within our error in measurement, it's definitely a constant. Like @Crazy Buddy says, $c$ (speed of light), $h$ (Planck's constant), $k_{B}$ (Boltzmann's constant) are all considered to be fundamental constants of the universe. You could have a look at this wiki page.
I think it's important also to realize that the values that they have are only valid within a particular unit convention for measurements. For example, $G = 6.67 \times 10^{-11} m^{3} kg^{-1} s^{-2}$ but this value will obviously change if you measure it in say centimeter-gram-second (cgs units). You could also set $G = 1$ (which they do in Planck units), except the rest of your units will have to change accordingly to keep the dimensions correct.
I hope the last part wasn't confusing.
Kitchi
Define the Planck length as
$$L_P=\sqrt{\dfrac{G\hbar}{c^3}}\approx 1.6\cdot 10^{-35}\;m$$
or the Planck area as
$$A_P=L_P^2\approx 2.6\cdot 10^{-70}\; m^2$$
Many theories of quantum gravity argue that something weird must happen at that ludicrously tiny length/area, something no current theory can fully describe. Some people, like Padmanabhan and others, are beginning to propose that, maybe, what is fundamental is the Planck area (or length). You will often see Newton's gravitational law written as:
$$F_N=G\dfrac{Mm}{R^2}=\dfrac{L_p^2 c^3}{\hbar}\dfrac{Mm}{R^2}$$
$$F_N=\dfrac{L_p^2}{\hbar c}\dfrac{EE'}{R^2}$$
in some recent research papers. As to whether it is $G$, $L_p$ or $L_p^2$ that is fundamental, there are hints from black hole physics that the Planck length squared, i.e. the Planck area, is the more fundamental quantity - it is just the analogue of the phase-space "area" ($\hbar$, or $\hbar \cdot 2\pi=h$). However, there is also good criticism of the fundamental role of the Planck area (proposed by Bekenstein as a kind of Bohr-Sommerfeld quantization rule for black holes).
Finally, which constants we call fundamental depends on our conventions. It would be wonderful if we could define the speed of light as the velocity that covers one Planck length in one Planck time, but that is trivial. Today, we keep $c$ as fundamental in natural units, and that allows us to define the meter exactly.
There are different systems of "fundamental natural units": Stoney's, Schrödinger's, Pavsic's dilational natural system, Planck's and some other minor variants (called geometrical or reduced sometimes).
Curiously, there is a relationship between the "fundamental" role of $G$ (or the Planck length/area) and the apparent dimensionality (D=4) of our spacetime. The argument, as far as I know, is due to John D. Barrow (perhaps someone else guessed it, but to my limited knowledge he was the first to publish it). Barrow's later work also hints towards a "maximum tension", also highlighted in theories like (super)string theory/M-theory. In "Maximum Tension: with and without a cosmological constant", John D. Barrow and G. W. Gibbons find the maximum force
$$F_M=\dfrac{c^4}{4G}=\dfrac{\hbar c}{4L_P^2}$$
Following the maximum tension paper, we see that in N-space (not spacetime, but it can be generalized carefully) the quantity
$$\Xi (G,\hbar, c, e)=\hbar^{2-N}e^{N-1}G^{(3-N)/2}c^{N-4}$$ is dimensionless and is the higher-dimensional generalization of the fine structure constant $\alpha$ (the case N=3). Moreover, note that $G$ is excluded if and only if N=3, while $\hbar$ is excluded if N=2 and $c$ if N=4. Note also that $G=M^{-1}L^NT^{-2}$, $e^2=ML^NT^{-2}$, $c=LT^{-1}$ and $\hbar=ML^2T^{-1}$, so $c,\hbar$ are independent of dimension with these definitions. Why does $G$ not "play" a role when $N=3$? Maybe that, together with rewriting $G$ in terms of the Planck length or Planck area, is the answer you are searching for, but nobody yet knows the best or ultimate answer to your riddle.
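As a quick numerical check of the quantities quoted above, here is a minimal sketch with hand-entered approximate SI values for $G$, $\hbar$ and $c$ (treat the constants as illustrative rather than authoritative):

import math

G = 6.674e-11      # m^3 kg^-1 s^-2 (approximate)
hbar = 1.0546e-34  # J s (approximate)
c = 2.9979e8       # m s^-1 (approximate)

L_P = math.sqrt(G * hbar / c**3)  # Planck length
A_P = L_P**2                      # Planck area
F_max = c**4 / (4 * G)            # Barrow-Gibbons maximum force

print(f"Planck length ~ {L_P:.2e} m")    # ~1.6e-35 m
print(f"Planck area   ~ {A_P:.2e} m^2")  # ~2.6e-70 m^2
print(f"Maximum force ~ {F_max:.2e} N")  # ~3.0e43 N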
riemannium
$\begingroup$ Nature doesn't give a rat's ass what units human beings use to measure things. So we can choose to measure everything in terms of Planck units and physical reality would be no different. If we do that, there is no $G$ to vary, no $c$ to vary, and no $\hbar$ to vary. All of physical law can be rewritten with those constants removed (or, more specifically, with those constants replaced with "$1$"). So they hardly seem fundamental to me, but are really only a reflection of the system of units we use to measure stuff. If they were fundamental, reality would be different if they changed. $\endgroup$ – robert bristow-johnson Sep 26 '16 at 18:48
Velocity of light $c$, Elementary charge $e$, Mass of the electron $m_e$, Mass of the proton $m_p$, Avogadro constant, $N$, Planck's constant $h$, Universal gravitational constant $G$ and the Boltzmann's constant $k$ are all considered as the fundamental constants in Astrophysics and many other fields.
If any of these values were to change, there would be a clear contradiction between our measured values and the observed & predicted ones.
But there are cases where $G$ is treated as a variable with some standard deviation, about 0.003, which is very small. Hence, we use $6.67\times10^{-11}\,\mathrm{N\,m^2\,kg^{-2}}$ for most of our homework. The thing is, it's still fundamental!
So far, investigations have found no evidence of variation of fundamental "constants." So to the best of our current ability to observe, the fundamental constants really are constant.
riemannium
Waffle's Crazy Peanut
$\begingroup$ I know it's a constant, but I'm asking if it is a "fundamental" universal constant. $\endgroup$ – Farhâd Nov 9 '12 at 18:50
$\begingroup$ @Farhâd: Hello Farhad, That's what we all are repeating around. They are fundamental..! $\endgroup$ – Waffle's Crazy Peanut Nov 10 '12 at 3:04
$\begingroup$ Why? Just repeating "fundamental" in bold font doesn't make it so. $\endgroup$ – Farhâd Nov 12 '12 at 13:29
Not to be confused with Valuation risk.
Figure: The 5% Value at Risk of a hypothetical profit-and-loss probability density function.
Value at risk (VaR) is a measure of the risk of loss for investments. It estimates how much a set of investments might lose (with a given probability), given normal market conditions, in a set time period such as a day. VaR is typically used by firms and regulators in the financial industry to gauge the amount of assets needed to cover possible losses.
For a given portfolio, time horizon, and probability p, the p VaR can be defined informally as the maximum possible loss during that time after we exclude all worse outcomes whose combined probability is at most p. This assumes mark-to-market pricing, and no trading in the portfolio.[1]
For example, if a portfolio of stocks has a one-day 5% VaR of $1 million, that means that there is a 0.05 probability that the portfolio will fall in value by more than $1 million over a one-day period if there is no trading. Informally, a loss of $1 million or more on this portfolio is expected on 1 day out of 20 days (because of 5% probability).
More formally, p VaR is defined such that the probability of a loss greater than VaR is (at most) p while the probability of a loss less than VaR is (at least) 1−p. A loss which exceeds the VaR threshold is termed a "VaR breach".[2]
It is important to note that, for a fixed p, the p VaR does not assess the magnitude of loss when a VaR breach occurs and therefore is considered by some to be a questionable metric for risk management. For instance, assume someone makes a bet that flipping a coin seven times will not give seven heads. The terms are that they win $100 if this does not happen (with probability 127/128) and lose $12,700 if it does (with probability 1/128). That is, the possible loss amounts are $0 or $12,700. The 1% VaR is then $0, because the probability of any loss at all is 1/128 which is less than 1%. They are, however, exposed to a possible loss of $12,700 which can be expressed as the p VaR for any p <= 0.78%.[3]
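A minimal sketch of that coin-flip example in code; the probabilities and loss amounts are exactly the ones stated above, with the $100 win treated as a loss of $0 as in the text:

p_seven_heads = 1 / 128
losses = {0: 1 - p_seven_heads, 12_700: p_seven_heads}  # loss amount -> probability

def var(loss_dist, p):
    """Smallest loss threshold y such that P(loss > y) <= p."""
    for y in sorted(loss_dist):
        tail_prob = sum(q for loss, q in loss_dist.items() if loss > y)
        if tail_prob <= p:
            return y
    return max(loss_dist)

print(var(losses, 0.01))    # 0     -> the 1% VaR is $0
print(var(losses, 0.0078))  # 12700 -> the p VaR for p <= 0.78% is $12,700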
VaR has four main uses in finance: risk management, financial control, financial reporting and computing regulatory capital. VaR is sometimes used in non-financial applications as well.[4] However, it is a controversial risk management tool.
Important related ideas are economic capital, backtesting, stress testing, expected shortfall, and tail conditional expectation.[5]
Details
Common parameters for VaR are 1% and 5% probabilities and one day and two week horizons, although other combinations are in use.[6]
The reason for assuming normal markets and no trading, and for restricting loss to things measured in daily accounts, is to make the loss observable. In some extreme financial events it can be impossible to determine losses, either because market prices are unavailable or because the loss-bearing institution breaks up. Some longer-term consequences of disasters, such as lawsuits, loss of market confidence and employee morale and impairment of brand names can take a long time to play out, and may be hard to allocate among specific prior decisions. VaR marks the boundary between normal days and extreme events. Institutions can lose far more than the VaR amount; all that can be said is that they will not do so very often.[7]
The probability level is about equally often specified as one minus the probability of a VaR break, so that the VaR in the example above would be called a one-day 95% VaR instead of one-day 5% VaR. This generally does not lead to confusion because the probability of VaR breaks is almost always small, certainly less than 50%.[1]
Although it virtually always represents a loss, VaR is conventionally reported as a positive number. A negative VaR would imply the portfolio has a high probability of making a profit, for example a one-day 5% VaR of negative $1 million implies the portfolio has a 95% chance of making more than $1 million over the next day.[8]
Another inconsistency is that VaR is sometimes taken to refer to profit-and-loss at the end of the period, and sometimes as the maximum loss at any point during the period. The original definition was the latter, but in the early 1990s when VaR was aggregated across trading desks and time zones, end-of-day valuation was the only reliable number so the former became the de facto definition. As people began using multiday VaRs in the second half of the 1990s, they almost always estimated the distribution at the end of the period only. It is also easier theoretically to deal with a point-in-time estimate versus a maximum over an interval. Therefore, the end-of-period definition is the most common both in theory and practice today.[9]
Varieties
The definition of VaR is nonconstructive; it specifies a property VaR must have, but not how to compute VaR. Moreover, there is wide scope for interpretation in the definition.[10] This has led to two broad types of VaR, one used primarily in risk management and the other primarily for risk measurement. The distinction is not sharp, however, and hybrid versions are typically used in financial control, financial reporting and computing regulatory capital.[11]
To a risk manager, VaR is a system, not a number. The system is run periodically (usually daily) and the published number is compared to the computed price movement in opening positions over the time horizon. There is never any subsequent adjustment to the published VaR, and there is no distinction between VaR breaks caused by input errors (including Information Technology breakdowns, fraud and rogue trading), computation errors (including failure to produce a VaR on time) and market movements.[12]
A frequentist claim is made, that the long-term frequency of VaR breaks will equal the specified probability, within the limits of sampling error, and that the VaR breaks will be independent in time and independent of the level of VaR. This claim is validated by a backtest, a comparison of published VaRs to actual price movements. In this interpretation, many different systems could produce VaRs with equally good backtests, but wide disagreements on daily VaR values.[1]
For risk measurement a number is needed, not a system. A Bayesian probability claim is made, that given the information and beliefs at the time, the subjective probability of a VaR break was the specified level. VaR is adjusted after the fact to correct errors in inputs and computation, but not to incorporate information unavailable at the time of computation.[8] In this context, "backtest" has a different meaning. Rather than comparing published VaRs to actual market movements over the period of time the system has been in operation, VaR is retroactively computed on scrubbed data over as long a period as data are available and deemed relevant. The same position data and pricing models are used for computing the VaR as determining the price movements.[2]
Although some of the sources listed here treat only one kind of VaR as legitimate, most of the recent ones seem to agree that risk management VaR is superior for making short-term and tactical decisions today, while risk measurement VaR should be used for understanding the past, and making medium term and strategic decisions for the future. When VaR is used for financial control or financial reporting it should incorporate elements of both. For example, if a trading desk is held to a VaR limit, that is both a risk-management rule for deciding what risks to allow today, and an input into the risk measurement computation of the desk's risk-adjusted return at the end of the reporting period.[5]
In governance
VaR can also be applied to governance of endowments, trusts, and pension plans. Essentially trustees adopt portfolio Values-at-Risk metrics for the entire pooled account and the diversified parts individually managed. Instead of probability estimates they simply define maximum levels of acceptable loss for each. Doing so provides an easy metric for oversight and adds accountability as managers are then directed to manage, but with the additional constraint to avoid losses within a defined risk parameter. VaR utilized in this manner adds relevance as well as an easy way to monitor risk measurement control far more intuitive than Standard Deviation of Return. Use of VaR in this context, as well as a worthwhile critique on board governance practices as it relates to investment management oversight in general can be found in Best Practices in Governance.[13]
Mathematical definition
Let $X$ be a profit and loss distribution (loss negative and profit positive). The VaR at level $\alpha \in (0,1)$ is the smallest number $y$ such that the probability that $Y := -X$ does not exceed $y$ is at least $1-\alpha$. Mathematically, $\operatorname{VaR}_{\alpha}(X)$ is the $(1-\alpha)$-quantile of $Y$, i.e.,

$$\operatorname{VaR}_{\alpha}(X) = -\inf\{x \in \mathbb{R} : F_{X}(x) > \alpha\} = F_{Y}^{-1}(1-\alpha).$$ [14][15]

This is the most general definition of VaR and the two identities are equivalent (indeed, for any random variable $X$ its cumulative distribution function $F_{X}$ is well defined). However this formula cannot be used directly for calculations unless we assume that $X$ has some parametric distribution.
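A minimal sketch of this definition applied to an empirical profit-and-loss sample (i.e. historical simulation); the synthetic normal P&L series and its scale below are illustrative assumptions, not data from the article:

import numpy as np

rng = np.random.default_rng(0)
pnl = rng.normal(loc=0.0, scale=1_000_000, size=10_000)  # synthetic daily P&L sample of X

def value_at_risk(pnl, alpha=0.05):
    """p VaR computed as the (1 - alpha)-quantile of the loss Y = -X."""
    losses = -np.asarray(pnl)
    return np.quantile(losses, 1 - alpha)

print(f"one-day 5% VaR ~ {value_at_risk(pnl):,.0f}")  # near 1.645 * sigma for normal P&L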
Risk managers typically assume that some fraction of the bad events will have undefined losses, either because markets are closed or illiquid, or because the entity bearing the loss breaks apart or loses the ability to compute accounts. Therefore, they do not accept results based on the assumption of a well-defined probability distribution.[7] Nassim Taleb has labeled this assumption, "charlatanism".[16] On the other hand, many academics prefer to assume a well-defined distribution, albeit usually one with fat tails.[1] This point has probably caused more contention among VaR theorists than any other.[10]
Value at risk can also be written as a distortion risk measure given by the distortion function

$$g(x) = \begin{cases} 0 & \text{if } 0 \leq x < 1-\alpha \\ 1 & \text{if } 1-\alpha \leq x \leq 1 \end{cases}.$$ [17][18]
Risk measure and risk metric
The term "VaR" is used both for a risk measure and a risk metric. This sometimes leads to confusion. Sources earlier than 1995 usually emphasize the risk measure, later sources are more likely to emphasize the metric.
The VaR risk measure defines risk as mark-to-market loss on a fixed portfolio over a fixed time horizon. There are many alternative risk measures in finance. Given the inability to use mark-to-market (which uses market prices to define loss) for future performance, loss is often defined (as a substitute) as change in fundamental value. For example, if an institution holds a loan that declines in market price because interest rates go up, but has no change in cash flows or credit quality, some systems do not recognize a loss. Also some try to incorporate the economic cost of harm not measured in daily financial statements, such as loss of market confidence or employee morale, impairment of brand names or lawsuits.[5]
Rather than assuming a static portfolio over a fixed time horizon, some risk measures incorporate the dynamic effect of expected trading (such as a stop loss order) and consider the expected holding period of positions.[5]
The VaR risk metric summarizes the distribution of possible losses by a quantile, a point with a specified probability of greater losses. A common alternative metric is expected shortfall.[1]
VaR risk management
Supporters of VaR-based risk management claim the first and possibly greatest benefit of VaR is the improvement in systems and modeling it forces on an institution. In 1997, Philippe Jorion wrote:[19]
[T]he greatest benefit of VAR lies in the imposition of a structured methodology for critically thinking about risk. Institutions that go through the process of computing their VAR are forced to confront their exposure to financial risks and to set up a proper risk management function. Thus the process of getting to VAR may be as important as the number itself.
Publishing a daily number, on-time and with specified statistical properties holds every part of a trading organization to a high objective standard. Robust backup systems and default assumptions must be implemented. Positions that are reported, modeled or priced incorrectly stand out, as do data feeds that are inaccurate or late and systems that are too-frequently down. Anything that affects profit and loss that is left out of other reports will show up either in inflated VaR or excessive VaR breaks. "A risk-taking institution that does not compute VaR might escape disaster, but an institution that cannot compute VaR will not."[20]
The second claimed benefit of VaR is that it separates risk into two regimes. Inside the VaR limit, conventional statistical methods are reliable. Relatively short-term and specific data can be used for analysis. Probability estimates are meaningful, because there are enough data to test them. In a sense, there is no true risk because you have a sum of many independent observations with a left bound on the outcome. A casino doesn't worry about whether red or black will come up on the next roulette spin. Risk managers encourage productive risk-taking in this regime, because there is little true cost. People tend to worry too much about these risks, because they happen frequently, and not enough about what might happen on the worst days.[21]
Outside the VaR limit, all bets are off. Risk should be analyzed with stress testing based on long-term and broad market data.[22] Probability statements are no longer meaningful.[23] Knowing the distribution of losses beyond the VaR point is both impossible and useless. The risk manager should concentrate instead on making sure good plans are in place to limit the loss if possible, and to survive the loss if not.[1]
One specific system uses three regimes.[24]
One to three times VaR are normal occurrences. You expect periodic VaR breaks. The loss distribution typically has fat tails, and you might get more than one break in a short period of time. Moreover, markets may be abnormal and trading may exacerbate losses, and you may take losses not measured in daily marks such as lawsuits, loss of employee morale and market confidence and impairment of brand names. So an institution that can't deal with three times VaR losses as routine events probably won't survive long enough to put a VaR system in place.
Three to ten times VaR is the range for stress testing. Institutions should be confident they have examined all the foreseeable events that will cause losses in this range, and are prepared to survive them. These events are too rare to estimate probabilities reliably, so risk/return calculations are useless.
Foreseeable events should not cause losses beyond ten times VaR. If they do they should be hedged or insured, or the business plan should be changed to avoid them, or VaR should be increased. It's hard to run a business if foreseeable losses are orders of magnitude larger than very large everyday losses. It's hard to plan for these events, because they are out of scale with daily experience. Of course there will be unforeseeable losses more than ten times VaR, but it's pointless to anticipate them, you can't know much about them and it results in needless worrying. Better to hope that the discipline of preparing for all foreseeable three-to-ten times VaR losses will improve chances for surviving the unforeseen and larger losses that inevitably occur.
"A risk manager has two jobs: make people take more risk the 99% of the time it is safe to do so, and survive the other 1% of the time. VaR is the border."[20]
Another reason VaR is useful as a metric is due to its ability to compress the riskiness of a portfolio into a single number, making it comparable across different portfolios (of different assets). Within any portfolio it is also possible to isolate specific positions that might better hedge the portfolio to reduce, and minimise, the VaR. An example of market-maker-employed strategies for trading linear interest rate derivatives and interest rate swap portfolios is cited.[25]
Computation methods
VaR can be estimated either parametrically (for example, variance-covariance VaR or delta-gamma VaR) or nonparametrically (for example, historical simulation VaR or resampled VaR).[5][7] Nonparametric methods of VaR estimation are discussed in Markovich[26] and Novak.[27] A comparison of a number of strategies for VaR prediction is given in Kuester et al.[28]
A McKinsey report[29] published in May 2012 estimated that 85% of large banks were using historical simulation. The other 15% used Monte Carlo methods.
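The approaches mentioned above can be compared on the same return series; below is a minimal sketch under an assumed normal return model, with an illustrative portfolio size and confidence level (not taken from the report):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
returns = rng.normal(0.0005, 0.01, size=2_500)  # synthetic daily returns
portfolio_value = 10_000_000
alpha = 0.01                                    # 1% (i.e. 99%) VaR

# Parametric (variance-covariance): assume normally distributed returns.
mu, sigma = returns.mean(), returns.std(ddof=1)
var_parametric = -(mu + norm.ppf(alpha) * sigma) * portfolio_value

# Historical simulation: empirical quantile of realized losses.
var_historical = np.quantile(-returns, 1 - alpha) * portfolio_value

# Monte Carlo: resimulate from the fitted distribution (here the same normal).
simulated = rng.normal(mu, sigma, size=100_000)
var_monte_carlo = np.quantile(-simulated, 1 - alpha) * portfolio_value

print(var_parametric, var_historical, var_monte_carlo)  # all of similar magnitude here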
Backtesting
A key advantage of VaR over most other measures of risk such as expected shortfall is the availability of several backtesting procedures for validating a set of VaR forecasts. Early examples of backtests can be found in Christoffersen (1998),[30] later generalized by Pajhede (2017),[31] which model a "hit-sequence" of losses greater than the VaR and proceed to test whether these "hits" are independent of one another and occur with the correct probability. E.g. a 5% probability of a loss greater than VaR should be observed over time when using a 95% VaR, and these hits should occur independently.
A number of other backtests are available which model the time between hits in the hit-sequence, see Christoffersen (2014),[32] Haas (2016),[33] Tokpavi et al. (2014)[34] and Pajhede (2017).[31] As pointed out in several of the papers, the asymptotic distribution is often poor when considering high levels of coverage, e.g. a 99% VaR, therefore the parametric bootstrap method of Dufour (2006)[35] is often used to obtain correct size properties for the tests. Backtest toolboxes are available in Matlab or R, though only the first implements the parametric bootstrap method.
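A minimal sketch of the unconditional-coverage part of such a backtest, in the spirit of a Kupiec-style proportion-of-failures likelihood-ratio test; this is a simplified illustration with made-up numbers, not the exact procedures of the papers cited above, and it omits both the independence test and the parametric bootstrap refinement:

import numpy as np
from scipy.stats import chi2

def unconditional_coverage_test(hits, p):
    """LR test that VaR breaches ("hits") occur with the nominal probability p.

    hits : boolean array, True on days the realized loss exceeded the VaR forecast
           (assumes at least one hit and at least one non-hit).
    p    : nominal breach probability, e.g. 0.05 for a 95% VaR.
    Returns the LR statistic and its asymptotic chi-square(1) p-value.
    """
    hits = np.asarray(hits, dtype=bool)
    n, x = hits.size, int(hits.sum())
    pi_hat = x / n

    def loglik(q):
        return x * np.log(q) + (n - x) * np.log(1 - q)

    lr = -2 * (loglik(p) - loglik(pi_hat))
    return lr, chi2.sf(lr, df=1)

# Example: 500 forecast days with 35 breaches of a 95% VaR (about 25 expected).
hits = np.zeros(500, dtype=bool)
hits[:35] = True
print(unconditional_coverage_test(hits, 0.05))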
The second pillar of Basel II includes a backtesting step to validate the VaR figures.
History

The problem of risk measurement is an old one in statistics, economics and finance. Financial risk management has been a concern of regulators and financial executives for a long time as well. Retrospective analysis has found some VaR-like concepts in this history. But VaR did not emerge as a distinct concept until the late 1980s. The triggering event was the stock market crash of 1987. This was the first major financial crisis in which a lot of academically-trained quants were in high enough positions to worry about firm-wide survival.[1]
The crash was so unlikely given standard statistical models, that it called the entire basis of quant finance into question. A reconsideration of history led some quants to decide there were recurring crises, about one or two per decade, that overwhelmed the statistical assumptions embedded in models used for trading, investment management and derivative pricing. These affected many markets at once, including ones that were usually not correlated, and seldom had discernible economic cause or warning (although after-the-fact explanations were plentiful).[23] Much later, they were named "Black Swans" by Nassim Taleb and the concept extended far beyond finance.[36]
If these events were included in quantitative analysis they dominated results and led to strategies that did not work day to day. If these events were excluded, the profits made in between "Black Swans" could be much smaller than the losses suffered in the crisis. Institutions could fail as a result.[20][23][36]
VaR was developed as a systematic way to segregate extreme events, which are studied qualitatively over long-term history and broad market events, from everyday price movements, which are studied quantitatively using short-term data in specific markets. It was hoped that "Black Swans" would be preceded by increases in estimated VaR or increased frequency of VaR breaks, in at least some markets. The extent to which this has proven to be true is controversial.[23]
Abnormal markets and trading were excluded from the VaR estimate in order to make it observable.[21] It is not always possible to define loss if, for example, markets are closed as after 9/11, or severely illiquid, as happened several times in 2008.[20] Losses can also be hard to define if the risk-bearing institution fails or breaks up.[21] A measure that depends on traders taking certain actions, and avoiding other actions, can lead to self reference.[1]
This is risk management VaR. It was well established in quantitative trading groups at several financial institutions, notably Bankers Trust, before 1990, although neither the name nor the definition had been standardized. There was no effort to aggregate VaRs across trading desks.[23]
The financial events of the early 1990s found many firms in trouble because the same underlying bet had been made at many places in the firm, in non-obvious ways. Since many trading desks already computed risk management VaR, and it was the only common risk measure that could be both defined for all businesses and aggregated without strong assumptions, it was the natural choice for reporting firmwide risk. J. P. Morgan CEO Dennis Weatherstone famously called for a "4:15 report" that combined all firm risk on one page, available within 15 minutes of the market close.[10]
Risk measurement VaR was developed for this purpose. Development was most extensive at J. P. Morgan, which published the methodology and gave free access to estimates of the necessary underlying parameters in 1994. This was the first time VaR had been exposed beyond a relatively small group of quants. Two years later, the methodology was spun off into an independent for-profit business now part of RiskMetrics Group (now part of MSCI).[10]
In 1997, the U.S. Securities and Exchange Commission ruled that public corporations must disclose quantitative information about their derivatives activity. Major banks and dealers chose to implement the rule by including VaR information in the notes to their financial statements.[1]
Worldwide adoption of the Basel II Accord, beginning in 1999 and nearing completion today, gave further impetus to the use of VaR. VaR is the preferred measure of market risk, and concepts similar to VaR are used in other parts of the accord.[1]
Criticism

VaR has been controversial since it moved from trading desks into the public eye in 1994. A famous 1997 debate between Nassim Taleb and Philippe Jorion set out some of the major points of contention. Taleb claimed VaR:[37]
Ignored 2,500 years of experience in favor of untested models built by non-traders
Was charlatanism because it claimed to estimate the risks of rare events, which is impossible
Gave false confidence
Would be exploited by traders
In 2008 David Einhorn and Aaron Brown debated VaR in Global Association of Risk Professionals Review.[20][3] Einhorn compared VaR to "an airbag that works all the time, except when you have a car accident". He further charged that VaR:
Led to excessive risk-taking and leverage at financial institutions
Focused on the manageable risks near the center of the distribution and ignored the tails
Created an incentive to take "excessive but remote risks"
Was "potentially catastrophic when its use creates a false sense of security among senior executives and watchdogs."
New York Times reporter Joe Nocera wrote an extensive piece, Risk Mismanagement,[38] on January 4, 2009, discussing the role VaR played in the Financial crisis of 2007-2008. After interviewing risk managers (including several of the ones cited above) the article suggests that VaR was very useful to risk experts, but nevertheless exacerbated the crisis by giving false security to bank executives and regulators. A powerful tool for professional risk managers, VaR is portrayed as both easy to misunderstand and dangerous when misunderstood.
Taleb in 2009 testified in Congress asking for the banning of VaR for a number of reasons. One was that tail risks are non-measurable. Another was that for anchoring reasons VaR leads to higher risk taking.[39]
VaR is not subadditive:[5] VaR of a combined portfolio can be larger than the sum of the VaRs of its components.
For example, the average bank branch in the United States is robbed about once every ten years. A single-branch bank has about 0.0004% chance of being robbed on a specific day, so the risk of robbery would not figure into one-day 1% VaR. It would not even be within an order of magnitude of that, so it is in the range where the institution should not worry about it, it should insure against it and take advice from insurers on precautions. The whole point of insurance is to aggregate risks that are beyond individual VaR limits, and bring them into a large enough portfolio to get statistical predictability. It does not pay for a one-branch bank to have a security expert on staff.
As institutions get more branches, the risk of a robbery on a specific day rises to within an order of magnitude of VaR. At that point it makes sense for the institution to run internal stress tests and analyze the risk itself. It will spend less on insurance and more on in-house expertise. For a very large banking institution, robberies are a routine daily occurrence. Losses are part of the daily VaR calculation, and tracked statistically rather than case-by-case. A sizable in-house security department is in charge of prevention and control, the general risk manager just tracks the loss like any other cost of doing business. As portfolios or institutions get larger, specific risks change from low-probability/low-predictability/high-impact to statistically predictable losses of low individual impact. That means they move from the range of far outside VaR, to be insured, to near outside VaR, to be analyzed case-by-case, to inside VaR, to be treated statistically.[20]
VaR is a static measure of risk. By definition, VaR is a particular characteristic of the probability distribution of the underlying (namely, VaR is essentially a quantile). For a dynamic measure of risk, see Novak,[27] ch. 10.
There are common abuses of VaR:[7][10]
Assuming that plausible losses will be less than some multiple (often three) of VaR. Losses can be extremely large.
Reporting a VaR that has not passed a backtest. Regardless of how VaR is computed, it should have produced the correct number of breaks (within sampling error) in the past. A common violation of common sense is to estimate a VaR based on the unverified assumption that everything follows a multivariate normal distribution.
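As a rough illustration of the break-counting backtest described above (not drawn from the cited sources), the check can be sketched in a few lines of Python; the P&L series and the constant VaR forecast below are simulated placeholders, and a real backtest would use the desk's actual history and reported VaR figures.

import numpy as np

def backtest_var(pnl, var_forecasts, level=0.99):
    """Count VaR breaks and compare with the count implied by the confidence level.

    pnl           : daily profit-and-loss figures (losses are negative numbers)
    var_forecasts : one-day VaR for each day, reported as a positive number
    level         : VaR confidence level (0.99 for a 1% one-day VaR)
    """
    pnl = np.asarray(pnl, dtype=float)
    var_forecasts = np.asarray(var_forecasts, dtype=float)
    breaks = int(np.sum(pnl < -var_forecasts))   # days on which the loss exceeded VaR
    n = len(pnl)
    p = 1.0 - level                              # expected break frequency
    expected = n * p
    # normal approximation to the binomial sampling error of the break count
    z_score = (breaks - expected) / np.sqrt(n * p * (1.0 - p))
    return breaks, expected, z_score

# placeholder example: 500 simulated trading days against a constant VaR of 1.0
rng = np.random.default_rng(0)
pnl = rng.normal(0.0, 0.45, size=500)
print(backtest_var(pnl, np.full(500, 1.0), level=0.99))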
VaR, CVaR and EVaR
The VaR is not a coherent risk measure since it violates the sub-additivity property, which is
If $X, Y \in \mathbf{L}$, then $\rho(X+Y) \leq \rho(X) + \rho(Y)$.
However, it can be bounded by coherent risk measures like Conditional Value-at-Risk (CVaR) or entropic value at risk (EVaR). In fact, for $X \in \mathbf{L}_{M^{+}}$ (with $\mathbf{L}_{M^{+}}$ the set of all Borel measurable functions whose moment-generating function exists for all positive real values) we have
$${\text{VaR}}_{1-\alpha}(X) \leq {\text{CVaR}}_{1-\alpha}(X) \leq {\text{EVaR}}_{1-\alpha}(X),$$
where
$$\begin{aligned} {\text{VaR}}_{1-\alpha}(X) &:= \inf_{t\in \mathbf{R}} \{t : \Pr(X\leq t)\geq 1-\alpha\},\\ {\text{CVaR}}_{1-\alpha}(X) &:= \frac{1}{\alpha}\int_{0}^{\alpha}{\text{VaR}}_{1-\gamma}(X)\,d\gamma,\\ {\text{EVaR}}_{1-\alpha}(X) &:= \inf_{z>0}\{z^{-1}\ln(M_{X}(z)/\alpha)\},\end{aligned}$$
in which $M_{X}(z)$ is the moment-generating function of $X$ at $z$. In the above equations the variable $X$ denotes the financial loss, rather than wealth as is typically the case.
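As a numerical illustration (not part of the sourced material), the three measures can be estimated for a sample of losses in Python: VaR and CVaR empirically from the order statistics, and EVaR by a one-dimensional minimization over z using the empirical moment-generating function. The sample below is a placeholder standard normal loss distribution, for which the ordering VaR ≤ CVaR ≤ EVaR is visible.

import numpy as np
from scipy.optimize import minimize_scalar

def var_cvar_evar(losses, alpha=0.05):
    """Empirical VaR, CVaR and EVaR at confidence level 1 - alpha.

    losses follow the convention of the formulas above: X denotes the financial
    loss (positive = money lost), and its moment-generating function must exist
    for the EVaR part to make sense.
    """
    x = np.sort(np.asarray(losses, dtype=float))
    n = len(x)
    # VaR_{1-alpha}: smallest t with P(X <= t) >= 1 - alpha
    var = x[int(np.ceil((1.0 - alpha) * n)) - 1]
    # CVaR_{1-alpha}: average of the losses at or beyond VaR (tail average)
    cvar = x[x >= var].mean()
    # EVaR_{1-alpha}: inf_{z>0} z^{-1} ln(M_X(z)/alpha), using the empirical MGF
    def objective(z):
        return (np.log(np.mean(np.exp(z * x))) - np.log(alpha)) / z
    evar = minimize_scalar(objective, bounds=(1e-6, 10.0), method="bounded").fun
    return var, cvar, evar

rng = np.random.default_rng(1)
sample = rng.normal(0.0, 1.0, size=100_000)    # placeholder: standard normal losses
print(var_cvar_evar(sample, alpha=0.05))       # roughly (1.64, 2.06, 2.45) for N(0, 1)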
Capital Adequacy Directive
Conditional value-at-risk
Cyber risk quantification based on cyber value-at-risk or CyVaR
EMP for stochastic programming— solution technology for optimization problems involving VaR and CVaR
Entropic value at risk
Profit at risk
Margin at risk
Liquidity at risk
Risk return ratio
Valuation risk
^ a b c d e f g h i j Jorion, Philippe (2006). Value at Risk: The New Benchmark for Managing Financial Risk (3rd ed.). McGraw-Hill. ISBN 978-0-07-146495-6.
^ a b Holton, Glyn A. (2014). Value-at-Risk: Theory and Practice second edition, e-book.
^ a b David Einhorn (June–July 2008), Private Profits and Socialized Risk (PDF), GARP Risk Review, archived (PDF) from the original on April 26, 2016
^ McNeil, Alexander; Frey, Rüdiger; Embrechts, Paul (2005). Quantitative Risk Management: Concepts Techniques and Tools. Princeton University Press. ISBN 978-0-691-12255-7.
^ a b c d e f Dowd, Kevin (2005). Measuring Market Risk. John Wiley & Sons. ISBN 978-0-470-01303-8.
^ Pearson, Neil (2002). Risk Budgeting: Portfolio Problem Solving with Value-at-Risk. John Wiley & Sons. ISBN 978-0-471-40556-6.
^ a b c d Aaron Brown (March 2004), The Unbearable Lightness of Cross-Market Risk, Wilmott Magazine
^ a b Crouhy, Michel; Galai, Dan; Mark, Robert (2001). The Essentials of Risk Management. McGraw-Hill. ISBN 978-0-07-142966-5.
^ Jose A. Lopez (September 1996). "Regulatory Evaluation of Value-at-Risk Models". Wharton Financial Institutions Center Working Paper 96-51.
^ a b c d e Kolman, Joe; Onak, Michael; Jorion, Philippe; Taleb, Nassim; Derman, Emanuel; Putnam, Blu; Sandor, Richard; Jonas, Stan; Dembo, Ron; Holt, George; Tanenbaum, Richard; Margrabe, William; Mudge, Dan; Lam, James; Rozsypal, Jim (April 1998). Roundtable: The Limits of VaR. Derivatives Strategy.
^ Aaron Brown (March 1997), The Next Ten VaR Disasters, Derivatives Strategy
^ Wilmott, Paul (2007). Paul Wilmott Introduces Quantitative Finance. Wiley. ISBN 978-0-470-31958-1.
^ Lawrence York (2009), Best Practices in Governance
^ Artzner, Philippe; Delbaen, Freddy; Eber, Jean-Marc; Heath, David (1999). "Coherent Measures of Risk" (PDF). Mathematical Finance. 9 (3): 203–228. doi:10.1111/1467-9965.00068. Retrieved February 3, 2011.
^ Foellmer, Hans; Schied, Alexander (2004). Stochastic Finance. de Gruyter Series in Mathematics. 27. Berlin: Walter de Gruyter. pp. 177–182. ISBN 978-311-0183467. MR 2169807.
^ Nassim Taleb (December 1996 – January 1997), The World According to Nassim Taleb, Derivatives Strategy
^ Julia L. Wirch; Mary R. Hardy. "Distortion Risk Measures: Coherence and Stochastic Dominance" (PDF). Retrieved March 10, 2012.
^ Balbás, A.; Garrido, J.; Mayoral, S. (2008). "Properties of Distortion Risk Measures". Methodology and Computing in Applied Probability. 11 (3): 385. doi:10.1007/s11009-008-9089-z.
^ Jorion, Philippe (April 1997). The Jorion-Taleb Debate. Derivatives Strategy.
^ a b c d e f Aaron Brown (June–July 2008). "Private Profits and Socialized Risk". GARP Risk Review.
^ a b c Espen Haug (2007). Derivative Models on Models. John Wiley & Sons. ISBN 978-0-470-01322-9.
^ Ezra Zask (February 1999), Taking the Stress Out of Stress Testing, Derivative Strategy
^ a b c d e Kolman, Joe; Onak, Michael; Jorion, Philippe; Taleb, Nassim; Derman, Emanuel; Putnam, Blu; Sandor, Richard; Jonas, Stan; Dembo, Ron; Holt, George; Tanenbaum, Richard; Margrabe, William; Mudge, Dan; Lam, James; Rozsypal, Jim (April 1998). "Roundtable: The Limits of Models". Derivatives Strategy.
^ Aaron Brown (December 2007). "On Stressing the Right Size". GARP Risk Review.
^ The Pricing and Hedging of Interest Rate Derivatives: A Practical Guide to Swaps, J H M Darbyshire, 2016, ISBN 978-0995455511
^ Markovich, N. (2007), Nonparametric analysis of univariate heavy-tailed data, Wiley
^ a b Novak, S.Y. (2011). Extreme value methods with applications to finance. Chapman & Hall/CRC Press. ISBN 978-1-4398-3574-6.
^ Kuester, Keith; Mittnik, Stefan; Paolella, Marc (2006). "Value-at-Risk Prediction: A Comparison of Alternative Strategies". Journal of Financial Econometrics. 4: 53–89. doi:10.1093/jjfinec/nbj002.
^ McKinsey & Company. "McKinsey Working Papers on Risk, Number 32" (pdf).
^ Christoffersen, Peter (1998). "Evaluating interval forecasts". International Economic Review. 39 (4): 841–62. CiteSeerX 10.1.1.41.8009. doi:10.2307/2527341. JSTOR 2527341.
^ a b Pajhede, Thor (2017). "Backtesting Value-at-Risk: A Generalized Markov Framework". Journal of Forecasting. 36 (5): 597–613. doi:10.1002/for.2456.
^ Christoffersen, Peter (2014). "Backtesting Value-at-Risk: A Duration-Based Approach". Journal of Financial Econometrics.
^ Haas, M. (2006). "Improved duration-based backtesting of value-at-risk". Journal of Risk. 8.
^ Tokpavi, S. "Backtesting Value-at-Risk: A GMM Duration-Based Test". Journal of Financial Econometrics.
^ Dufour, J-M (2006). "Monte carlo tests with nuisance parameters: A general approach to finite-sample inference and nonstandard asymptotics". Journal of Econometrics.
^ a b Taleb, Nassim Nicholas (2007). The Black Swan: The Impact of the Highly Improbable. New York: Random House. ISBN 978-1-4000-6351-2.
^ Nassim Taleb (April 1997), The Jorion-Taleb Debate, Derivatives Strategy
^ Joe Nocera (January 4, 2009), Risk Mismanagement, The New York Times Magazine
^ Nassim Taleb (Sep 10, 2009). "Report on The Risks of Financial Modeling, VaR and the Economic Breakdown" (PDF). U.S. House of Representatives. Archived from the original (PDF) on November 4, 2009.
"Value At Risk", Ben Sopranzetti, Ph.D., CPA
"Perfect Storms" – Beautiful & True Lies In Risk Management, Satyajit Das
"Gloria Mundi" – All About Value at Risk, Barry Schachter
Risk Mismanagement, Joe Nocera NY Times article.
"VaR Doesn't Have To Be Hard", Rich Tanenbaum
"The Pricing and Trading of Interest Rate Derivatives", J H M Darbyshire, MSc.
Online real-time VaR calculator, Razvan Pascalau, University of Alabama
Value-at-Risk (VaR), Simon Benninga and Zvi Wiener. (Mathematica in Education and Research Vol. 7 No. 4 1998.)
Derivatives Strategy Magazine. "Inside D. E. Shaw" Trading and Risk Management 1998
December 2011, 15(3): 739-767. doi: 10.3934/dcdsb.2011.15.739
Computation of symbolic dynamics for two-dimensional piecewise-affine maps
Lorenzo Sella 1, and Pieter Collins 2,
Niels Bohrweg 1, Leiden, 2333 CA, Netherlands
Bouillonstraat 8-10, 6211 LH Maastricht, Netherlands
Received June 2009 Revised June 2010 Published February 2011
In this paper we design and implement an algorithm for computing symbolic dynamics for two-dimensional piecewise-affine maps. The algorithm is based on the detection of periodic orbits using the Conley index and the Szymczak decomposition of the Conley index pair. The algorithm is also extended to deal with discontinuous maps. We compare the algorithm with an algorithm based on the tangle of fixed points. We apply the algorithms to compute the symbolic dynamics and entropy bounds for the Lozi map.
Keywords: Conley index, Lozi map, homoclinic tangles, two-dimensional piecewise-affine map, symbolic dynamics.
Mathematics Subject Classification: Primary: 37E99, 37B30, 37B40, 37B1.
Citation: Lorenzo Sella, Pieter Collins. Computation of symbolic dynamics for two-dimensional piecewise-affine maps. Discrete & Continuous Dynamical Systems - B, 2011, 15 (3) : 739-767. doi: 10.3934/dcdsb.2011.15.739
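As a purely numerical illustration of what a symbolic itinerary for the Lozi map looks like (this is not the Conley-index construction of the paper, and the entropy figure it produces is only a heuristic estimate), one can iterate the map, code each point by the side of the line x = 0 on which it falls, and look at the growth rate of observed words; the parameter values and the word length below are arbitrary choices.

import numpy as np

def lozi(x, y, a=1.7, b=0.5):
    """One iteration of the Lozi map: x' = 1 - a|x| + y, y' = b x."""
    return 1.0 - a * abs(x) + y, b * x

def itinerary(n_iter=20000, n_transient=1000):
    """Binary itinerary given by the sign of x, the line where the map is non-smooth."""
    x, y = 0.1, 0.1            # starting point chosen inside the attractor's basin
    symbols = []
    for i in range(n_transient + n_iter):
        x, y = lozi(x, y)
        if i >= n_transient:
            symbols.append(0 if x < 0 else 1)
    return symbols

def word_growth(symbols, n=12):
    """Heuristic entropy estimate: growth rate of the number of observed n-words."""
    words = {tuple(symbols[i:i + n]) for i in range(len(symbols) - n)}
    return np.log(len(words)) / n

print(word_growth(itinerary()))    # typically around 0.4-0.5 for these parameters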
Large-scale string clustering
I have more than 10 million strings of length 1-100 characters, and this number will grow even bigger in the future. I'm interested in clustering this data, but I'm not quite sure what would be effective at this scale.
These are the clustering algorithms I've been looking into:
Affinity propagation: seems like a good solution, but the memory usage seems way too high, since the data is dense.
DBSCAN: could also be an option, but I want all nodes/strings to belong to a cluster and not be considered "noise".
K-medoids: seems like a good option in terms of memory usage, but the computation time seems worrying. It would also be highly preferred if the number of clusters did not have to be specified before running the algorithm, as is the case with affinity propagation.
Do you have any ideas on how this problem can be solved? The computation time is not extremely important, as long as the result is satisfactory and it can be done within a couple of days.
algorithms algorithm-analysis strings clustering big-data
James Smith
$\begingroup$ There are as many clustering algorithms as there are clustering problems. In many cases, you can't really pick an algorithm without thinking hard about what your similarity measure is, how you parameterise feature space, and what you expect the clusters to look like in that space. DBSCAN is a case in point, because it's designed for the case where the clusters are not linearly separable in feature space. $\endgroup$
– Pseudonym
$\begingroup$ Well, I already know what the similarity metric should/can be. Levenshtein or Damerau-Levenshtein, and this metric can easily be incorporated in the above-mentioned clustering algorithms. $\endgroup$
– James Smith
$\begingroup$ In that case, it would be helpful to edit the question to provide that additional context. We'd prefer that you put it in the question, not in the comments, so people don't have to read the comments to understand what you're asking. And we'd prefer that you provide all relevant information up front, so we don't waste our time telling you things you already know and to help us provide answers that are more relevant to your particular situation. $\endgroup$
There are many possible approaches. One approach that I would suggest investigating is finding all pairs of similar strings, and then applying a standard algorithm for clustering of sparse graphs. There are multiple possible approaches for finding similar strings, depending on how you plan to measure similarity.
One approach is to measure similarity using the Levenshtein edit distance. If you have $N$ strings, the naive way is to loop over all $N^2/2$ pairs, compute the edit distance for each, and save the pairs where the edit distance is below some threshold. However, this doesn't scale when $N$ gets large. Another alternative is to use fancier data structures and algorithms to find only the pairs of strings that are similar, e.g., BK-trees, ternary search trees, metric trees, Levenshtein automata, shingling, or other methods. See, e.g., How fast can we identifiy almost-duplicates in a list of strings?, How to speed up process of finding duplicates/similar items in a large amount of strings?, Find all pairs of strings in a set with Levenshtein distance < d, Efficient map data structure supporting approximate lookup, Efficient data structures for building a fast spell checker, and https://cstheory.stackexchange.com/q/4165/5038 for some references. Credits to Pseudonym for part of this suggestion.
Alternatively, if you want to measure similarity using the number of characters that differ (i.e., like the edit distance but with only the "replace" operation and without the "insert" or "delete" operations), you could look at locality sensitive hashing as a way to find all pairs of similar strings.
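To make the pipeline concrete, here is a rough Python sketch (untested at your scale, and the threshold of 2 edits is an arbitrary choice): find neighbours within a Levenshtein threshold with a hand-rolled BK-tree, and merge them with union-find so each connected component becomes a cluster. For 10 million strings you would want an optimized implementation and probably length-bucketing or shingling on top of this.

from collections import defaultdict

def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

class BKTree:
    """BK-tree over a metric; query returns all stored items within a threshold."""
    def __init__(self, distance):
        self.distance, self.root = distance, None

    def add(self, item):
        if self.root is None:
            self.root = (item, {})
            return
        node = self.root
        while True:
            d = self.distance(item, node[0])
            if d == 0:
                return                      # exact duplicate, nothing to add
            if d in node[1]:
                node = node[1][d]
            else:
                node[1][d] = (item, {})
                return

    def query(self, item, threshold):
        results, stack = [], [self.root] if self.root else []
        while stack:
            node = stack.pop()
            d = self.distance(item, node[0])
            if d <= threshold:
                results.append(node[0])
            # triangle inequality: only branches labelled within d +/- threshold can match
            for edge, child in node[1].items():
                if d - threshold <= edge <= d + threshold:
                    stack.append(child)
        return results

def cluster(strings, threshold=2):
    """Union-find over the 'within edit distance <= threshold' similarity graph."""
    parent = {s: s for s in strings}
    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]   # path halving
            s = parent[s]
        return s
    tree = BKTree(levenshtein)
    for s in strings:
        for neighbour in tree.query(s, threshold):
            parent[find(neighbour)] = find(s)
        tree.add(s)
    groups = defaultdict(list)
    for s in strings:
        groups[find(s)].append(s)
    return list(groups.values())

print(cluster(["kitten", "sitten", "sitting", "apple", "appel", "banana"]))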
$\begingroup$ I'm not quite sure why locality-sensitive hashing would be a better solution than just using Levenshtein to find similar strings and can you explain to me why you would apply a clustering algorithm after LSH. LSH already puts similar strings in the same bucket/cluster. $\endgroup$
$\begingroup$ 10 million strings isn't that much. If Levenshtein was your metric, a ternary search tree would probably do the job for finding neighbours. $\endgroup$
$\begingroup$ You might be right about that. But I'm looking for a solution that can handle a lot more strings than that. Perhaps a billion. $\endgroup$
$\begingroup$ Besides that - a ternary tree will not make me able to cluster the dataset.. :-) $\endgroup$
$\begingroup$ @JamesSmith, the main benefit it has is that it spares you from having to do 10 million * 10 million (= 100 trillion) edit distance computations. But Pseudonym is right that a ternary search tree might be even better. $\endgroup$
|
CommonCrawl
|
A Novel Framework for Early Detection of Hypertension using Magnetic Resonance Angiography
Heba Kandil1,2,3,
Ahmed Soliman ORCID: orcid.org/0000-0002-1931-34161,
Mohammed Ghazal4,
Ali Mahmoud1,
Ahmed Shalaby1,
Robert Keynton1,
Adel Elmaghraby2,
Guruprasad Giridharan1 &
Ayman El-Baz ORCID: orcid.org/0000-0001-7264-13231
Scientific Reports volume 9, Article number: 11105 (2019)
Hypertension is a leading mortality cause of 410,000 patients in USA. Cerebrovascular structural changes that occur as a result of chronically elevated cerebral perfusion pressure are hypothesized to precede the onset of systemic hypertension. A novel framework is presented in this manuscript to detect and quantify cerebrovascular changes (i.e. blood vessel diameters and tortuosity changes) using magnetic resonance angiography (MRA) data. The proposed framework consists of: 1) A novel adaptive segmentation algorithm to delineate large as well as small blood vessels locally using 3-D spatial information and appearance features of the cerebrovascular system; 2) Estimating the cumulative distribution function (CDF) of the 3-D distance map of the cerebrovascular system to quantify alterations in cerebral blood vessels' diameters; 3) Calculation of mean and Gaussian curvatures to quantify cerebrovascular tortuosity; and 4) Statistical and correlation analyses to identify the relationship between mean arterial pressure (MAP) and cerebral blood vessels' diameters and tortuosity alterations. The proposed framework was validated using MAP and MRA data collected from 15 patients over a 700-days period. The novel adaptive segmentation algorithm recorded a 92.23% Dice similarity coefficient (DSC), a 94.82% sensitivity, a 99.00% specificity, and a 10.00% absolute vessels volume difference (AVVD) in delineating cerebral blood vessels from surrounding tissues compared to the ground truth. Experiments demonstrated that MAP is inversely related to cerebral blood vessel diameters (p-value < 0.05) globally (over the whole brain) and locally (at circle of Willis and below). A statistically significant direct correlation (p-value < 0.05) was found between MAP and tortuosity (medians of Gaussian and mean curvatures, and average of mean curvature) globally and locally (at circle of Willis and below). Quantification of the cerebrovascular diameter and tortuosity changes may enable clinicians to predict elevated blood pressure before its onset and optimize medical treatment plans of pre-hypertension and hypertension.
One in three adults in the US suffers from hypertension. Hypertension is a leading contributor to the deaths of 410,000 patients in the USA1. Many factors such as renal dysfunction, high sodium intake, and chronic stress contribute to the development of hypertension. The chronic elevation of cerebral perfusion pressure (CPP) changes the cerebrovasculature of the brain and disrupts its vasoregulation mechanisms. This cerebral vascular alteration has a severe effect on the human body's organs2 and is a leading cause of cognitive impairment, strokes, dementia, ischemic cerebral injury, and brain lesions3. Specifically, recent studies hypothesized that changes in the cerebrovasculature and CPP precede the systemic elevation of blood pressure (BP)4,5.
Currently, sphygmomanometers are used to measure repeated brachial artery pressure to diagnose systemic hypertension after its onset. However, this method cannot detect cerebrovascular alterations that lead to adverse events which may occur prior to the onset of hypertension. Quantifying these cerebral vascular structural changes could help in predicting patients who are at a high risk of cerebral adverse events. This may enable early medical intervention prior to the onset of systemic hypertension, potentially mitigating vascular-initiated end-organ damage.
Previous studies have demonstrated vascular changes with hypertension. A direct relationship between cerebral microvascular changes and hypertension was demonstrated using ultra-high-resolution magnetic resonance angiography (MRA) of the lenticulostriate arteries (LSAs)6. Chen et al. analyzed 3-D time-of-flight (TOF)-MRA and found a significant decrease in the number of LSA stems in hypertensive patients compared to normotensive subjects7. Cerebrovascular structural alterations such as changes in blood vessel diameters and tortuosity have been used in the diagnosis of many diseases. Changes in cerebral blood vessel diameters have been reported as an early sign of cerebrovascular dysfunction in both in-vivo and clinical observations8,9. Vascular resistance in hypertension results from the reduction of lumen size in small arteries and arterioles10. Carotid artery diameter change in rats has been correlated with chronic elevation of BP11. Pulmonary hypertension in humans has been reported to correlate with changes in pulmonary arterial diameters12,13. Abnormal or excessive tortuosity of blood vessels has also been clinically observed to precede the onset of multiple vascular and non-vascular diseases14,15. Vascular tortuosity has been linked to hypertension, genetic defects, aging, atherosclerosis, and diabetes mellitus14. Tortuosity measures how sharply a vessel twists or bends. Tortuosity of retinal blood vessels has been assessed by ophthalmologists as a diagnostic parameter16,17,18. Increased coronary vessel tortuosity has been linked to hypertension19. Hemispheric white matter tortuosity has been correlated with the severity of systemic hypertension and elevated cerebral perfusion pressure20. Thus, early detection and quantification of cerebrovascular changes in diameter and tortuosity would help clinicians diagnose and identify patients at risk of hypertension and initiate treatment before the onset of the disease.
Despite the widespread usage of imaging technologies such as magnetic resonance angiography, alterations or remodeling of cerebral blood vessel diameters and tortuosity have not been correlated with elevated arterial pressure, owing to limitations of current segmentation algorithms, which cannot delineate small blood vessels efficiently. Manual segmentation of blood vessels is time-consuming, labor-intensive, error-prone, and subject to inter-observer variability. In addition, semi-automatic blood vessel segmentation algorithms may need further investigation, revisions and/or evaluations by clinicians. In this manuscript, a novel framework is presented to automatically segment and accurately measure and quantify cerebrovascular changes using MRA data, and correlate these changes to mean arterial pressure (MAP). The framework includes a proposed novel automatic local adaptive segmentation algorithm which was capable of delineating both large as well as small blood vessels from MRA data. To the best of our knowledge, this study is the first to investigate the cerebrovasculature changes that precede hypertension from MRA.
The proposed framework (Fig. 1) includes 3 basic modules: 1) a novel, 3-D fully-automated local adaptive segmentation algorithm that extracts large as well as small cerebral blood vessels accurately, 2) a feature extraction module where imaging markers are quantified to predict the potential of elevated blood pressure, and 3) a statistical and correlation analysis module to correlate MAP to cerebrovascular change. These three modules are explained in more detail in the following subsections.
A framework for detection and quantification of cerebral vascular changes.
A 3-D Local adaptive segmentation algorithm
Segmentation is an essential step in most medical imaging analysis systems. Segmentation accuracy is affected by many factors such as scanning parameters, application domain, and imaging modality. In particular, segmentation of cerebral blood vessels from MRA data has several challenges including the complex nature of the vasculature, the diameter and density of small vessels, the dynamic range of intensities, acquisition errors, noise, and the high inter-person variability of the vascular tree, which hinders the creation of a common atlas to be used for segmentation, as is done for other human organs. Therefore, current segmentation algorithms are limited in their ability to delineate cerebral blood vessels efficiently, particularly the smaller ones.
Skull stripping
A preprocessing step (Fig. 2) precedes the segmentation step to account for any biasing or inhomogeneity of the MRA data. A nonparametric bias correction algorithm21 was used to reduce any effects of noise and remove data inconsistencies. Then, a homogeneity enhancement algorithm employing a 3-D generalized Gauss-Markov random field (GGMRF) model22 was used. It makes use of the 3-D spatially homogeneous pairwise interactions of the 26-neighborhood system and minimizes differences between a centered voxel and its 26 neighbors using the following energy function:
$$\widehat{q}_{s}=\arg \mathop{\min }\limits_{\tilde{q}_{s}}\Big[\,|q_{s}-\tilde{q}_{s}|+\rho ^{\alpha }\,\lambda ^{\beta }\sum _{r\in v_{s}}\eta _{s,r}\,|\tilde{q}_{s}-q_{r}|^{\beta }\Big]$$
such that \(q_s\) and \({\tilde{q}}_{s}\) represent the original and new estimated gray levels; \(v_s\) represents the 26-neighborhood system; \({\eta }_{s,r}\) represents the GGMRF potential; λ and \(\rho \,\) represent the scaling factors; \(\alpha \in \{1,2\}\) was used to define the prior distribution of the estimator (\(\alpha =1\) for Laplace or \(\alpha =2\) for Gaussian); and β was used to control the smoothing level such that \(\beta \in [1.01,2.0]\) (\(\beta =2\) for smoothing and \(\beta =1.01\) for relatively abrupt edges). A skull stripping procedure was applied to the preprocessed data to remove the brain's fat tissues, which have a visual appearance similar to that of blood vessels, and retain brain tissues only. It combines a Markov-Gibbs random field (MGRF) model of the data with a geometric deformable model (brain isosurface), which preserves the cerebral topology during the extraction process. Algorithm 1 presents the details of the skull stripping algorithm.
Steps of Preprocessing Stage.
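For illustration only, a minimal ICM-style sketch of the per-voxel minimization in the energy function above is given below in Python; the scaling parameters, neighborhood weights, and gray-level range are placeholders rather than the values used in the study (which follow22).

import numpy as np

def ggmrf_smooth(volume, rho=1.0, lam=1.62, alpha=2, beta=1.01, eta=1.0, n_iter=1):
    """ICM-style minimization of the GGMRF energy above, voxel by voxel (illustrative).

    For every interior voxel the gray level q_tilde minimizing
    |q - q_tilde| + rho**alpha * lam**beta * sum_r eta * |q_tilde - q_r|**beta
    over the 26-neighborhood is selected by brute-force search over levels 0..255.
    Parameter values here are placeholders, not the ones used in the study.
    """
    vol = volume.astype(float)
    levels = np.arange(256, dtype=float)
    coef = (rho ** alpha) * (lam ** beta) * eta
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               for dz in (-1, 0, 1) if (dx, dy, dz) != (0, 0, 0)]
    for _ in range(n_iter):
        out = vol.copy()
        for ix in range(1, vol.shape[0] - 1):
            for iy in range(1, vol.shape[1] - 1):
                for iz in range(1, vol.shape[2] - 1):
                    neigh = np.array([vol[ix + dx, iy + dy, iz + dz] for dx, dy, dz in offsets])
                    energy = (np.abs(vol[ix, iy, iz] - levels)
                              + coef * (np.abs(levels[:, None] - neigh[None, :]) ** beta).sum(axis=1))
                    out[ix, iy, iz] = levels[np.argmin(energy)]
        vol = out
    return vol

# tiny synthetic example: a noisy 8 x 8 x 8 block of gray levels around 100
rng = np.random.default_rng(0)
noisy = np.clip(rng.normal(100, 10, (8, 8, 8)), 0, 255)
print(noisy.std(), ggmrf_smooth(noisy).std())    # smoothing reduces the spread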
A Linear Combination of Discrete Gaussians (LCDG)-based segmentation
A Bayesian framework was used to extract an initial vasculature, in which a linear combination of discrete Gaussians (LCDG)23 was used to estimate the marginal probability density of MRA voxel values for cerebral vessels and other cerebral tissues (Fig. 3). The LCDG model used has Cp positive and Cn negative components for cerebral vessels and other tissues and is given by the following equation:
$${p}_{w,{\rm{\Theta }}}(q)=\sum _{r=1}^{{C}_{p}}\,{w}_{p,r}\psi (q|{\theta }_{p,r})-\sum _{l=1}^{{C}_{n}}\,{w}_{n,l}\psi (q|{\theta }_{n,l})$$
where \({{\rm{\Phi }}}_{\theta }(q)\) is the cumulative Gaussian function with \(\theta =(\mu ,{\sigma }^{2})\) for the mean, μ, and variance, σ2, such that \(\psi (q|\theta )={{\rm{\Phi }}}_{\theta }(q+0.5)-{{\rm{\Phi }}}_{\theta }(q-0.5)\) for \(q=1,\ldots ,Q-2\), \(\psi (0|\theta )={{\rm{\Phi }}}_{\theta }(0.5)\), and \(\psi (Q-1|\theta )=1-{{\rm{\Phi }}}_{\theta }(Q-1.5)\). The non-negative weights of the LCDG model sum to one. For a detailed description of the LCDG model, see23,24. Parameters of the LCDG model (prior probability, number of Gaussian components, mean, and variance) were estimated using the modified Expectation Maximization (EM) algorithm23. Finally, the extraction of the blood vessels was performed based on the following Bayesian rule: \(P(v)p(q|v)\ge P(O)p(q|O)\), where P(v) is the prior probability of the cerebral blood vessels, and P(O) is the prior probability of the other cerebral tissues.
Empirical density normalization using the LCDG model.
Blood vessels' segmentation refinement
The initial segmentation of the vasculature may miss some of the small blood vessels. To handle this challenge, a novel 3-D local adaptive segmentation algorithm has been developed (Fig. 4). The algorithm processes the initially segmented vasculature to find more blood vessels, specifically the smaller ones. In this algorithm, each slice was divided into a set of connected components. A search window of adaptive size was then centered around every component in the set, where a new separation threshold was estimated as \(T=\frac{{\mu }_{b}+{\mu }_{o}}{2}\), where μb represents the average intensity of blood vessels and μo represents the average intensity of other cerebral tissues. A novel seed-generation refinement algorithm was developed and utilized to detect seeds within regions that have a high potential to contain smaller vessels missed in the initial segmentation. A 3-D region growing connected components algorithm was subsequently used to delineate the final, connected vasculature. Steps of the segmentation algorithm are presented in Algorithm 2.
Steps of Segmentation Stage.
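A simplified sketch of the local refinement idea (not the authors' implementation; the window margin, the synthetic volume, and the initial global threshold are placeholders) could look as follows in Python, using SciPy's connected-component labeling in place of the full seed-generation and region-growing steps.

import numpy as np
from scipy import ndimage

def refine_vessels(intensity, initial_mask, margin=5):
    """Simplified local adaptive refinement of an initial vessel mask.

    For every connected component of the initial segmentation, a window is opened
    around its bounding box and a local threshold T = (mu_vessel + mu_other) / 2 is
    recomputed from the window statistics; voxels above T are added as candidates,
    and only candidates connected to the initial vasculature are kept.
    """
    labels, _ = ndimage.label(initial_mask)
    refined = initial_mask.copy()
    for sl in ndimage.find_objects(labels):
        window = tuple(slice(max(s.start - margin, 0), s.stop + margin) for s in sl)
        local_int = intensity[window]
        local_mask = initial_mask[window]
        if local_mask.all() or not local_mask.any():
            continue
        T = 0.5 * (local_int[local_mask].mean() + local_int[~local_mask].mean())
        refined[window] |= local_int > T
    labels, _ = ndimage.label(refined)                  # crude stand-in for region growing
    keep = np.unique(labels[initial_mask])
    return np.isin(labels, keep[keep > 0])

# toy example: a bright tube in a noisy volume with a deliberately coarse initial mask
rng = np.random.default_rng(0)
vol = rng.normal(50, 5, (32, 32, 32))
vol[14:18, 14:18, :] += 60                              # synthetic "vessel"
initial = vol > 105                                     # strict global threshold misses voxels
print(initial.sum(), refine_vessels(vol, initial).sum())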
Cerebral vascular system feature extraction
Median vascular radius
For each subject, medians of vascular radii were obtained by estimating the distance map of the delineated vascular tree. Making use of these measurements, the cumulative distribution function (CDF) of the vascular radii was estimated as the cumulative distribution of the PDF. The CDF FX of a discrete random variable X is obtained as \(F(x)=P(X\le x)={\sum }_{t\le x}\,f(t)\). The CDF provides a probability estimate for the blood vessels that exist at or below a specific vascular diameter point. Each CDF value represents the average of the vascular diameters in the MRA volume (Fig. 5).
Brain tissue extraction from MRA scans.
Local Adaptive Segmentation.
Visualization of the distance map calculated for each blood vessel.
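For illustration, the distance-map-based radius statistics can be sketched in Python as follows; the voxel size and the toy vessel mask are placeholders, and the study's exact radius-extraction procedure may differ from simply reading the distance map over the whole mask.

import numpy as np
from scipy import ndimage

def radius_cdf(vessel_mask, voxel_size=0.5):
    """Empirical CDF of vascular radii estimated from the 3-D distance map.

    vessel_mask : boolean 3-D array of segmented vessels
    voxel_size  : isotropic voxel size in mm (placeholder matching the 0.5 mm slices)
    """
    dist = ndimage.distance_transform_edt(vessel_mask) * voxel_size
    radii = np.sort(dist[vessel_mask])
    cdf = np.arange(1, len(radii) + 1) / len(radii)     # F(x) = P(X <= x)
    return radii, cdf, float(np.median(radii))

# toy vasculature: two straight tubes of different thickness
mask = np.zeros((40, 40, 40), dtype=bool)
mask[10:13, 10:13, :] = True       # thin vessel
mask[25:31, 25:31, :] = True       # thicker vessel
radii, cdf, median_radius = radius_cdf(mask)
print(median_radius)               # the CDF saturates at the largest radius present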
Tortuosity of cerebral blood vessels
Tortuosity from imaging modalities can be measured either as the ratio of a vessel curve length over the line distance between the two ends19,25,26, or as the cumulative sum of angles between segment vectors normalized by vessel length (total curvature or mean curvature)26,27,28. Gaussian and mean curvatures are considered to be the most significant types of curvatures in surface theory29. Thus, in the proposed framework, tortuosity was measured from MRA data by calculating mean and Gaussian curvatures across the whole vasculature of each patient. Mean curvature equals \(({k}_{1}+{k}_{2})\)/2, where k1, k2 are the principal curvatures, and is an extrinsic measure of curvature that depends on the embedding. Gaussian curvature equals \({k}_{1}\times {k}_{2}\) and is an intrinsic property of the surface and does not depend on the embedding of the surface.
The surface of the segmented vascular tree was modeled as a triangulated mesh. Following the methodology proposed by Smedby et al.30, and generalizing to higher dimensions, we defined tortuosity as the integral of absolute curvature |K|. Total (or Gaussian) curvature K was estimated in the neighborhood of each mesh vertex using the algorithm of Chen and Schmitt31. Briefly, let vi be the coordinates of vertex i, ni the surface normal (a weighted average of normals to the triangles incident on i), and Ni a vertex neighborhood of i, specifically the one-ring neighborhood if it contains at least four neighbors, or the two-ring neighborhood otherwise. Then for each vertex j in Ni, let \({e}_{ij}={v}_{j}-{v}_{i}\), and \({t}_{ij}=\frac{{e}_{ij}-({n}_{i}\cdot {e}_{ij}){n}_{i}}{\parallel {e}_{ij}-({n}_{i}\cdot {e}_{ij}){n}_{i}\parallel }\); the finite-difference approximation of the normal curvature at vi in the direction tij is \({k}_{ij}=-\,\frac{{e}_{ij}\cdot ({n}_{j}-{n}_{i})}{\parallel {e}_{ij}{\parallel }^{2}}\). Euler's theorem states that the normal curvature along a tangent direction t is \(k(\theta )={k}_{1}\cos ^{2}\theta +{k}_{2}\sin ^{2}\theta \), where θ is the angle t makes with the first principal direction E1, and k1 and k2 are the principal curvatures, with \({k}_{1}\ge {k}_{2}\). Given a sample of tangent vectors and corresponding normal curvatures, the Chen-Schmitt algorithm estimates the principal directions and curvatures. The total curvature \(K={k}_{1}\times {k}_{2}\) can then be computed (Fig. 6).
Visualization of the calculation of curvatures. N is the normal vector, and k1, k2 are the principal curvatures (maximum and minimum normal curvatures).
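A simplified stand-in for this estimation step (not the authors' code) is a least-squares fit of the normal-curvature samples to the quadratic form implied by Euler's theorem, from which the principal, Gaussian, and mean curvatures follow; the sampled directions below are synthetic.

import numpy as np

def principal_curvatures(thetas, normal_curvatures):
    """Fit k(theta) = A cos^2 + B cos sin + C sin^2 by least squares and read off the
    principal curvatures as the eigenvalues of the associated 2x2 form. A simplified
    stand-in for the Chen-Schmitt estimation used in the paper.
    """
    thetas = np.asarray(thetas, dtype=float)
    k = np.asarray(normal_curvatures, dtype=float)
    M = np.column_stack([np.cos(thetas) ** 2,
                         np.cos(thetas) * np.sin(thetas),
                         np.sin(thetas) ** 2])
    A, B, C = np.linalg.lstsq(M, k, rcond=None)[0]
    k1, k2 = np.linalg.eigvalsh(np.array([[A, B / 2.0], [B / 2.0, C]]))[::-1]
    gaussian = k1 * k2           # intrinsic (total) curvature K
    mean = 0.5 * (k1 + k2)       # extrinsic mean curvature
    return k1, k2, gaussian, mean

# synthetic check: a surface patch with k1 = 2, k2 = 0.5 sampled at a few directions
theta = np.linspace(0, np.pi, 7, endpoint=False)
k_samples = 2.0 * np.cos(theta) ** 2 + 0.5 * np.sin(theta) ** 2
print(principal_curvatures(theta, k_samples))   # roughly (2.0, 0.5, 1.0, 1.25)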
Materials and procedure
Magnetic resonance angiography scans and blood pressure measurements used in this study were collected from 15 participants (Female = 7, Male = 8, Age = 49.2 ± 7.3 years) over a 700-day study period (on day 0 (t0) and on day 700 (t1)). The total number of data sets processed was 30. Data collection was approved by the Institutional Review Board at the University of Pittsburgh, and the research was performed in accordance with the relevant guidelines and regulations. Participants of ages 35–60 years were enrolled in the study with the following exclusion criteria: 1) general medical conditions: ischemic coronary artery disease, pregnancy, chronic liver disease, cancer (treatment < 12 months), diabetes mellitus (fasting blood glucose > 125 mg/dL), or chronic kidney disease (creatinine > 1.2 mg/dL); 2) neuropsychiatric conditions: multiple sclerosis, stroke, epilepsy, serious head injury, brain tumor, and major mental illness; 3) using prescription medications for hypertension and psychotropic drugs. Participants were selected to be pre-hypertensive with a systolic BP > 120 and <140 mmHg or a diastolic BP > 80 and <90 mmHg.
Initial screening via phone calls was made with each participant to ensure eligibility. All participants provided informed consent before any study procedures. Participants had to attend the lab three times within two weeks. During the first visit, participants provided their medical history, blood pressure readings, and physical measurements such as weight and height (Table 1). In the second visit, blood pressure readings were taken from the participants, followed by a 2-hour neuropsychological battery of tests. The third visit included a 1-hour MRI screening.
Table 1 Participant Demographics and Characteristics.
Blood pressure measurements were obtained using the auscultatory technique, with a cuff size appropriate to the patient's arm, after a 5-minute seated rest. Two measurements were taken, separated by at least 1 minute. This procedure was repeated on a second day, and the average of the four readings taken during both visits was used to compute the MAP value. Participants were invited to the follow-up assessment after approximately 2 years, when the blood pressure measurements and the MRA scans were obtained again. The average blood pressure readings on t0 were 122 ± 6.9 mmHg systolic and 82 ± 3.8 mmHg diastolic, while on t1 the averages were 118.9 ± 12.4 mmHg systolic and 79.9 ± 11.0 mmHg diastolic. Blood pressure measurements were comparable across the cohort over time, although individual patients' measurements changed or stayed the same temporally.
MRA scans were collected by a 3 T Trio TIM scanner with a 12-channel phased-array head coil. Each scan was composed of 3-D multi-slab high-resolution images with about 160 slices, a thickness of 0.5 mm, a resolution of 384 × 448, a flip angle of 15 degrees, a repetition time of 21 ms, and an echo time of 3.8 ms. MRA data were analyzed blinded to patients' blood pressure.
Segmentation and statistical analysis
The proposed automatic novel adaptive segmentation algorithm was evaluated using commonly used segmentation evaluation metrics: the Dice similarity coefficient (DSC), the absolute vessels volume difference (AVVD), sensitivity, and specificity. The first metric is the DSC, one of the most commonly used similarity metrics, which characterizes the agreement between the segmented (S) and the gold standard (G) regions based on the determination of the true positive (TP), true negative (TN), false negative (FN), and false positive (FP) values. The TP is defined as the number of positively labeled voxels that are correct; the FP is the number of positively labeled voxels that are incorrect; the TN is the number of negatively labeled voxels that are correct; and the FN is the number of negatively labeled voxels that are incorrect. These values are used to calculate the DSC as described in detail in32. The calculated DSC ranges from 0% to 100%, where 0% indicates complete dissimilarity and 100% indicates perfect similarity. To obtain the gold standard that was used in the segmentation evaluation process, an MRA expert delineated the brain vessels. The second evaluation metric is the AVVD, an area-based metric, which measures the volume difference (percentage) between the output of the segmentation framework, S, and the gold standard, G, as follows:
$${\rm{AVVD}}({\bf{G}},{\bf{S}})=\frac{|{\bf{G}}-{\bf{S}}|}{|{\bf{G}}|}$$
where |G − S| is the absolute difference between the number of voxels in G and S, |G| is the number of voxels in G. Moreover, both the sensitivity, \((Sens=\frac{TP}{TP+FN})\), and specificity, \((Spec=\frac{TN}{TN+FP})\) of the segmentation have been evaluated to measure both the true positive and true negative detection accuracy.
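For reference, the four evaluation metrics can be computed from a pair of binary masks in a few lines of Python (a straightforward transcription of the definitions above; the toy masks are placeholders).

import numpy as np

def segmentation_metrics(gold, seg):
    """DSC, sensitivity, specificity and AVVD for binary masks (G = gold, S = segmentation)."""
    gold = gold.astype(bool)
    seg = seg.astype(bool)
    tp = np.sum(gold & seg)
    fp = np.sum(~gold & seg)
    fn = np.sum(gold & ~seg)
    tn = np.sum(~gold & ~seg)
    dsc = 2.0 * tp / (2.0 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    avvd = abs(gold.sum() - seg.sum()) / gold.sum()
    return dsc, sensitivity, specificity, avvd

# toy example: a small 3-D mask and a slightly shifted copy of it
gold = np.zeros((20, 20, 20), dtype=bool); gold[5:12, 5:12, 5:12] = True
seg = np.zeros_like(gold);                 seg[6:13, 6:13, 6:13] = True
print(segmentation_metrics(gold, seg))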
The proposed automatic novel adaptive segmentation algorithm obtained a DSC of ~92.23%, a sensitivity of ~94.82%, a specificity of ~99.00%, and an AVVD of ~10.00% in delineating cerebral blood vessels compared to the manually segmented ground truth. These results demonstrate the high accuracy and efficacy of this algorithm. Figure 7 shows an output instance of the segmentation algorithm. To highlight the accuracy enhancement of the proposed approach over existing methods, a comparison to the global statistical approach24 was conducted and the results are shown in Table 2. Figure 8 shows sample 3-D segmentation results for three MRA subjects along their maximum intensity projection (MIP). These qualitative results demonstrate how the proposed segmentation approach is capable of obtaining fine details of the brain vasculature.
Sample output of the local adaptive segmentation algorithm: (a) Original (raw) image, (b) After bias-correction, (c) After GGMRF-enhancement, (d) Distance map and iso-surfaces generated, (e) Subsurfaces-based extraction of brain tissues, (f) Final delineated cerebrovasculature (h) Initial LCDG global segmentation, (i) Results after applying the proposed local adaptive segmentation from the same plane, (j) and preceding and succeeding planes, and 3-D visualization of results using a growing tree model (k–m).
Table 2 Result of comparing the proposed segmentation algorithm and the Global Statistical Based approach (GSB)24 in terms of the Sensitivity, Specificity, DSC and AVVD.
Segmentation results for three MRA subjects (a–c). The maximum intensity projection (MIP) (first-row); 3D segmentation obtained by the proposed segmentation approach (second-row).
R software version 3.2 was utilized to perform statistical analysis. The add-on package lme433 was used to study the potential correlation between blood pressure measurements and MRA data. MAP dependence upon features of the cerebral vasculature was tested using a mixed-effects linear model with fixed effects for the median vascular radius (representing blood vessel diameter) and the averages and medians of both the Gaussian and mean curvatures of cerebral blood vessels (representing vessel tortuosity). A random intercept per patient was also included in the model. This analysis defined MAP as \({\rm{MAP}}=(2\,\ast \,{\rm{Diastolic}}\,{\rm{BP}}+{\rm{Systolic}}\,{\rm{BP}})/3\). For each feature quantified, we tested our framework both globally (over the whole brain) and locally (the brain was divided into an upper section, above the circle of Willis, and a lower section, at the circle of Willis and below). Correlation analysis was performed using MATLAB R2017a software.
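The analysis itself was run in R with lme4; as a rough Python analogue (for illustration only, with hypothetical column names and made-up numbers standing in for the study data), the same kind of random-intercept model can be fit with statsmodels.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per patient per time point; the column
# names and numbers are made up for illustration, not taken from the study.
df = pd.DataFrame({
    "patient":          [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    "map_mmhg":         [95.3, 92.6, 98.7, 101.2, 93.4, 96.0, 99.8, 97.1, 94.5, 92.0],
    "median_radius":    [1.21, 1.26, 1.10, 1.05, 1.24, 1.18, 1.08, 1.12, 1.22, 1.27],
    "median_mean_curv": [0.42, 0.40, 0.47, 0.49, 0.41, 0.44, 0.48, 0.46, 0.41, 0.39],
})

# Mixed-effects linear model: fixed effects for the vascular features and a random
# intercept per patient, mirroring the structure of the lme4 model described above.
model = smf.mixedlm("map_mmhg ~ median_radius + median_mean_curv",
                    df, groups=df["patient"])
print(model.fit().summary())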
Results of statistical experiments demonstrated a statistically significant (p-value < 0.05) inverse correlation between MAP and alterations in the diameters of blood vessels (represented by the median vascular radius) globally over the whole brain (Table 3) and locally at the lower section of the brain (Table 4). Additionally, the statistical analysis demonstrated that MAP was significantly correlated with the median of mean curvature, the median of Gaussian curvature, and the average of mean curvature (p-value < 0.05) globally (Table 3) and locally at the lower section of the brain (Table 4). In the upper section of the brain (above the circle of Willis), the p-values for the median of mean curvature, median of Gaussian curvature, average of mean curvature and average of Gaussian curvature were 0.068, 0.063, 0.060, and 0.027 respectively.
Table 3 MAP Response on Change of Tortuosity and Diameter of Blood Vessels Globally.
Table 4 MAP Response on Change of Tortuosity and Diameter of Blood Vessels Locally.
Discussion and Limitations
It is common for patients with hypertension to be asymptomatic, sometimes even in advanced stages. Even a measurement of high blood pressure is often dismissed as a temporary result of stress or other factors rather than chronic hypertension. Predicting the potential for developing hypertension at an early stage may help slow the progression of the disease through proactive and preventive lifestyles recommended by clinicians. Timely information regarding vascular health would potentially enhance the quality of life for patients and their families and reduce health care costs. In this study we presented a framework that would help clinicians predict elevated blood pressure before its onset. The proposed automatic local adaptive segmentation algorithm was able to rapidly delineate large and small cerebral blood vessels with high degrees of specificity and sensitivity. Importantly, the 3-D segmentation algorithm is fully automatic and applicable to both healthy and non-healthy vessels. Previous methodologies published in the literature are suitable only for healthy vessels due to inherent assumptions, such as linearity and circular cross-sections, that do not hold for pathological vessels34. Studies that developed automatic cerebrovasculature segmentation algorithms have been reported35,36,37,38. For example, one study proposed an architecture based on a deep convolutional neural network (CNN) to automatically segment cerebral blood vessels from TOF-MRA datasets of healthy subjects by training on manually annotated data. Their framework was able to delineate cerebral blood vessels with a DSC ranging from 0.764 to 0.78635. A deep learning approach called DeepVesselNet was proposed in36, where a 3-D CNN architecture was employed to segment blood vessels along with other tasks such as vessel center-line prediction and bifurcation detection. In their methodology, they used cross-hair filters (one of the components of DeepVesselNet) built from three intersecting 2-D filters to help avoid the memory and speed problems of traditional 3-D networks while still taking advantage of the 3-D information in volumetric data. Their experiments showed that their method performed well compared to 3-D filters while significantly improving speed and memory consumption. Another MRA-based vasculature segmentation method was proposed in37, where background subtraction and vessel reservation were first performed by applying volume projection, 2-D segmentation, and back-projection operations. Then, a stochastic expectation maximization algorithm was utilized to estimate the PDF of the remaining vessel voxels, which were assumed to be a mixture of one Rayleigh and two Gaussian distributions. Their method classified image voxels into three classes: background, middle region, and vascular structure. Subsequently, the K-means method, which is based on the gradient of the remaining vessels, was utilized to detect true positives around vessel boundaries effectively. The methodology could achieve accurate segmentation in regions of low contrast. However, one disadvantage of their method was the computing time consumed by the K-means method to determine the appropriate gradient value. In contrast, the proposed segmentation algorithm utilizes an adaptive thresholding methodology and overcomes the limitations associated with MRA images such as biasing, noise, or resolution problems, which enables efficient extraction of small and large cerebral blood vessels.
Importantly, the automatic nature of this framework ensures that there will be no intra- or inter-observer variability, because no human interaction is required.
This study demonstrated that changes of cerebral blood vessel diameters were inversely correlated with MAP globally (over the entire brain) as well as at and below the circle of Willis (lower section of the brain). Cerebral blood vessels below the circle of Willis are typically larger compared to blood vessels in the upper compartment of the brain. While the segmentation algorithm can delineate these smaller blood vessels, changes in the small blood vessel diameters were not statistically significant. This is potentially due to the smaller variations in diameters of small blood vessels, which are limited by the resolution of the MRA images. Our proposed approach used the CDF, which is a measure of diameter over the entire volume of the brain rather than the diameter of a vessel at a specific point. Thus, the CDF was used to represent the median radius of the vessels globally, simplifying temporal comparisons and facilitating analysis. As an example, Fig. 9 presents the temporal alterations in radii of blood vessels and CDFs for two different subjects A and B. MAP values for subject A were decreasing from t0 to t1, while MAP values for subject B were increasing from t0 to t1. As shown in Fig. 9, the CDF for subject A saturated (CDF reaches 1) at a vascular radius of smaller value on t0 (higher MAP) compared to t1. On the contrary, the CDF for subject B saturated at a vascular radius of smaller value on t1 (higher MAP) compared to t0 (Table 5). The given results supported the efficacy of the presented framework to distinguish intra-patient temporal alterations in cerebrovasculature and demonstrated an inverse relationship between MAP and vascular diameters. Importantly, the framework recognized temporal alterations in the cerebral vascular system in response to MAP in non-hypertensive patients (MAP < 120 mmHg). Thus, the presented framework would help ascertain risk of cerebral vascular events or systemic hypertension before its onset. Figure 10 illustrates the median vascular radii of a hypertensive and a normotensive subject and their corresponding CDFs. The CDF of the hypertensive subject saturated at a vascular radius of smaller value (solid line), compared to the normotensive subject (dotted line). Additionally, the median vascular radius of the hypertensive subject was 0.46 mm smaller compared to the normotensive subject, demonstrating the capability of the framework to discern inter-patient variabilities.
Figure 9. CDF estimations and the corresponding medians of vascular radii at t0 and t1 for subjects A and B. (a) CDF saturated at a vascular radius of smaller value at t0 compared to t1 for subject A. (b) CDF saturated at a vascular radius of smaller value for subject B at t1 compared to t0.
Table 5 Measurements of Blood Pressure of Subjects A, B.
Figure 10. CDF estimations and the corresponding median vascular radii for a hypertensive subject and a normotensive subject. Elevated blood pressure corresponded to CDF saturation at a smaller vascular radius value. These results support the efficacy of the presented framework.
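As an illustration of the CDF-based comparison described above, the following sketch computes the empirical CDF and median of vessel radii for two time points; the radii arrays are placeholders standing in for radii extracted from the segmented vasculature.

# Sketch: empirical CDF of vessel radii and median-radius comparison between
# two time points t0 and t1 (radii arrays are assumed inputs, in mm).
import numpy as np

def empirical_cdf(radii):
    r = np.sort(np.asarray(radii, dtype=float))
    p = np.arange(1, r.size + 1) / r.size   # CDF value at each sorted radius
    return r, p

# Hypothetical example: radii extracted from the segmented vasculature at t0, t1
radii_t0 = np.random.default_rng(0).gamma(shape=2.0, scale=0.6, size=5000)
radii_t1 = radii_t0 * 1.05                  # slightly dilated vessels at t1

r0, p0 = empirical_cdf(radii_t0)
r1, p1 = empirical_cdf(radii_t1)
print("median radius t0: %.3f mm, t1: %.3f mm"
      % (np.median(radii_t0), np.median(radii_t1)))
# A CDF that saturates at a smaller radius indicates globally narrower vessels,
# which the framework associates with higher MAP.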
In the proposed framework, blood vessel tortuosity was measured by calculating mean and Gaussian curvatures. Algorithms proposed in the literature have typically used a 2-D methodology for estimating curvatures. In contrast, our framework presents a novel 3-D methodology that estimates curvatures on a 3-D mesh modeling the vasculature obtained from segmentation. This three-dimensional approach results in a more accurate representation and calculation of tortuosity in the cerebral volume. Our study demonstrated that the change in cerebrovascular tortuosity was strongly correlated to MAP (Fig. 11) globally and in the lower section of the brain. In the upper section of the brain, the correlation values were not statistically significant due to the limited clinical sample size and the smaller tortuosity changes observed in small-diameter blood vessels. However, the correlation was trending towards statistical significance, and additional data collection is currently underway. In summary, the results of this study support the previously published hypothesis that cerebrovascular tortuosity changes may precede hypertension6,7,14,15,19,20.
Figure 11. A sample of two patients illustrating the correlation between tortuosity and mean arterial pressure (MAP). Patient 1 shows a decrease in tortuosity index from t0 to t1, corresponding to a decrease in MAP for this patient. Patient 2 shows a slight increase from t0 to t1, corresponding to an increase in MAP for this patient. Hence, these results show that MAP is directly correlated to tortuosity of cerebral blood vessels.
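The following is a minimal sketch of one way to estimate curvature on a triangulated 3-D vessel-surface mesh, using the angle-deficit (Gauss-Bonnet) approximation of Gaussian curvature aggregated into a simple tortuosity index; it illustrates the mesh-based idea only and is not the exact curvature formulation used in the framework.

# Sketch: angle-deficit estimate of Gaussian curvature on a triangulated
# vessel-surface mesh, aggregated into a simple tortuosity index.
import numpy as np

def angle_deficit_curvature(vertices, faces):
    """vertices: (N,3) float array; faces: (M,3) int array of triangle indices."""
    deficits = np.full(len(vertices), 2.0 * np.pi)
    for tri in faces:
        pts = vertices[tri]
        for k in range(3):
            a = pts[(k + 1) % 3] - pts[k]
            b = pts[(k + 2) % 3] - pts[k]
            cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
            deficits[tri[k]] -= np.arccos(np.clip(cosang, -1.0, 1.0))
    return deficits          # large |deficit| means the surface bends strongly

def tortuosity_index(vertices, faces):
    return float(np.mean(np.abs(angle_deficit_curvature(vertices, faces))))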
A limitation of our study is that we had temporal MRA data from a small sample size of 15 patients. To the best of our knowledge, there are no freely available standard databases in this field. Despite the small sample size, the observed cerebrovascular changes were sufficient to reach statistical significance. The efficacy of the proposed framework in detecting and quantifying cerebrovascular structural changes could potentially help clinicians formulate appropriate medical treatment plans to mitigate the risk of adverse events. Additionally, the framework may enable better follow-up of patients and help test the effectiveness of the treatment regimen in slowing the progression of cerebrovascular changes. However, despite the accuracy of the framework in quantifying cerebrovascular changes from MRA, our approach may be limited to patients with high risk of hypertension or adverse events due to cost considerations. While MRA is expensive, the cost of hypertension medication alone is $2000/year per patient, and the hospitalization cost for a hemorrhagic stroke exceeds $32,000 per patient39,40. Thus, MRA screening may be cost-effective in the long term, especially in patients with high risk of developing hypertension. Access to MRA, while limited in remote rural areas, is readily available in major care centers and hospitals that serve rural areas in the US.
Materials, data, and associated protocols will be made available to readers after the manuscript is accepted.
Centers for Disease Control and Prevention. Underlying cause of death: Hypertension. Available via the National Center for Health Statistics, http://wonder.cdc.gov/ucd-icd10.html, accessed 2018-01-18 (2015).
Soler, D., Cox, T., Bullock, P., Calver, D. & Robinson, R. Diagnosis and management of benign intracranial hypertension. Archives of disease in childhood 78, 89–94 (1998).
Iadecola, C. & Davisson, R. L. Hypertension and cerebrovascular dysfunction. Cell metabolism 7, 476–484 (2008).
Barnes, J. N. et al. Aortic hemodynamics and white matter hyperintensities in normotensive postmenopausal women. Journal of Neurology 264, 938–945 (2017).
Launer, L. J. et al. Vascular factors and multiple measures of early brain health: Cardia brain mri study. PloS one 10, e0122138 (2015).
Kang, C.-K. et al. Hypertension correlates with lenticulostriate arteries visualized by 7T magnetic resonance angiography. Hypertension 54, 1050–1056 (2009).
Chen, Y.-C., Li, M.-H., Li, Y.-H. & Qiao, R.-H. Analysis of correlation between the number of lenticulostriate arteries and hypertension based on high-resolution MR angiography findings. American Journal of Neuroradiology 32, 1899–1903 (2011).
Gebru, Y. A. A novel MRA-based framework for the detection of changes in cerebrovascular blood pressure. (2017).
Kandil, H. et al. A Novel MRA-Based Framework For Detecting Correlation Between Cerebrovascular Changes and Mean Arterial Pressure. In 2018 IEEE International Conference on Imaging Systems and Techniques (IST) 1–6 (IEEE, 2018).
Intengan, H. D. & Schiffrin, E. L. Structure and mechanical properties of resistance arteries in hypertension: role of adhesion molecules and extracellular matrix determinants. Hypertension 36, 312–318 (2000).
Hayashi, K., Makino, A. & Kakoi, D. Remodeling of arterial wall: Response to changes in both blood flow and blood pressure. Journal of the mechanical behavior of biomedical materials 77, 475–484 (2018).
Ussavarungsi, K. et al. The significance of pulmonary artery size in pulmonary hypertension. Diseases 2, 243–259 (2014).
Lange, T. J. et al. Increased pulmonary artery diameter on chest computed tomography can predict borderline pulmonary hypertension. Pulmonary circulation 3, 363–368 (2013).
Han, H.-C. Twisted blood vessels: symptoms, etiology and biomechanical mechanisms. Journal of vascular research 49, 185–197 (2012).
Abdalla, M., Hunter, A. & Al-Diri, B. Quantifying retinal blood vessels' tortuosity. In Science and Information Conference (SAI), 2015, 687–693 (IEEE, 2015).
Hart, W. E., Goldbaum, M., Cote, B., Kube, P. & Nelson, M. R. Automated measurement of retinal vascular tortuosity. In Proceedings of the AMIA Annual Fall Symposium, 459 (American Medical Informatics Association, 1997).
Trucco, E., Azegrouz, H. & Dhillon, B. Modeling the tortuosity of retinal vessels: Does caliber play a role? IEEE Transactions on Biomedical Engineering 57, 2239–2247 (2010).
Annunziata, R., Kheirkhah, A., Aggarwal, S., Hamrah, P. & Trucco, E. A fully automated tortuosity quantification system with application to corneal nerve fibres in confocal microscopy images. Medical image analysis 32, 216–232 (2016).
Jakob, M. et al. Tortuosity of coronary arteries in chronic pressure and volume overload. Catheterization and Cardiovascular Interventions 38, 25–31 (1996).
Hiroki, M., Miyashita, K. & Oda, M. Tortuosity of the white matter medullary arterioles is related to the severity of hypertension. Cerebrovascular Diseases 13, 242–250 (2002).
Tustison, N. J. et al. N4ITK: Improved N3 bias correction. IEEE Transactions on Medical Imaging 29, 1310–1320 (2010).
Bouman, C. & Sauer, K. A generalized Gaussian image model for edge-preserving MAP estimation. IEEE Transactions on Image Processing 2, 296–310 (1993).
El-Baz, A. & Gimel'farb, G. EM-based approximation of empirical distributions with linear combinations of discrete Gaussians. In Image Processing, 2007. ICIP 2007. IEEE International Conference on, vol. 4, 300–373 (IEEE, 2007).
El-Baz, A. et al. Precise segmentation of 3-D magnetic resonance angiography. IEEE Transactions on Biomedical Engineering 59, 2019–2029 (2012).
Hoi, Y. et al. In vivo assessment of rapid cerebrovascular morphological adaptation following acute blood flow increase. Journal of neurosurgery 109, 1141–1147 (2008).
Wolf, Y. G. et al. Impact of aortoiliac tortuosity on endovascular repair of abdominal aortic aneurysms: evaluation of 3-D computer-based assessment. Journal of Vascular Surgery 34, 594–599 (2001).
Koreen, S. et al. Evaluation of a computer-based system for plus disease diagnosis in retinopathy of prematurity. Ophthalmology 114, e59–e67 (2007).
Onkaew, D., Turior, R., Uyyanonvara, B., Akinori, N. & Sinthanayothin, C. Automatic retinal vessel tortuosity measurement using curvature of improved chain code. In Electrical, Control and Computer Engineering (INECCE), 2011 International Conference on, 183–186 (IEEE, 2011).
Abbena, E., Salamon, S. & Gray, A. Modern differential geometry of curves and surfaces with Mathematica (CRC press, 2017).
Smedby, Ö. et al. Two-dimensional tortuosity of the superficial femoral artery in early atherosclerosis. Journal of vascular research 30, 181–191 (1993).
Chen, X. & Schmitt, F. Intrinsic surface properties from surface triangulation. In European Conference on Computer Vision, 739–743 (Springer, 1992).
Soliman, A., Khalifa, F., Alansary, A., Gimel'farb, G. & El-Baz, A. Performance evaluation of an automatic MGRF-based lung segmentation approach. In AIP Conference Proceedings, vol. 1559, 323–332 (AIP, 2013).
Bates, D., Mächler, M., Bolker, B. & Walker, S. Fitting linear mixed-effects models using lme4. arXiv preprint arXiv:1406.5823 (2014).
Moccia, S., De Momi, E., El Hadji, S. & Mattos, L. S. Blood vessel segmentation algorithms—review of methods, datasets and evaluation metrics. Computer methods and programs in biomedicine 158, 71–91 (2018).
Phellan, R., Peixinho, A., Falcão, A. & Forkert, N. D. Vascular segmentation in TOF MRA images of the brain using a deep convolutional neural network. In Intravascular Imaging and Computer Assisted Stenting, and Large-Scale Annotation of Biomedical Data and Expert Label Synthesis, 39–46 (Springer, 2017).
Tetteh, G. et al. DeepVesselNet: Vessel segmentation, centerline prediction, and bifurcation detection in 3-D angiographic volumes. arXiv preprint arXiv:1803.09340 (2018).
Zhao, S. et al. Vascular extraction using MRA statistics and gradient information. Mathematical Problems in Engineering 2018 (2018).
Kandil, H. et al. A novel MRA framework based on integrated global and local analysis for accurate segmentation of the cerebral vascular system. In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018) 1365–1368 (IEEE, 2018).
Kirkland, E. B. et al. Trends in healthcare expenditures among US adults with hypertension: National estimates, 2003–2014. Journal of the American Heart Association 7, e008731 (2018).
Wang, G. et al. Costs of hospitalization for stroke patients aged 18–64 years in the United States. Journal of Stroke and Cerebrovascular Diseases 23, 861–868 (2014).
Bioimaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY, 40292, USA
Heba Kandil, Ahmed Soliman, Ali Mahmoud, Ahmed Shalaby, Robert Keynton, Guruprasad Giridharan & Ayman El-Baz
Computer Engineering and Computer Science Department, University of Louisville, Louisville, KY, USA
Heba Kandil & Adel Elmaghraby
Faculty of Computer Science and Information, Information Technology Department, Mansoura University, Mansoura, 35516, Egypt
Heba Kandil
Electrical and Computer Engineering Department, University of Abu Dhabi, Abu Dhabi, UAE
Mohammed Ghazal
Ahmed Soliman
Ali Mahmoud
Ahmed Shalaby
Robert Keynton
Adel Elmaghraby
Guruprasad Giridharan
Ayman El-Baz
H. Kandil, A. Soliman and A. El-Baz participated in the problem analysis and methodology design. M. Ghazal and R. Keynton provided all financial support to conduct the experiments. A. Mahmoud and A. Shalaby provided technical support. G. Giridharan provided advising and technical support. A. El-Baz and A. Elmaghraby provided mentorship and advising.
Correspondence to Ayman El-Baz.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Kandil, H., Soliman, A., Ghazal, M. et al. A Novel Framework for Early Detection of Hypertension using Magnetic Resonance Angiography. Sci Rep 9, 11105 (2019). https://doi.org/10.1038/s41598-019-47368-1
|
CommonCrawl
|
Porosities, velocities, and densities of rocks (from Problems in Exploration Seismology and their Solutions, https://wiki.seg.org/index.php?title=Porosities,_velocities,_and_densities_of_rocks&oldid=141084, CC BY-SA). Density is mass per unit volume. The bulk density of a rock is ρ_B = W_G/V_B, where W_G is the weight of the grains (sedimentary rocks) or crystals (igneous and metamorphic rocks) plus any natural cements, and V_B is the total volume of the grains or crystals plus the void (pore) space. When a porous rock is saturated with a fluid, its density is ρ = φρ_f + (1 − φ)ρ_m, where φ is the porosity, ρ_f the fluid density (about 1.03 g/cm³ for salt water) and ρ_m the matrix density. Taking sandstone as grains of quartz, limestone as grains of calcite, and shale as equal parts kaolinite and muscovite, the relevant mineral densities are quartz 2.68, calcite 2.71, kaolinite 2.60, and muscovite 2.83 g/cm³ (shale minerals about 2.71 g/cm³).
Gardner et al. (1974) plotted the log of velocity against the log of density for sedimentary rocks and obtained the empirical relation known as Gardner's rule, ρ = aV^(1/4), with ρ in g/cm³ and a = 0.31 when V is in m/s (a = 0.23 when V is in ft/s). Solving for velocity gives V = (ρ/0.31)^4 × 10^(-3) = 0.11 ρ^4 km/s. The rule is valid for the major sedimentary rock types but not for evaporites or carbonaceous rocks (coal, lignite). Typical values:
Rock | Density range (g/cm³) | Density av. | Mineral density | Porosity max/av./min
Ss | 2.00–2.60 | 2.35 | 2.68 | 41%/20%/5%
Ls | 2.20–2.75 | 2.55 | 2.71 | 30%/10%/0%
Sh | 1.90–2.70 | 2.40 | 2.72 | 48%/19%/0%
More broadly, rocks range from about 1600 kg/m³ (sediments) to 3500 kg/m³ (gabbro), while ice and water are about 920 and 1000 kg/m³. Peridotite (the rock in which naturally occurring diamonds are found) and gabbro, at roughly 3.0–3.4 g/cm³, are among the densest common rocks, and many meteorites are denser still because of their iron-nickel content, with iron meteorites at 7–8 g/cm³. Crushed rock runs roughly 1.35–1.43 tons per cubic yard loose in the truck and about 1.85 tons per cubic yard placed. Conversions for a density given in lb/in³: multiply by 1728 for lb/ft³, by 27.68 for g/cm³, and by 27679.9 for kg/m³.
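A small sketch of the two relations above (Gardner's rule and the porosity-density mixing law), with illustrative inputs:

# Sketch: Gardner's rule and the porosity-density relation (ρ in g/cm^3,
# V in m/s, a = 0.31).
def gardner_density(v_mps, a=0.31):
    return a * v_mps ** 0.25            # rho = a * V^(1/4)

def gardner_velocity(rho, a=0.31):
    return (rho / a) ** 4               # V in m/s

def bulk_density(phi, rho_fluid=1.03, rho_matrix=2.68):
    return phi * rho_fluid + (1.0 - phi) * rho_matrix

print(gardner_density(3000.0))          # ~2.29 g/cm^3 for V = 3000 m/s
print(gardner_velocity(2.35) / 1000)    # ~3.3 km/s for a 2.35 g/cm^3 sandstone
print(bulk_density(0.20))               # quartz matrix with 20% salt-water porosity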
|
CommonCrawl
|
Total nitrogen estimation in agricultural soils via aerial multispectral imaging and LIBS
Md Abir Hossen1,
Prasoon K Diwakar2 &
Shankarachary Ragi1
Lasers, LEDs and light sources
Measuring soil health indicators (SHIs), particularly soil total nitrogen (TN), is an important and challenging task that affects farmers' decisions on timing, placement, and quantity of fertilizers applied in the farms. Most existing methods to measure SHIs are in-lab wet chemistry or spectroscopy-based methods, which require significant human input and effort, time-consuming, costly, and are low-throughput in nature. To address this challenge, we develop an artificial intelligence (AI)-driven near real-time unmanned aerial vehicle (UAV)-based multispectral sensing solution (UMS) to estimate soil TN in an agricultural farm. TN is an important macro-nutrient or SHI that directly affects the crop health. Accurate prediction of soil TN can significantly increase crop yield through informed decision making on the timing of seed planting, and fertilizer quantity and timing. The ground-truth data required to train the AI approaches is generated via laser-induced breakdown spectroscopy (LIBS), which can be readily used to characterize soil samples, providing rapid chemical analysis of the samples and their constituents (e.g., nitrogen, potassium, phosphorus, calcium). Although LIBS was previously applied for soil nutrient detection, there is no existing study on the integration of LIBS with UAV multispectral imaging and AI. We train two machine learning (ML) models including multi-layer perceptron regression and support vector regression to predict the soil nitrogen using a suite of data classes including multispectral characteristics of the soil and crops in red (R), near-infrared, and green (G) spectral bands, computed vegetation indices (NDVI), and environmental variables including air temperature and relative humidity (RH). To generate the ground-truth data or the training data for the machine learning models, we determine the N spectrum of the soil samples (collected from a farm) using LIBS and develop a calibration model using the correlation between actual TN of the soil samples and the maximum intensity of N spectrum. In addition, we extract the features from the multispectral images captured while the UAV follows an autonomous flight plan, at different growth stages of the crops. The ML model's performance is tested on a fixed configuration space for the hyper-parameters using various hyper-parameter optimization techniques at three different wavelengths of the N spectrum.
Soil health indicators are a composite set of measurable physical, chemical and biological properties which can be used to determine soil health status. Among the chemical indicators, we particularly focus on nitrogen (N) because N is the most limiting nutrient in many of the world's agricultural areas1. Insufficient use of N causes economic loss; in contrast, excessive use of N wastes fertilizer, causes nitrate pollution, and increases costs2,3. Nitrogen treatment can account for up to 30% of the total production cost4.
A chlorophyll meter (CM) measures the chlorophyll content of crops to estimate their N nutrition status. In recent years, the use of CMs has increased among researchers and farmers5,6. For instance, N application rates for corn were determined using the adjusted \(R^2\) of the relationship between nitrogen rate difference (ND) and CM readings7. However, CM-based methods fail to capture the spatial variability that is often present within the field. For N management, determination of spatial patterns is necessary but requires the collection and analysis of a large number of samples, which is labor-intensive and time-consuming2,6.
Satellite-based remote sensing is one alternative to ground-based measurements. Satellite-based techniques utilize images at the spectral level for crop growth monitoring and real-time management8,9,10. For instance, vegetation indices (VIs), evaluated using data obtained from satellite-based multispectral sensors, have been used to detect N stress at the V4–V7 (4–7 leaves with visible leaf collar) stages11,12,13. However, satellite-based sensing suffers from lower spatial and temporal resolution, and sensing disruption may occur during image acquisition in some areas because of cloud cover and/or sprinkler irrigation14. Farmers' adoption of such systems is still limited. Additionally, the high cost of obtaining these images for relatively small areas is a significant drawback15. Multispectral cameras mounted on unmanned aerial vehicles (UAVs) have enormous potential to resolve this problem. UAVs can be deployed rapidly and frequently for image acquisition, resulting in reduced costs and greater flexibility in terms of data resolution and mission timing16,17,18. For instance, a variable-rate N fertilization map was created using hyperspectral airborne images19, ground-sensor measurements were compared with hyperspectral images to determine the N sufficiency index17, and N side-dress was estimated using NDVI computed from aerial imaging20. However, radiometric and geometric calibrations are needed for the UAVs' on-board miniaturized electro-optical sensors to obtain quantitative results and provide precise georeferencing21. UAVs also fail to perform on-board image mosaicking due to limited computational resources.
Laser-induced breakdown spectroscopy (LIBS) is an analytical method for qualitative and quantitative elemental detection. LIBS can be readily applied to soil samples, providing rapid chemical analysis of soil samples and their constituents (e.g., nitrogen, potassium, phosphorus, calcium). The combination of an autonomous UAV, LIBS, and machine learning can be used to achieve in-field measurement, which provides instant results for deficient-nutrient analysis and fertilization planning. With appropriate calibration, LIBS analysis can provide quantitative measurement of most elements in soil, including carbon, nitrogen, potassium, sulfur, and phosphorus22,23. There have been some applications of standalone LIBS systems in precision agriculture22,23,24,25. However, there has been no detailed research on LIBS in combination with ML and UAVs. Some studies have found it challenging to measure nitrogen using LIBS due to environmental factors; Earth's atmosphere is almost 80% nitrogen, which will interfere with the sample measurement result since the soil is less than 1% nitrogen. Testing in a vacuum or in low-pressure conditions has been suggested to improve measurement accuracy23. In this study, we conducted LIBS analysis on soil samples under a normal atmosphere for observation. Low laser pulse energies were used to minimize the breakdown of air and thereby minimize the influence of atmospheric nitrogen.
The purpose of the present study is to develop a machine learning (ML)-based predictive model to estimate the TN of soil using crop and soil spectral characteristics measured from multispectral images captured from a UAV, together with LIBS. Specifically, we train a multi-layer perceptron regression (MLP-R) and a support vector regression (SVR) model to predict TN in soil. We use root mean square error (RMSE) and computational time (CT) as performance metrics for these predictive models. To reduce the RMSE and lower the CT of the machine learning models, we perform hyper-parameter optimization (HPO). The HPO tuning process depends on the ML model used for prediction26. The traditional way to tune hyper-parameters is through manual testing, but this requires a deep understanding of the ML models27. Manual tuning is also ineffective for many problems due to the large number of hyper-parameters, model complexity, time-consuming model evaluations, and non-linear hyper-parameter interactions. Several HPO techniques28 have been used for different applications, such as grid search (GS), random search (RS), Bayesian optimization, genetic algorithms (GA), and particle swarm optimization. In this study, we implement GS, RS, and GA for hyper-parameter optimization.
An aerial survey was carried out with a Mavic 2 Pro UAV. We obtained multispectral images using the Sentera high-precision NDVI single sensor, which was mounted on the UAV (Fig. 1a). The sensor has a 1.2 MP CMOS detector with a 60\(^\circ\) horizontal FOV and a 47\(^\circ\) vertical FOV and works with two wide spectral bands: red (625 nm CWL × 100 nm width) and NIR (850 nm CWL × 40 nm width), with a pixel count of 1248 horizontal/950 vertical. The green band is typically unused. The sensor has a total weight of 30 g and a size of 25.4 × 33.8 × 37.3 mm.
Data collection: multispectral images and soil samples
The farm used for data collection is located in Sturgis, South Dakota, USA (\({44}^{\circ }\ 25'\ 27''N;\ {103}^{\circ }\ 22'\ 34''W\)). We created an autonomous UAV flight plan with minimal passes, similar to a raster scan pattern, using the coordinates of the four corners of the field (44.25.39 N, 103.22.60 W; 44.25.28 N, 103.22.60 W; 44.25.39 N, 103.23.16 W; 44.25.39 N, 103.23.16 W); a minimal waypoint-generation sketch is given after the experimental setup list below. We captured 865 multispectral images at each of the growth stages (V4, V8, and V12) and used Sentera image stitching software to mosaic the images. The multispectral images were captured while the UAV followed the raster scan pattern using the following parameters and experimental setup,
Flight Type: QuickTile
Overlap Setting: 75%
Altitude: 60.96 m
Speed: 6.71 m/s
Experimental setup:
Desired resolution: The Ground Sample Distance (GSD)/pixel of the multispectral camera was set to 0.05 m for a 60.96 m altitude.
Cloud cover and time of day: The UAV was flown when the sun was highest in the sky for more accurate data. Data is best when sky conditions are consistent, ideally 100% sunny or 100% cloudy. Flying with a mix of sun and clouds causes inconsistency in brightness and contrast while stitching images. Therefore, the stitched image will provide an inaccurate NDVI value.
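The following is a minimal sketch of generating raster-scan (lawnmower) waypoints between two field edges; the corner coordinates and number of passes below are illustrative assumptions, and the actual mission in this study was built in the flight-planning software.

# Sketch: back-and-forth (raster) waypoints over a rectangular field.
import numpy as np

def raster_waypoints(lat_min, lat_max, lon_min, lon_max, n_passes):
    """Passes run east-west between lon_min and lon_max, stepped in latitude."""
    waypoints = []
    for i, lat in enumerate(np.linspace(lat_min, lat_max, n_passes)):
        pass_lons = (lon_min, lon_max) if i % 2 == 0 else (lon_max, lon_min)
        waypoints += [(lat, lon) for lon in pass_lons]
    return waypoints

# Illustrative corner values only (not the exact field boundary):
for wp in raster_waypoints(44.4211, 44.4244, -103.386, -103.376, n_passes=6):
    print("%.5f, %.5f" % wp)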
Figure 1. (a) Mavic 2 Pro UAV with the multispectral camera mounted. (b) The flags show the sample locations of the corresponding crops. The patches have crops including Peas, HRS Wheat, Millet, Soybean, Corn, and HRW Wheat, respectively.
We collected six soil samples 6.1 m from the edge of the field and six samples from the opposite side of the field, and six soil samples from the center of the field as shown in Fig. 1b. We followed the soil sampling methods for South Dakota region29 to select the sample locations and number of samples collected. A total of 54 soil samples were collected at an 0.2 m depth from six patches (3 samples per patch) at the V4, V8, and V12 stages (18 samples per stage) using a hydraulic probe. We avoided sampling from the areas where conditions were different from the rest of the field (e.g., former manure piles, fertilizer bands, or fence lines). Figure 1b shows the sample locations across the patches.
LIBS utilizes a high-energy pulsed laser which, when focused on a sample, generates temperatures in the range of 10,000–20,000 K, resulting in plasma formation. This, in turn, leads to ablation of a minuscule amount of sample and excitation of the sample's constituent elements. As the plasma cools, these excited atoms and electrons emit photons which correspond to specific elements present in the sample. These photons are collected by a spectrometer, enabling quantitative and qualitative analysis of samples. The SciAps Z-300 handheld LIBS analyzer was used for these measurements. This device has an extended spectrometer wavelength range from 190 nm to 950 nm. The extended range allows emission lines from the elements H, F, N, O, Br, Cl, Rb and S to be measured. The LIBS instrument is equipped with a Q-switched Nd:YAG laser delivering 5–6 mJ per pulse at 1064 nm. Ten laser pulses are shot on the soil samples in the presence of an Ar purge to obtain averaged data for each measurement. The focused laser ablates a micrometer-scale region of the soil surface, forming a plasma above 10,000 K. The unique emission spectrum is collected by the spectrometer as the plasma cools.
Figure 2. Emission lines of soil samples at the V4, V8, and V12 stages for six patches.
We used the NIST LIBS database30 to determine the N lines in the emission spectrum (Fig. 2) and found N lines at 493.4 nm, 746.6 nm, 821.4 nm, and 868.1 nm (Fig. 3). However, we discarded the 746.6 nm N line because of its weaker intensity response and its wavelength inconsistency between samples. We verified the N lines against the study of soil nutrient detection for precision agriculture22. From the soil samples, we randomly selected four samples and obtained the actual TN of the soil in ppm for calibration. We analyzed all 54 soil samples with LIBS to determine the maximum intensity of the N spectrum at 493.4 nm, 821.4 nm, and 868.1 nm (Figs. 5, 6, and 7) at the V4, V8, and V12 stages.
Figure 3. Determining N lines from the soil sample using the NIST database.
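As a sketch of this step, the peak N-line intensities can be read off a measured spectrum (given as wavelength and intensity arrays) with a simple window search around the three retained lines; the window width is an illustrative assumption.

# Sketch: maximum N-line intensity near 493.4, 821.4 and 868.1 nm from a LIBS
# spectrum given as wavelength/intensity arrays (window width is illustrative).
import numpy as np

N_LINES_NM = (493.4, 821.4, 868.1)

def peak_intensities(wavelength, intensity, window_nm=0.5):
    wavelength = np.asarray(wavelength, dtype=float)
    intensity = np.asarray(intensity, dtype=float)
    peaks = {}
    for line in N_LINES_NM:
        in_window = np.abs(wavelength - line) <= window_nm
        peaks[line] = float(intensity[in_window].max()) if in_window.any() else np.nan
    return peaks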
Using the correlation between the actual TN and the maximum intensity of the N spectrum, we constructed calibration plots for 493.4 nm, 821.4 nm, and 868.1 nm through linear regression (Fig. 4). We used \(R^2\) as our calibration metric and found \(R^2=0.98\), \(R^2=0.99\), and \(R^2=0.90\), respectively, showing a strong correlation between the actual soil TN and the peak intensity of the N spectrum. Using the calibrated model, we converted the peak intensity of the N spectrum (Figs. 5, 6, and 7) to TN (ppm) for all 54 soil samples (Table 1) to generate the training data for the ML models.
Figure 4. Calibration plot for computing soil TN using the peak intensity of the nitrogen spectrum at 493.4 nm, 821.4 nm, and 868.1 nm.
Figure 5. Nitrogen spectrum of the soil samples at 493.4 nm for six patches at the V4, V8 and V12 stages.
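A minimal sketch of the calibration step described above, using a linear fit between peak N-line intensity and known TN for the calibration samples and reporting \(R^2\); the numerical values are placeholders, not the study's data.

# Sketch of the LIBS calibration step: linear fit of actual TN (ppm) against
# peak N-line intensity for a few calibration samples, then application of the
# fitted model to a new sample. Values below are placeholders.
import numpy as np

peak_intensity = np.array([1250.0, 1410.0, 1630.0, 1880.0])   # calibration samples
actual_tn_ppm = np.array([820.0, 930.0, 1100.0, 1290.0])

slope, intercept = np.polyfit(peak_intensity, actual_tn_ppm, deg=1)
pred = slope * peak_intensity + intercept
ss_res = np.sum((actual_tn_ppm - pred) ** 2)
ss_tot = np.sum((actual_tn_ppm - actual_tn_ppm.mean()) ** 2)
print("R^2 = %.3f" % (1.0 - ss_res / ss_tot))

tn_estimate = slope * 1520.0 + intercept   # TN estimate for a new peak intensity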
Feature extraction and dataset
The multispectral images are composed of three channels, channel-1: R, channel-2: G, and channel-3: NIR. The multispectral sensor's datasheet31 shows that channel-1 contains both R and NIR light. Therefore, the NIR light needed to be removed to isolate R and compute NDVI. The equations for R and NIR light are,
$$\begin{aligned} R&= 1.0 * DN_{ch1} - 1.012 * DN_{ch3} \end{aligned}$$
$$\begin{aligned} NIR&= 9.605 * DN_{ch3} - 0.6182 * DN_{ch1} \end{aligned}$$
where \(DN_{ch1}\) is the Digital Number (pixel value) of channel one, and \(DN_{ch3}\) is the Digital Number (pixel value) of channel three. The coefficients of DN were provided in the datasheet31.
Figure 8. Band separation, and computed NDVI pixels and zonal NDVI.
Using Eqs. (1) and (2), band separation (Fig. 8a) was performed to compute NDVI (Fig. 8b) and extract the pixel values from each of the bands. The dataset (Table 1) was created using the mean NDVI and the mean pixel values of each of the bands from individual zones at the V4, V8, and V12 stages. The equation for computing NDVI,
$$\begin{aligned} NDVI = \frac{1.236*DN_{ch3}-0.188*DN_{ch1}}{1.000*DN_{ch3}+0.044*DN_{ch1}} \end{aligned}$$
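A minimal sketch of Eqs. (1)–(3) applied to a 3-channel image array, together with the zone-mean features used to build the dataset; the channel ordering and zone mask are assumptions.

# Sketch of Eqs. (1)-(3): band separation and per-pixel NDVI from a 3-channel
# image (channel-1, green, channel-3 assumed in that order), plus zone-mean
# features of the kind listed in Table 1.
import numpy as np

def band_separation(img):
    """img: (H, W, 3) array of digital numbers."""
    dn1 = img[..., 0].astype(np.float64)
    dn3 = img[..., 2].astype(np.float64)
    red = 1.0 * dn1 - 1.012 * dn3                                     # Eq. (1)
    nir = 9.605 * dn3 - 0.6182 * dn1                                  # Eq. (2)
    ndvi = (1.236 * dn3 - 0.188 * dn1) / (1.000 * dn3 + 0.044 * dn1)  # Eq. (3)
    return red, nir, ndvi

def zone_features(img, zone_mask):
    """Mean R, NIR, G and NDVI over a boolean zone mask (one crop patch)."""
    red, nir, ndvi = band_separation(img)
    green = img[..., 1].astype(np.float64)
    return {"R": red[zone_mask].mean(), "NIR": nir[zone_mask].mean(),
            "G": green[zone_mask].mean(), "NDVI": ndvi[zone_mask].mean()}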
Table 1 Generated training data for the ML models.
In supervised learning, the goal is to obtain an optimal predictive model function \(f^*\) based on the input x and the output y that minimizes the cost function L(f(x), y). In this study, we particularly use MLP-R and SVR, which can be used for both classification and regression problems. We applied HPO techniques to determine the best set of hyper-parameters for each ML model and then trained the ML models with those hyper-parameters on the training dataset.
Multi-layer perceptron regression (MLP-R)
Multi-layer perceptron is a supervised learning algorithm that learns a function \(f(.): R^x \rightarrow R^o\) by training on a dataset32, where x is the number of input dimensions and o is the number of output dimensions. We designed the MLP-R (Fig. 9a) with multiple organized layers consisting of various neuron-like processing units. Each node in a layer was connected with the nodes in the previous layer, and each connection may have symmetrical or differing strengths and weights. The data enter the network at the input layer and gradually run through each layer to reach the output layer. For a given set of features \(x =\){R, NIR, G, NDVI, Air temperature, RH} and target \(y =\) TN, \(f(.): R^6 \rightarrow R^1\). To train the MLP-R from a given set of input-output pairs \(X = \{(\vec {x}_1, y_1),\ldots ,(\vec {x}_N, y_N)\}\), learning consists of iteratively updating the weights and biases of the perceptrons to minimize RMSE. The hyper-parameter configuration (Table 2) was created using the solver type33, activation function34, learning rate, and hidden layer sizes.
Figure 9. (a) MLP-R with four hidden layers having different weights, where input layer \(\in {\mathbb {R}}^6\), hidden layer \(\in {\mathbb {R}}^4\), output layer \(\in {\mathbb {R}}^1\) and n1, n2, n3, and n4 represent the number of perceptrons in each hidden layer, respectively. (b) HPO for GS, RS and GA with cross-validation, training the ML models with the tuned HP, and prediction.
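A minimal sketch of an MLP-R of this kind using scikit-learn (the library reported for this study); the hyper-parameter values and the placeholder data below are illustrative, not the tuned configuration of Table 3.

# Sketch: MLP regression on the six features {R, NIR, G, NDVI, air temperature,
# RH} to predict soil TN. Hyper-parameter values are illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

X = np.random.rand(54, 6)            # placeholder feature matrix (54 samples)
y = 800 + 500 * np.random.rand(54)   # placeholder TN targets in ppm

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
mlp = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), activation="relu",
                 solver="adam", learning_rate_init=1e-3,
                 max_iter=5000, random_state=0))
mlp.fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, mlp.predict(X_te)) ** 0.5
print("test RMSE: %.1f ppm" % rmse)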
Support vector regression (SVR)
Support vector machine (SVM) makes data points linearly separable by mapping them from low-dimensional to high-dimensional space. The classification boundary creates a partition between the data points by generating a hyperplane35. SVM concepts can be applied to regression problems by generalizing them. SVR uses a symmetrical loss function that penalizes both high and low misestimates equally. The \(\varepsilon\)-tube is used to generalize SVM to SVR by adding an \(\varepsilon\)-insensitive region around the function, ignoring the absolute values of errors less than a certain threshold \(\varepsilon\) from both above and below the estimation36,37. In SVR, points outside the tube are penalized, but points inside the tube, whether above or below the function, are not penalized. SVR uses different types of kernels for non-linear functions to map the data into a higher dimensional space34,36,37. Linear kernels, radial basis function (RBF), polynomial kernels, and sigmoid kernels are common kernel types in SVR. We created the hyper-parameter configuration (Table 2) using the kernel types, regularization parameter (C)34, and distance error (epsilon) of the loss function34.
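A corresponding SVR sketch, with the kernel, regularization parameter C, and epsilon as the tunable hyper-parameters listed in Table 2; the values shown are illustrative.

# Sketch: epsilon-SVR on the same features; kernel, C and epsilon are the
# hyper-parameters tuned in this study (values below are illustrative).
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

svr = make_pipeline(StandardScaler(),
                    SVR(kernel="rbf", C=10.0, epsilon=0.1))
# svr.fit(X_tr, y_tr); svr.predict(X_te)   # as in the MLP-R sketch above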
Hyper-parameter optimization (HPO)
The GS, RS, and GA HPO techniques were each executed over their respective hyper-parameter configuration spaces to tune the models. We performed cross-validation by splitting the data into five folds. After obtaining the RMSE from the cross-validation score, we selected the hyper-parameters which yielded the lowest RMSE. Finally, using the best set of hyper-parameters, we trained the MLP-R and SVR models for each HPO technique. Figure 9b shows the step-by-step process of HPO, training on the dataset, and prediction on the test data.
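A minimal sketch of GS and RS with 5-fold cross-validation over an SVR configuration space in the spirit of Table 2; the grid values are illustrative, and the RMSE is the square root of the magnitude of the negative-MSE score.

# Sketch: grid search and randomized search with 5-fold cross-validation over
# an illustrative SVR configuration space, scored by negative MSE.
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.svm import SVR

param_grid = {"kernel": ["rbf", "linear", "poly", "sigmoid"],
              "C": [0.1, 1.0, 10.0, 100.0],
              "epsilon": [0.01, 0.1, 0.5, 1.0]}

gs = GridSearchCV(SVR(), param_grid, cv=5, scoring="neg_mean_squared_error")
rs = RandomizedSearchCV(SVR(), param_grid, n_iter=20, cv=5, random_state=0,
                        scoring="neg_mean_squared_error")
# gs.fit(X_tr, y_tr); rs.fit(X_tr, y_tr)
# print(gs.best_params_, (-gs.best_score_) ** 0.5)   # best config and its RMSE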
Table 2 Specifics of the configuration space for the hyper-parameters.
GS exhaustively evaluates all the combinations in the hyper-parameter configuration space specified by the user in the form of a grid configuration38. The user must identify the global optima manually, since GS cannot exploit well-performing regions28. In RS, by contrast, the user defines a budget (i.e., time) as well as the upper and lower bounds of the hyper-parameter values. RS randomly selects values from the pre-defined boundary and trains until the budget is exhausted28. If the configuration space is wide enough, RS can detect the global optima. Assuming a model has k parameters and each of them has n distinct values, the GS computational complexity increases exponentially at a rate of \(O(n^k)\)39. Therefore, the effectiveness of GS depends on the size of the hyper-parameter configuration space. For RS, the computational complexity is O(n), where n is specified by the user before the optimization process starts28.
GA40 randomly initializes the population and chromosomes. Genes represent the entire search space, the hyper-parameters, and the hyper-parameter values. GA uses a fitness function to evaluate the performance of each individual in the current generation, analogous to the objective function of an ML model. To produce a new generation, GA performs selection, crossover, and mutation operations on the chromosomes encoding the next hyper-parameter configurations to be evaluated. The cycle continues until the algorithm reaches the global optimum.
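A compact, hand-rolled GA sketch over a discrete hyper-parameter grid with cross-validated RMSE as the fitness; the population size, mutation rate, and number of generations are illustrative assumptions, not the settings used in the study.

# Sketch: genetic algorithm over a discrete SVR hyper-parameter grid, using
# 5-fold cross-validated RMSE as the (negated) fitness.
import random
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

GRID = {"kernel": ["rbf", "linear", "poly", "sigmoid"],
        "C": [0.1, 1.0, 10.0, 100.0],
        "epsilon": [0.01, 0.1, 0.5, 1.0]}
KEYS = list(GRID)

def fitness(chrom, X, y):
    model = SVR(**dict(zip(KEYS, chrom)))
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    return -np.sqrt(mse)                      # higher fitness = lower RMSE

def ga_search(X, y, pop_size=10, generations=15, mutation_rate=0.2, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(GRID[k]) for k in KEYS] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda c: fitness(c, X, y), reverse=True)
        parents = scored[: pop_size // 2]     # selection: keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(KEYS))
            child = a[:cut] + b[cut:]         # one-point crossover
            if rng.random() < mutation_rate:  # mutation: resample one gene
                g = rng.randrange(len(KEYS))
                child[g] = rng.choice(GRID[KEYS[g]])
            children.append(child)
        pop = parents + children
    best = max(pop, key=lambda c: fitness(c, X, y))
    return dict(zip(KEYS, best))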
To evaluate the HPO methods, we implemented five-fold cross-validation and used RMSE as the performance metric. Additionally, we measured CT as a model efficiency metric. CT is the total time required to complete an HPO process. We specified the same hyper-parameter configuration space (Table 2) for all HPO methods to fairly compare GS, RS and GA. The optimal hyper-parameter configuration (Table 3) was determined by each of the HPO methods based on the lowest RMSE for all three wavelengths.
Figure 10. Performance comparison for different HPO algorithms at different wavelengths.
Table 3 Optimal hyper-parameter configuration selected by different HPO algorithms at different wavelengths.
We tuned the models on a machine with an 8-core i7-9700K processor and 16 gigabytes (GB) of memory. We used Python 3.5 and multiple open-source Python libraries and frameworks, including sklearn34. Figure 10 shows that, for both MLP-R and SVR, RS produces much faster results than GS while maintaining a lower RMSE for the same search space size. In general, GA offers a lower RMSE for both models but has a higher CT compared to GS and RS at all three wavelengths. Overall, MLP-R outperforms SVR in terms of performance (RMSE). However, we achieved better efficiency (CT) with SVR on our dataset.
We introduced a machine learning approach to estimate the TN of soil using NDVI and the multispectral characteristics (R, NIR and G) of the images. We also consider environmental factors such as air temperature and RH. The performance of the MLP-R and SVR models was tested on a fixed configuration space for the hyper-parameters under various hyper-parameter optimization techniques at three different wavelengths (Table 4). For both MLP-R and SVR, the default HP configuration does not yield the lowest RMSE, which demonstrates the significance of utilizing HPO. From Table 4, the estimation error in predicting soil TN is lowest for GA compared to GS and RS for both MLP-R and SVR, where \(\mu\) is the mean and \(\sigma\) is the standard deviation. While training the models, we split our dataset into training and test sets for all three wavelengths individually, using 80% of the data for training and 20% for testing.
Table 4 Estimation error for predicting soil TN.
The UMS framework can be used to estimate the total nitrogen in soil. However, depending on the types of soil and crops, the model needs to be re-calibrated. More specifically, the actual TN of the soil should be obtained from a subset of the samples to calibrate the N spectrum's intensity after determining the N lines using LIBS. Furthermore, N lines that fall around the 500 nm region should be avoided in sea sand due to interference with titanium (Ti) lines23.
In this paper, we have demonstrated the ability of a UAV-based multispectral sensing solution to estimate soil total nitrogen. Specifically, we implemented two machine learning models, multi-layer perceptron regression and support vector regression, to predict soil total nitrogen using a suite of data classes including UAV-based imaging data in the red, near-infrared, and green spectral bands, normalized difference vegetation indices (computed using the multispectral images), air temperature, and relative humidity. We performed hyper-parameter optimization to tune the models for prediction performance. Overall, our numerical studies confirm that our machine learning-based predictive models can estimate the total nitrogen of the soil with a root mean square percent error (RMSPE) of 10.8%.
The source code, and the training data can be found here, https://git.io/JOaqK.
Fageria, N. & Baligar, V. Enhancing nitrogen use efficiency in crop plants. Adv. Agron. 88, 97–185 (2005).
Bausch, W. C. & Duke, H. Remote sensing of plant nitrogen status in corn. Trans. ASAE 39, 1869–1875 (1996).
Khan, S., Mulvaney, R. L. & Hoeft, R. A simple soil test for detecting sites that are nonresponsive to nitrogen fertilization. Soil Sci. Soc. Am. J. 65, 1751–1760 (2001).
Lloveras Vilamanyà, J. et al. Costes de producción de cultivos extensivos en secano y regadío [Production costs of extensive crops in rainfed and irrigated farming]. Vida Rural 2015(401), 38–47 (2015).
Bagheri, N., Ahmadi, H., Alavipanah, S. K. & Omid, M. Multispectral remote sensing for site-specific nitrogen fertilizer management. Pesquisa Agropecuária Brasileira 48, 1394–1401 (2013).
Bausch, W. & Khosla, R. QuickBird satellite versus ground-based multi-spectral data for estimating nitrogen status of irrigated maize. Precis. Agric. 11, 274–290 (2010).
Hawkins, J., Sawyer, J., Barker, D. & Lundvall, J. Using relative chlorophyll meter values to determine nitrogen application rates for corn. Agron. J. 99, 1034–1040 (2007).
Daughtry, C., Walthall, C., Kim, M., De Colstoun, E. B. & McMurtrey, J. III. Estimating corn leaf chlorophyll concentration from leaf and canopy reflectance. Remote Sens. Environ. 74, 229–239 (2000).
Zhang, D.-Y. et al. A field-based pushbroom imaging spectrometer for estimating chlorophyll content of maize. Spectrosc. Spectral Anal. 31, 771–775 (2011).
Zarco-Tejada, P. J., Catalina, A., González, M. & Martín, P. Relationships between net photosynthesis and steady-state chlorophyll fluorescence retrieved from airborne hyperspectral imagery. Remote Sens. Environ. 136, 247–258 (2013).
Sripada, R. P., Heiniger, R. W., White, J. G. & Meijer, A. D. Aerial color infrared photography for determining early in-season nitrogen requirements in corn. Agron. J. 98, 968–977 (2006).
Ma, B.-L., Wu, T.-Y. & Shang, J. On-farm comparison of variable rates of nitrogen with uniform application to maize on canopy reflectance, soil nitrate, and grain yield. J. Plant Nutr. Soil Sci. 177, 216–226 (2014).
Jones, J. et al. Influence of soil, crop residue, and sensor orientations on ndvi readings. Precis. Agric. 16, 690–704 (2015).
Hunt, E. R., Cavigelli, M., Daughtry, C. S., Mcmurtrey, J. E. & Walthall, C. L. Evaluation of digital photography from model aircraft for remote sensing of crop biomass and nitrogen status. Precis. Agric. 6, 359–378 (2005).
Robert, P. C. Precision agriculture: a challenge for crop nutrition management. In Progress in Plant Nutrition: Plenary Lectures of the XIV International Plant Nutrition Colloquium, 143–149 (Springer, 2002).
Strachan, I. B., Pattey, E. & Boisvert, J. B. Impact of nitrogen and environmental conditions on corn as detected by hyperspectral reflectance. Remote Sens. Environ. 80, 213–224 (2002).
Quemada, M., Gabriel, J. L. & Zarco-Tejada, P. Airborne hyperspectral images and ground-level optical sensors as assessment tools for maize nitrogen fertilization. Remote Sens. 6, 2940–2962 (2014).
Papadopoulos, A. et al. Preliminary results for standardization of ndvi using soil nitrates in corn growing. Fresen. Environ. Bull. 23, 348–354 (2014).
Cilia, C. et al. Nitrogen status assessment for variable rate fertilization in maize through hyperspectral imagery. Remote Sens. 6, 6549–6565 (2014).
Scharf, P. C. & Lory, J. A. Calibrating corn color from aerial photographs to predict sidedress nitrogen need. Agron. J. 94, 397–404 (2002).
Berni, J., Zarco-Tejada, P., Suárez, L., González-Dugo, V. & Fereres, E. Remote sensing of vegetation from uav platforms using lightweight multispectral and thermal imaging sensors. Int. Arch. Photogramm. Remote Sens. Spatial Inform. Sci 38, 6 (2009).
Erler, A., Riebe, D., Beitz, T., Löhmannsröben, H.-G. & Gebbers, R. Soil nutrient detection for precision agriculture using handheld laser-induced breakdown spectroscopy (libs) and multivariate regression methods (plsr, lasso and gpr). Sensors 20, 418 (2020).
Harris, R. D., Cremers, D. A., Ebinger, M. H. & Bluhm, B. K. Determination of nitrogen in sand using laser-induced breakdown spectroscopy. Appl. Spectrosc. 58, 770–775 (2004).
Tran, M., Sun, Q., Smith, B. W. & Winefordner, J. D. Determination of c:H:O:N ratios in solid organic compounds by laser-induced plasma spectroscopy. J. Anal. Atomic Spectrom. 16, 628–632 (2001).
Yu, K., Ren, J. & Zhao, Y. Principles, developments and applications of laser-induced breakdown spectroscopy in agriculture: a review. Artif. Intell. Agric. 4, 127–139. https://doi.org/10.1016/j.aiia.2020.07.001 (2020).
DeCastro-García, N., Muñoz Castañeda, Á. L., Escudero García, D. & Carriegos, M. V. Effect of the sampling of a dataset in the hyperparameter optimization phase over the efficiency of a machine learning algorithm. Complexity 2019 (2019).
Abreu, S. Automated architecture design for deep neural networks. arXiv preprint arXiv:1908.10714 (2019).
Yang, L. & Shami, A. On hyperparameter optimization of machine learning algorithms: theory and practice. Neurocomputing 415, 295–316 (2020).
Gelderman, R. & Gerwing, J. Recommended soil sampling methods for South Dakota (2006).
Kramida, A., Olsen, K. & Ralchenko, Y. NIST LIBS database. National Institute of Standards and Technology, US Department of Commerce (2019).
False Color to NDVI Conversion: Precision NDVI Single Sensor. Sentera, LLC.
Gardner, M. W. & Dorling, S. Artificial neural networks (the multilayer perceptron)—a review of applications in the atmospheric sciences. Atmos. Environ. 32, 2627–2636 (1998).
Fine, T. L. Feedforward neural network methodology (Springer Science & Business Media, 2006).
Pedregosa, F. et al. Scikit-learn: machine learning in python. J. Mach. Learn. Res. 12, 2825–2830 (2011).
Noble, W. S. What is a support vector machine?. Nat. Biotechnol. 24, 1565–1567 (2006).
Awad, M. & Khanna, R. Support vector regression. In Efficient learning machines, 67–80 (Springer, 2015).
Smola, A. J. & Schölkopf, B. A tutorial on support vector regression. Stat. Comput. 14, 199–222 (2004).
Hutter, F., Kotthoff, L. & Vanschoren, J. Automated machine learning: methods, systems, challenges (Springer Nature, 2019).
Lorenzo, P., Nalepa, J., Kawulok, M., Ramos, L. & Ranilla, J. Particle swarm optimization for hyper-parameter selection in deep neural networks. In Proceedings of the Genetic and Evolutionary Computation Conference (2017).
Gogna, A. & Tayal, A. Metaheuristics: review and application. J. Exp. Theor. Artif. Intell. 25, 503–526 (2013).
This work was supported in part by South Dakota GOED i6 program through the Proof of Concept grant. We thank Dr. Christopher Graham for support in data collection.
Department of Electrical Engineering, South Dakota School of Mines and Technology, Rapid City, SD, 57701, USA
Md Abir Hossen & Shankarachary Ragi
Department of Mechanical Engineering, South Dakota School of Mines and Technology, Rapid City, SD, 57701, USA
Prasoon K Diwakar
M.A.H. conducted the data collection, experiments and prepared the results. The LIBS experiments were conducted by P.K.D. and M.A.H. Analysis was done by all authors. The manuscript was prepared by M.A.H. and S.R. S. R. served as the principal investigator for this project. All authors reviewed the manuscript.
Correspondence to Shankarachary Ragi.
Hossen, M.A., Diwakar, P.K. & Ragi, S. Total nitrogen estimation in agricultural soils via aerial multispectral imaging and LIBS. Sci Rep 11, 12693 (2021). https://doi.org/10.1038/s41598-021-90624-6
Finite-sized rigid spheres in turbulent Taylor–Couette flow: effect on the overall drag
Published online by Cambridge University Press: 04 July 2018
Dennis Bakhuis¹, Ruben A. Verschoof¹, Varghese Mathai¹, Sander G. Huisman¹, Detlef Lohse¹,² and Chao Sun¹,³†
¹Physics of Fluids Group, Max Planck UT Center for Complex Fluid Dynamics, MESA+ Institute and J.M. Burgers Centre for Fluid Dynamics, University of Twente, P.O. Box 217, 7500 AE Enschede, The Netherlands
²Max Planck Institute for Dynamics and Self-Organization, 37077 Göttingen, Germany
³Center for Combustion Energy, Key Laboratory for Thermal Science and Power Engineering of Ministry of Education, Department of Energy and Power Engineering, Tsinghua University, Beijing, China
†Email address for correspondence: [email protected]
We report on the modification of drag by neutrally buoyant spherical finite-sized particles in highly turbulent Taylor–Couette (TC) flow. These particles are used to disentangle the effects of size, deformability and volume fraction on the drag, and are contrasted to the drag in bubbly TC flow. From global torque measurements, we find that rigid spheres hardly decrease or increase the torque needed to drive the system. The size of the particles under investigation has a marginal effect on the drag, with smaller diameter particles showing only slightly lower drag. Increase of the particle volume fraction shows a net drag increase. However, this increase is much smaller than can be explained by the increase in apparent viscosity due to the particles. The increase in drag for increasing particle volume fraction is corroborated by performing laser Doppler anemometry, where we find that the turbulent velocity fluctuations also increase with increasing volume fraction. In contrast to rigid spheres, for bubbles, the effective drag reduction also increases with increasing Reynolds number. Bubbles are also much more effective in reducing the overall drag.
JFM classification
Multiphase and Particle-laden Flows: Multiphase flow; Flow Control: Drag reduction; Turbulent Flows: Shear layer turbulence
JFM Papers
Journal of Fluid Mechanics, Volume 850, 10 September 2018, pp. 246–261
DOI: https://doi.org/10.1017/jfm.2018.462
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
© 2018 Cambridge University Press
Flows in nature and industry are generally turbulent, and often these flows carry bubbles, drops or particles of various shapes, sizes and densities. Examples include sediment-laden rivers, gas–liquid reactors, volcanic eruptions, plankton in the oceans, pollutants in the atmosphere and air bubbles in the ocean mixing layer (Toschi & Bodenschatz Reference Toschi and Bodenschatz2009). Particle-laden flows may be characterized in terms of the particle density $\unicode[STIX]{x1D70C}_{p}$ , particle diameter $d_{p}$ , volume fraction $\unicode[STIX]{x1D6FC}$ and Reynolds number Re of the flow. When $d_{p}$ is small (compared with the dissipative length scale $\unicode[STIX]{x1D702}_{K}$ ) and $\unicode[STIX]{x1D6FC}$ is low ( ${<}10^{-3}$ ), the system may be modelled using a point particle approximation with two-way coupling (Elghobashi Reference Elghobashi1994; Mazzitelli, Lohse & Toschi Reference Mazzitelli, Lohse and Toschi2003; Mathai et al. Reference Mathai, Calzavarini, Brons, Sun and Lohse2016). With recent advances in computing, fully resolved simulations of particle-laden flows have also become feasible. Uhlmann (Reference Uhlmann2008) conducted one of the first numerical simulations of finite-sized rigid spheres in a vertical particle-laden channel flow. They observed a modification of the mean velocity profile and turbulence modulation due to the presence of particles. A number of studies followed, which employed immersed boundary (Peskin Reference Peskin2002; Cisse, Homann & Bec Reference Cisse, Homann and Bec2013), Physalis (Naso & Prosperetti Reference Naso and Prosperetti2010; Wang, Sierakowski & Prosperetti Reference Wang, Sierakowski and Prosperetti2017) and front-tracking methods (Unverdi & Tryggvason Reference Unverdi and Tryggvason1992; Roghair et al. Reference Roghair, Mercado, Annaland, Kuipers, Sun and Lohse2011; Tagawa et al. Reference Tagawa, Roghair, Prakash, van Sint Annaland, Kuipers, Sun and Lohse2013) to treat rigid particles and deformable bubbles respectively in channel and pipe flow geometries (Pan & Banerjee Reference Pan and Banerjee1996; Lu, Fernández & Tryggvason Reference Lu, Fernández and Tryggvason2005; Uhlmann Reference Uhlmann2008; Dabiri, Lu & Tryggvason Reference Dabiri, Lu and Tryggvason2013; Kidanemariam et al. Reference Kidanemariam, Chan-Braun, Doychev and Uhlmann2013; Lashgari et al. Reference Lashgari, Picano, Breugem and Brandt2014; Picano, Breugem & Brandt Reference Picano, Breugem and Brandt2015; Costa et al. Reference Costa, Picano, Brandt and Breugem2016). Flows with dispersed particles, drops and bubbles can, under the right conditions, reduce the skin friction and result in significant energetic (and therefore financial) savings. In industrial settings, this is already achieved using polymeric additives which disrupt the self-sustaining cycle of wall turbulence and dampen the quasi-streamwise vortices (Procaccia, L'vov & Benzi Reference Procaccia, L'vov and Benzi2008; White & Mungal Reference White and Mungal2008). Polymeric additives are impractical for maritime applications, and therefore gas bubbles are used, with varying success rates (Ceccio Reference Ceccio2010; Murai Reference Murai2014). Local measurements in bubbly flows are non-trivial, and the key parameters and their optimum values are still unknown. For example, it is impossible to fix the bubble size in experiments and therefore to isolate the effect of bubble size. 
Various studies have hinted that drag reduction can also be achieved using spherical particles (Zhao, Andersson & Gillissen Reference Zhao, Andersson and Gillissen2010) and also by using very large particles in a turbulent von Kármán flow (Cisse et al. Reference Cisse, Saw, Gibert, Bodenschatz and Bec2015). In the latter study, a tremendous decrease in turbulent kinetic energy (TKE) was observed. A similar, but less intense, decrease in TKE was also seen by Bellani et al. (Reference Bellani, Byron, Collignon, Meyer and Variano2012) using a very low particle volume fraction. By using solid particles, it is possible to isolate the size effect on drag reduction, and, even though rigid particles are fundamentally different from bubbles, this can give additional insight into the mechanism of bubbly drag reduction. Machicoane & Volk (Reference Machicoane and Volk2016) have already shown that the particle dynamics is highly influenced by the diameter of the particle. This might or might not have a direct influence on the global drag of the system and has never been studied. Whether and when solid particles increase or decrease the drag in a flow is yet not fully understood, and two lines of thought exist. On one side, it is hypothesized that solid particles decrease the overall drag as they damp turbulent fluctuations (Poelma, Westerweel & Ooms Reference Poelma, Westerweel and Ooms2007; Zhao et al. Reference Zhao, Andersson and Gillissen2010). On the other side, one could expect that solid particles increase the drag as they shed vortices, which must be dissipated. In addition, they also increase the apparent viscosity. A common way to quantify this is the so called 'Einstein relation' (Einstein Reference Einstein1906),
(1.1) $$\nu_{\alpha}=\nu\left(1+\tfrac{5}{2}\,\alpha\right),$$
where $\nu$ is the viscosity of the continuous phase. This compensation is valid for the small $\alpha$ values used in this paper (Stickel & Powell 2005). Direct measurements of drag in flows with solid particles are scarce, and the debate on under what conditions they either enhance or decrease the friction has not yet been settled. Particles and bubbles may show collective effects (clustering), and experiments have revealed that this has a significant influence on the flow properties (Liu & Bankoff 1993; Kulick, Fessler & Eaton 1994; Muste & Patel 1997; So et al. 2002; Fujiwara, Minato & Hishida 2004; van den Berg et al. 2005, 2007; Calzavarini et al. 2008; Shawkat, Ching & Shoukri 2008; Colin, Fabre & Kamp 2012; van Gils et al. 2013; Maryami et al. 2014; Mathai et al. 2015; Alméras et al. 2017; Mathai et al. 2018). In general, the Stokes number is used to predict this clustering behaviour, but for neutrally buoyant particles, this is found to be insufficient (Fiabane et al. 2012; Bragg, Ireland & Collins 2015). In addition, the position of the particles (or the particle clusters) is likely to have a large influence on the skin friction. In direct numerical simulation at low Reynolds numbers, Kazerooni et al. (2017) found that the particle distribution is mainly governed by the bulk Reynolds number.
In order to study the effects of particles on turbulence it is convenient to use a closed set-up where one can relate global and local quantities directly through rigorous mathematical relations. In this paper, the Taylor–Couette (TC) geometry (Grossmann, Lohse & Sun Reference Grossmann, Lohse and Sun2016) – the flow between two concentric rotating cylinders – is employed, as this is a closed set-up with global balances. The driving of the TC geometry can be described using the Reynolds number based on the inner cylinder (IC), $\mathit{Re}_{i}=u_{i}d/\unicode[STIX]{x1D708}$ , where $u_{i}=\unicode[STIX]{x1D714}_{i}r_{i}$ is the azimuthal velocity at the surface of the IC, $\unicode[STIX]{x1D714}_{i}$ is the angular velocity of the IC, $d=r_{o}-r_{i}$ is the gap between the cylinders, $\unicode[STIX]{x1D708}$ is the kinematic viscosity and $r_{i}$ ( $r_{o}$ ) is the radius of the inner (outer) cylinder. The geometry of TC flow is characterized by two parameters: the radius ratio $\unicode[STIX]{x1D702}=r_{i}/r_{o}$ and the aspect ratio $\unicode[STIX]{x1D6E4}=L/d$ , where $L$ is the height of the cylinders. The response parameter of the system is the torque, $\unicode[STIX]{x1D70F}$ , required to maintain constant rotation speed of the inner cylinder. It has been mathematically shown that in TC flow, the angular velocity flux, defined as $J^{\unicode[STIX]{x1D714}}=r^{3}(\langle u_{r}\unicode[STIX]{x1D714}\rangle _{A,t}-\unicode[STIX]{x1D708}(\unicode[STIX]{x2202}/\unicode[STIX]{x2202}r)\langle \unicode[STIX]{x1D714}\rangle _{A,t})$ , where the subscript $A,t$ denotes averaging over a cylindrical surface and time, is a radially conserved quantity (Eckhardt, Grossmann & Lohse Reference Eckhardt, Grossmann and Lohse2007). One can, in analogy to Rayleigh–Bénard convection, normalize this flux and define a Nusselt number based on the flux of the angular velocity,
(1.2) $$\mathit{Nu}_{\omega}=\frac{J^{\omega}}{J_{lam}^{\omega}}=\frac{\tau}{2\pi L\rho J_{lam}^{\omega}},$$
where $J_{lam}^{\omega}=2\nu r_{i}^{2}r_{o}^{2}(\omega_{i}-\omega_{o})/(r_{o}^{2}-r_{i}^{2})$ is the angular velocity flux for laminar, purely azimuthal flow and $\omega_{o}$ is the angular velocity of the outer cylinder. In this spirit, the driving is expressed in terms of the Taylor number,
(1.3) $$\mathit{Ta}=\tfrac{1}{4}\,\sigma\,d^{2}(r_{i}+r_{o})^{2}(\omega_{i}-\omega_{o})^{2}\,\nu^{-2}.$$
Here, $\sigma=((1+\eta)/(2\sqrt{\eta}))^{4}\approx 1.057$ is a geometric parameter ('geometric Prandtl number'), in analogy to the Prandtl number in Rayleigh–Bénard convection. In the presented work, where only the inner cylinder is rotated and the outer cylinder is kept stationary, we can relate $\mathit{Ta}$ to the Reynolds number of the inner cylinder by
(1.4) $$\mathit{Re}_{i}=\frac{r_{i}\omega_{i}d}{\nu}=\frac{8\eta^{2}}{(1+\eta)^{3}}\sqrt{\mathit{Ta}}.$$
The scaling of the dimensionless angular velocity flux (torque) with the Taylor (Reynolds) number has been analysed extensively, see, e.g., Lathrop, Fineberg & Swinney (1992), Lewis & Swinney (1999), van Gils et al. (2011), Paoletti & Lathrop (2011), Ostilla-Mónico et al. (2013) and the review articles by Fardin, Perge & Taberlet (2014) and Grossmann et al. (2016), and the different regimes are well understood. In the current Taylor number regime, it is known that $\mathit{Nu}_{\omega}\propto \mathit{Ta}^{0.4}$. Because this response is well known, it can be exploited to study the influence of immersed bubbles and particles (van den Berg et al. 2005, 2007; van Gils et al. 2013; Maryami et al. 2014; Verschoof et al. 2016) on the drag needed to sustain constant rotational velocity of the inner cylinder.
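To make the relations above concrete, the following sketch implements equations (1.1)–(1.4) as small helper functions; the function names and the numerical values in the final check are our own illustration, not part of the paper. The last line verifies the apparent-viscosity effect quoted later: a 6 % particle volume fraction raises the viscosity by about 15 %.

```python
import numpy as np

def nu_alpha(nu, alpha):
    """Einstein effective viscosity, eq. (1.1)."""
    return nu * (1.0 + 2.5 * alpha)

def nusselt_omega(torque, L, rho, nu, r_i, r_o, omega_i, omega_o=0.0):
    """Nu_omega from the measured torque, eq. (1.2)."""
    J_lam = 2.0 * nu * r_i**2 * r_o**2 * (omega_i - omega_o) / (r_o**2 - r_i**2)
    return torque / (2.0 * np.pi * L * rho * J_lam)

def taylor_number(nu, r_i, r_o, omega_i, omega_o=0.0):
    """Ta, eq. (1.3), with the geometric 'Prandtl number' sigma."""
    eta = r_i / r_o
    sigma = ((1.0 + eta) / (2.0 * np.sqrt(eta)))**4
    d = r_o - r_i
    return 0.25 * sigma * d**2 * (r_i + r_o)**2 * (omega_i - omega_o)**2 / nu**2

def reynolds_from_ta(Ta, eta):
    """Re_i for a stationary outer cylinder, eq. (1.4)."""
    return 8.0 * eta**2 / (1.0 + eta)**3 * np.sqrt(Ta)

# Quick check of the apparent-viscosity effect quoted later in the paper:
# alpha = 6 % gives nu_alpha/nu = 1 + 2.5*0.06 = 1.15, i.e. about 15 % larger.
print(nu_alpha(1.0e-6, 0.06) / 1.0e-6)   # -> 1.15
```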
In this paper, we will use the TC geometry to study the effect of neutrally buoyant rigid spherical particles on the drag. We study the effects of varying the particle size $d_{p}$ , the volume fraction $\unicode[STIX]{x1D6FC}$ , the density ratio $\unicode[STIX]{x1D719}$ and the flow Reynolds number $Re$ on the global torque (drag) of the TC flow. The drag reduction is expressed as $\text{DR}=(1-\mathit{Nu}_{\unicode[STIX]{x1D714}}(\unicode[STIX]{x1D6FC})/\mathit{Nu}_{\unicode[STIX]{x1D714}}(\unicode[STIX]{x1D6FC}=0))$ and, as we are interested in the net drag reduction, it is not compensated for increased viscosity effects using correction models, such as the Einstein relation.
The paper is organized as follows. Section 2 presents the experimental set-up. In § 3, we discuss the results. The findings are summarized and an outlook for future work is given in the last section.
Figure 1. Schematic of the TC set-up: two concentric cylinders of radii $r_{i,o}$ with a working fluid in between. Particles are not to scale. The inner cylinder rotates with angular velocity $\unicode[STIX]{x1D714}_{i}$ , while the outer cylinder is kept at rest. We measure the torque on the middle section (highlighted). The laser Doppler anemometry (LDA) probe is positioned at midheight to measure the azimuthal velocity at midgap.
2 Experimental set-up
The experiments were conducted in the Twente Turbulent Taylor–Couette ( $\text{T}^{3}\text{C}$ ) facility (van Gils et al. Reference van Gils, Bruggert, Lathrop, Sun and Lohse2011). A schematic of the set-up is shown in figure 1. In this set-up, the flow is confined between two concentric cylinders, which rotate independently. The top and bottom plates are attached to the outer cylinder. The radius of the inner cylinder (IC) is $r_{i}=0.200~\text{m}$ and the radius of the outer cylinder (OC) is $r_{o}=0.2794~\text{m}$ , resulting in a gap width of $d=r_{o}-r_{i}=0.0794~\text{m}$ and a radius ratio of $\unicode[STIX]{x1D702}=r_{i}/r_{o}=0.716$ . The IC has a total height of $L=0.927~\text{m}$ , resulting in an aspect ratio of $L/d=11.7$ . The IC is segmented axially into three parts. To minimize the effect of the stationary end plates, the torque is measured only over the middle section of the IC with height $L_{mid}/L=0.58$ , away from the end plates. A hollow reaction torque sensor made by Honeywell is used to measure the torque, which has an error of roughly 1 % for the largest torques we measured. Between the middle section and the top and bottom sections of the inner cylinder is a gap of 2 mm.
The IC can be rotated up to $f_{i}=\unicode[STIX]{x1D714}_{i}/(2\unicode[STIX]{x03C0})=20~\text{Hz}$ . In these experiments, only the IC is rotated and the OC is kept at rest. The system holds a volume of $V=111~\text{l}$ of working fluid, which is a solution of glycerol ( $\unicode[STIX]{x1D70C}=1260~\text{kg}~\text{m}^{-3}$ ) and water. To tune the density of the working fluid, the amount of glycerol is varied between 0 % and 40 %, resulting in particles being marginally heavy, neutrally buoyant or marginally light. The system is thermally controlled by cooling the top and bottom plates of the set-up. The temperature is kept at $T=(20\pm 1)\,^{\circ }\text{C}$ for all the experiments, with a maximum spatial temperature difference of 0.2 K within the set-up, and we account for the density and viscosity changes of water and glycerol (Glycerine Producers' Association 1963).
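As a quick consistency check on the quoted dimensions, the snippet below recovers the gap width, radius ratio and aspect ratio of the T3C facility and estimates the inner-cylinder Reynolds number at the maximum rotation rate. The kinematic viscosity is assumed water-like (about 1.0e-6 m²/s at 20 °C); the glycerol–water mixtures actually used are more viscous, so the attainable Re_i is correspondingly lower.

```python
import numpy as np

r_i, r_o, L = 0.200, 0.2794, 0.927      # m, T3C dimensions quoted above
d = r_o - r_i                            # gap width
eta = r_i / r_o                          # radius ratio
Gamma = L / d                            # aspect ratio
print(d, eta, Gamma)                     # ~0.0794 m, ~0.716, ~11.7

# Inner-cylinder Reynolds number at the maximum rotation rate f_i = 20 Hz,
# assuming water-like kinematic viscosity (an assumption, not a measured value).
f_i = 20.0
omega_i = 2.0 * np.pi * f_i
nu = 1.0e-6                              # m^2/s, pure water at 20 C
Re_i = r_i * omega_i * d / nu
print(f"Re_i = {Re_i:.2e}")              # order 2e6 for pure water
```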
Rigid polystyrene spherical particles (RGPballs S.r.l.) are used in the experiments; these particles have a density close to that of water (940– $1040~\text{kg}~\text{m}^{-3}$ ). We chose particles with diameters $d_{p}=1.5$ , 4.0 and $8.0~\text{mm}$ . At our disposal are 2.22 l of 1.5 mm diameter particles, 2.22 l of 4 mm diameter particles and 6.66 l of 8 mm diameter particles, resulting in maximum volume fractions of 2 % , 2 % and 6 % respectively. The particles are found to be nearly mono-disperse (99.9 % of the particles are within $\pm$ 0.1 mm of their target diameter). Due to the fabrication process, small air bubbles are sometimes entrapped within the particles. This results in a slightly heterogeneous density distribution of the particles. After measuring the density distribution for each diameter, we calculated the average for all batches, which was $\unicode[STIX]{x1D70C}_{p}=1036\pm 5~\text{kg}~\text{m}^{-3}$ . By adding glycerol to water, we match this value in order to have neutrally buoyant particles.
Using a laser Doppler anemometry (LDA) system (BSA F80, Dantec Dynamics) we capture the azimuthal velocity at midheight and midgap of the system (see figure 1) and we perform a radial scan at midheight. The flow is seeded with $5~\unicode[STIX]{x03BC}\text{m}$ diameter polyamide particles (PSP-5, Dantec Dynamics). Because of the curved surface of the outer cylinder (OC), the beams of the LDA get refracted in a non-trivial manner, which is corrected for using a ray-tracing technique described by Huisman, van Gils & Sun (Reference Huisman, van Gils and Sun2012).
Obviously, LDA measurements in a multiphase flow are more difficult to set up than for single-phase flows, as the method relies on the reflection of light from tiny tracer particles passing through a measurement volume ( $0.07~\text{mm}\times 0.07~\text{mm}\times 0.3~\text{mm}$ ). Once we add a second type of relatively large particles to the flow, this will affect the LDA measurements, mostly by blocking the optical path, resulting in lower acquisition rates. These large particles will also move through the measurement volume, but as these particles are at least 300 times larger than the tracers and thus much larger than the fringe pattern (fringe spacing $d_{f}=3.4~\unicode[STIX]{x03BC}\text{m}$ ), the reflected light is substantially different from a regular Doppler burst and does not result in a measured value. The minimal signal-to-noise ratio for accepting a Doppler burst is set to 4. As a postprocessing step, the velocities are corrected for the velocity bias by using the transit time of the tracer particle.
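A minimal sketch of the transit-time weighting mentioned above is given below; the burst velocities and transit times are placeholders, and the helper is our own illustration of the standard velocity-bias correction rather than the exact BSA post-processing.

```python
import numpy as np

def transit_time_weighted_stats(u, t_transit):
    """Transit-time-weighted mean and standard deviation of LDA burst velocities,
    a standard correction for velocity bias (faster particles cross the measurement
    volume more often but spend less time inside it)."""
    u, w = np.asarray(u, float), np.asarray(t_transit, float)
    mean = np.sum(w * u) / np.sum(w)
    var = np.sum(w * (u - mean)**2) / np.sum(w)
    return mean, np.sqrt(var)

# placeholder burst data: velocities (m/s) and transit times (s)
u = np.array([3.1, 2.8, 3.4, 2.9])
t = np.array([2.0e-5, 2.6e-5, 1.8e-5, 2.4e-5])
print(transit_time_weighted_stats(u, t))
```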
3.1 Effect of particle size
Figure 2. (a) A plot of $\mathit{Nu}_{\unicode[STIX]{x1D714}}(\mathit{Ta})$ for 2 % particle volume fraction with particle diameters of 1.5 mm, 4.0 mm and 8.0 mm, and for comparison the single-phase case. Data from comparable bubbly drag reduction studies are plotted using black markers. (b) The same data, but now as a compensated plot $\mathit{Nu}_{\unicode[STIX]{x1D714}}/\mathit{Ta}^{0.40}$ as a function of $\mathit{Ta}$ . The error bar indicates the maximum deviation for repeated measurements from all measurements combined (coloured curves), which is less than 1 %. At $\mathit{Ta}\geqslant 2\times 10^{12}$ , the 1.5 mm particles show an increased uncertainty of 1.7 %, which is indicated by the right error bar.
First, we study the effect of changing the particle diameter on the torque of the system. In these experiments, we kept the particle volume fraction fixed at 2 % and the density of the working fluid, $\unicode[STIX]{x1D70C}_{f}$ , at $1036~\text{kg}~\text{m}^{-3}$ , for which the particles are neutrally buoyant. The results of these measurements are presented as $\mathit{Nu}_{\unicode[STIX]{x1D714}}(\mathit{Ta})$ in figure 2(a). Our curves are practically overlapping, suggesting that the difference in drag between the different particle sizes is only marginal. We compare these with the bubbly drag reduction data at similar conditions (hollow symbols) from van den Berg et al. (Reference van den Berg, Luther, Lathrop and Lohse2005), van Gils et al. (Reference van Gils, Narezo Guzman, Sun and Lohse2013) and Verschoof et al. (Reference Verschoof, van der Veen, Sun and Lohse2016). At low $\mathit{Ta}$ , the symbols overlap with our data. However, at larger $\mathit{Ta}$ , the bubbly flow data show much lower torque (drag) than the particle-laden cases. As we are in the ultimate regime of turbulence where $\mathit{Nu}_{\unicode[STIX]{x1D714}}$ effectively scales as $\mathit{Nu}_{\unicode[STIX]{x1D714}}\propto \mathit{Ta}^{0.4}$ (Huisman et al. Reference Huisman, van Gils and Sun2012; Ostilla-Mónico et al. Reference Ostilla-Mónico, Stevens, Grossmann, Verzicco and Lohse2013), we compensate the data with $\mathit{Ta}^{0.40}$ in figure 2(b) to emphasize the differences between the datasets. For the single-phase case, this yields a clear plateau. For the particle-laden cases, the lowest drag corresponds to the smallest particle size. The reduction is, however, quite small ( ${<}3\,\%$ ). The compensated plots also reveal a sudden increase in drag at a critical Taylor number of $\mathit{Ta}^{\ast }=0.8\times 10^{12}$ . The jump is more distinct for the smaller particles, and might suggest a reorganization of the flow (Huisman et al. Reference Huisman, van der Veen, Sun and Lohse2014). Beyond $\mathit{Ta}^{\ast }$ , the drag reduction is negligible for the larger particles (4 mm and 8 mm spheres). However, for the 1.5mm particles, the drag reduction seems to increase, and was found to be very repeatable in experiments. Interestingly, the size of these particles is comparable to that of the air bubbles in van Gils et al. (Reference van Gils, Narezo Guzman, Sun and Lohse2013). This might suggest that for smaller size particles at larger $\mathit{Ta}$ , one could expect drag reduction. At the increased viscosity of the suspension, a maximum $\mathit{Ta}\approx 3\times 10^{12}$ could be reached in our experiments. We have performed an uncertainty analysis by repeating the measurements for the single phase and for the cases with 8 mm and 1.5 mm particles multiple times and calculating the maximum deviation from the ensemble average. The left error bar indicates the maximum deviation for all measurements combined and is ${\approx}1\,\%$ . For $\mathit{Ta}\geqslant 2\times 10^{12}$ , we see an increase in uncertainty of 1.7 % (shown by the right error bar in figure 2 b), which is only caused by the 1.5 mm particles. These tiny particles can accumulate in the 2 mm gap between the cylinder segments and thereby increase the uncertainty. Above $\mathit{Ta}\geqslant 2\times 10^{12}$ , both the 8 mm and 4 mm particles show a maximum deviation below 0.25 %.
Below $\mathit{Ta}^{\ast }$ , the drag reduction due to spherical particles appears to be similar to bubbly drag reduction (van Gils et al. Reference van Gils, Narezo Guzman, Sun and Lohse2013). However, in the lower- $\mathit{Ta}$ regime, the bubble distribution is highly non-uniform due to the buoyancy of the bubbles (van den Berg et al. Reference van den Berg, Luther, Lathrop and Lohse2005; van Gils et al. Reference van Gils, Narezo Guzman, Sun and Lohse2013; Verschoof et al. Reference Verschoof, van der Veen, Sun and Lohse2016). Therefore, the volume fractions reported are only the global values, and the torque measurements are for the midsections of their set-ups. What is evident from the above comparisons is that in the high- $\mathit{Ta}$ regime, air bubbles drastically reduce the drag, reaching far beyond the drag modification by rigid spheres.
Figure 3. (a) A plot of $\mathit{Nu}_{\unicode[STIX]{x1D714}}(\mathit{Ta})$ , compensated by $\mathit{Ta}^{0.4}$ , for 8 mm particles with various particle volume fractions, and for comparison the single-phase case. (b) The drag reduction, defined as $\text{DR}=(1-\mathit{Nu}_{\unicode[STIX]{x1D714}}(\unicode[STIX]{x1D6FC})/\mathit{Nu}_{\unicode[STIX]{x1D714}}(\unicode[STIX]{x1D6FC}=0))$ , plotted against $\mathit{Ta}$ .
3.2 Effect of particle volume fraction
The next step is to investigate the effect of the particle volume fraction on the torque. For the 8 mm particles, we have the ability to increase the particle volume fraction up to 6 %. This is done in steps of 2 %, and the results are plotted in compensated form in figure 3(a). The normalized torque increases with the volume fraction of particles. The 6 % case shows the largest drag. Figure 3(b) shows the same data in terms of drag reduction as a function of $\mathit{Ta}$. A 2 % volume fraction of particles gives the highest drag reduction. With increasing $\alpha$, the drag reduction decreases. These measurements are in contrast to the findings for bubbly drag reduction (van Gils et al. 2013), for which the net drag decreases with increasing gas volume fraction. An obvious candidate for explaining the increase of drag in a particle-laden flow is the larger apparent viscosity. If we were to calculate the drag modification using an apparent viscosity (e.g. the Einstein relation (1.1)) for the case of $\alpha =6\,\%$ particles, the measured drag would be 15 % larger compared with the pure working fluid case. Inclusion of this effect in our drag reduction calculation would result in reductions of the same order. However, when comparing the drag with and without particles, the net drag reduction is practically zero. This result is different from the work of Picano et al. (2015) in turbulent channel flow, where they found that the drag increased more than the increase of the viscosity.
For a better comparison with bubbly drag reduction, we plot the drag reduction as a function of (gas or particle) volume fraction $\unicode[STIX]{x1D6FC}$ ; see figure 4(a). Different studies are shown using different symbols, and $\mathit{Re}$ is indicated by colours. None of the datasets were compensated for the changes in effective viscosity. The DR is defined in a slightly different way in each study: van den Berg et al. (Reference van den Berg, Luther, Lathrop and Lohse2005) makes use of the friction coefficient $(1-c_{f}(\unicode[STIX]{x1D6FC})/c_{f}(0))$ ; van Gils et al. (Reference van Gils, Narezo Guzman, Sun and Lohse2013) uses the dimensionless torque $G=\unicode[STIX]{x1D70F}/(2\unicode[STIX]{x03C0}L_{mid}\unicode[STIX]{x1D70C}\unicode[STIX]{x1D708}^{2})$ , $(1-G(\unicode[STIX]{x1D6FC})/G(0))$ ; Verschoof et al. (Reference Verschoof, van der Veen, Sun and Lohse2016) uses the plain torque value $(1-\unicode[STIX]{x1D70F}(\unicode[STIX]{x1D6FC})/\unicode[STIX]{x1D70F}(0))$ . While the rigid particles only show marginal drag reduction, some studies using bubbles achieve dramatic reduction of up to 30 % and beyond. Figure 4(b) shows a zoomed in view of the bottom part of the plot with the rigid-sphere data. The triangles denote the data from Verschoof et al. (Reference Verschoof, van der Veen, Sun and Lohse2016), corresponding to small bubbles in the TC system. The rigid particles and the small bubbles show a similar drag response. What is remarkable is that this occurs despite the huge difference in size. The estimated diameter of the bubbles in Verschoof et al. (Reference Verschoof, van der Veen, Sun and Lohse2016) is 0.1 mm, while the rigid spheres are approximately two orders in magnitude larger. This provides key evidence that the particle size alone is not enough to cause drag reduction, the density ratio of the particles and the carrier fluid is also of importance.
Figure 4. (a) Drag reduction as function of particle volume fraction from ○ $d_{p}=8~\text{mm}$ particles from the present work compared with similar gas volume fractions from ▫ van den Berg et al. (Reference van den Berg, Luther, Lathrop and Lohse2005), ♢ van Gils et al. (Reference van Gils, Narezo Guzman, Sun and Lohse2013) and ▵ Verschoof et al. (Reference Verschoof, van der Veen, Sun and Lohse2016). Symbols indicate the different studies while colours differentiate between the Reynolds numbers. The current work has DR defined as $(1-\mathit{Nu}_{\unicode[STIX]{x1D714}}(\unicode[STIX]{x1D6FC})/\mathit{Nu}_{\unicode[STIX]{x1D714}}(\unicode[STIX]{x1D6FC}=0))$ ; the other studies use dimensionless torque $G$ (van Gils et al. Reference van Gils, Narezo Guzman, Sun and Lohse2013), friction coefficient $c_{f}$ (van den Berg et al. Reference van den Berg, Luther, Lathrop and Lohse2005) or plain torque $\unicode[STIX]{x1D70F}$ (Verschoof et al. Reference Verschoof, van der Veen, Sun and Lohse2016) to define DR. (b) Zoom of the bottom part of (a) where the data from the present work are compared with bubbly drag reduction data using 6 ppm of surfactant from Verschoof et al. (Reference Verschoof, van der Veen, Sun and Lohse2016).
3.3 Effect of marginal changes in particle density ratio
With the effects of particle size and volume fraction revealed, we next address the sensitivity of the drag to marginal variations in particle density. A change in the particle density ratio brings about a change in the buoyancy and centrifugal forces on the particle, both of which can affect the particle distribution within the flow. We tune the particle to fluid density ratio $\unicode[STIX]{x1D719}\equiv \unicode[STIX]{x1D70C}_{p}/\unicode[STIX]{x1D70C}_{f}$ by changing the volume fraction of glycerol in the fluid, such that the particles are marginally buoyant ( $\unicode[STIX]{x1D719}=0.94,0.97$ ), neutrally buoyant ( $\unicode[STIX]{x1D719}=1.00$ ) and marginally heavy ( $\unicode[STIX]{x1D719}=1.04$ ) particles. In figure 5(a), we show the compensated $\mathit{Nu}_{\unicode[STIX]{x1D714}}$ as a function of $\mathit{Ta}$ for various values of $\unicode[STIX]{x1D719}$ . Here, $\unicode[STIX]{x1D6FC}$ is fixed to 6 % and only 8 mm particles are used. The darker shades of colour correspond to the single-phase cases, while lighter shades correspond to particle-laden cases. In general, the single-phase drag is larger as compared with the particle-laden cases. However, there is no striking difference between the different values of $\unicode[STIX]{x1D719}$ . In figure 5(b), we present the drag reduction for particle-laden cases at different density ratios. On average, we see for all cases drag modification of approximately $\pm 2\,\%$ . We can also identify a small trend in the lower- $\mathit{Ta}$ region: the two larger $\unicode[STIX]{x1D719}$ cases (heavy and neutrally buoyant particles) tend to have a drag increase, while the smaller $\unicode[STIX]{x1D719}$ cases (both light particles) have a tendency for drag reduction. Nevertheless, the absolute difference in DR between the cases is within 4 %. The above results provide clear evidence that minor density mismatches do not have a serious influence on the global drag of the system. To investigate for strong buoyancy effects, additional measurements were made using 2 mm expanded polystyrene particles ( $\unicode[STIX]{x1D719}=0.02$ ). However, due to the particles accumulating between the inner cylinder segments leading to additional mechanical friction, these measurements were inconclusive.
Figure 5. (a) A plot of $\mathit{Ta}$ as a function of $\mathit{Nu}_{\unicode[STIX]{x1D714}}$ compensated by $\mathit{Ta}^{0.4}$ for various density ratios $\unicode[STIX]{x1D719}=\unicode[STIX]{x1D70C}_{p}/\unicode[STIX]{x1D70C}_{f}$ indicated by the corresponding colour. The darker shades indicate the single-phase cases while the lighter shades show the cases using 6 % particle volume fraction of 8 mm diameter particles. Due to the increase in viscosity, the maximum attainable $\mathit{Ta}$ is lower for larger density ratios. The uncertainty is again estimated using the maximum deviation from the average for multiple runs and here is only shown for the green curves. This value is slightly below 1 % at lower $\mathit{Ta}$ and decreases with increasing $\mathit{Ta}$ to values below 0.25 %. This trend is seen for all values of $\unicode[STIX]{x1D719}$ . (b) The drag reduction, calculated from the data of (a), plotted against $\mathit{Ta}$ . The drag reduction is defined as $\text{DR}=(1-\mathit{Nu}_{\unicode[STIX]{x1D714}}(\unicode[STIX]{x1D6FC}=6\,\%)/\mathit{Nu}_{\unicode[STIX]{x1D714}}(\unicode[STIX]{x1D6FC}=0))$ .
3.4 Flow statistics using particles
In the above sections, we presented the effects of changing particle size, volume fraction and density on the global drag of the system. Next, we look into local flow properties using LDA while the particles are present. First, we collect a total of $1\times 10^{6}$ data points of azimuthal velocity at midheight and midgap. These are captured over a period of approximately $3\times 10^{4}$ cylinder rotations. From these data, we calculate the probability density function (PDF) of $u_{\theta}$ normalized by $u_{i}$ for various values of $\alpha$, shown in figure 6(a). The particle size is fixed to 8 mm and the Reynolds number is set to $1\times 10^{6}$. From this figure, we see a large increase in turbulent fluctuations, resulting in very wide tails. While the difference between 2 %, 4 % and 6 % is not large, we can identify an increase in fluctuations with increasing $\alpha$. These increased fluctuations can be explained by the additional wakes produced by the particles (Poelma et al. 2007; Alméras et al. 2017). The increase in fluctuations can also be visualized using the standard deviation $\sigma(u_{\theta})=\langle u_{\theta}^{\prime 2}\rangle ^{1/2}$ normalized by the standard deviation of the single-phase case; see figure 6(b). In this figure, $\sigma(u_{\theta})$ is shown for three different values of $\mathit{Re}$, again for 8 mm particles. In general, we see a monotonically increasing trend with $\alpha$, and it seems to approach an asymptotic value. One can speculate that there has to be an upper limit for fluctuations that originate from the wakes of the particles. For large $\alpha$, the wakes from particles will interact with one another and with the carrier flow.
Figure 6. (a) Probability density functions of $u_{\unicode[STIX]{x1D703}}/u_{i}$ for various values of $\unicode[STIX]{x1D6FC}$ and the single-phase case. The particle size is fixed to 8 mm and $\mathit{Re}_{i}=1\times 10^{6}$ for all cases. (b) Standard deviation of the azimuthal velocity normalized by the standard deviation of the single-phase case for three different values of $\mathit{Re}$ for a fixed particle size of 8 mm.
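The quantities plotted in figure 6 can be assembled from the LDA samples along the following lines; the synthetic data and helper function below are purely illustrative (and omit the transit-time weighting applied to the real measurements).

```python
import numpy as np

def normalized_fluctuations(u_multiphase, u_single, u_inner):
    """sigma(u_theta)/sigma_single and a histogram-based PDF of u_theta/u_i,
    mirroring the quantities in figure 6 (illustrative only)."""
    sigma_ratio = np.std(u_multiphase) / np.std(u_single)
    pdf, edges = np.histogram(np.asarray(u_multiphase) / u_inner,
                              bins=100, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return sigma_ratio, centers, pdf

# placeholder: synthetic azimuthal velocity samples (m/s)
rng = np.random.default_rng(0)
u_single = rng.normal(3.0, 0.30, 10_000)      # single-phase case
u_alpha6 = rng.normal(3.0, 0.36, 10_000)      # wider tails with particles
ratio, x, p = normalized_fluctuations(u_alpha6, u_single, u_inner=8.0)
print(f"sigma ratio = {ratio:.2f}")
```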
Measurements using 4 mm particles yielded qualitatively similar results. It is known that in particle-laden gaseous pipe flows, large particles can increase the turbulent fluctuations, while small particles result in turbulence attenuation (Tsuji, Morikawa & Shiomi Reference Tsuji, Morikawa and Shiomi1984; Gore & Crowe Reference Gore and Crowe1989; Vreman Reference Vreman2015). The LDA measurements were not possible with the smallest particles (1.5 mm), as the large amount of particles in the flow blocked the optical paths of the laser beams.
We are confident that for these bidisperse particle-laden LDA measurements, the large particles do not have an influence on the measurements as these millimetric-sized particles are much larger than the fringe spacing ( $d_{f}=3.4~\unicode[STIX]{x03BC}\text{m}$ ) and do not show a Doppler burst. However, during the measurements, the particles get damaged and small bits of material are fragmented off the particles. We estimate the size of these particles to be slightly larger than the tracer particles, and these can have an influence on the LDA measurements as they do not act as tracers.
The way in which the average azimuthal velocity changes with particle radius is shown in figure 7. We measured a total of $3\times 10^{4}$ data points during approximately 900 cylinder rotations. Again, the data are corrected for velocity bias by using the transit time as a weighing factor. Figure 7(a) shows the effect of particle size for $\unicode[STIX]{x1D6FC}=2\,\%$ and figure 7(b) shows the effect of particle volume fraction for 8 mm particles. Both figures additionally show the high-precision single-phase data from Huisman et al. (Reference Huisman, Scharnowski, Cierpka, Kähler, Lohse and Sun2013), for which our single-phase measurements are practically overlapping. Since LDA measurements close to the inner cylinder are difficult, due to the reflecting inner cylinder surface, we limited our radial extent to $\tilde{r}=(r-r_{i})/(r_{o}-r_{i})=[0.2,1]$ . We found that the penetration depth of our LDA measurements is the smallest for experiments with the smallest particles and the largest $\unicode[STIX]{x1D6FC}$ . All differences from the single-phase case are only marginal and we can conclude that the average mean velocity is not much affected by the particles in the flow, at least for $\tilde{r}\geqslant 0.2$ .
Figure 7. A plot of $u_{\unicode[STIX]{x1D703}}$ normalized by the velocity of the inner cylinder wall $u_{i}$ as a function of the normalized radius for various values $d_{p}$ while $\unicode[STIX]{x1D6FC}=2\,\%$ (a) and various values of $\unicode[STIX]{x1D6FC}$ while $d_{p}=8~\text{mm}$ (b). In all cases, $\mathit{Re}_{i}$ is fixed to $1\times 10^{6}$ . For comparison, the single-phase case using water at $\mathit{Re}_{i}=1\times 10^{6}$ from Huisman et al. (Reference Huisman, Scharnowski, Cierpka, Kähler, Lohse and Sun2013) is also plotted in dashed black in both plots. Both plots have an inset showing an enlargement of the centre area from the same plot.
Figure 8. Probability density functions of the normalized azimuthal velocity as a function of the normalized radial position for various values of $\unicode[STIX]{x1D6FC}$ for the case of 8 mm particles and the single-phase case while keeping $\mathit{Re}$ at $1\times 10^{6}$ . With increasing $\unicode[STIX]{x1D6FC}$ , the maximum penetration depth decreases. The grey areas indicate radial positions for which no data are available.
To get an idea of the fluctuations we can use the previous data to construct two-dimensional PDFs of the azimuthal velocity as a function of radius. These are shown for $\mathit{Re}=1\times 10^{6}$ using 8 mm particles at various values of $\unicode[STIX]{x1D6FC}$ and the single-phase case in figure 8. The first thing to notice is again that the penetration depth decreases with increasing $\unicode[STIX]{x1D6FC}$ . The single-phase case shows a narrow-banded PDF. When $\unicode[STIX]{x1D6FC}$ is increased, for the lower values of $\tilde{r}$ , the PDF is much wider. While it makes sense that an increase in $\unicode[STIX]{x1D6FC}$ increases the fluctuations due to the increased number of wakes of particles, this is expected everywhere in the flow, not only closer to the inner cylinder. It is possible that the particles have a preferred concentration closer to the inner cylinder. We tried to measure the local concentration of particles as a function of radius but failed due to limited optical accessibility. Therefore, we can only speculate under what circumstances there would be an inhomogeneous particle distribution that would lead to the visible increase in fluctuations. The first possibility is a mismatch in density between the particles and the fluid, which would result in light particles ( $\unicode[STIX]{x1D719}<1$ ) accumulating closer to the inner cylinder. Another possibility is that due to the rotation of the particles, an effective lift force arises, leading to a different particle distribution in the flow. While this is quite plausible, this is difficult to validate as we would need to capture the rotation. The fragments of plastic that are sheared off the particles can also give a bias to the LDA measurement. While we estimate them to be larger than the tracers, they might still be small enough to produce a signal, and they might not follow the flow faithfully.
4 Conclusions and outlook
We have conducted an experimental study on the drag response of a highly turbulent TC flow containing rigid neutrally buoyant spherical particles. We have found that, unlike the case of bubbles used in prior works (van Gils et al. Reference van Gils, Narezo Guzman, Sun and Lohse2013; Verschoof et al. Reference Verschoof, van der Veen, Sun and Lohse2016), rigid particles barely reduce (or increase) the drag on the system, even for cases where their size is comparable to that of the bubbles used in other studies. There is no significant size effect. Even for very large particles, which can attenuate turbulent fluctuations and generate wakes, there is no distinct difference from the single-phase flow. We also varied the volume fraction of the particles in the range 0 %–6 %. The particle volume fraction has no greater effect on the system drag than what is expected due to changes in the apparent viscosity of the suspension. Further, we tested the sensitivity of our drag measurements to marginal variations in the particle to fluid density ratio $\unicode[STIX]{x1D719}$ . A trend was noticeable, towards drag reduction when $\unicode[STIX]{x1D719}$ was reduced from 1.00 to 0.94. This suggests that a low density of the particles could be a necessary ingredient for drag reduction. Finally, we have also probed the local flow at the midheight and midgap of the system using LDA. With the addition of particles, the liquid velocity fluctuations are enhanced, with wider tails of the distributions. A finite relative velocity between the particle and the flow around it can cause this increase in velocity fluctuations (Mathai et al. Reference Mathai, Prakash, Brons, Sun and Lohse2015), as seen for bubbly flows (pseudo-turbulence) and in situations of sedimenting particles in quiescent or turbulent environments (Gore & Crowe Reference Gore and Crowe1989). In the present situation, the relative velocity between the particle and the flow is expected, due to the inertia of the finite-sized particles we used. There is only a marginal deviation from the single-phase case in the average azimuthal velocity over the radial positions measured using any size or concentration of particles measured. From the two-dimensional PDFs, we see that closer to the inner cylinder, using smaller $d_{f}$ or larger $\unicode[STIX]{x1D6FC}$ , the PDF gets wider. This could be due to a preferential concentration of the particles or a slight density mismatch.
Our study is a step towards a better understanding of the mechanisms of bubbly drag reduction. Bubbles are deformable, and they have a tendency to migrate towards the walls, either due to lift force (Dabiri et al. Reference Dabiri, Lu and Tryggvason2013) or due to centripetal effects (van Gils et al. Reference van Gils, Narezo Guzman, Sun and Lohse2013). When compared with the drag reducing bubbles in van Gils et al. (Reference van Gils, Narezo Guzman, Sun and Lohse2013) and Verschoof et al. (Reference Verschoof, van der Veen, Sun and Lohse2016), our particles do not deform, and they do not experience centripetal effects as they are density matched. At least one of these differences must therefore be crucial for the observed bubbly drag reduction in those experiments. In a future investigation, we will conduct more experiments using very light spherical particles that experience similar centripetal forces to the bubbles in van Gils et al. (Reference van Gils, Narezo Guzman, Sun and Lohse2013), but are non-deformable. These particles need to be larger than the size of the gap between the inner cylinder segments, and very rigid, or the set-up needs to be modified to close the gap between the IC segments. Such experiments can then disentangle the role of particle density on drag reduction from that of the particle shape.
We would like to thank E. Guazzelli, B. Vreman, R. Ezeta, P. Bullee and A. te Nijenhuis for various stimulating discussions. Moreover, we would like to thank G.-W. Bruggert and M. Bos for technical support. This work was funded by the Natural Science Foundation of China under grant no. 11672156, VIDI grant no. 13477, STW, FOM and MCEC, which are part of the Netherlands Organisation for Scientific Research (NWO).
Alméras, E., Mathai, V., Lohse, D. & Sun, C. 2017 Experimental investigation of the turbulence induced by a bubble swarm rising within incident turbulence. J. Fluid Mech. 825, 1091–1112.
Bellani, G., Byron, M. L., Collignon, A. G., Meyer, C. R. & Variano, E. A. 2012 Shape effects on turbulent modulation by large nearly neutrally buoyant particles. J. Fluid Mech. 712, 41–60.
van den Berg, T. H., van Gils, D. P. M., Lathrop, D. P. & Lohse, D. 2007 Bubbly turbulent drag reduction is a boundary layer effect. Phys. Rev. Lett. 98, 084501.
van den Berg, T. H., Luther, S., Lathrop, D. P. & Lohse, D. 2005 Drag reduction in bubbly Taylor–Couette turbulence. Phys. Rev. Lett. 94, 044501.
Bragg, A. D., Ireland, P. J. & Collins, L. R. 2015 Mechanisms for the clustering of inertial particles in the inertial range of isotropic turbulence. Phys. Rev. E 92 (2), 023029.
Calzavarini, E., Cencini, M., Lohse, D. & Toschi, F. 2008 Quantifying turbulence-induced segregation of inertial particles. Phys. Rev. Lett. 101 (8), 084504.
Ceccio, S. L. 2010 Friction drag reduction of external flows with bubble and gas injection. Annu. Rev. Fluid Mech. 42, 183–203.
Cisse, M., Homann, H. & Bec, J. 2013 Slipping motion of large neutrally buoyant particles in turbulence. J. Fluid Mech. 735, R1.
Cisse, M., Saw, E.-W., Gibert, M., Bodenschatz, E. & Bec, J. 2015 Turbulence attenuation by large neutrally buoyant particles. Phys. Fluids 27, 061702.
Colin, C., Fabre, J. & Kamp, A. 2012 Turbulent bubbly flow in pipe under gravity and microgravity conditions. J. Fluid Mech. 711, 469–515.
Costa, P., Picano, F., Brandt, L. & Breugem, W.-P. 2016 Universal scaling laws for dense particle suspensions in turbulent wall-bounded flows. Phys. Rev. Lett. 117, 134501.
Dabiri, S., Lu, J. & Tryggvason, G. 2013 Transition between regimes of a vertical channel bubbly upflow due to bubble deformability. Phys. Fluids 25, 102110.
Eckhardt, B., Grossmann, S. & Lohse, D. 2007 Torque scaling in turbulent Taylor–Couette flow between independently rotating cylinders. J. Fluid Mech. 581, 221–250.
Einstein, A. 1906 Eine neue Bestimmung der Moleküldimensionen. Ann. Phys. 324 (2), 289–306.
Elghobashi, S. 1994 On predicting particle-laden turbulent flows. Appl. Sci. Res. 52 (4), 309–329.
Fardin, M. A., Perge, C. & Taberlet, N. 2014 The hydrogen atom of fluid dynamics – introduction to the Taylor–Couette flow for soft matter scientists. Soft Matt. 10, 3523–3535.
Fiabane, L., Zimmermann, R., Volk, R., Pinton, J.-F. & Bourgoin, M. 2012 Clustering of finite-size particles in turbulence. Phys. Rev. E 86 (3), 035301.
Fujiwara, A., Minato, D. & Hishida, K. 2004 Effect of bubble diameter on modification of turbulence in an upward pipe flow. Intl J. Heat Fluid Flow 25 (3), 481–488.
van Gils, D. P., Bruggert, G.-W., Lathrop, D. P., Sun, C. & Lohse, D. 2011 The Twente Turbulent Taylor–Couette (T3C) facility: strongly turbulent (multi-phase) flow between independently rotating cylinders. Rev. Sci. Instrum. 82, 025105.
van Gils, D. P., Narezo Guzman, D., Sun, C. & Lohse, D. 2013 The importance of bubble deformability for strong drag reduction in bubbly turbulent Taylor–Couette flow. J. Fluid Mech. 722, 317–347.
Glycerine Producers' Association 1963 Physical Properties of Glycerine and its Solutions. Glycerine Producers' Association.
Gore, R. A. & Crowe, C. T. 1989 Effect of particle size on modulating turbulent intensity. Intl J. Multiphase Flow 15 (2), 279–285.
Grossmann, S., Lohse, D. & Sun, C. 2016 High-Reynolds number Taylor–Couette turbulence. Annu. Rev. Fluid Mech. 48, 53–80.
Huisman, S. G., van Gils, D. P. & Sun, C. 2012 Applying laser Doppler anemometry inside a Taylor–Couette geometry using a ray-tracer to correct for curvature effects. Eur. J. Mech. (B/Fluids) 36, 115–119.
Huisman, S. G., Scharnowski, S., Cierpka, C., Kähler, C. J., Lohse, D. & Sun, C. 2013 Logarithmic boundary layers in strong Taylor–Couette turbulence. Phys. Rev. Lett. 110, 264501.
Huisman, S. G., van der Veen, R. C., Sun, C. & Lohse, D. 2014 Multiple states in highly turbulent Taylor–Couette flow. Nat. Comm. 5, 3820.
Kazerooni, H. T., Fornari, W., Hussong, J. & Brandt, L. 2017 Inertial migration in dilute and semidilute suspensions of rigid particles in laminar square duct flow. Phys. Fluids 2, 084301.Google Scholar
Kidanemariam, A. G., Chan-Braun, C., Doychev, T. & Uhlmann, M. 2013 Direct numerical simulation of horizontal open channel flow with finite-size, heavy particles at low solid volume fraction. New J. Phys. 15, 025031.CrossRefGoogle Scholar
Kulick, J. D., Fessler, J. R. & Eaton, J. K. 1994 Particle response and turbulence modification in fully developed channel flow. J. Fluid Mech. 277, 109–134.CrossRefGoogle Scholar
Lashgari, I., Picano, F., Breugem, W.-P. & Brandt, L. 2014 Laminar, turbulent, and inertial shear-thickening regimes in channel flow of neutrally buoyant particle suspensions. Phys. Rev. Lett. 113, 254502.CrossRefGoogle ScholarPubMed
Lathrop, D. P., Fineberg, J. & Swinney, H. L. 1992 Turbulent flow between concentric rotating cylinders at large Reynolds number. Phys. Rev. Lett. 68, 1515.CrossRefGoogle ScholarPubMed
Lewis, G. S. & Swinney, H. L. 1999 Velocity structure functions, scaling, and transitions in high-Reynolds-number Couette–Taylor flow. Phys. Rev. E 59, 5457.Google ScholarPubMed
Liu, T. J. & Bankoff, S. G. 1993 Structure of air–water bubbly flow in a vertical pipe – I. Liquid mean velocity and turbulence measurements. Intl J. Heat Mass Transfer 36 (4), 1049–1060.CrossRefGoogle Scholar
Lu, J., Fernández, A. & Tryggvason, G. 2005 The effect of bubbles on the wall drag in a turbulent channel flow. Phys. Fluids 17, 095102.CrossRefGoogle Scholar
Machicoane, N. & Volk, R. 2016 Lagrangian velocity and acceleration correlations of large inertial particles in a closed turbulent flow. Phys. Fluids 28, 035113.CrossRefGoogle Scholar
Maryami, R., Farahat, S., Javad poor, M. & Shafiei Mayam, M. H. 2014 Bubbly drag reduction in a vertical Couette–Taylor system with superimposed axial flow. Fluid Dyn. Res. 46 (5), 055504.CrossRefGoogle Scholar
Mathai, V., Calzavarini, E., Brons, J., Sun, C. & Lohse, D. 2016 Microbubbles and microparticles are not faithful tracers of turbulent acceleration. Phys. Rev. Lett. 117, 024501.CrossRefGoogle Scholar
Mathai, V., Huisman, S. G., Sun, C., Lohse, D. & Bourgoin, M.2018 Enhanced dispersion of big bubbles in turbulence. Available at: arXiv:1801.05461.Google Scholar
Mathai, V., Prakash, V. N., Brons, J., Sun, C. & Lohse, D. 2015 Wake-driven dynamics of finite-sized buoyant spheres in turbulence. Phys. Rev. Lett. 115, 124501.CrossRefGoogle ScholarPubMed
Mazzitelli, I. M., Lohse, D. & Toschi, F. 2003 The effect of microbubbles on developed turbulence. Phys. Fluids 15, L5.CrossRefGoogle Scholar
Murai, Y. 2014 Frictional drag reduction by bubble injection. Exp. Fluids 55 (7), 1773.CrossRefGoogle Scholar
Muste, M. & Patel, V. C. 1997 Velocity profiles for particles and liquid in open-channel flow with suspended sediment. ASCE J. Hydraul. Engng 123 (9), 742–751.CrossRefGoogle Scholar
Naso, A. & Prosperetti, A. 2010 The interaction between a solid particle and a turbulent flow. New J. Phys. 12, 033040.CrossRefGoogle Scholar
Ostilla-Mónico, R., Stevens, R. J. A. M., Grossmann, S., Verzicco, R. & Lohse, D. 2013 Optimal Taylor–Couette flow: direct numerical simulations. J. Fluid Mech. 719, 14–46.CrossRefGoogle Scholar
Pan, Y. & Banerjee, S. 1996 Numerical simulation of particle interactions with wall turbulence. Phys. Fluids 8, 2733.CrossRefGoogle Scholar
Paoletti, M. S. & Lathrop, D. P. 2011 Angular momentum transport in turbulent flow between independently rotating cylinders. Phys. Rev. Lett. 106, 024501.CrossRefGoogle ScholarPubMed
Peskin, C. S. 2002 The immersed boundary method. Acta Numerica 11, 479–517.CrossRefGoogle Scholar
Picano, F., Breugem, W.-P. & Brandt, L. 2015 Turbulent channel flow of dense suspensions of neutrally buoyant spheres. J. Fluid Mech. 764, 463–487.CrossRefGoogle Scholar
Poelma, C., Westerweel, J. & Ooms, G. 2007 Particle–fluid interactions in grid-generated turbulence. J. Fluid Mech. 589, 315–351.CrossRefGoogle Scholar
Procaccia, I., L'vov, V. S. & Benzi, R. 2008 Colloquium: theory of drag reduction by polymers in wall-bounded turbulence. Rev. Mod. Phys. 80 (1), 225–247.CrossRefGoogle Scholar
Roghair, I., Mercado, J. M., Annaland, M. V. S., Kuipers, H., Sun, C. & Lohse, D. 2011 Energy spectra and bubble velocity distributions in pseudo-turbulence: numerical simulations versus experiments. Intl J. Multiphase Flow 37 (9), 1093–1098.CrossRefGoogle Scholar
Shawkat, M. E., Ching, C. Y. & Shoukri, M. 2008 Bubble and liquid turbulence characteristics of bubbly flow in a large diameter vertical pipe. Intl J. Multiphase Flow 34 (8), 767–785.CrossRefGoogle Scholar
So, S., Morikita, H., Takagi, S. & Matsumoto, Y. 2002 Laser Doppler velocimetry measurement of turbulent bubbly channel flow. Exp. Fluids 33 (1), 135–142.CrossRefGoogle Scholar
Stickel, J. J. & Powell, R. L. 2005 Fluid mechanics and rheology of dense suspensions. Annu. Rev. Fluid Mech. 37, 129–149.CrossRefGoogle Scholar
Tagawa, Y., Roghair, I., Prakash, V. N., van Sint Annaland, M., Kuipers, H., Sun, C. & Lohse, D. 2013 The clustering morphology of freely rising deformable bubbles. J. Fluid Mech. 721, R2.CrossRefGoogle Scholar
Toschi, F. & Bodenschatz, E. 2009 Lagrangian properties of particles in turbulence. Annu. Rev. Fluid Mech. 41, 375–404.CrossRefGoogle Scholar
Tsuji, Y., Morikawa, Y. & Shiomi, H. 1984 LDV measurements of an air–solid two-phase flow in a vertical pipe. J. Fluid Mech. 139, 417–434.CrossRefGoogle Scholar
Uhlmann, M. 2008 Interface-resolved direct numerical simulation of vertical particulate channel flow in the turbulent regime. Phys. Fluids 20, 053305.CrossRefGoogle Scholar
Unverdi, S. O. & Tryggvason, G. 1992 A front-tracking method for viscous, incompressible, multi-fluid flows. J. Comput. Phys. 100 (1), 25–37.CrossRefGoogle Scholar
Verschoof, R. A., van der Veen, R. C., Sun, C. & Lohse, D. 2016 Bubble drag reduction requires large bubbles. Phys. Rev. Lett. 117, 104502.CrossRefGoogle ScholarPubMed
Vreman, A. W. 2015 Turbulence attenuation in particle-laden flow in smooth and rough channels. J. Fluid Mech. 773, 103–136.CrossRefGoogle Scholar
Wang, Y., Sierakowski, A. J. & Prosperetti, A. 2017 Fully-resolved simulation of particulate flows with particles–fluid heat transfer. J. Comput. Phys. 350, 638–656.CrossRefGoogle Scholar
White, C. M. & Mungal, M. G. 2008 Mechanics and prediction of turbulent drag reduction with polymer additives. Annu. Rev. Fluid Mech. 40, 235–256.CrossRefGoogle Scholar
Zhao, L. H., Andersson, H. I. & Gillissen, J. J. J. 2010 Turbulence modulation and drag reduction by spherical particles. Phys. Fluids 22, 081702.CrossRefGoogle Scholar
Figure 1. Schematic of the TC set-up: two concentric cylinders of radii $r_{i,o}$ with a working fluid in between. Particles are not to scale. The inner cylinder rotates with angular velocity $\omega_{i}$, while the outer cylinder is kept at rest. We measure the torque on the middle section (highlighted). The laser Doppler anemometry (LDA) probe is positioned at midheight to measure the azimuthal velocity at midgap.
Figure 2. (a) A plot of $\mathit{Nu}_{\omega}(\mathit{Ta})$ for 2 % particle volume fraction with particle diameters of 1.5 mm, 4.0 mm and 8.0 mm, and for comparison the single-phase case. Data from comparable bubbly drag reduction studies are plotted using black markers. (b) The same data, but now as a compensated plot $\mathit{Nu}_{\omega}/\mathit{Ta}^{0.40}$ as a function of $\mathit{Ta}$. The error bar indicates the maximum deviation for repeated measurements from all measurements combined (coloured curves), which is less than 1 %. At $\mathit{Ta}\geqslant 2\times 10^{12}$, the 1.5 mm particles show an increased uncertainty of 1.7 %, which is indicated by the right error bar.
Figure 3. (a) A plot of $\mathit{Nu}_{\omega}(\mathit{Ta})$, compensated by $\mathit{Ta}^{0.4}$, for 8 mm particles with various particle volume fractions, and for comparison the single-phase case. (b) The drag reduction, defined as $\text{DR}=(1-\mathit{Nu}_{\omega}(\alpha)/\mathit{Nu}_{\omega}(\alpha=0))$, plotted against $\mathit{Ta}$.
Figure 4. (a) Drag reduction as function of particle volume fraction from ○ $d_{p}=8~\text{mm}$ particles from the present work compared with similar gas volume fractions from ▫ van den Berg et al. (2005), ♢ van Gils et al. (2013) and ▵ Verschoof et al. (2016). Symbols indicate the different studies while colours differentiate between the Reynolds numbers. The current work has DR defined as $(1-\mathit{Nu}_{\omega}(\alpha)/\mathit{Nu}_{\omega}(\alpha=0))$; the other studies use dimensionless torque $G$ (van Gils et al. 2013), friction coefficient $c_{f}$ (van den Berg et al. 2005) or plain torque $\tau$ (Verschoof et al. 2016) to define DR. (b) Zoom of the bottom part of (a) where the data from the present work are compared with bubbly drag reduction data using 6 ppm of surfactant from Verschoof et al. (2016).
Figure 5. (a) A plot of $\mathit{Ta}$ as a function of $\mathit{Nu}_{\omega}$ compensated by $\mathit{Ta}^{0.4}$ for various density ratios $\phi=\rho_{p}/\rho_{f}$ indicated by the corresponding colour. The darker shades indicate the single-phase cases while the lighter shades show the cases using 6 % particle volume fraction of 8 mm diameter particles. Due to the increase in viscosity, the maximum attainable $\mathit{Ta}$ is lower for larger density ratios. The uncertainty is again estimated using the maximum deviation from the average for multiple runs and here is only shown for the green curves. This value is slightly below 1 % at lower $\mathit{Ta}$ and decreases with increasing $\mathit{Ta}$ to values below 0.25 %. This trend is seen for all values of $\phi$. (b) The drag reduction, calculated from the data of (a), plotted against $\mathit{Ta}$. The drag reduction is defined as $\text{DR}=(1-\mathit{Nu}_{\omega}(\alpha=6\,\%)/\mathit{Nu}_{\omega}(\alpha=0))$.
Figure 7. A plot of $u_{\theta}$ normalized by the velocity of the inner cylinder wall $u_{i}$ as a function of the normalized radius for various values $d_{p}$ while $\alpha=2\,\%$ (a) and various values of $\alpha$ while $d_{p}=8~\text{mm}$ (b). In all cases, $\mathit{Re}_{i}$ is fixed to $1\times 10^{6}$. For comparison, the single-phase case using water at $\mathit{Re}_{i}=1\times 10^{6}$ from Huisman et al. (2013) is also plotted in dashed black in both plots. Both plots have an inset showing an enlargement of the centre area from the same plot.
Figure 8. Probability density functions of the normalized azimuthal velocity as a function of the normalized radial position for various values of $\alpha$ for the case of 8 mm particles and the single-phase case while keeping $\mathit{Re}$ at $1\times 10^{6}$. With increasing $\alpha$, the maximum penetration depth decreases. The grey areas indicate radial positions for which no data are available.
Preprint ARTICLE | doi:10.20944/preprints201905.0371.v1
Thermodynamic, non-extensive, or turbulent quasi equilibrium for space plasma environment
Peter Yoon
Subject: Physical Sciences, Astronomy & Astrophysics Keywords: non-extensive entropic principle; plasma turbulence; quasi equilibrium
Online: 31 May 2019 (06:12:02 CEST)
The Boltzmann-Gibbs (BG) entropy has been used in a wide variety of problems for more than a century. It is well known that the BG entropy is extensive, but for certain systems, such as those dictated by long-range interactions, the entropy must be non-extensive. The Tsallis entropy possesses non-extensive characteristics and is parametrized by a variable q (q = 1 being the classic BG limit), but unless q is determined from microscopic dynamics, the model remains only a phenomenological tool. To date, very few examples have emerged in which q can be computed from first principles. This paper shows that the space plasma environment, which is governed by long-range collective electromagnetic interaction, represents a perfect example for which the q parameter can be computed from micro-physics. By taking into account the electron velocity distribution function measured in the heliospheric environment, and considering it to be in a quasi-equilibrium state with the electrostatic turbulence known as the quasi-thermal noise, it is shown that the value q = 9/13 = 0.6923 may be deduced. This prediction is verified against observations made by spacecraft and is shown to be in excellent agreement.
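For reference, the Tsallis entropy mentioned above has the standard textbook form (it is not restated in the abstract itself):
$$ S_q \;=\; k\,\frac{1-\sum_i p_i^{\,q}}{q-1}, \qquad \lim_{q\to 1} S_q \;=\; -k\sum_i p_i \ln p_i , $$
so the q = 1 limit recovers the extensive Boltzmann-Gibbs entropy, while q ≠ 1 (such as the value q = 9/13 deduced here) quantifies the degree of non-extensivity.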
Quasi-Degree in Neutrosophic Graphs
Henry Garrett
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Quasi-Co-Degree; Quasi-Degree; Vertex
Online: 7 February 2022 (16:23:20 CET)
A new setting is introduced to study the quasi-degree and quasi-co-degree arising from co-neighborhoods. Both notions concern a fixed vertex and are applied in the setting of neutrosophic graphs. The structure of the corresponding sets is studied and general results are obtained. Some classes of neutrosophic graphs, namely path-neutrosophic, cycle-neutrosophic, complete-neutrosophic, star-neutrosophic, complete-bipartite-neutrosophic and complete-multipartite-neutrosophic graphs, are investigated in terms of a vertex that realizes either the quasi-degree or the quasi-co-degree. The neutrosophic number is reused in this way: the three values of a vertex contribute equally, and their sum gives a single number used for comparison with other vertices. This approach facilitates identifying the vertices that determine the quasi-degree and quasi-co-degree. The quasi-degree is the maximum value among all vertices that are neighbors of a fixed vertex. The quasi-co-degree is the maximum value among all edges incident to a fixed vertex, with that vertex acting as the representative for the notion. Using the different values associated with a vertex motivates the focus on the edges and vertices corresponding to a fixed vertex, and the notion of neighborhood is used to collect either the vertices that are neighbors of, or the edges that are incident to, the fixed vertex. In both settings, some classes of well-known neutrosophic graphs are studied, and clarifications are provided for each result and each definition. Fixing a vertex plays a key role in formulating these notions in terms of vertices or edges: the value of an edge is eligible for the quasi-co-degree, whereas the value of a vertex is eligible for the quasi-degree. Some results provide further frameworks and perspectives on these definitions. The way in which two vertices are connected opens the way to define the neighborhood and co-neighborhood, and the maximum values over the neighborhood and co-neighborhood introduce the quasi-degree and quasi-co-degree, respectively. The new names are derived from "degree": among all vertices with different degrees, one vertex is fixed and its degree becomes the quasi-degree, where two degrees can be assigned to a vertex, the degree by edges and the degree by vertices, that is, the number of edges incident to the vertex and the number of vertices neighboring it. Degree and co-degree are thus the notions transformed for use in the quasi-style. Two neutrosophic values introduce two neutrosophic vertices separately in each setting. These notions are applied to individual neutrosophic graphs but not to families of them, which is a drawback of the approach. Finding special well-known neutrosophic graphs is an open direction for pursuing this study, and some problems are proposed for future work. Basic familiarity with graph theory and neutrosophic graph theory is assumed.
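The two central definitions above (the maximum neutrosophic value over the vertices neighboring a fixed vertex, and over the edges incident to it) are simple enough to state as a short sketch. The Python snippet below is illustrative only; the adjacency and value dictionaries are hypothetical and it is not code from the paper.

def neutrosophic_value(triple):
    # Collapse a neutrosophic triple (truth, indeterminacy, falsity)
    # into a single comparable number by summing the three values.
    return sum(triple)

def quasi_degree(v, neighbors, vertex_values):
    # Maximum neutrosophic value among the vertices adjacent to v.
    return max(neutrosophic_value(vertex_values[u]) for u in neighbors[v])

def quasi_co_degree(v, incident_edges, edge_values):
    # Maximum neutrosophic value among the edges incident to v.
    return max(neutrosophic_value(edge_values[e]) for e in incident_edges[v])

# Example on a hypothetical path-neutrosophic graph a - b - c
neighbors = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
incident_edges = {"a": [("a", "b")], "b": [("a", "b"), ("b", "c")], "c": [("b", "c")]}
vertex_values = {"a": (0.2, 0.5, 0.1), "b": (0.6, 0.2, 0.1), "c": (0.4, 0.3, 0.3)}
edge_values = {("a", "b"): (0.3, 0.2, 0.1), ("b", "c"): (0.5, 0.1, 0.2)}
print(quasi_degree("b", neighbors, vertex_values))       # maximum over a and c
print(quasi_co_degree("b", incident_edges, edge_values))  # maximum over the two incident edges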
Some Results in Classes Of Neutrosophic Graphs
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Neutrosophic Quasi-Order; Neutrosophic Quasi-Size; Neutrosophic Quasi-Number; Neutrosophic Quasi-Co-Number; Neutrosophic Co-t-Neighborhood
Online: 17 March 2022 (08:48:38 CET)
A new setting is introduced to study the co-neighborhood, neutrosophic t-neighborhood, neutrosophic quasi-vertex set, neutrosophic quasi-order, neutrosophic neighborhood, neutrosophic co-t-neighborhood, neutrosophic quasi-edge set, neutrosophic quasi-size, neutrosophic number, neutrosophic co-neighborhood, co-neutrosophic number, quasi-number and quasi-co-number. Some classes of neutrosophic graphs are investigated.
Non- and Quasi-Equilibrium Multi-Phase Field Methods Coupled with CALPHAD Database for Rapid-Solidification Microstructural Evolution in Laser Powder Bed Additive Manufacturing Condition
Sukeharu Nomoto, Masahito Segawa, Makoto Watanabe
Subject: Materials Science, Biomaterials Keywords: additive manufacturing; rapid solidification; microstructural evolution; non-equilibrium; quasi-equilibrium; multi-phase field method; CALPHAD database; nickel alloy
Solidification microstructure is formed under high cooling rates and temperature gradients in powder-based additive manufacturing. In this study, a non-equilibrium multi-phase field method (MPFM), based on the finite interface dissipation model proposed by Steinbach et al., coupled with a CALPHAD database was developed for a multicomponent Ni alloy. A quasi-equilibrium MPFM was also developed for comparison. Two-dimensional equiaxed microstructural evolution for the Ni (Bal.)–Al–Co–Cr–Mo–Ta–Ti–W–C alloy was simulated at various cooling rates. The temperature–γ fraction profiles obtained at 10^5 K/s using the non- and quasi-equilibrium MPFMs were in good agreement with each other. Above 10^6 K/s, the differences between the non- and quasi-equilibrium methods grew as the cooling rate increased, and non-equilibrium solidification was strengthened above a cooling rate of 10^6 K/s. Columnar-solidification microstructural evolution was simulated at cooling rates from 5×10^5 K/s to 1×10^7 K/s for various temperature gradients at a constant interface velocity (0.1 m/s). The results showed that as the cooling rate increased, the cell spacing decreased in both methods, and the non-equilibrium MPFM agreed well with experimental measurements. Our results show that the non-equilibrium MPFM can simulate the solidification microstructure in powder bed fusion additive manufacturing.
The Quasi Steady State Cosmology in a Radiation Dominated Phase
Raj Bali
Subject: Physical Sciences, Astronomy & Astrophysics Keywords: Quasi Steady State Cosmology, Radiation Phase
Online: 19 November 2018 (16:48:10 CET)
Analytical solutions for the radiation-dominated phase of the Quasi Steady State Cosmology (QSSC) in Friedmann–Robertson–Walker (FRW) models are obtained. We find that the matter density is positive in all the cases (k = 0, −1, 1). The nature of the Hubble parameter (H) in [0,2] is discussed. The deceleration parameter (q) is marginally less than zero, indicating an accelerating universe. The scale factor (S) is shown graphically as a function of time. The model represents an oscillating universe between the above-mentioned limits. Because of the bounce in QSSC, the maximum density phase is still matter dominated. The models are singularity free. We also find that the models have an event horizon, i.e. no observers beyond the proper distance rH can communicate with each other in FRW models for the radiation-dominated phase in the framework of QSSC. The FRW models are special classes of Bianchi type I, V, IX space-times with zero, negative and positive curvature, respectively. Initially, i.e. at = 0, the model represents a steady state model. We have tried to show how a good fit to the observations can be obtained in the framework of QSSC during the radiation-dominated phase.
The Seasonal Variability of the Ocean Energy Cycle From a Quasi-Geostrophic Double Gyre Ensemble
Takaya Uchida, Bruno Deremble, Thierry Penduff
Subject: Earth Sciences, Atmospheric Science Keywords: Ocean circulation; Geostrophic turbulence; Quasi-geostrophic flows
With the advent of submesoscale O(1 km) permitting basin-scale ocean simulations, the seasonality in the mesoscale O(50 km) eddies, with kinetic energies peaking in summer, has commonly been attributed to submesoscale eddies feeding back onto the mesoscale via an inverse energy cascade under the constraint of stratification and Earth's rotation. In contrast, by running a 101-member, seasonally forced, three-layer quasi-geostrophic (QG) ensemble configured to represent an idealized double-gyre system of a subtropical and subpolar basin, we find that the mesoscale kinetic energy shows a seasonality consistent with the summer peak without resolving the submesoscales; by definition, a QG model only resolves small-Rossby-number dynamics (O(Ro)≪1), whereas submesoscale dynamics are associated with O(Ro)∼1. Here, by quantifying the Lorenz cycle of the mean and eddy energy, defined as the ensemble mean and the fluctuations about it respectively, we propose a mechanism different from the inverse energy cascade, by which the stabilization and strengthening of the western-boundary current during summer, due to increased stratification, leads to the shedding of stronger mesoscale eddies from the separated jet. Conversely, the opposite occurs during winter: the separated jet destabilizes and results in overall lower mean and eddy kinetic energies, despite the domain being more susceptible to baroclinic instability from the weaker stratification.
Observed Quasi 16-Day Wave by Meteor Radar Over 9 Years at Mengcheng (33.4°N, 116.5°E) and Comparison with the WACCM Simulation
Chengyun Yang, Dexin Lai, Wen Yi, Jianfei Wu, Xianghui Xue, Tao Li, Tingdi Chen, Xiankang Dou
Subject: Earth Sciences, Atmospheric Science Keywords: meteor radar; quasi 16-day wave; mesospheric dynamics
Online: 19 December 2022 (02:51:19 CET)
In this study, we present more than 8 years of observations of the quasi-16-day wave (Q16DW) in the mesosphere and lower thermosphere (MLT) wind at middle latitudes, measured by the Mengcheng (33.4°N, 116.5°E) meteor radar. The long-term variation in amplitudes calculated from the data between April 2014 and December 2022 shows enhanced wave activity during winter and early spring (near equinox) and suppressed wave activity during summer. The Q16DWs are relatively weak in the meridional wind. During the winter months, the Q16DWs in the zonal component exhibit a burst below 85 km, and their amplitudes reach up to 10 m/s. In early spring, the Q16DWs strengthen above 90 km, with amplitudes in excess of 12 m/s. The phase differences between the zonal and meridional components of the Q16DW are, on average, slightly smaller than 90°, suggesting an orthogonal relationship between them. During strong bursts, the periods of the Q16DW in winter range between 15 and 18 days, whereas in winter, the periods tend to be more diffuse. The wintertime Q16DW is amplified, on average, when the zonal wind shear peaks, suggesting that barotropic instability may be one source of the Q16DW. Q16DW amplitudes exhibit considerable interannual variability; however, a relationship between the 11-year solar cycle and the Q16DW is not found.
Rossby Waves in Total Ozone over the Arctic in 2000–2021
Chenning Zhang, Asen Grytsai, Oleksandr Evtushevsky, Gennadi Milinevsky, Yulia Andrienko, Valery Shulga, Andrew Klekociuk, Yuriy Rapoport, Wei Han
Subject: Earth Sciences, Atmospheric Science Keywords: Rossby wave; quasi-stationary wave; stratosphere; Arctic; ozone
Online: 29 March 2022 (11:26:45 CEST)
The purpose of this work is to study Rossby wave parameters in total ozone over the Arctic in 2000–2021. We consider averages over the January–March period, when stratospheric trace gases (including ozone) are strongly disturbed by planetary waves during sudden stratospheric warming events. To characterize the wave parameters, we analyzed ozone data at the latitudes of 50° N (the sub-vortex area), 60° N (the polar vortex edge) and 70° N (the inner region of the polar vortex). Total ozone column (TOC) measurements over the 22-year interval were taken from Total Ozone Mapping Spectrometer (TOMS) / Earth Probe and Ozone Mapping Instrument (OMI) / Aura satellite observations. The total ozone zonal distribution and variations in the parameters of the Fourier spectral components with zonal wave numbers m = 1–5 are presented. Daily and interannual variations in TOC, the amplitudes and phases of the spectral wave components, and linear trends of the quasi-stationary wave 1 (QSW1) amplitudes are discussed. The positive TOC peaks inside the vortex in 2010 and 2018 alternate with negative ones in 2011 and 2020. The latter TOC anomalies correspond to severe depletion of stratospheric ozone over the Arctic under strong vortex conditions due to anomalously low planetary wave activity. Variations in TOC in the sub-vortex region exhibit a statistically significant negative trend of –4.8±5.4 DU per decade in the QSW1 amplitude, while the trend at the vortex edge is statistically insignificant due to increased TOC variability. Processes associated with polar vortex dynamics are discussed, including quasi-stationary vortex asymmetry and quasi-circumpolar migration of the wave-1 phase at the vortex edge.
Deterministic Model of the Eddy Dynamics for a Midlatitude Ocean Model
Takaya Uchida, Bruno Deremble, Stephane Popinet
Subject: Earth Sciences, Atmospheric Science Keywords: Mesoscale eddy closure; Quasi geostrophy; Mid-latitude double gyre
Mesoscale eddies, the weather systems of the ocean, although only on scales of O(20–100 km), play a disproportionate role in shaping mean jets such as the separated Gulf Stream in the North Atlantic Ocean, which is on the scale of O(1000 km) in the along-jet direction. With the increase in computational power, we are now able to partially resolve the eddies in basin-scale and global ocean simulations, a model resolution often referred to as mesoscale permitting. It is well known, however, that due to grid-scale numerical viscosity, mesoscale-permitting simulations have less energetic eddies and consequently weaker eddy feedback onto the mean flow. In this study, we run a quasi-geostrophic model at mesoscale-resolving resolution in a double-gyre configuration and formulate a deterministic parametrization for the eddy rectification term of potential vorticity (PV), namely, the eddy PV flux divergence. We have moderate success in reproducing the spatial patterns and magnitude of eddy kinetic and potential energy diagnosed from the model. One novel point about our approach is that we account for non-local eddy feedbacks onto the mean flow by solving the eddy PV equation prognostically in addition to the mean flow. In return, we are able to parametrize the variability in total (mean+eddy) PV at each time step instead of solely the mean PV. A closure for the total PV is beneficial as we are able to account for both the mean state and extreme events.
Quasi-stationary Strength of ECAP-processed Cu-Zr at 0.5Tm
Wolfgang Blum, Jiri Dvorak, Petr Kral, Philip Eisenlohr, Vaclav Sklenicka
Subject: Materials Science, General Materials Science Keywords: Cu-Zr; ECAP; deformation; quasi-stationary; subgrains; grains; coarsening
Online: 6 September 2019 (12:15:47 CEST)
The influence of the grain structure on the tensile deformation strength is studied for precipitation-strengthened Cu-0.2%Zr at 673 K. Subgrains and grains are formed by ECAP and annealing. The fraction of high-angle boundaries increases with prestrain. Subgrains and grains coarsen during deformation, which leads to softening in the quasi-stationary state. The initial quasi-stationary state of severely predeformed, ultrafine-grained material exhibits relatively high rate-sensitivity at relatively high stresses. This is interpreted as a result of the stress dependences of the quasi-stationary subgrain size and the volume fraction of subgrain-free grains.
Hybrid PV-Wind Micro-Grid Development Using Quasi-Z-Source Inverter Modeling and Control –Experimental Investigation
Neeraj Priyadarshi, Sanjeevikumar Padmanaban, Dan M. Ionel, Lucian Mihet-Popa, Farooque Azam
Subject: Engineering, Electrical & Electronic Engineering Keywords: PV; MPRVS; quasi Z-source inverter; MPP; SEPIC converter
Online: 4 June 2018 (12:19:02 CEST)
This research work deals with the modeling and control of a hybrid photovoltaic (PV)–wind micro-grid using a quasi-Z-source inverter. Compared with a conventional inverter, its major benefits are better buck/boost characteristics, the ability to regulate the output phase angle, lower harmonic content, no filter requirement and high power performance. A SEPIC converter is employed as the dc-dc switched power apparatus for maximum power point tracking (MPPT) and provides high voltage gain throughout the process. Moreover, a modified power ratio variable step (MPRVS) based perturb & observe (P&O) method is proposed for the PV MPPT action, which forces the operating point close to the maximum power point (MPP). Practical responses justify the performance of the hybrid PV-wind micro-grid with the quasi-Z-source inverter structure.
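The MPRVS scheme itself is specific to this paper, but it builds on the classical perturb & observe loop. As a rough, hypothetical illustration only (with a fixed step size standing in for the paper's variable step, whose exact rule is not given here), one P&O iteration can be sketched as:

def perturb_and_observe(v_meas, i_meas, v_ref_prev, p_prev, step=0.5):
    # One iteration of the classical P&O MPPT rule with a fixed step.
    # v_meas, i_meas: measured PV voltage and current
    # v_ref_prev, p_prev: voltage reference and power from the previous iteration
    p = v_meas * i_meas
    dp, dv = p - p_prev, v_meas - v_ref_prev
    if dp == 0:
        v_ref = v_meas              # power unchanged: hold the operating point
    elif (dp > 0) == (dv > 0):
        v_ref = v_meas + step       # last perturbation helped: keep the direction
    else:
        v_ref = v_meas - step       # last perturbation hurt: reverse the direction
    return v_ref, p

A variable-step variant such as MPRVS modifies how the step is chosen at each iteration; the constant step above is only a placeholder for that rule.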
Applying the Cracking Elements Method for Analyzing the Damaging Processes of Structures with Fissures
Qianqian Dong, Jie Wu, Zizheng Sun, Xiao Yan, Yiming Zhang
Subject: Engineering, Mechanical Engineering Keywords: Quasi-brittle material; cracking elements method; Uni-axial compression tests
Online: 24 August 2020 (05:59:51 CEST)
In this work, the recently proposed cracking elements method (CEM) is used to simulate the damage processes of structures with initial imperfections. The CEM is built within the framework of conventional FEM and is formally implemented as a special type of finite element. Disconnected piecewise cracks are used to represent the crack paths. Taking advantage of the CEM's ability to capture naturally both the initiation and the propagation of cracks, we numerically study uni-axial compression tests of specimens with multiple joints and fissures, where cracks may propagate from the tips or from other, unexpected positions. Although uni-axial compression tests are considered, mainly tensile damage criteria are used in the numerical model. On the one hand, the results demonstrate the robustness and effectiveness of the CEM; on the other hand, some drawbacks of the present model are revealed, indicating directions for future work.
Working Paper ARTICLE
Completeness in Quasi-Pseudometric Spaces
Ştefan Cobzas
Subject: Mathematics & Computer Science, Geometry & Topology Keywords: quasi-pseudometric space; Cauchy sequence; Cauchy net; Cauchy filter; completeness
Online: 15 July 2020 (08:49:38 CEST)
The aim of this paper is to discuss the relations between various notions of sequential completeness and the corresponding notions of completeness by nets or by filters in the setting of quasi-metric spaces. We propose a new definition of right $K$-Cauchy net in a quasi-metric space for which the corresponding completeness is equivalent to the sequential completeness. In this way we complete some results of R. A. Stoltenberg, Proc. London Math. Soc. 17 (1967), 226–240, and V. Gregori and J. Ferrer, Proc. Lond. Math. Soc., III Ser., 49 (1984), 36. A discussion on nets defined over ordered or pre-ordered directed sets is also included.
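For context, the underlying notion has the standard definition (textbook material, not specific to this paper): a quasi-pseudometric on a set X is a map satisfying
$$ d : X \times X \to [0,\infty), \qquad d(x,x)=0, \qquad d(x,z) \le d(x,y) + d(y,z) \quad \text{for all } x,y,z \in X, $$
with no symmetry requirement $d(x,y)=d(y,x)$; it is this missing symmetry that splits Cauchy-type conditions and completeness into left and right variants, which is the subject of the paper.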
Preprint COMMUNICATION | doi:10.20944/preprints202102.0425.v1
Perspectives on the Yang-Baxter Equation in BCK-Algebras
Florin Nichita, Tahsin Oner, Tugce KALKAN, Ibrahim Senturk, Mehmet Terziler
Subject: Keywords: BCK-algebras; Yang-Baxter Equation; Quasi-negation operator; Boolean coalgebras; poetry
Online: 18 February 2021 (17:26:17 CET)
We present set-theoretical solutions of the Yang-Baxter equation in BCK-algebras. Some solutions in BCK-algebras are not solutions in other structures (such as MV-algebras). Related to our investigations we also consider some new structures: Boolean coalgebras and a unified braid condition – quantum Yang-Baxter equation. Finally, we will see how poetry has accompanied the development/history of the Yang-Baxter equation.
Working Paper REVIEW
Nano-(Q)SAR for Cytotoxicity Prediction of Engineered Nanomaterials
Andrey A. Buglak, Anatoly V. Zherdev, Boris B. Dzantiev
Subject: Materials Science, Nanotechnology Keywords: engineered nanomaterials; safety of nanomaterials; toxicological tests; modeling; descriptors; quasi-qsar
Online: 31 October 2019 (09:38:45 CET)
Although nanotechnology is a new and rapidly growing area of science, the impact of nanomaterials on living organisms is still unknown in many respects. It is therefore extremely important to perform toxicological tests, but complete characterization of all the varying preparations is extremely laborious. The computational technique called quantitative structure-activity relationship (QSAR) makes it possible to reduce the cost of time- and resource-consuming nanotoxicity tests. In this review, (Q)SAR cytotoxicity studies of the past decade are systematically considered. We consider here five classes of engineered nanomaterials (ENMs): metal oxides, metal-containing nanoparticles, multi-walled carbon nanotubes, fullerenes, and silica nanoparticles. Some studies conclude that QSAR models are better than classification SAR models, while other reports conclude that SAR is more precise than QSAR. The quasi-QSAR method appears to be the most promising tool, as it allows experimental conditions to be taken into account accurately. However, experimental artifacts are a major concern in this case.
Transversely Modulated Wave Packet
Vladimir N. Salomatov
Subject: Physical Sciences, General & Theoretical Physics Keywords: quasi-monochromatic waves; group velocity; dispersion relation; longitudinal modulation; coherence time
A wave packet consisting of two harmonic plane waves with the same frequency but different wave vectors is considered. The dispersion relation of the packet is structurally similar to the dispersion relation of a relativistic particle with non-zero rest mass. The possibility of controlling the group velocity of a quasi-monochromatic wave packet by varying the angle between the wave vectors of its constituent waves is discussed.
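A short worked equation, following directly from the stated setup (and not quoted from the paper), makes the analogy explicit. Superposing two plane waves of equal frequency $\omega$ and wave vectors $\mathbf{k}_1$, $\mathbf{k}_2$ of equal magnitude $k$, with $\mathbf{k}_\parallel=(\mathbf{k}_1+\mathbf{k}_2)/2$ and $\mathbf{k}_\perp=(\mathbf{k}_1-\mathbf{k}_2)/2$ (mutually orthogonal), gives
$$ e^{\,i(\mathbf{k}_1\cdot\mathbf{r}-\omega t)} + e^{\,i(\mathbf{k}_2\cdot\mathbf{r}-\omega t)} = 2\cos(\mathbf{k}_\perp\cdot\mathbf{r})\, e^{\,i(\mathbf{k}_\parallel\cdot\mathbf{r}-\omega t)}, \qquad \omega^2 = v^2 k^2 = v^2 k_\parallel^2 + v^2 k_\perp^2 , $$
so for a fixed transverse modulation the carrier obeys a dispersion relation of the form $\omega^2 = v^2 k_\parallel^2 + \text{const}$, structurally the same as $\omega^2 = c^2 k^2 + (mc^2/\hbar)^2$ for a relativistic particle, and the group velocity along the carrier, $d\omega/dk_\parallel = v\,k_\parallel/k = v\cos(\theta/2)$, is controlled by the angle $\theta$ between the two wave vectors.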
On Unsteady Three-Dimensional Axisymmetric MHD Nanofluid Flow with Entropy Generation and Thermo-Diffusion Effects
Mohammed Almakki, Sharadia Dey, Sabyasachi Mondal, Precious Sibanda
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Unsteady 3-D axisymmetric nanofluid; Entropy generation; Spectral quasi-linearization method.
We investigate entropy generation in unsteady three-dimensional axisymmetric MHD nanofluid flow over a non-linearly stretching sheet. The flow is subject to thermal radiation and a chemical reaction. The conservation equations were solved using the spectral quasi-linearization method. The novelty of the work lies in the study of entropy generation in a three-dimensional axisymmetric MHD nanofluid and in the choice of the spectral quasi-linearization method as the solution method. The effects of Brownian motion and thermophoresis are also taken into account when the nanofluid particle volume fraction on the boundary is passively controlled. The results show that as the Hartmann number increases, both the Nusselt number and the Sherwood number decrease, whereas the skin friction increases. It is further shown that an increase in the thermal radiation parameter corresponds to a decrease in the Nusselt number. Moreover, entropy generation increases with the physical parameters.
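The abstract does not spell out the quasi-linearization step; as a reminder of the general idea only (the standard Newton-Kantorovich linearization, which may differ in detail from the scheme used here), a nonlinear equation $F(y'',y',y)=0$ is replaced at iteration $r+1$ by the linear problem
$$ F_{y''}\big|_{r}\, y_{r+1}'' + F_{y'}\big|_{r}\, y_{r+1}' + F_{y}\big|_{r}\, y_{r+1} \;=\; F_{y''}\big|_{r}\, y_{r}'' + F_{y'}\big|_{r}\, y_{r}' + F_{y}\big|_{r}\, y_{r} - F\big(y_{r}'', y_{r}', y_{r}\big), $$
with the partial derivatives evaluated at the previous iterate $y_r$; each linear problem is then discretized, typically by Chebyshev spectral collocation, which is what makes the method "spectral".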
RC Quasi-Distributed Sensor With Tree-Like Structure Adaptable for Physical Fields Measurement
Evgeny Denisov, Ilnaz Shafigullin
Subject: Engineering, Electrical & Electronic Engineering Keywords: quasi-distributed sensor; tree-like structure; sensitive RC elements; physical fields measurements
Online: 24 October 2022 (13:50:50 CEST)
The paper presents a new concept of a quasi-distributed sensor for the simultaneous measurement of several physical fields, together with the results of an experimental study of this sensor. A distinctive feature of the sensor is that its sensitive RC elements are connected in an original tree-like structure. The structure of the sensor and of the measurement system is proposed, as well as the corresponding measurement algorithm. The high accuracy demonstrated by the sensor prototype makes it possible to use the proposed sensors effectively in many technical and scientific applications.
Three-Dimensional Finite Element Investigation Into Effects of Implant Thread Design and Loading Rate on Stress Distribution in Dental Implants and Anisotropic Bone
Dawit Bogale Alemayehu, Yeau Ren Jeng
Subject: Engineering, Biomedical & Chemical Engineering Keywords: quasi-static load; abutment screw; dental implant; finite element method; dynamic load; mesiodistal
Online: 13 September 2021 (15:55:30 CEST)
Variations in the implant thread shape and occlusal load behavior may result in significant changes in the biological and mechanical properties of dental implants and surrounding bone tissue. Most previous studies consider a single implant thread design, an isotropic bone structure, and a static occlusal load. However, the effects of different thread designs, bone material properties, and loading conditions are important concerns in clinical practice. Accordingly, the present study performs Finite Element Analysis (FEA) simulations to investigate the static, quasi-static and dynamic response of the implant and implanted bone material under various thread designs and occlusal loading directions (buccal-lingual, mesiodistal and apical). The simulations focus specifically on the von Mises stress, displacement, shear stress, compressive stress and tensile stress within the implant and the surrounding bone. The results show that the thread design and occlusal loading rate have a significant effect on the stress distribution and deformation of the implant and bone structure during clinical applications. Overall, the results provide a useful insight into the design of enhanced dental implants for an improved load transfer efficiency and success rate.
Comparative Genomics of Global SARS-CoV-2 Quasispecies Offers Insights into Its Microevolution and Holds Implications for Pathogenesis and Control
Santi M. Mandal, Suresh K. Mondol, Shriparna Mukherjee, Wriddhiman Ghosh, Ranadhir Chakraborty
Subject: Biology, Other Keywords: comparative genomics; SARS-CoV-2; microevolution; quasi-species; point mutation; disinfectants as mutagens
In the wake of the current SARS-CoV-2 pandemic devastating the world, it is imperative to elucidate the comparative genomics of geographically diverse strains of this novel coronavirus to gain insights into its microevolution, pathogenesis and control. Here we explore the molecular nature, genome-wide frequency, and gene-wise distribution of mutations in three distinct datasets encompassing 68 SARS-CoV-2 RNA genomes altogether. While phylogenomic analysis revealed parallelism between the evolutionary paths charted by distinct quasispecies clusters of the virus, the occurrence of mutations across genomes was found to be non-random. Whereas deletion mutations are extremely scarce and insertions totally absent, the overwhelming majority of all detected single-nucleotide substitutions were transition mutations, with cytidine to uridine being the most prevalent type. The propensity of this transition could be attributed to hydrolytic deamination mediated by ultraviolet irradiation or bisulfite reagents, both of which find wide usage as sterilizers/disinfectants. Transversions, albeit few and predominated by the guanosine to uridine form, were found concentrated in loci encoding the structural proteins of the virus, so they might confer versatile tissue-colonization potentials. The mutation frequency of the three distinct genome sets ranged narrowly between 0.07 and 1.08 × 10^-4 nucleotide positions mutated per nucleotide aligned. Gene-wise mapping of the global mutations illuminated the highly conserved nature of the genes encoding the non-structural proteins Nsp7, Nsp8 (two essential cofactors of the viral RNA-dependent RNA polymerase) and Nsp9 (the Nsp8-interacting single-strand RNA-binding protein), plus the envelope protein E (involved in SARS-CoV-2 assembly, budding and pathogenesis). These mutation-free genomic loci and/or their protein products could be potent targets for future drug design/targeting.
Asymptotic Dynamics of a Class of Third Order Rational Difference Equations
Sk Sarif Hassan, Soma Mondal, Swagata Mandal, Chumki Sau
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: rational difference equations; local asymptotic stability; periodic; Quasi-Periodic and Fractal-like trajectory
Online: 8 April 2020 (04:05:22 CEST)
The asymptotic dynamics of the classes of rational difference equations (RDEs) of third order defined over the positive real line as $$\displaystyle{x_{n+1}=\frac{x_{n}}{ax_n+bx_{n-1}+cx_{n-2}}}, \displaystyle{x_{n+1}=\frac{x_{n-1}}{ax_n+bx_{n-1}+cx_{n-2}}}, \displaystyle{x_{n+1}=\frac{x_{n-2}}{ax_n+bx_{n-1}+cx_{n-2}}}$$ and $$\displaystyle{x_{n+1}=\frac{ax_n+bx_{n-1}+cx_{n-2}}{x_{n}}}, \displaystyle{x_{n+1}=\frac{ax_n+bx_{n-1}+cx_{n-2}}{x_{n-1}}}, \displaystyle{x_{n+1}=\frac{ax_n+bx_{n-1}+cx_{n-2}}{x_{n-2}}}$$ is investigated computationally, with theoretical discussions and examples. The parameters $a, b, c$ and the initial values $x_{-2}, x_{-1}$ and $x_0$ are all positive real numbers, so that the denominator is always positive. Several periodic solutions of the RDEs with high periods, as well as their inter- and intra-dynamical behaviours, are studied.
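Because the recurrences are given explicitly, their trajectories are easy to explore numerically. The sketch below (illustrative only; the parameter and initial values are hypothetical) iterates the first of the six equations:

def iterate_rde(a, b, c, x_m2, x_m1, x_0, n_steps=200):
    # Iterate x_{n+1} = x_n / (a*x_n + b*x_{n-1} + c*x_{n-2}),
    # starting from the initial values x_{-2}, x_{-1}, x_0.
    xs = [x_m2, x_m1, x_0]
    for _ in range(n_steps):
        xm2, xm1, xn = xs[-3], xs[-2], xs[-1]
        xs.append(xn / (a * xn + b * xm1 + c * xm2))
    return xs

# Inspect the tail of the orbit to look for periodic, quasi-periodic
# or irregular behaviour (hypothetical parameter values).
orbit = iterate_rde(a=0.5, b=0.3, c=0.2, x_m2=1.0, x_m1=2.0, x_0=0.7)
print(orbit[-10:])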
Application of Improved Quasi-Affine Transformation Evolutionary Algorithm in Power System Stabilizer Optimization
Jing Huang, Jiajing Liu, Cheng Zhang, Yu Kuang, Shaowei Weng
Subject: Engineering, Electrical & Electronic Engineering Keywords: Simulated annealing quasi-Affine Transformation Evolutionary (SA-QUATRE); Coordinated optimization design; Power system stabilizer
Online: 30 June 2022 (03:48:32 CEST)
This paper proposes a coordinated parameter optimization design of the Power System Stabilizer (PSS) based on an improved Quasi-Affine Transformation Evolutionary (QUATRE) algorithm to suppress low-frequency oscillation and improve the dynamic stability of the power system. First, the Simulated Annealing (SA) algorithm randomly updates the globally optimal solution of each QUATRE iteration and accepts an inferior solution with a certain probability in order to escape local extreme points; this new algorithm is applied to a power system here for the first time. Second, since the damping ratio is one of the criteria used to measure the dynamic stability of a power system, the objective function is set according to the principle of maximizing the damping coefficient of the electromechanical mode, and SA-QUATRE is used to search for a group of globally optimal PSS parameter combinations so as to improve the stability margin of the system as much as possible. Finally, the rationality and validity of the method were verified by applying it to simulation examples of the IEEE 4-machine 2-area system in different operating states. Comparison with a traditional optimization algorithm shows that the proposed method has advantages for multi-machine PSS parameter coordination optimization, and can more effectively suppress low-frequency oscillation of the power system and enhance system stability.
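The simulated annealing ingredient referred to above is the standard Metropolis acceptance rule; the sketch below shows only that generic rule (how it is wired into QUATRE's global best is specific to the paper and is merely assumed here).

import math, random

def sa_accept(cost_candidate, cost_current, temperature):
    # Always accept an improvement; accept a worse candidate with
    # probability exp(-increase / T), which lets the search escape
    # local extreme points while the temperature T is still high.
    if cost_candidate <= cost_current:
        return True
    return random.random() < math.exp(-(cost_candidate - cost_current) / temperature)

Cooling the temperature over the iterations gradually turns this rule into a greedy update.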
Accelerating Symmetric Rank-1 Quasi-Newton Method with Nesterov's Gradient for Training Neural Networks
S. Indrapriyadarsini, Shahrzad Mahboubi, Hiroshi Ninomiya, Takeshi Kamio, Hideki Asai
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: Neural networks; quasi-Newton; symmetric rank-1; Nesterov's accelerated gradient; limited memory; trust-region
Online: 8 December 2021 (17:51:54 CET)
Gradient-based methods are popularly used in training neural networks and can be broadly categorized into first- and second-order methods. Second-order methods have been shown to have better convergence than first-order methods, especially for highly nonlinear problems. The BFGS quasi-Newton method is the most commonly studied second-order method for neural network training. Recent methods have been shown to speed up the convergence of the BFGS method using Nesterov's accelerated gradient and momentum terms. The SR1 quasi-Newton method, though less commonly used in training neural networks, is known to have interesting properties and to provide good Hessian approximations when used with a trust-region approach. Thus, this paper aims to investigate accelerating the Symmetric Rank-1 (SR1) quasi-Newton method with Nesterov's gradient for training neural networks and briefly discusses its convergence. The performance of the proposed method is evaluated on a function approximation and an image classification problem.
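The two ingredients named in the abstract, the SR1 secant update and Nesterov's look-ahead gradient, can be combined in several ways; the sketch below is only one hypothetical arrangement (the names and step rule are ours, and a practical implementation would use the trust-region safeguard the abstract alludes to rather than a plain linear solve).

import numpy as np

def sr1_update(B, s, y, r=1e-8):
    # Standard SR1 update of the Hessian approximation B, skipped when
    # the denominator is too small (the usual safeguard).
    v = y - B @ s
    denom = float(v @ s)
    if abs(denom) > r * np.linalg.norm(s) * np.linalg.norm(v):
        B = B + np.outer(v, v) / denom
    return B

def nesterov_sr1_step(theta, velocity, B, grad_fn, lr=0.1, mu=0.9):
    # Evaluate the gradient at the Nesterov look-ahead point, take a
    # quasi-Newton step along -B^{-1} g, then refresh B with the secant pair.
    g = grad_fn(theta + mu * velocity)
    direction = -np.linalg.solve(B, g)
    velocity = mu * velocity + lr * direction
    theta_new = theta + velocity
    s = theta_new - theta
    y = grad_fn(theta_new) - g
    return theta_new, velocity, sr1_update(B, s, y)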
Quasi-Interpolation Operators for Bivariate Quintic Spline Spaces and Their Applications
Rengui Yu, Chungang Zhu, Xianmin Hou, Li Yin
Subject: Engineering, Other Keywords: bivariate spline space; quasi-interpolation operator; type-2 triangulation 3; burgers' equations; image reconstruction
Online: 12 January 2017 (10:04:06 CET)
Splines and quasi-interpolation operators are important both in approximation theory and in applications. In this paper, we construct a family of quasi-interpolation operators for the bivariate quintic spline spaces S_5^3(Δ_mn^(2)). Moreover, the properties of the proposed quasi-interpolation operators are studied, as well as their applications to solving the two-dimensional Burgers' equation and to image reconstruction. Some numerical examples show that these methods, which are easy to implement, provide accurate results.
Zonal Asymmetry of the Stratopause in the 2019/2020 Arctic Winter
Yu Shi, Oleksandr Evtushevsky, Gennadi Milinevsky, Andrew Klekociuk, Wei Han, Oksana Ivaniha, Yulia Andrienko, Valery Shulga, Chenning Zhang
Subject: Earth Sciences, Atmospheric Science Keywords: stratopause; mesosphere; sudden stratospheric warming; polar vortex; zonal wind; quasi-biennial oscillation; planetary wave; stratosphere
The aim of this work is to study the zonally asymmetric stratopause that occurred in the Arctic winter of 2019/2020, when the polar vortex was particularly strong and there was no sudden stratospheric warming. Aura Microwave Limb Sounder temperature data were used to analyze the evolution of the stratopause with a particular focus on its zonally asymmetric wave 1 pattern. There was a rapid descent of the stratopause height below 50 km in the anticyclone region in mid-December 2019. The descended stratopause persisted until mid-January 2020 and was accompanied by a slow descent of the higher stratopause in the vortex region. The results show that the stratopause in this event was inclined and lowered from the mesosphere in the polar vortex to the stratosphere in the anticyclone. It was found that the vertical amplification of wave 1 between 50 km and 60 km closely coincides in time with the rapid stratopause descent in the anticyclone. Overall, the behavior contrasts with the situation during sudden stratospheric warmings when the stratopause reforms at higher altitudes following wave amplification events. We link the mechanism responsible for coupling between the vertical wave 1 amplification and this form of zonally asymmetric stratopause descent to the unusual disruption of the quasi-biennial oscillation that occurred in late 2019.
Preprint REVIEW | doi:10.20944/preprints202108.0552.v1
The Hyperfunction Theory: An Emerging Paradigm for the Biology of Aging
David Gems
Subject: Life Sciences, Cell & Developmental Biology Keywords: antagonistic pleiotropy; insulin/IGF-1 signalling; hyperfunction; quasi-programs; mTOR; theories of aging; programmatic aging
The process of senescence (aging) is largely determined by the action of wild-type genes. For most organisms, this does not reflect any adaptive function of senescence, but rather the evolutionary effects of declining selection against genes with deleterious effects later in life. Understanding aging requires an account of how evolutionary mechanisms give rise to pathogenic gene action and late-life disease, one that integrates evolutionary (ultimate) and mechanistic (proximate) causes into a single explanation. A well-supported evolutionary explanation by G.C. Williams argues that senescence can evolve due to pleiotropic effects of alleles with antagonistic effects on fitness and late-life health (antagonistic pleiotropy, AP). What has remained unclear is how gene action gives rise to late-life disease pathophysiology. One ultimate-proximate account is T.B.L. Kirkwood's disposable soma theory. Based on the hypothesis that stochastic molecular damage causes senescence, this theory reasons that aging is coupled to reproductive fitness through the preferential investment of resources into reproduction rather than somatic maintenance. An alternative and more recent ultimate-proximate theory argues that aging is largely caused by programmatic, developmental-type mechanisms. Here, ideas about AP and programmatic aging are reviewed, particularly those of M.V. Blagosklonny (the hyperfunction theory) and J.P. de Magalhães (the developmental theory), and their capacity to make sense of diverse experimental findings is described.
Sumudu Transform of Dixon Elliptic Functions With Non-Zero Modulus as Quasi C Fractions and Its Hankel Determinants
Adem Kilicman, Rathinavel Silambarasan
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: dixon elliptic functions; non-zero modulus; sumudu transform; hankel determinants; continued fractions; Quasi C fractions
The Sumudu transform of the Dixon elliptic functions with non-zero modulus a ≠ 0 for arbitrary powers sm^N(x,a), N ≥ 1; sm^N(x,a) cm(x,a), N ≥ 0; and sm^N(x,a) cm^2(x,a), N ≥ 0 is given by a product of quasi C fractions. Next, by setting the denominators of the quasi C fraction to 1 and applying the Heliermann correspondence relating a formal power series (the Maclaurin series of the Dixon elliptic functions) to a regular C fraction, the Hankel determinants are calculated, and it is shown that taking a = 0 gives the Hankel determinants of the regular C fraction. The derived results are traced back to the Laplace transforms of sm(x,a), cm(x,a) and sm(x,a)cm(x,a).
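For readers unfamiliar with it, the Sumudu transform used here has the standard definition (not restated in the abstract) and is closely related to the Laplace transform F(s):
$$ \mathbb{S}[f](u) \;=\; \int_{0}^{\infty} f(ut)\, e^{-t}\, dt \;=\; \frac{1}{u}\int_{0}^{\infty} f(t)\, e^{-t/u}\, dt \;=\; \frac{1}{u}\, F\!\left(\frac{1}{u}\right), $$
which is why results stated for the Sumudu transform can be traced back to the Laplace transforms of sm(x,a), cm(x,a) and sm(x,a)cm(x,a), as done in the final step above.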
The Role of Quasi-Stationary Waves in Annual Cycle in Mid-Latitude Stratospheric and Mesospheric Ozone in 2011-2020
Chenning Zhang, Oleksandr Evtushevsky, Gennadi Milinevsky, Yulia Andrienko, Valerii Shulga, Wei Han, Yu Shi
Subject: Earth Sciences, Atmospheric Science Keywords: quasi-stationary wave; stratosphere; mesosphere; westward phase tilt; geopotential height; ozone; annual and semi-annual oscillation
The purpose of this work is to study the quasi-stationary wave structure in the mid-latitude stratosphere and mesosphere (40–50°N) and its role in the formation of the annual ozone cycle. Geopotential height and ozone from Aura MLS data are used, and the winter climatology for January–February 2011–2020 is considered. More closely examined is the 10-degree longitude segment centered on the Longfengshan Brewer station, China, located in the region of the Aleutian Low influence associated with the quasi-stationary zonal maximum of total ozone. Annual and semi-annual oscillations in ozone were compared using units of ozone volume mixing ratio and concentration, as well as changes in the ozone peak altitude and in time series of ozone at individual pressure levels between 316 hPa (9 km) and 0.001 hPa (96 km). The ozone maximum in the vertical profile is higher in volume mixing ratio (VMR) than in concentration by about 15 km (5 km) in the stratosphere (mesosphere), consistent with some previous studies. We found that the properties of the annual cycle are better resolved in the altitude range of the main ozone maximum: the middle–upper stratosphere in VMR and the lower stratosphere in concentration. Both approaches reveal SAO/AO-related changes in the ozone peak altitude in a range of 4–6 km during the year. In the lower-stratospheric ozone of the Longfengshan domain, an earlier development of the annual cycle takes place, with a maximum in February and a minimum in August compared to spring and autumn, respectively, in the zonal means. This is presumably due to the higher rate of dynamical ozone accumulation in the region of the quasi-stationary zonal ozone maximum. "No-annual-cycle" transition layers are found in the stratosphere and mesosphere. These layers, with undisturbed ozone volume mixing ratio throughout the year, are of interest for more detailed future study.
Quasi Cubic Trigonometric Curve and Surface
Guicang Zhang, Kai Wang
Subject: Mathematics & Computer Science, Computational Mathematics Keywords: Quasi Extended Chebyshev space; optimal normalized totally positive basis; high-order continuity; shape preserving; shape features
First, a new Quasi-Cubic Trigonometric Bernstein basis with two tension shape parameters is constructed, and we prove that it is an optimal normalized totally positive basis in the framework of a Quasi Extended Chebyshev space. The Quasi-Cubic Trigonometric Bézier curve is generated from this basis, a cutting algorithm for the curve is given, and the shape features (cusp, inflection point, loop and convexity) of the Quasi-Cubic Trigonometric Bézier curve are analyzed using envelope theory and topological mapping. Next, we construct the non-uniform Quasi-Cubic Trigonometric B-spline basis by requiring that linear combinations of the optimal normalized totally positive basis satisfy partition of unity and continuity, and its expression is obtained. The non-uniform B-spline basis is proved to be totally positive and to have high-order continuity. Finally, the non-uniform Quasi-Cubic Trigonometric B-spline curve and surface are defined, the shape features of the curve are discussed, and the curve and surface are proved to be continuous.
Molecular Organisation of Tick-Borne Encephalitis Virus
Lauri Ilmari Aurelius Pulkkinen, Sarah Victoria Barrass, Aušra Domanska, Anna K. Överby, Maria Anastasina, Sarah Jane Butcher
Subject: Biology, Other Keywords: tick-borne encephalitis virus; cryo-electron microscopy; TBEV; envelope protein; membrane protein; lipid factor; glycoprotein; quasi-equivalence
Tick-borne encephalitis virus (TBEV) is a pathogenic, enveloped, positive-stranded RNA virus in the family Flaviviridae. Structural studies of flavivirus virions have primarily focused on mosquito-borne species, with only one cryo-electron microscopy (cryo-EM) structure of a tick-borne species published. Here, we present a 3.3 Å cryo-EM structure of the TBEV virion of the Kuutsalo-14 isolate, confirming the overall organisation of the virus. We observe conformational switching of the peripheral and transmembrane helices of M protein, which can explain the quasi-equivalent packing of the viral proteins and highlights their importance in stabilizing the membrane protein arrangement in the virion. The residues responsible for the M protein interactions are highly conserved in TBEV but not in the structurally studied Hypr strain, nor in mosquito-borne flaviviruses. These interactions may compensate for the lower number of hydrogen bonds between E proteins in TBEV compared to the mosquito-borne flaviviruses. The structure reveals two lipids bound in the E protein, which are important for virus assembly. The lipid pockets are comparable to those recently described in mosquito-borne Zika, Spondweni, Dengue, and Usutu viruses. Our results thus advance the understanding of tick-borne flavivirus architecture and virion-stabilising interactions.
Proof-of-Concept of a Quasi-2D Water-Quality Modelling Approach to Simulate Transverse Mixing in Rivers
Pouya Sabokruhie, Eric Akomeah, Tammy Rosner, Karl-Erich Lindenschmidt
Subject: Earth Sciences, Environmental Sciences Keywords: lower Athabasca River; Oil Sands Region; quasi-2D modelling; Water-Quality Analysis Simulation Program (WASP); water-quality modelling
A quasi-two-dimensional (quasi-2D) modelling approach is introduced to mimic transverse mixing of an inflow into a river from one of its banks, either an industrial outfall or a tributary. The concentrations of determinands in the inflow vary greatly from those in the river, leading to very long mixing lengths in the river downstream of the inflow location. Ideally, a two-dimensional (2D) model would be used on a small scale to capture the mixing of the two flow streams. However, for large-scale applications of several hundreds of kilometres of river length, such an approach demands too many computational resources and too much computational time, especially if the application will at some point require ensemble input from climate-change scenario data. However, a one-dimensional (1D) model with variables varying in the longitudinal flow direction but averaged across the cross-sections is too simple of an approach to capture the lateral mixing between different flow streams within the river. Hence, a quasi-2D method is proposed in which a simplified 1D solver is still applied but the discretisation of the model setup can be carried out in such a way as to enable a 2D representation of the model domain. The quasi-2D model setup also allows secondary channels and side lakes in floodplains to be incorporated into the discretisation. To show proof-of-concept, the approach has been tested on a stretch of the lower Athabasca River in Canada flowing through the oil sands region between Fort McMurray and Fort MacKay. A dye tracer and suspended sediments are the constituents modelled in this test case.
Identification of LLDPE Constitutive Material Model for Energy Absorption in Impact Applications
Luděk Hynčík, Petra Kochová, Jan Špička, Tomasz Bońkowski, Robert Cimrman, Sandra Kaňáková, Radek Kottner, Miloslav Pašek
Subject: Materials Science, Biomaterials Keywords: LLDPE; quasi-static and dynamic experimental tests, impact energy absorption; material parameter identification; constitutive material model; validation; simulation
Current industrial trends bring new challenges in energy absorbing systems. Polymer materials, as traditional packaging materials, seem promising due to their low weight, structure and production price. Based on the review, linear low-density polyethylene was identified as the most promising material for absorbing impact energy. The current paper addresses the identification of the material parameters and the development of a constitutive material model to be used in future design by virtual prototyping. The paper deals with the experimental measurement of the stress-strain relations of linear low-density polyethylene under static and dynamic loading. The quasi-static measurement is realized in two perpendicular principal directions and is supplemented by a test measurement in the 45 degrees direction, i.e. exactly between the principal directions. The quasi-static stress-strain curves are analyzed as an initial step for dynamic strain-rate-dependent material behavior. The dynamic response is tested in a drop tower using a spherical impactor hitting the flat multi-layered material specimen at two different energy levels. The strain-rate-dependent material model is identified by optimizing the static material response obtained in the dynamic experiments. The material model is validated by the virtual reconstruction of the experiments and by comparing the numerical results to the experimental ones.
General Relativity Fractal for Cosmic Web
Irina Rozgacheva
Subject: Physical Sciences, General & Theoretical Physics Keywords: fractal; General Relativity; exact solutions; geodetic vector; cosmic web; quasi-periodic distribution of matter; deformation tensor of space-time
A new method for constructing exact solutions of the General Relativity equations for dusty matter with a fractal property is proposed. This method allows one to find the solution of the GR equations in terms of matter velocities: the connection coefficients and the Ricci tensor of space-time are expressed in terms of matter velocities, and the metric tensor and the matter density are found as functions of velocity from the GR equations. The connection coefficients and the Ricci tensor are invariant with respect to a discrete scaling transformation of the velocity by a constant factor. Therefore, the found solution can be used to simulate the fractal properties of the cosmic web in terms of matter velocities. This solution includes isotropic and anisotropic distributions of matter density. In the isotropic case, there is a class of exact solutions including both the well-known Friedmann solution and a solution with a periodic distribution of the matter density in space. This last solution may be used to simulate the quasi-periodic distribution of matter in the cosmic web. It is possible that the cosmic web and its fractal properties are primary properties of space-time. These properties are described with a deformation tensor of the space-time.
A Non-Standard Finite Difference Scheme for Magneto-Hydro Dynamics Boundary Layer Flows of an Incompressible Fluid Past a Flat Plate
Riccardo Fazio, Alessandra Jannelli
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: MHD model problem; boundary problem on semi-infinite interval; non-standard finite difference scheme; quasi-uniform mesh; error estimation
This paper deals with a non-standard finite difference scheme defined on a quasi-uniform mesh for approximate solutions of the Magneto-Hydro Dynamics (MHD) boundary layer flow of an incompressible fluid past a flat plate for a wide range of the magnetic parameter. We show how to improve the obtained numerical results via a mesh refinement and a Richardson extrapolation. The obtained numerical results are favourably compared with those available in the literature.
Analysis of Mechanical Behaviors of Waterbomb Thin-Shell Structures Under Quasi-Static Load
Lijuan Zhao, Zuen Shang, Tianyi Zhang, Zhan Liu, Liguo Han, and Chongwang Wang
Subject: Engineering, Automotive Engineering Keywords: Waterbomb structure; Origami pattern; Quasi-static load; Critical axial buckling load-to-weight ratio; Radial stiffness-to-weight ratio
Waterbomb structures are origami-inspired deformable structural components used in new types of robots. They have a unique radially deployable ability that enables robots to better adapt to their environment. In this paper, we propose a series of new waterbomb structures with square, rectangle, and parallelogram base units. Through quasi-static axial and radial compression experiments and numerical simulations, we prove that the parallelogram waterbomb structure has a twist displacement mode along the axial direction. Compared with the square waterbomb structure, the proposed optimal design of the parallelogram waterbomb structure reduces the critical axial buckling load-to-weight ratio by 55.4% and increases the radial stiffness-to-weight ratio by 67.6%. The significant increase in the radial stiffness-to-weight ratio of the waterbomb structure and decrease in the critical axial buckling load-to-weight ratio make the proposed origami pattern attractive for practical robotics applications.
Critical View on Buffer Layer Formation and Monolayer Graphene Properties in High-Temperature Sublimation
Vallery Stanishev, Nerijus Armakavicius, Chamseddine Bouhafs, Camilla Coletti, Philipp Kuhne, Ivan G. Ivanov, Alexei A. Zakharov, Rositsa Yakimova, Vanya Darakchieva
Subject: Materials Science, Biomaterials Keywords: Epitaxial graphene; buffer layer; quasi-free standing graphene; high-temperature sublimation; terahertz Optical Hall effect; free charge carrier properties
In this work we have critically reviewed the processes in high-temperature sublimation growth of graphene in Ar atmosphere using an enclosed graphite crucible. Special focus is put on buffer layer formation and the free charge carrier properties of monolayer graphene and quasi-freestanding monolayer graphene on 4H-SiC. We show that by introducing Ar at different temperatures, TAr, one can shift the formation of the buffer layer to higher temperatures for both n-type and semi-insulating substrates. A scenario explaining the observed suppressed formation of the buffer layer at higher TAr is proposed and discussed. Increased TAr is also shown to reduce the sp3 hybridization content and defect densities in the buffer layer on n-type conductive substrates. Growth on semi-insulating substrates results in an ordered buffer layer with significantly improved structural properties, for which TAr plays only a minor role. The free charge density and mobility parameters of monolayer graphene and quasi-freestanding monolayer graphene with different TAr and different environmental treatment conditions are determined by the contactless terahertz optical Hall effect. An efficient annealing of donors on and near the SiC surface takes place in intrinsic monolayer graphene grown at 2000 °C, which is found to be independent of TAr. Higher TAr leads to higher free charge carrier mobility parameters in both intrinsically n-type and ambient p-type doped monolayer graphene. TAr is also found to have a profound effect on the free hole parameters of quasi-freestanding monolayer graphene. These findings are discussed in view of the interface and buffer layer properties in order to construct a comprehensive picture of high-temperature sublimation growth and provide guidance for growth parameter optimization depending on the targeted graphene application.
Layered Graphs: A Class that Admits Polynomial Time Solutions for Some Hard Problems
Bhadrachalam Chitturi, Srijith Balachander, Sandeep Satheesh, Krithic Puthiyoppil
Subject: Mathematics & Computer Science, General & Theoretical Computer Science Keywords: NP-complete; graph theory; layered graph; polynomial time; quasi-polynomial time; dynamic programming; independent set; vertex cover; dominating set
The independent set, IS, on a graph $G=(V,E)$ is $V^*\subseteq V$ such that no two vertices in $V^*$ have an edge between them. The MIS problem on $G$ seeks to identify an IS with maximum cardinality, i.e. MIS. $V^*\subseteq V$ is a vertex cover, i.e. VC, of $G=(V,E)$ if every $e\in E$ is incident upon at least one vertex in $V^*$. $V^*\subseteq V$ is a dominating set, DS, of $G=(V,E)$ if $\forall v\in V$ either $v\in V^*$ or $\exists u\in V^*$ with $(u,v)\in E$. The MVC problem on $G$ seeks to identify a vertex cover with minimum cardinality, i.e. MVC. Likewise, MCV seeks a connected vertex cover, i.e. a VC which forms one component in $G$, with minimum cardinality, i.e. MCV. A connected DS, CDS, is a DS that forms a connected component in $G$. The problems MDS and MCD seek to identify a DS and a connected DS, i.e. CDS, respectively, with minimum cardinalities. MIS, MVC, MDS, MCV and MCD on a general graph are known to be NP-complete. Polynomial time algorithms are known for bipartite graphs, chordal graphs, cycle graphs, comparability graphs, claw-free graphs, interval graphs and circular arc graphs for some of these problems. We introduce a novel graph class, the layered graph, where each layer refers to a subgraph containing at most some $k$ vertices. Inter-layer edges are restricted to the vertices in adjacent layers. We show that if $k=\Theta(\log|V|)$ then MIS, MVC and MDS can be computed in polynomial time, and if $k=O((\log|V|)^{\alpha})$, where $\alpha<1$, then MCV and MCD can be computed in polynomial time. If $k=\Theta((\log|V|)^{1+\epsilon})$, for $\epsilon>0$, then MIS, MVC and MDS require quasi-polynomial time. If $k=\Theta(\log|V|)$ then MCV and MCD require quasi-polynomial time. Layered graphs do have constraints such as bipartiteness, planarity and acyclicity.
A Theoretical Study and Numerical Simulation of a Quasi-Distributed Sensor Based on the Low-Finesse Fabry-Perot Interferometer: Frequency-Division Multiplexing
José Trinidad Guillen Bonilla, Alex Guillén-Bonilla, Rodríguez-Betancourtt Veronica M., Héctor Guillen Bonilla, Antonio Casillas Zamora
Subject: Engineering, Electrical & Electronic Engineering Keywords: Quasi-distributed sensor; Low-finesse Fabry-Perot interferometer; Sensor simulation; Frequency-domain multiplexing and resolution vs. signal-to-noise ratio
The application of optical fiber sensors in scientific and industrial instrumentation is very attractive due to their numerous advantages. In the civil engineering industry, for example, quasi-distributed sensors made with optical fiber are used for reliable strain and temperature measurements. Here, a quasi-distributed sensor in the frequency domain is discussed. The sensor consists of a series of low-finesse Fabry-Perot interferometers, where each Fabry-Perot interferometer acts as a local sensor. The Fabry-Perot interferometers are formed by pairs of identical low-reflectivity Bragg gratings imprinted in a single-mode fiber. All interferometer sensors have different cavity lengths, producing the frequency-domain multiplexing. The optical signal represents the superposition of all interference patterns, which can be decomposed using the Fourier transform. The frequency spectrum is analyzed and the sensor's properties are defined. Then, a quasi-distributed sensor is numerically simulated. Our sensor simulation considers the sensor properties, signal processing, system noise and instrumentation. The numerical results show the behavior of resolution vs. signal-to-noise ratio. From our results, the Fabry-Perot sensor exhibits both a high and a low resolution; both resolutions arise because the FDPA algorithm performs two evaluations of the Bragg wavelength shift.
Triplet Test on Rubble Stone Masonry: Numerical Assessment of the Shear Mechanical Parameters
Michele Angiolilli, Amedeo Gregori
Subject: Keywords: unreinforced masonry; quasi-brittle material; in-plane behavior; shear-compression; triplet test; dilatancy; bond behavior; confinement; finite element model; macro-model
Rubble stone masonry walls are widely diffused in most of the cultural and architectural heritage of historical cities. The mechanical response of such material is rather complicated to predict due to their composite nature. Vertical compression tests, diagonal compression tests, and shear-compression tests are usually adopted to experimentally investigate the mechanical properties of stone masonries. However, further tests are needed for the safety assessment of these ancient structures. Since the relation between normal and shear stresses plays a major role in the shear behavior of masonry joints, governing the failure mode, triplet test configuration was here investigated. First, the experimental tests carried out at the laboratory (LPMS) of the University of L'Aquila on stone masonry specimens were presented. Then, the triplet test was simulated by using the Total Strain Crack Model, which reflects all the ultimate states of quasi-brittle material such as cracking, crushing and shear failure. The goal of the numerical investigation was to evaluate the shear mechanical parameters of the masonry sample, including strength, dilatancy, normal and shear deformations. Furthermore, the effect of (i) confinement pressure and (ii) bond behavior at the sample-plates interfaces were investigated, showing that they can strongly influence the mechanical response of the walls.
Transition from Electromechanical Dynamics to Quasi-Electromechanical Dynamics Caused by Participation of Full Converter-based Wind Power Generation
Jianqiang Luo, Siqi Bu
Subject: Engineering, Electrical & Electronic Engineering Keywords: Electromechanical dynamics; FCWG dynamics; strong interaction; electromechanical loop correlation ratio (ELCR); FCWG dynamic correlation ratio (FDCR); quasi-electromechanical loop correlation ratio (QELCR)
Previous studies generally reckon that full converter-based wind power generation (FCWG) is a power source 'decoupled' from the grid, which hardly participates in electromechanical oscillations. However, it has recently been found that strong interactions can be induced, which might incur severe resonance incidents on the electromechanical dynamic timescale. In this paper, the participation of FCWG in electromechanical dynamics is extensively investigated, and in particular, an unusual transition of the electromechanical oscillation mode (EOM) is uncovered for the first time. Detailed mathematical models of the open-loop and closed-loop power systems are first established, and modal analysis is employed to quantify the FCWG participation in electromechanical dynamics, with two new mode identification criteria, i.e., the FCWG dynamics correlation ratio (FDCR) and the quasi-electromechanical loop correlation ratio (QELCR). On this basis, the impact of different wind penetration levels and controller parameter settings on the participation of FCWG is investigated. It is revealed that if an FCWG oscillation mode (FOM) has an oscillation frequency similar to the system EOMs, there is a high possibility of inducing strong interactions between the FCWG dynamics and the electromechanical dynamics of the external power systems. In this circumstance, an interesting phenomenon may occur in which an EOM is dominated by FCWG dynamics and is hence transformed into a quasi-EOM, which actively involves the participation of FCWG quasi-electromechanical state variables.
Quasi-Static and Tensile Behaviors of the Bamboos
Baoming Gong, Shaohua Cui, Ziye Fan, Fuguang Ren, Qian Ding, Yongtao Sun, Bin Wang
Subject: Materials Science, Biomaterials Keywords: moso bamboo; quasi-static behavior; tensile behavior; size effect on energy absorption; damage pattern of the multiple bamboo columns; macroscopic tensile fracture mode
In this paper, quasi-static axial compression tests are performed on nodal Moso bamboos to study the size effect on the energy absorption of the bamboos and the damage pattern of multiple bamboo columns. The experimental results show that, under the same moisture content, growth age and growing environment, the specific energy absorption (SEA) of the test samples increases with the outer diameter and thickness of the bamboo columns, indicating that a size effect exists for the energy absorption of Moso bamboos. For the multiple bamboo columns, there are mainly three failure modes for the constituent single bamboo columns: splitting above the node, splitting below the node and splitting through the node. In addition, tensile tests are conducted on three kinds of dog-bone-shaped bamboo samples to investigate the macroscopic tensile fracture mode in the longitudinal direction of Moso bamboos. The results show that there is no direct relationship between the fracture pattern and either the moisture content or the growth age of the bamboos. However, the tensile loading rate and the shape of the dog-bone-shaped bamboo sample can affect the macroscopic fracture pattern of the bamboos in some cases.
Separation Axioms Interval-Valued Fuzzy Soft Topology via Quasi-Neighbourhood Structure
Mabruka Ali, Adem Kılıçman, Azadeh Zahedi Khameneh
Subject: Mathematics & Computer Science, Geometry & Topology Keywords: interval-valued fuzzy soft set; interval-valued fuzzy soft topology; interval-valued fuzzy soft point; interval-valued fuzzy soft neighborhood; interval-valued fuzzy soft quasi-neighbourhood; interva
In this study, we present the concept of an interval-valued fuzzy soft point and then introduce the notions of neighborhood and quasi-neighbourhood of it in interval-valued fuzzy soft topological spaces. Separation axioms in interval-valued fuzzy soft topology, the so-called $q$-$T_{i}$ for $i=0,1,2,3,4$, are introduced and some of their basic properties are also studied.
Convergence criteria for Linear Process time series models
For the model $X_t = \sum_{j=-\infty} ^{\infty} \psi_j Z_{t-j}$, where $Z_t \sim WN(0, \sigma^2)$, I'm not totally clear on why we require $\sum_{j=1}^{\infty} | \psi_j| < \infty$.
I think we can show that $$ E \left[\sum_{j=-\infty} ^{\infty} |\psi_j Z_{t-j}| \right] \le \sigma \sum_{j=-\infty} ^{\infty} |\psi_j|$$ so the left hand side converges if the right hand side does. But why is it important for the left hand side to converge?
Fequish
Is this by any chance self-study? It affects the way we try to answer the question... – jbowman Jan 27 '16 at 20:47
I'm auditing a TS course but am not officially enrolled. Does that help? – Fequish Jan 27 '16 at 22:33
It should be "we require $\sum_{j=-\infty}^{\infty} | \psi_j| < \infty$" rather than $\sum_{j=1}^{\infty} | \psi_j| < \infty$, I guess? – Christoph Hanck Jan 28 '16 at 7:10
A more thorough derivation of the existence of the process given absolute summability: stats.stackexchange.com/questions/353071/… – Taylor Aug 21 '18 at 16:05
Short Answer: Requiring $\sum_j |\psi_j| < \infty$ rules out a few strange behaviors without being a much stronger assumption, so folks make it to avoid having to caveat all their other theorems.
Longer:
When building a model, you're almost certainly going to want a finite variance (unless you're specifically building a 'heavy-tailed' model). For this, we only need the slightly weaker condition that $\sum_j \psi_j^2 < \infty$. [SS15, Definition 1.12]
$$ \begin{align*} \text{Var}\left[\sum_{j=1}^{\infty} \psi_j Z_j\right] &= \sum_{j=1}^{\infty} \text{Var}[\psi_j Z_j] \\ &= \sum_{j=1}^{\infty} \psi_j^2 \text{Var}[Z_j] \\ &= \sigma^2 \sum_{j=1}^{\infty} \psi_j^2 \end{align*} $$
If the variance exists (is finite), it's a standard result that the mean exists (is finite) as well [S03, Section 1.3.2]. However, if we don't require absolute convergence on the $\psi_j$, the series $\sum_j \psi_j$ may only be conditionally convergent and not absolutely convergent, which leads to strange things like the Riemann Rearrangement Theorem applying. In practice, it's not the RRT that you're worried about - that's just an example of the strange properties of conditionally convergent series. One of the great things about absolute convergence is that it lets you switch around integrals (expectations) and sums: this lets us assume that the sum gives a sensible random variable.
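As a small numerical illustration (a sketch only, with arbitrarily chosen truncation level, noise variance and seed, not taken from the referenced texts), one can simulate a truncated version of such a linear process with geometrically decaying, absolutely summable weights $\psi_j = 0.8^{|j|}$ and check the variance formula above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-sided weights psi_j = 0.8^|j|, truncated at |j| <= J.
# They are absolutely summable: sum_j |psi_j| = 1 + 2*0.8/(1 - 0.8) = 9.
J = 200
j = np.arange(-J, J + 1)
psi = 0.8 ** np.abs(j)
print("truncated sum of |psi_j|:", np.abs(psi).sum())   # close to 9

# Simulate X_t = sum_j psi_j Z_{t-j} with Z_t ~ WN(0, sigma^2)
# (the weights are symmetric, so the orientation of the window does not matter here).
sigma = 1.0
n = 1000
Z = rng.normal(0.0, sigma, size=n + 2 * J)              # extra noise for the tails
X = np.array([psi @ Z[t:t + 2 * J + 1] for t in range(n)])

# With uncorrelated noise, Var(X_t) = sigma^2 * sum_j psi_j^2.
print("sample variance:", X.var(), "theoretical:", sigma**2 * (psi**2).sum())
```

The sample variance only roughly matches the theoretical value at this sample size because the series is strongly autocorrelated, which is exactly the kind of effect the ergodicity discussion below is about.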
Another, more serious, issue is that, without assuming absolute summability, you can't prove the ergodicity of the mean of the series (ergodicity means that given a long enough observation, you can get a good estimate of the mean which is useful because we typically only have one realization of a time series). The series may be 'long-memory' (long-range dependence) and having more observations won't necessarily make the variance of your mean estimate decay: roughly, any shocks will 'stick around' forever and pollute your estimate of the mean. (See [SS15, Section 5.2]; I also like [S06] for a more general overview of long-memory processes, but it's not the easiest read just because the subject is hard.)
Hamilton [H94] discusses this briefly in section 3.3, particularly footnote 3, where he refers the reader to [R73, p.111] for details, and appendix 3.A but I don't have the Rao reference handy.
[H94] James D. Hamilton, Time Series Analysis, 1st Ed. (1994) Princeton University Press.
[R73] C. Radhakrishna Rao, Linear Statistical Inference and Its Applications 2nd Ed. (1973) Wiley.
[S03] Jun Shao, Mathematical Statistics, 2nd Ed. (2003) Springer. Springer Texts in Statistics.
[S06] Gennady Samorodnitsky, "Long Range Dependence". Foundations and Trends in Stochastic Systems 1(3). p.163-257 (2006).
[SS15] Robert H. Shumway and David S. Stoffer, Time Series Analysis and Its Applications, 3rd Ed. Blue Printing (2015-12). Springer. Freely available at http://www.stat.pitt.edu/stoffer/tsa3/
mweylandt
Since you have an infinite series $\sum_j \psi_j Z_{t-j}$, it is not immediately given that it sums up to a random variable. Furthermore, since it is a sum of random variables, there are various notions of how to understand the sum of such a series.
The simplest is almost sure convergence, and for that we have the following statement: if $\sum_j E|Y_j|<\infty$, then the series $\sum_j Y_j$ converges absolutely almost surely and $E\sum_j Y_j = \sum_j EY_j$.
Since $E|Z_{t}| = \text{const}$ for all $t$ if $Z_t\sim WN(0,\sigma^2)$, the condition $\sum_j |\psi_j|<\infty$ ensures that the series $\sum_j \psi_j Z_{t-j}$ is well defined, i.e. that $X_t$ exists for all $t$.
mpiktas
Finding the initial velocity vector of an orbiting body
I'm writing a program that simulates Newton's law of universal gravitation by simply calculating the force and applying it on the objects. The simulation works very well, but now I want to simulate real systems using real data. I looked all over for how to find the initial conditions such as the initial velocity and initial position, but I can't find anything.
I'll make it clear by taking the Earth as an example as in the figure below. How do I calculate w (green vector)? If I want to set up the system and simulate it, what would the Earth's initial velocity be so that it orbits the sun at a period of 365 days? And what would its respective position be? I know the orbital speeds of the Earth and the Moon, but they are obviously not the initial velocities of the bodies in the system.
I played around with the values and got something around 30 km/s, but now the Earth's gravity can't pull the moon along the orbit path because it's too fast, even at extremely small time steps. I can give them similar velocities that is enough to drag the moon along, but then the moon's orbit period becomes less than 27 days.
orbital-motion simulations computational-physics software celestial-mechanics
Valentin
By the sounds of it you have made a mistake with the units. In fact, you should not be using SI units at all in your simulation; astronomical values in SI units vary by such huge orders of magnitude that they are often a source of floating point errors that can destroy trajectories.
You should instead use the astronomical system of units. Specifically, express your masses in solar masses, your lengths in the astronomical unit, and time in the mean solar day.
Your value of $G$ will then be the square of the Gaussian gravitational constant, i.e.
$$ G=k^2=0.0002959122083\,\mathrm{AU}^3\mathrm{D}^{-2}\mathrm{M}_\odot^{-1} $$
You can get position and velocity data for the planets, their moons, comets, and hundreds of thousands of asteroids, etc, from JPL HORIZONS. You need to connect to their servers via telnet and request the data.
Alternatively, as a starting point, if you know the distance $r$ and mass $m$ of a planet, then its orbital speed should be
$$ v = \sqrt{\frac{GM}{r}} $$
where $M=1\,\mathrm{M}_\odot$. If we ignore eccentricity then the direction should be tangential to the orbit (e.g. position it at $(r,0)$ and give it a velocity $(0,v)$).
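To make that concrete, here is a minimal Python sketch of setting up circular-orbit initial conditions in the astronomical system of units described above; the function name and the Earth-like example values are only illustrative, and eccentricity and the Moon are ignored.

```python
import math

# Astronomical system of units: length in AU, time in mean solar days, mass in solar masses.
G = 0.0002959122083          # k^2, in AU^3 day^-2 Msun^-1
M_SUN = 1.0                  # central mass in solar masses

def circular_orbit_initial_conditions(r_au, central_mass=M_SUN):
    """Return (position, velocity) for a circular orbit of radius r_au (in AU).

    Velocity is in AU/day; eccentricity is ignored, as in the answer above.
    """
    v = math.sqrt(G * central_mass / r_au)   # v = sqrt(GM/r)
    return (r_au, 0.0), (0.0, v)

# Earth-like example: r = 1 AU around a 1 Msun star.
pos, vel = circular_orbit_initial_conditions(1.0)
speed_km_s = vel[1] * 1.496e8 / 86400.0      # AU/day -> km/s (1 AU ~ 1.496e8 km)
print(pos, vel, f"~{speed_km_s:.1f} km/s")   # roughly 30 km/s, i.e. ~0.0172 AU/day
```

The resulting speed of roughly 30 km/s (about 0.0172 AU/day) matches the value mentioned in the question.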
lemon
Velocity: km/s, distances: km, G: km^3 kg^-1 s^-2 = 6.67384*10^-20. I can't seem to find it at JPL. I used this other page (below) from NASA, and it doesn't have the initial velocity; only the orbital velocities. But there should be a way to calculate it using the orbital velocity. nssdc.gsfc.nasa.gov/planetary/factsheet/moonfact.html – Valentin Apr 8 '15 at 11:34
@Valentin I have updated my answer. – lemon Apr 8 '15 at 15:58
But again, that's the orbital velocity, which I should get as a result of setting an initial velocity for the orbiting body. I need the initial velocity of the body, which will then change to its orbital velocity as a result of interacting with the sun. I'll see what JPL HORIZONS has. – Valentin Apr 8 '15 at 16:39
@Valentin The orbital velocity that I give is the initial velocity... – lemon Apr 8 '15 at 17:03
But what about the moon? I get something like 0.9 km/s. There is no way the moon is that slow. It just can't catch up with the Earth, and the Earth can't pull it strong enough to drag it along. Is there something I got wrong here? – Valentin Apr 8 '15 at 17:44
Hik28-dependent and Hik28-independent ABC transporters were revealed by proteome-wide analysis of ΔHik28 under combined stress
Pavinee Kurdrid1,
Rayakorn Yutthanasirikul2,
Sirilak Saree2,
Jittisak Senachak1,
Monpaveekorn Saelee2 &
Apiradee Hongsthong1
BMC Molecular and Cell Biology volume 23, Article number: 27 (2022) Cite this article
Synechocystis histidine kinase, Sll0474: Hik28, a signal protein in a two-component signal transduction system, plays a critical role in responding to a decrease in growth temperature and is also involved in nitrogen metabolism. In the present study, under combined stress from non-optimal growth temperature and nitrogen depletion, a comparative proteomic analysis of the wild type (WT) and a deletion mutant (MT) of Hik28 identified the specific groups of ABC transporters that were Hik28-dependent, e.g., the iron transporter, and Hik28-independent, e.g., the phosphate transporter. The iron transporter, AfuA, was found to be upregulated only in the WT strain grown under the combined stress of high temperature and nitrogen depletion, whereas the expression level of the phosphate transporter, PstS, increased in both the WT and MT strains. Moreover, the genomic locations of the genes encoding Hik28 and the ABC transporters in Synechocystis sp. PCC6803 were analyzed in parallel with the comparative proteomic data. The results suggest that the ABC transporters are regulated by the two-component-system gene located adjacent to them in the genome.
In Spirulina, the changes in the protein profile at a low temperature (22 °C) were examined at the subcellular level, and it was reported that the proteins involved in the two-component response system, DNA repair, chaperones and nitrogen uptake play an important role in the response of Spirulina to low-temperature stress [1, 2]. Moreover, a proteomic analysis of the cyanobacterium Synechocystis sp. PCC 6803 was performed in the optimal range of growth temperatures, namely, 32–35 °C, and higher in the thermal tolerance range (42 °C). Sixty-five proteins in the categories of heat shock proteins, protein biosynthetic machinery, amino acid biosynthetic enzymes, components of the light and dark photosynthetic apparatus, and energy metabolism were differentially expressed within 1 h after heat shock [3, 4].
Two-component systems (TCSs), consisting of a sensor histidine kinase and a response regulator, play a crucial role in the stress response mechanism. These regulatory systems mediate acclimatization to various environmental changes by linking environmental signals to gene expression. One of the sensor histidine kinase proteins found in cyanobacteria is Hik2, which is a homolog of the chloroplast sensor kinase (CSK) [5]. The protein is involved in redox regulation of chloroplast gene expression in plants and algae during changes in light quality. Moreover, Hik2 shows redundancy with Hik33, which is responsible for sensing osmotic and low-temperature stress [6]. It was proposed that Hik2 and Hik33 are involved in the resistance of PS II to environmental stresses. Furthermore, Hik33 was reported to regulate the expression of cold-inducible genes for membrane lipid biosynthesis in Synechocystis, whereas Synechocystis Hik34 was found to be an essential component for long-term high-temperature adaptation [7, 8]. The Synechocystis wild-type strain was able to recover after 24 h of cultivation at 44 °C, while the ΔHik34 mutant strain was resistant to heat stress only within the first hour, and the mutant could not recover after 24 h of exposure to high-temperature treatment [9].
In Arthrospira platensis strain C1, the combined stress of nitrogen depletion and high temperature was studied, and it was found that photosynthetic activity was reduced by more than half under these conditions compared to stress-free conditions. Moreover, reductions in biomass and total protein were reported under combined stress. The accumulation of linoleic acid (C18:2) and a decrease in γ-linolenic acid within 24 h of stress exposure were observed, together with an increasing level of carbohydrate content [10].
In our previous study, two proteins of Arthrospira platensis C1, namely, the multisensor histidine kinase SPLC1_S041070 (Hik28) and glutamate synthase, were found to interact in a yeast two-hybrid system. Due to the lack of a specific gene manipulation system and gene transformation in Arthrospira, a deletion mutant of Synechocystis Hik28, sll0474, which is a homolog of SPLC1_S041070, was constructed and grown under nitrogen depletion and a combination of nitrogen depletion and temperature stress, either 16 °C (low temperature) or 45 °C (high temperature). The fatty acid composition of the WT and MT strains under nitrogen depletion and combined stress was analyzed by using gas chromatography (GC). The data showed the accumulation of C16:1Δ9. Moreover, the chlorophyll content and O2 evolution rate were decreased drastically under nitrogen depletion and combined stress in both the WT and MT strains, although the rates of the MT cells were lower than those of the WT [11]. However, the analysis of the proteome-wide effect of Hik28 deletion is still required to indicate the possible role of Hik28 under combined stress. Thus, in the present study, proteomic analyses of the WT and MT (∆Hik28) strains were carried out under temperature stress and combined stress by using liquid chromatography–tandem mass spectrometry (LC–MS/MS). The response mechanism regulated by Hik28 and its subsequent effect on metabolic pathways could be elucidated by comparative analysis of the proteomic data from the WT and MT strains under both forms of stress together with in-depth analysis of the protein–protein interaction network and gene location in the genome by using available databases.
Cell growth and conditions
The construction of the Hik28, Sll0474, deletion mutant (MTΔHik28) was described in a previous study [11], and oligonucleotide primers for the construction of the ΔHik28 mutant in Synechocystis sp. PCC6803 are shown in Suppl. Fig. 1 and Suppl. Table 1. Cultures of the Synechocystis sp. PCC 6803 wild type and Hik28-deletion mutant were grown in BG-11 medium under a light intensity of 70 μEm-2s-1 at 30 °C until the optical density at 730 nm reached 0.8-0.9 (mid-log phase) and then harvested by centrifugation at 8,000 rpm for 10 min. The WT and MT cells in the control treatment were washed in normal BG-11 medium and subsequently resuspended in normal BG-11 medium (control treatment). For the nitrogen stress condition, the WT and MT cells were washed in nitrogen-free BG-11 medium and subsequently resuspended in nitrogen-free BG-11 medium (nitrogen-free treatment). Then, the WT and MT cultures receiving the control treatment (normal BG-11) and the nitrogen-free treatment (nitrogen-free BG-11 medium) were grown under 3 temperature conditions: optimal temperature (30 °C), low temperature (16 °C) and high temperature (45 °C). The cultures grown under each experimental condition were collected at 0, 1 and 24 h for further analysis.
In the case of Arthrospira platensis strain C1, the cells were grown in Zarrouk's medium at the optimal temperature of 35 °C under a light intensity of 100 μEm−2s−1. The culture was grown at 35 °C until mid-log phase (when optical density at 560 nm reached 0.4-0.6) and then harvested by centrifugation. Genomic DNA was extracted from the cells by using a Genomic DNA Purification Kit (Promega, USA).
Construction and yeast two-hybrid assays
Arthrospira SPLC1_S041070 (multisensor hybrid histidine kinase) was cloned into pGBKT7, and SPLC1_S630120 (glutamine synthetase) and SPLC1_S240970 (nitrogen regulatory protein P-II) were cloned into the pGAT7 vector. Arthrospira genomic DNA was used as a template for PCR amplification of these genes by using oligonucleotide primers (Suppl. Table 2), and the PCR products were cloned into pGBKT7 and pGAT7 vectors. Then, the constructed vectors were transformed into Saccharomyces cerevisiae.
Protein–protein interactions were examined by using a yeast two-hybrid system. Bait and prey proteins were cloned into pGBKT7 and pGAT7 vectors, respectively (Suppl. Fig. 2). The positive control (pGBKT7-p53 vector), negative control (pGBKT7-Lam) and bait protein in pGBKT7 were transformed into Saccharomyces cerevisiae strain Y2HGold. The control vector pGADT7-T and prey protein in the pGAT7 vector were transformed into the Y187 strain (Clontech, USA). Y2HGold and Y187 cells were mated in 300 μl of 2xYPDA broth at 30 °C and shaken at 200 rpm for 16-18 h. The yeast mating cultures were spread onto SD/−Leu/−Trp/X-α-gal/AbA dropout (DDO/X/A) plates and incubated at 30 °C for 3 days. The blue colonies were selected, streaked onto SD/−Ade/−His/−Leu/−Trp/X-α-gal/AbA dropout (QDO/X/A) plates, and incubated at 30 °C for 3–5 days. Subsequently, bait and prey protein plasmids were switched into the yeast strains Y187 and Y2HGold, respectively, to confirm the specific interactions between bait and prey proteins (Suppl. Table 2).
Growth and chlorophyll measurement
The cell growth of the Synechocystis sp. PCC6803 wild type and ∆Hik28 mutant was examined by OD730 measurement for the growth curve and OD665 measurement for chlorophyll content at 0, 1 and 24 h. Chlorophyll was extracted from the cells using 100% methanol. Chlorophyll a concentrations were calculated according to the following equation [12, 13].
$$\mathrm{Chlorophyll\text{-}a}\ \left(\mu \mathrm{g}/\mathrm{ml}\right)=12.9447\times \mathrm{OD}_{665}\times \mathrm{dilution\ of\ cells}$$
$$\text{Absorption coefficient of } \textit{Synechocystis}=12.9447$$
Oxygen evolution measurement
To examine oxygen evolution by using a Clark-type oxygen electrode, the cell suspension at a chlorophyll concentration of 2.5 μg/ml was measured by using a light illumination intensity of 160 μEm-2s-1 at 30 °C in BG-11 medium (+NO3 and -NO3). The O2 evolution rate was measured in three independent experiments.
$${\mathrm{O}}_2\ \mathrm{evolution}\left(\mu \mathrm{mol}\ {\mathrm{O}}_2{\mathrm{mg}}^{-1}\mathrm{Chl}\ {\mathrm{h}}^{-1}\right)=\left(\mathrm{slope}\times 60\ \min\ {\mathrm{h}}^{-1}\times 1000\right)/\left(2.5\ \mathrm{mg}\ \mathrm{Chl}\ {\mathrm{L}}^{-1}\right)$$
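For reference, a minimal Python sketch of the two calculations above; the constants come directly from the equations in this section, while the OD reading, dilution factor and slope are placeholder values rather than measured data.

```python
def chlorophyll_a_ug_per_ml(od665, dilution_factor):
    """Chl-a (ug/ml) = 12.9447 * OD665 * dilution, 12.9447 being the absorption
    coefficient of Synechocystis given above."""
    return 12.9447 * od665 * dilution_factor

def o2_evolution_rate(slope, chl_mg_per_l=2.5):
    """O2 evolution (umol O2 mg^-1 Chl h^-1) = (slope * 60 min/h * 1000) / (2.5 mg Chl L^-1),
    where the slope is read from the Clark-electrode trace."""
    return (slope * 60.0 * 1000.0) / chl_mg_per_l

# Placeholder example values (not measured data):
print(chlorophyll_a_ug_per_ml(od665=0.25, dilution_factor=10))
print(o2_evolution_rate(slope=0.004))
```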
Protein preparation
The WT and ΔHik28 cells were harvested by centrifugation at 8,000 rpm after being cultured at 30 °C and 45 °C for 0, 1 and 24 h following exposure to the designated stress conditions; the cells were then washed in 5 mM HEPES (pH 7.0). The cell pellets were dissolved in lysis buffer (20 mM ammonium bicarbonate, 6 M urea, 2 M thiourea, and one tablet of protease inhibitor). The cells were lysed by sonication on ice, and the supernatants were separated by centrifugation at 8,000 rpm at 4 °C for 30 min. Subsequently, 1 volume of the supernatant was diluted with 9 volumes of absolute ethanol and incubated at -20 °C for 16 h. Then, the protein was precipitated by centrifugation at 8,000 rpm at 4 °C for 30 min. The protein pellets were washed with absolute ethanol and then dissolved with 20 mM ammonium bicarbonate pH 8.5. The protein concentration was measured by using a 2D-Quant kit (GE Healthcare Life Sciences USA).
Protein digestion
The proteins were treated with the reducing agent DTT and incubated at 60 °C for 10 min. Then, 50 mM iodoacetamide (IAA) was added, and the mixture was incubated at room temperature (30 °C) for 30 min. The proteins in the supernatant were digested with trypsin at a ratio of 1:75 w/w (trypsin:protein sample) and incubated at 37 °C for 16 h. Ten percent trifluoroacetic acid (TFA) was added to the digestion mixture to adjust the pH to ≤ 3, and then the peptide mixture was purified using a C18 column.
Peptide desalting
Peptide samples were passed through a C18 column GL-Tip TM SDB (GL Sciences Japan). The column was preconditioned by adding 100 μl of buffer B (0.1% TFA and 80% acetonitrile (ACN)) to a C18 column tip and equilibrated by adding 100 μl of buffer A (0.1% TFA and 5% ACN) to the C18 column tip. Then, the peptide samples were added to the C18 column tip and washed by adding 100 μl of buffer A. The desalted peptides were eluted by buffer B1 (0.1% TFA and 50% ACN), and the peptide eluents were dried by using a speed vacuum at 60 °C for 3 h. The dried peptide samples were dissolved in 5 μl of buffer containing 50% ACN, 0.1% formic acid (FA) and 20 μl 0.1% FA, and these peptide samples were desalted using 10 μl ZipTip columns (Millipore USA). The ZipTip columns were washed two times with 100% ACN and 50% ACN and three times with 0.1% FA. Then, the samples were loaded into ZipTip columns and washed with 0.1% FA. The peptide samples were finally eluted with buffer (40% ACN, 0.1% FA) and subsequently dried using speed vacuum.
Proteome analysis by using liquid chromatography–tandem mass spectrometry (LC–MS/MS)
All peptide samples were subjected to quantitative proteome analysis by using an Agilent 1260 Infinity HPLC-chip/MS interfaced to the Agilent 6545 Q-TOF LC/MS system (Agilent Technologies, USA). ProtID-chip-150 II (Number G4240-62006) was used in the HPLC-chip/MS system. ProtID-chip-150 II contains a 40 nL trap column and a 75 μm × 150 mm separation column packed with Zorbax 300SB-C18 (5 μm). The mobile phase used for the capillary pump was a buffer containing 2% acetonitrile, 0.6% acetic acid and 2% FA in water at a flow rate of 0.4 μl/min, and those used for the nanopump were buffer A (0.6% acetic acid in water) and buffer B (0.6% FA in acetonitrile) at a flow rate of 0.4 μl/min with a linear gradient. Q-TOF MS/MS conditions were as follows: high resolution, 4 GHz; source temperature, 150 °C; capillary voltage, 1950 V; fragmentor voltage, 140 V; and flow rate, 6 L/min of drying gas. Positive ion mode and automatic data acquisition mode were used for all sample analyses. Automatic data acquisition was performed at a mass range of 100-140 m/z for MS mode and 80–2000 m/z for MS/MS mode. The acquisition rate was 3 spectra/sec for MS and automatic MS/MS mode.
According to the methods described by Kurdrid et al., the peptide samples were dissolved in 3% acetonitrile and 0.3% formic acid in water before analysis [11]. The peptide samples were loaded onto a 36-min gradient column, and the gradient was initiated at 5-15% buffer B for 0-2 min, increased to 15-35% for 2-30 min, increased to 35-60% for 30-32 min, maintained for 32-34 min and then reduced to 5% for 34-36 min. Column equilibration was performed in the negative mode run to clean up by using the polarity for 5 min. The peptide samples were analyzed by LC–MS/MS using MassHunter software (version B.06.01), with the following software settings: modification, carbamidomethylation (C), 600–6000 Da precursor MH+ and scan time range of 0–300 min. The MS/MS search used high stringency criteria; trypsin digestion, permitting up to 2 missed cleavages; carbamidomethylation (C) as a fixed modification; phosphorylation of Ser (S), Thr (T) and Tyr (Y) as variable modifications; precursor mass tolerance ±20 ppm; and product mass tolerance ±50 ppm. The reverse database scores used for % false discovery rate (% FDR) were calculated in search mode. Then, Spectrum Mill (version B.06.00.201HF1) was applied for protein identification by using a Synechocystis database. Principal component analysis (PCA) was used for analysis in the quality control mode of the MPP program. Statistical analysis was performed for the significance analysis, i.e., a t test against zero, a one-way ANOVA for each condition and a two-way ANOVA for two independent variables. The cutoff criterion was a p value less than or equal to 0.05. Then, the protein expression levels were detected using Mass Profiler Professional, or MPP (version 15.1), and statistical assessment was performed. The differentially expressed protein levels were detected using a significance cutoff criterion defined by the fold-change level. Specifically, proteins whose expression increased by a factor of at least 1.5 were considered upregulated proteins, and those whose expression decreased by a factor of at least 1.5 were considered downregulated proteins.
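As an illustration of the cutoff criteria described above (p value ≤ 0.05 and an absolute fold change of at least 1.5), a minimal pandas sketch is given below; the column names and example values are assumptions for illustration and do not reproduce the actual MPP output.

```python
import pandas as pd

# Hypothetical table of quantified proteins (not the actual MPP export).
df = pd.DataFrame({
    "protein": ["Slr1247_PstS", "Slr0513_FutA2", "Sll0374_UrtE"],
    "fold_change": [1.8, -2.1, -1.2],   # signed fold change, stress vs. control (example values)
    "p_value": [0.01, 0.03, 0.20],
})

# Apply the cutoffs used in the text: p <= 0.05 and |fold change| >= 1.5.
significant = df[df["p_value"] <= 0.05]
upregulated = significant[significant["fold_change"] >= 1.5]
downregulated = significant[significant["fold_change"] <= -1.5]

print("upregulated:\n", upregulated)
print("downregulated:\n", downregulated)
```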
Growth rate, chlorophyll a, and O2 evolution rate
Synechocystis sp. PCC6803 (WT) and mutant cells (MT or ∆Hik28) were grown for a period of 24 h in normal BG-11 (+NO3) and nitrogen-depleted BG-11 medium (-NO3) in the optimal-temperature, low-temperature and high-temperature stress experiments. The low-temperature stress conditions were described in our previous research [11]. Under the combined stress of nitrogen depletion and high temperature, the growth rate, Chl a content and O2 evolution rate decreased in both WT and ∆Hik28 cells. However, the Chl a content and O2 evolution rate of ∆Hik28 cells were lower than those of WT cells (Table 1 and Suppl. Fig. 3).
Table 1 Cell density (OD730), chlorophyll a content and oxygen evolution rate of Synechocystis sp. PCC 6803 (WT) and ΔHik28 (MT) strains under high-temperature stress and nitrogen stress
Protein–protein interaction by yeast two-hybrid system
In the yeast two-hybrid experiments, the interaction of SPLC1_S041070 (Hik28) with SPLC1_S630120 (GlnA) and SPLC1_S240970 (GlnB) yielded positive results, i.e., blue colonies on SD/−Ade/−His/−Leu/−Trp dropout medium supplemented with X-α-gal and AbA (QDO/X/A) (Suppl. Fig. 2). The interaction of Hik28 with GlsF has been reported previously [11].
Quantitative proteome analysis and expression patterns of differentially expressed proteins
The proteomes of Synechocystis sp. PCC6803 WT and MT cells grown in normal BG-11 and nitrogen-free BG-11 medium under the optimal temperature and after exposure to low-temperature (16 °C) and high-temperature (45 °C) stress for 0, 1, and 24 h were quantitatively analyzed by LC–MS/MS. In total, 5,615 proteins were obtained from the high-stringency MS/MS search, comprising 2,825 proteins from the WT strain and 2,790 proteins from the MT strain (Fig. 1). Hierarchical clustering of the differentially expressed proteins with the MPP program (version 15.1) (Fig. 2) showed that the groups of immediate-response proteins, detected 1 h after exposure to the combined stress, were related between the two strains, WT and MT, grown at the same temperature. However, the group of delayed-response proteins, expressed 24 h after the stress exposure, of the MT grown at 16 °C fell into the same hierarchy as those of the WT and MT grown at 30 °C. Thus, the absence of Hik28 changed the cell response to the combined stress of low temperature and nitrogen depletion, and the resulting expression pattern was similar to that of both strains under the optimal condition.
Workflow for proteome data analysis. The number in parentheses shows the proteins of the Synechocystis sp. PCC 6803 (WT) and ∆Hik28 (MT) strains obtained from the proteomic analysis under the experimental conditions: temperature stress (non-optimal temperature in the presence of nitrogen (+N)) and combined stress (non-optimal temperature and nitrogen depletion (-N))
Heatmap of the differentially expressed proteins found in the WT and MT strains after (A) 1 h and (B) 24 h of the stress exposure, clustered by MPP program version 15.1
When the number of differentially expressed proteins was considered, under optimal-, low- and high-temperature conditions, respectively, 486, 467 and 464 proteins were found in WT, whereas 470, 466 and 484 proteins were found in MT (Suppl. Table 3 and Suppl. Table 4). Moreover, 434, 493 and 481 proteins in WT and 473, 458 and 439 in MT∆Hik28 were identified under combined temperature and nitrogen stress (Suppl. Table 5 and Suppl. Table 6). Comparative analysis of the differentially expressed proteins in both WT and MT was performed by the MPP program with a cutoff p value of <0.05. A total of 214 and 204 proteins were found to be differentially regulated under temperature and combined stress, respectively, in both strains. A total of 207 differentially expressed proteins under all experimental conditions were identified by cutoff criteria of fold change ≤ -1.5 and ≥1.5 (Fig. 1 and Suppl. Table 7A). Under temperature stress, 33, 35 and 3 proteins were differentially expressed in WT, MT and both strains, respectively (Suppl., Fig. 4, Suppl. Table 7B, 7E and 7H), whereas 25, 36 and 4 proteins were found under combined temperature stress and nitrogen-depletion stress (Suppl. Fig. 4, Suppl. Table 7C, 7F and 7I). Moreover, the expression levels of 13, 11 and 47 proteins were detected in WT, MT and both strains under more than one experimental condition; these proteins are designated as "others" in Fig. 1 and Suppl. Fig. 4, and the details of the proteins are shown in Suppl. Table 7D, 7G and 7J.
Since the signaling proteins and response regulators are regulated by posttranslational modification rather than the expression level [11], the proteomic data before the differential expression analysis in terms of fold change were also considered in this case. In the WT strain, under low-temperature stress in the presence of nitrogen, the two-component systems involved in quorum and osmolarity sensing were uniquely detected, whereas under high-temperature conditions, the efflux system protein involved in nickel and cobalt tolerance was found (Suppl. Fig. 5A-B). Under combined nitrogen and temperature stress, in addition to the two-component system involved in nitrogen metabolism and chemotaxis, metal-sensitive and osmolarity-sensing proteins were uniquely found after low- and high-temperature exposure, respectively (Suppl. Fig. 5C-D). In the MT, in which Hik28 was absent, a different set of TCSs was found under the combined stress, including polysaccharide biosynthesis/export and twitching motility proteins (Fig. 3A-D). Moreover, in the case of combined low-temperature and nitrogen-depletion stress, Sll5060, Sll1228 and Sll1367 were detected in the Hik28-deletion strain, whereas under the combination of high-temperature and nitrogen-depletion stress, the proteins involved in cation efflux, chemotaxis (CheY) and osmolarity sensing were detected together with nitrogen assimilation proteins and the fatty acid–metabolizing enzyme delta12-desaturase (Fig. 3C-D).
The protein–protein interaction (PPI) network of the two-component system, their response regulators and the up- and downregulated proteins found in the Synechocystis ∆Hik28 (MT) strain under A low-temperature stress, B high-temperature stress, C combined low-temperature and nitrogen-depletion stress and D combined high-temperature and nitrogen-depletion stress. The PPI network of the regulated proteins in metabolic pathways is illustrated in the boxes bounded by the dotted lines. The up- and downregulated proteins are shown in red and blue letters, respectively
In the present study, the effect of combined temperature and nitrogen stress on the cell growth of WT and MT strains was studied by measuring cell density, Chl a content and O2 evolution. When the ΔO2 evolution of the cells grown in the presence and absence of nitrogen was calculated (Table 1 and Suppl. Fig. 3), the higher value of ΔO2 evolution found in the MT cells indicated poorer adaptation to N stress. This result implies that Hik28 most likely plays a role in cell growth via the photosynthetic mechanism in response to N stress.
Moreover, the clustering results (Fig. 2B) for the differentially expressed proteins obtained from the comparative proteomic analysis of the WT and MT strains supported the critical role of Hik28 in the response to the temperature downshift: in the absence of Hik28, the protein expression pattern of the MT at 16 °C was similar to that of the WT and MT strains at the optimal temperature. Sensor histidine kinases in two-component signal transduction systems, including Hik28, enable cyanobacteria to sense, respond, and adapt to environmental changes, stressors, and growth conditions. It is well known that in the response mechanism, the phosphoryl group is transferred from the autophosphorylated sensor histidine kinase to a response regulator (RR), which subsequently affects cellular physiology by regulating gene expression [14]. After signal retrieval, the activated RR binds to its target promoter regions and subsequently regulates the transcriptional machinery [15]. Moreover, in the stress response mechanism, the two-component system and ABC transporters are functionally related in the transportation of substrates, including peptides, amino acids, sugars and antibiotics [16,17,18,19].
Effects of Hik28 deletion
Two-component signal transduction system and ABC transporter:
Based on the proteomic data, a unique set of signaling and response regulator proteins was detected in the WT and mutant strains under temperature and combined stress (Fig. 3A-D and Suppl. Fig. 5A-D). The data clearly indicated the effect of Hik28 deletion at the level of abiotic stress sensing and the specific set of response regulators involved. Furthermore, a two-component system is known to tightly regulate ABC transporters, which are an important class of proteins that transport various extracellular substrates, including peptides, amino acids, sugars and antibiotics [20]. After the two-component system senses and transduces the signal from the environmental stress, the ABC transporter proteins constitute one of the immediate, TCS-regulated responses of the cells to the stress. The TCS induces a quick and specific response to stimuli, and both the TCS and the ABC transporter system have demonstrated their ability to sense biotic and abiotic stress and substrates, including peptides, amino acids, sugars and antibiotics; however, the exact mechanism is not fully established.
In the Synechocystis sp. PCC6803 WT strain, there are a total of 73 ABC transporter proteins, 15 of which were detected in the proteomic data. It has been reported that the genes encoding TCS proteins and the ABC transporters that they regulate are located close together in the genome [21, 22]. Therefore, the loci of the encoded TCS genes and the ABC transporters found in the two strains under each experimental condition are shown in Fig. 4. An illustration of the loci using CGview and genome feature information showed that the iron transporter, Slr0513: FutA2 or AfuA, was located upstream of Hik28 and that the protein was downregulated in the Hik28-deletion mutant in response to elevated temperature, whereas it was upregulated in the WT under the combined stress of high temperature and nitrogen depletion. Synechocystis sp. PCC6803 was reported earlier to have a 10-fold higher demand for iron than Escherichia coli to sustain photosynthesis. Thus, (i) the evidence supported the necessity of the iron-binding protein Afu/FutA2 for growth (Badaruh et al. 2008) in the WT, and the data showed that the regulation of this transporter was Hik28-dependent; (ii) the evidence showed that the genes coding for the TCS proteins, e.g., Hik28, might regulate ABC transporters, e.g., Afu/FutA2, whose genes are nearby in the genome.
Fig. 4 An illustration of the loci on the Synechocystis sp. PCC6803 genome using CGview and genome feature information of the encoded TCS genes and the ABC transporters found in the two strains, WT and MT, under each experimental condition. A Zoom-in of the region surrounding the Hik28-encoding gene and B Synechocystis sp. PCC6803 genome with Hik28 and ABC transporter genes labeled
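The genomic-adjacency argument (TCS genes and the ABC transporter genes they regulate lying close together) can be checked with a simple coordinate comparison. The sketch below is ours; the gene coordinates and the 10 kb window are hypothetical placeholders, not positions taken from the Synechocystis genome annotation.

```python
# Flag ABC-transporter genes lying within a chosen window of a TCS gene.
# Coordinates are hypothetical (start, end) pairs in base pairs on a linearized chromosome.
tcs_genes = {"hik28": (1_200_000, 1_202_000)}
abc_genes = {
    "futA2_slr0513": (1_195_500, 1_197_000),
    "pstS_slr1247": (2_400_000, 2_401_200),
}
WINDOW = 10_000  # bp; arbitrary adjacency threshold for illustration

def gap(a, b):
    """Distance between two intervals; 0 if they overlap."""
    return max(0, max(a[0], b[0]) - min(a[1], b[1]))

for abc_name, abc_pos in abc_genes.items():
    for tcs_name, tcs_pos in tcs_genes.items():
        d = gap(abc_pos, tcs_pos)
        verdict = "adjacent" if d <= WINDOW else "distant"
        print(f"{abc_name} vs {tcs_name}: {d} bp apart -> {verdict}")
```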
In addition to the iron transporter, the ABC transporters whose regulation can be considered Hik28-dependent were the urea and α-glucoside transporters. These two transporters were downregulated in the Hik28-deletion mutant and had protein–protein interactions with proteins involved in the carbon dioxide-concentrating metabolic process. The two transporters were differentially expressed in response to low-temperature stress, supporting the finding that Hik28 plays a critical role as a signaling molecule in the low-temperature response mechanism [11].
Furthermore, it is noteworthy that the hypothetical and unknown proteins located upstream (Slr0516) and downstream (Sll0493) of Hik28 were found to have protein–protein interactions with Hik28 according to STRING [23]. In addition to Hik28, Slr0516 also interacts with biopolymer transporters, whereas Sll0473 interacts with nitrate and bicarbonate transporters (Suppl. Fig. 6A). Moreover, Slr0517, located downstream of Hik28, was functionally related to Hik28 and proteins in glutamine metabolic processes and purine biosynthesis in the PPI network (Suppl. Fig. 6A). In accordance with previous reports, the TCS and the ABC transporter located adjacent to it in the genome were related, thus supporting the finding that the TCS regulates nearby ABC transporter genes [21, 22].
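PPI neighborhoods like those cited from STRING [23] can also be pulled programmatically. The sketch below is ours; the REST endpoint path, the returned column names and the taxon identifier 1148 for Synechocystis sp. PCC 6803 are assumptions recalled from STRING's documentation and should be verified before use.

```python
import requests

# Query STRING's network endpoint for interactions around the genes discussed above.
genes = ["slr0516", "slr0517", "sll0493"]  # identifiers taken from the text
resp = requests.get(
    "https://string-db.org/api/tsv/network",  # verify endpoint/parameters against STRING docs
    params={"identifiers": "\r".join(genes), "species": 1148},
    timeout=30,
)
resp.raise_for_status()

rows = [line.split("\t") for line in resp.text.splitlines()]
header, body = rows[0], rows[1:]
idx = {name: i for i, name in enumerate(header)}  # locate columns by name, not position
for r in body:
    print(r[idx["preferredName_A"]], "--", r[idx["preferredName_B"]],
          "combined score:", r[idx["score"]])
```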
The ABC transporters of the two strains, WT and MT, were compared under combined stress; in the mutant, which lacked Hik28, the ABC transporters responsible for molybdate/sulfate, xenobiotic, and phosphate transport were downregulated (Fig. 3C-D and Suppl. Table 7). Moreover, the deletion of Hik28 combined with temperature stress had negative effects on the iron, osmolyte and sugar transfer systems, whereas the urea transporter Sll0374: UrtE was downregulated after high-temperature exposure and upregulated under low-temperature stress. Interestingly, Slr0559, an ABC transporter for general L-amino acids, was upregulated in the mutant strain under high-temperature stress regardless of the nitrogen supply. However, in the WT under three experimental conditions, low temperature, combined low-temperature and nitrogen stress, and combined high-temperature and nitrogen stress, and in the MT specifically under combined low-temperature and nitrogen stress, the proteomic data showed upregulation of the phosphate transporter Slr1247: PstS, suggesting the necessity of phosphate for cyanobacterial growth. Indeed, phosphate is a key growth-limiting nutrient, particularly in freshwater cyanobacteria [24].
Other proteins in the class of ABC transporters involved in the control of the C/N ratio inside the cells are Slr0040: bicarbonate transporter and Sll1450: nitrate/nitrite/cyanate transporter. The nitrate/nitrite/cyanate transporter was differentially expressed only in the absence of Hik28 in response to high-temperature stress and in combination with nitrogen depletion. Moreover, the PPI network of these transporters showed interactions with proteins involved in nitrogen metabolism and iron and bicarbonate transport systems (Suppl. Fig. 6F). The bicarbonate transporter was downregulated under the combined stress of nitrogen depletion and low temperature in the Hik28-deletion mutant, whereas the protein expression level in the WT was decreased only under low-temperature stress, showing its Hik28-independent regulation (Fig. 3A-D). The results suggested that (i) Hik28 possibly played a role in nitrogen assimilation and (ii) the bicarbonate requirement of the cells was reduced in response to low-temperature conditions. The evidence obtained from the proteome analysis supported the importance of the regulation of the C/N ratio in the survival and growth of cyanobacteria [11], especially under stress conditions. Furthermore, the proteins in the bacterial secretion system, HlyD and TolC, were differentially expressed under high temperature in the Hik28-deletion strain (Fig. 3B). The results indicated that the absence of Hik28 and exposure to the combined stress had direct effects on the group of ABC transporters that transfer nutrients across the periplasmic membrane.
Response of metabolic pathways to combined stress
The metabolic pathways affected by the combined stress of immediate temperature shift and nitrogen depletion were comparatively analyzed by strain and by growth condition, as shown in Fig. 3A-D and Suppl. Fig. 5A-D. Taken together, the proteomic data on differentially expressed proteins and the protein–protein interaction network demonstrated changes in the expression levels of the N metabolism proteins GlnA, GlnB and GlnN under temperature stress and combined stress in the absence of Hik28 (Suppl. Fig. 6B-I), supporting the report by Kurdrid et al. that Hik28 is critically involved in N metabolism. Moreover, the results at the proteome level showed an effect on fatty acid biosynthesis in mutant cells in response to the combined stress of high temperature and nitrogen depletion, which was in accord with the fatty acid biosynthesis data showing the drastic accumulation of C16:1Δ9 reported by Kurdrid et al. (2020). It is also noteworthy that the proteins involved in oxidative phosphorylation were significantly upregulated in the MT strain in response to temperature downshift and its combination with nitrogen stress. The upregulation of F-type H+-transporting ATPase and subunit a of ATP synthase suggested that the mutant cells had increased energy requirements under stress; that is, the absence of Hik28 led to an increased ATP requirement under low-temperature and combined stress (Suppl. Fig. 6B and D), which strongly supported the evidence that Hik28 played a crucial role in the low-temperature stress response mechanism [11].
According to a report by Kurdrid et al., oxygen evolution and chlorophyll a content decreased in mutant cells after a temperature shift to 16 °C in the presence of nitrogen, showing the negative effect of Hik28 deletion on the photosynthetic apparatus [11]. In the present study, the upregulation of proteins in PSI, PSII, the cytochrome b6f complex and photosynthetic electron transport was observed in the WT strain (Suppl. Fig. 6F). However, in the MT, under the combined stress of low temperature and N depletion, PetE, a protein in the photosynthetic electron transport system, was upregulated (Suppl. Fig. 6D), whereas it was downregulated after a temperature downshift (Suppl. Fig. 6B), supporting the evidence found by Kurdrid et al. [11]. In response to a temperature upshift or its combination with N stress, the proteins in PSII and the cytochrome b6f complex were upregulated. The comparative proteome data indicated that Hik28 may have an effect on PetE under low-temperature stress and in combination with N stress; however, the expression of other proteins in the photosynthetic system was independent of Hik28. Moreover, the expression level of ribose-5-phosphate isomerase, RpiA, during carbon fixation by this photosynthetic organism increased only in the MT strain after a temperature shift from 35 °C to 16 °C, suggesting that the mutant cells under low-temperature stress acquired higher levels of carbon than WT cells (Fig. 3A, Suppl. Fig. 6B and F). This evidence supported the finding by Hutchings et al. that RpiA, which plays a key role in the pentose phosphate pathway, is directly connected to fatty acid biosynthesis, which is regulated under temperature stress, by NADP/NADPH metabolism [25].
Another pathway involved in photosynthesis is porphyrin metabolism, in which the photosynthetic pigment chlorophyll a is synthesized. The enzymes involved in porphyrin metabolism, such as porphobilinogen synthase (HemE), were upregulated at elevated growth temperatures in the absence of Hik28 protein (Fig. 3B), whereas the expression level of protochlorophyllide oxidoreductase (Por) decreased after a temperature upshift in the WT strain (Suppl. Fig. 5B). The results showed that the proteins involved in porphyrin metabolism were regulated in response to elevated growth temperature and that Hik28 deletion caused differences in the regulation of porphyrin metabolism. Moreover, regardless of Hik28 deletion and stress conditions, the ribosomal proteins and the proteins in the oxidative phosphorylation pathway were upregulated, suggesting a need for protein biosynthesis and energy for the stress response mechanism and other metabolic processes.
Comparative proteomic analyses of Synechocystis sp. PCC6803 and its Hik28-deletion mutant under temperature stress and a combination of temperature and nitrogen-depletion stress revealed the group of response regulators, e.g., cation efflux, chemotaxis and osmolarity sensing, that are involved with the TCS signal protein Hik28. After the cells sensed and transferred the signal, the proteins in the group of transporters were differentially regulated to manage the movement of nutrients across the periplasmic membrane. The Hik28-dependent ABC transporters were iron, urea and α-glucoside transporters, whereas the Hik28-independent ones were phosphate and bicarbonate transporters. By combining the proteome data and the PPI information, we were able to illustrate the network structure of the group of proteins and metabolisms/pathways that were affected by the mutation under temperature stress and combined temperature and nitrogen stress (Fig. 3 and Suppl. Fig. 5). The metabolic processes affected by the absence of Hik28 and the designated stress conditions were the nitrogen and carbon metabolic processes that regulate the C/N ratio, the photosynthetic apparatus and fatty acid biosynthesis.
• The datasets generated and/or analyzed during the current study are available in this published article, supplementary information files and in our data repository, http://www.cyanopro.net/dl/proteome2022feb-syencho-hik28/.
• The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD032795.
Submission details:
Project Name: Synechocystis-Hik28 stress response
Project accession: PXD032795
Project DOI: Not applicable
Hongsthong A, Sirijuntarut M, Yutthanasirikul R, Senachak J, Kurdrid P, Cheevadhanarak S, et al. Subcellular proteomic characterization of the high-temperature stress response of the cyanobacterium Spirulina platensis. Proteome Sci. 2009;7(33):1–19.
Kurdrid P, Senachak J, Sirijuntarut M, Yutthanasirikul R, Phuengcharoen P, Jeamton W, et al. Comparative analysis of the Spirulina platensis subcellular proteome in response to low- and high-temperature stresses: uncovering cross-talk of signaling components. Proteome Sci. 2011;9(39):1–17.
Zavřel T, Sinetova MA, Búzová D, Literáková P, Červený J. Characterization of a model cyanobacterium Synechocystis sp. PCC 6803 autotrophic growth in a flat-panel photobioreactor. Eng Life Sci. 2014;15(1):122–32.
Slabas AR, Suzuki I, Murata N, Simon WJ, Hall JJ. Proteomic analysis of the heat shock response in Synechocystis PCC 6803 and a thermally tolerant knockout strain lacking the histidine kinase 34 gene. Proteomics. 2006;3(53):845–65.
Ibrahim IM, Puthiyaveetil S, Allen JF. A Two-Component Regulatory System in Transcriptional Control of Photosystem Stoichiometry: Redox-Dependent and Sodium Ion-Dependent Phosphoryl Transfer from Cyanobacterial Histidine Kinase Hik2 to Response Regulators Rre1 and RppA. Front Plant Sci. 2016;7(137):1–12.
Mikami K, Kanesaki Y, Suzuki I, Murata N. The histidine kinase Hik33 perceives osmotic stress and cold stress in Synechocystis sp. PCC 6803. Mol Microbiol. 2002;46(4):905–15.
Tuominen I, Pollari M, Tyystjärvi E, Tyystjärvi T. The SigB sigma factor mediates high-temperature responses in the cyanobacterium Synechocystis sp. PCC6803. FEBS Letters. 2006;580(1):319–24.
Aminaka R, Taira Y, Kashino Y, Koike H, Satoh K. Acclimation to the growth temperature and thermosensitivity of photosystem II in a mesophilic cyanobacterium, Synechocystis sp. PCC6803. Plant Cell Physiol. 2006;47(12):1612–21.
Červený J, Sinetova MA, Zavřel T, Los DA. Mechanisms of High Temperature Resistance of Synechocystis sp. PCC 6803: An Impact of Histidine Kinase 34. Life (Basel). 2015;5(1):676–99.
Panyakampol J, Cheevadhanarak S, Senachak J, Dulsawat S, Siangdung W, Tanticharoen M, et al. Different effects of the combined stress of nitrogen depletion and high temperature than an individual stress on the synthesis of biochemical compounds in Arthrospira platensis C1 (PCC 9438). J Appl Phycol. 2016;28:2177–86.
Kurdrid P, Phuengcharoen P, Senachak J, Saree S, Hongsthong A. Revealing the key point of the temperature stress response of Arthrospira platensis C1 at the interconnection of C- and N metabolism by proteome analyses and PPI networking. BMC Mol Cell Biol. 2020;21(43):1–22.
Ritchie RJ. Consistent sets of spectrophotometric chlorophyll equations for acetone, methanol and ethanol solvents. Photosynth Res. 2006;89(1):27–41.
Ritchie RJ. Universal chlorophyll equations for estimating chlorophylls a, b, c, and d and total chlorophylls in natural assemblages of photosynthetic organisms using acetone, methanol, or ethanol solvents. Photosynthetica. 2008;46:115–26.
Laub MT, Goulian M. Specificity in two-component signal transduction pathways. Annu Rev Genet. 2007;41:121–45.
Stock AM, Robinson VL, Goudreau PN. Two-component signal transduction. Annu Rev Biochem. 2000;69:183–215.
Rietkötter E, Hoyer D, Mascher T. Bacitracin sensing in Bacillus subtilis. Mol Microbiol. 2008;68(3):768–85.
Goodman AL, Merighi M, Hyodo M, Ventre I, Filloux A, Lory S. Direct interaction between sensor kinase proteins mediates acute and chronic disease phenotypes in a bacterial pathogen. Genes Dev. 2009;23(2):249–59.
Jubelin G, Vianney A, Beloin C, Ghigo J-M, Lazzaroni J-C, Lejeune P, et al. CpxR/OmpR Interplay Regulates Curli Gene Expression in Response to Osmolarity in Escherichia coli. J Bacteriol. 2005;187(6):2038–49.
Batchelor E, Walthers D, Kenney LJ, Goulian M. The Escherichia coli CpxA-CpxR Envelope Stress Response System Regulates Expression of the Porins OmpF and OmpC. J Bacteriol. 2005;187(16):5723–31.
Li X-Z, Plésiat P, Nikaido H. The Challenge of Efflux-Mediated Antibiotic Resistance in Gram-Negative Bacteria. Clin Microbiol Rev. 2015;28(2):337–418.
Draper LA, Cotter PD, Hill C, Ross RP. Lantibiotic Resistance. Microbiol Mol Biol Rev. 2015;79(2):171–91.
Ohki R, Giyanto KT, Masuyama W, Moriya S, Kobayashi K, Ogasawara N. The BceRS two-component regulatory system induces expression of the bacitracin transporter, BceAB, in Bacillus subtilis. Mol Microbiol. 2003;49(4):1135–44.
Szklarczyk D, Gable AL, Nastou KC, Lyon D, Kirsch R, Pyysalo S, et al. The STRING database in 2021: customizable protein-protein networks, and functional characterization of user-uploaded gene/measurement sets. Nucleic Acids Res. 2021;49(D1):D605–12.
Pitt FD, Mazard S, Humphreys L, Scanlan DJ. Functional Characterization of Synechocystis sp. Strain PCC 6803 pst1 and pst2 Gene Clusters Reveals a Novel Strategy for Phosphate Uptake in a Freshwater Cyanobacterium. J Bacteriol. 2010;192(13):3512–23.
Hutchings D, Rawsthorne S, Emes MJ. Fatty acid synthesis and the oxidative pentose phosphate pathway in developing embryos of oilseed rape (Brassica napus L.). J Exp Bot. 2005;56(412):577–85.
The authors would like to thank the National Center for Genetic Engineering and Biotechnology (BIOTEC), National Science and Technology Development Agency (NSTDA), Bangkok, Thailand for the funding. The work on the Chip-cube Nano-LC-MS/MS was carried out at King Mongkut's University of Technology Thonburi (KMUTT), Bangkok, Thailand.
This research was funded by a grant No. P-16-50306 from the National Center for Genetic Engineering and Biotechnology (BIOTEC), National Science and Technology Development Agency (NSTDA), Bangkok, Thailand.
Biosciences and System Biology Team, National Center for Genetic Engineering and Biotechnology, National Science and Technology Development Agency at King Mongkut's University of Technology Thonburi, Bangkok, 10150, Thailand
Pavinee Kurdrid, Jittisak Senachak & Apiradee Hongsthong
Pilot Plant Development and Training Institute, King Mongkut's University of Technology Thonburi, Bangkok, 10150, Thailand
Rayakorn Yutthanasirikul, Sirilak Saree & Monpaveekorn Saelee
Pavinee Kurdrid
Rayakorn Yutthanasirikul
Sirilak Saree
Jittisak Senachak
Monpaveekorn Saelee
Apiradee Hongsthong
P.K. has made major contributions to cultivation, protein-sample preparation, gene cloning/mutant construction, yeast two hybrid system experiments, proteomics, PPI network construction by using STRING and helping in manuscript preparation. R.Y. has made major contributions to yeast two hybrid system experiments and proteomics. S.S. has made major contributions to protein sample preparation, proteomics and LC-MS/MS operation. J.S. has made major contributions to genome analysis and database (used for proteome analysis) construction. M.S. has made major contributions to post-proteome data management and database (used for proteome analysis) construction. A.H. has made major contributions to experimental design, proteome data analysis by using Spectrum Mill and MPP programs, data analysis, and writing and preparing the manuscript. All authors read and approved the final manuscript.
Correspondence to Apiradee Hongsthong.
Kurdrid, P., Yutthanasirikul, R., Saree, S. et al. Hik28-dependent and Hik28-independent ABC transporters were revealed by proteome-wide analysis of ΔHik28 under combined stress. BMC Mol and Cell Biol 23, 27 (2022). https://doi.org/10.1186/s12860-022-00421-w
Stress response
Response mechanism
Deletion mutant
Genome and Cyanobacteria
Abhyankar, Shreeram S.
Enumerative combinatorics of Young tableaux. (English) Zbl 0643.05001
Pure and Applied Mathematics, 115. New York - Basel: Marcel Dekker, Inc. XVII, 509 p.; $ 99.75 (U.S. & Canada); $ 119.50 (all other countries) (1988).
A unitableau \((p_{ij})\) is a tabular arrangement of positive integers \(p_{ij}\) taken from some set \(\{1,2,\ldots,p\}\), with strictly increasing rows. A unitableau is said to be standard (strongly standard) if its row lengths are nonincreasing and its columns are (strictly) increasing also. Strongly standard unitableaux were introduced by A. Young (1901) in his work on invariant theory. Thereafter he used them for describing the irreducible representations of the symmetric group; Young was led to this theme while investigating how Gordan-Capelli series, occurring in the classical invariant theory of forms, can be derived from an identity involving Young tableaux. Since then, Young tableaux have played an important role in quantum mechanics (H. Weyl, 1930-1940), and they have occurred in the theory of elementary particles and in many combinatorial and computer science problems. W. Hodge (1947) used Young tableaux to study Schubert varieties of flag manifolds.
This monograph is a detailed and carefully commented research account, resulting from the author's interest (1982-1985) in the structure of Schubert varieties. Having investigated certain determinantal ideals, he was led to the problem of enumerating Young tableaux and especially some related, more general objects - multitableaux.
To give an idea of the results in the book, some definitions are needed. A tableau with q sides of the same shape but with possibly different entries is called a multitableau (or simply, a tableau) of width q; in the special case \(q=2\) it is called a bitableau. The tableau of width q having only one row is called a multivector of width q. A tableau is said to be bounded by \(m\), \(m=(m_1,\ldots,m_q)\in {\mathbb{N}}^q\), if for any \(j\in \{1,\ldots,q\}\) all the entries on the j-th side are \(\leq m_j\). A few words more about these bitableaux. Let \(X=(x_{ij})\) be an \(m_1\times m_2\)-matrix whose elements are indeterminates over a field K. A bitableau bounded by \(m=(m_1,m_2)\) is a sequence of bivectors bounded by m, and each such bivector indicates some minor of X. Taking the product of all these minors for a given bitableau, we get a monomial in the minors of X; this monomial is called standard if the given tableau is standard. The Straightening Formula [see J. Désarménien, J. P. Kung and G.-C. Rota, Adv. Math. 27, 63-92 (1978; Zbl 0373.05010)] says that the standard monomials of X form a K-basis of the algebra K[X] of polynomials in \(x_{ij}\). Some more definitions. The number of entries (rows) on each side of the tableau is called its area (depth), and the length of the tableau is its largest row length. A standard tableau of width q is said to be predominated by a multivector a of width q if a is bounded by m and the tableau obtained by adjoining a as a new top row is again standard.
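To make these definitions concrete, here is a small checker of our own (not from the monograph), reading "increasing" columns as weakly increasing for standard and strictly increasing for strongly standard tableaux, which is one common convention.

```python
# A unitableau is given as a list of rows of positive integers.

def is_unitableau(rows):
    """Every row is strictly increasing."""
    return all(all(r[i] < r[i + 1] for i in range(len(r) - 1)) for r in rows)

def is_standard(rows, strict_columns=False):
    """Row lengths nonincreasing and columns (strictly, if requested) increasing."""
    if not is_unitableau(rows):
        return False
    if any(len(rows[i]) < len(rows[i + 1]) for i in range(len(rows) - 1)):
        return False
    ok = (lambda a, b: a < b) if strict_columns else (lambda a, b: a <= b)
    return all(ok(rows[i][j], rows[i + 1][j])
               for i in range(len(rows) - 1)
               for j in range(len(rows[i + 1])))

def is_bounded_by(rows, m):
    """All entries on this (single) side are <= m."""
    return all(x <= m for r in rows for x in r)

# [[1, 3, 4], [2, 3]] is standard but not strongly standard (its second column repeats 3).
t = [[1, 3, 4], [2, 3]]
print(is_standard(t), is_standard(t, strict_columns=True), is_bounded_by(t, 4))
```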
Chapter 1 contains comments about new and complicated notation, some preliminary remarks, and a systematic and extensive treatment of binomial coefficients. Chapter 2 gives formulas for counting the sets stab(q,T) and \(mon(2,T)\), where \(stab(q,T)\) is the set of all standard tableaux of width q and area V, which are bounded by m and predominated by the multivector a of width q and length p, and \(mon(q,T)\) is the corresponding set of monomials. These formulas are examples of determinantal polynomials in binomial coefficients. Chapter 3 contains a certain universal identity, satisfied by minors of X. Chapter 4 gives several applications of these results: enumerative proofs of the Straightening Formula and of certain generalizations of the Second Fundamental Theorem of invariant theory, computations of Hilbert functions of determinantal ideals in K[X].
All chapters, though parts of the whole, are self-contained. They begin with an informal discussion and contain a summary, motivation and hints about further use of the main results, as well as comments on underlying principles. Some useful illustrations and mental experiments are given for better understanding of basic points of reasoning. Proofs are divided into a sequence of independent statements; all these steps are numbered and their interrelations are indicated. The new symbol-codes, though quite puzzling locally, are suitable on the whole and form a complete system.
This book will be a valuable reference for research on and applications of enumerating multitableaux; it may also be used as a text for graduate courses in present-day combinatorics.
Reviewer: U.Kaljulaid
Cited in 10 Reviews
05-02 Research exposition (monographs, survey articles) pertaining to combinatorics
05A15 Exact enumeration problems, generating functions
14M15 Grassmannians, Schubert varieties, flag manifolds
14M12 Determinantal varieties
05A10 Factorials, binomial coefficients, combinatorial functions
13F20 Polynomial rings and ideals; rings of integer-valued polynomials
Young tableaux; Schubert varieties; determinantal ideals; bitableau; binomial coefficients; invariant theory; multitableaux
\textit{S. S. Abhyankar}, Enumerative combinatorics of Young tableaux. New York etc.: Marcel Dekker, Inc. (1988; Zbl 0643.05001)
The role of ANR in modern topology
Absolute neighborhood retracts (ANRs) are topological spaces $X$ such that, whenever $i\colon X\to Y$ is an embedding into a normal topological space $Y$, there exist a neighborhood $U$ of $i(X)$ in $Y$ and a retraction of $U$ onto $i(X)$. They were invented by Borsuk in 1932 (Über eine Klasse von lokal zusammenhängenden Räumen, Fundamenta Mathematicae 19 (1), p. 220-242, EuDML) and have been the object of a lot of developments from 1930 to the 60s (Hu's monograph on the subject dates from 1965), being a central subject in combinatorial topology.
The discovery that these spaces had good topological (local connectedness), homological (finiteness in the compact case) and even homotopical properties must have been a strong impetus for the development of the theory. Also, they probably played some role in the discovery of the homotopy extension property (it is easy to extend homotopies whose source is a normal space and target an ANR) and of cofibrations.
I have the impression that this more or less gradually stopped being so in the 70s: a basic MathSciNet search does not return that many recent papers, although ANRs seem to be used as an important tool in some recent works (a colleague pointed me to those of Steve Ferry).
My question (which does not want to be subjective nor argumentative) is the following: what is the importance of this notion in modern developments of algebraic topology?
at.algebraic-topology gn.general-topology homotopy-theory ho.history-overview
$\begingroup$ I study model categories, and the notion of cofibration is central to that field. So for me, NDRs, ANRs, etc are interesting because of their categorical properties (e.g. how they behave with respect to pushouts). At least once in my work I've added a hypothesis like "assume the cofibrations satisfy..." and then said this was motivated by analogy to certain maps in $Top$. A model category satisfying this hypothesis can be much easier to work with and then you just check if the examples of interest also satisfy that property. I know that's vague, but hopefully it gives some idea. $\endgroup$
– David White
$\begingroup$ ANRs are a key tool in this paper in geometric group theory: ams.org/journals/jams/1991-04-03/S0894-0347-1991-1096169-1/… $\endgroup$
– Ian Agol
$\begingroup$ I remember, maybe around 1980, Frank Quinn at the beginning of a lecture at the Cornell Topology Festival mentioned ANRs and then stopped, looked around at the audience, and said in a slightly exaggerated southern accent, "Oh, I forgot, y'all don't know about ANRs up here, do you?" $\endgroup$
– Tom Goodwillie
$\begingroup$ I see ANR's all the time in modern geometric topology literature. Pretty much every book uses this notion extensively (except those about low dimensions), e.g. search for the term ANR in Steve Ferry's notes math.rutgers.edu/~sferry/ps/geotop.pdf, or in "Ends of complexes" by Hughes-Ranicki maths.ed.ac.uk/~aar/books/ends.pdf. $\endgroup$
– Igor Belegradek
$\begingroup$ I have read the title as "The role of the agence nationale de la recherche in modern topology". $\endgroup$
– Jonathan Chiche
Another reason you might not see the word ANR these days is that compact finite-dimensional spaces are ANRs if and only if they are locally contractible. Thus, "finite-dimensional and local contractible" can replace ANR in the statement of a theorem (and might help the result appeal to a wider audience).
In comparison geometry, for instance, the existence of a contractibility function takes the place of the ANR condition.
Borsuk conjectured that compact ANRs should have the homotopy types of finite simplicial complexes. Chapman and West proved that they even have preferred simple-homotopy types. This is part of the "topological invariance of torsion" package and is quite a striking result. Every compact, finite-dimensional, locally contractible space has a preferred finite combinatorial structure that is well-defined up to (even local!) simple-homotopy moves.
Steve Ferry
ANRs are (and have always been) irrelevant as long as homotopy-invariant properties of spaces homotopy equivalent to CW-complexes are concerned. But modern algebraic topologists do not seem to be really interested in (or anyway have real tools to deal with) more general spaces AFAIK. (Of course, "general nonsense" like simplicial model categories works for general spaces, but if you are using any invariants like homotopy groups or singular (co)homology theories to get substantial results that do not mention those invariants, you'll probably need theorems such as Whitehead's - which means restricting to spaces homotopic to CW-complexes.)
Shape theory did go beyond spaces homotopic to CW-complexes. But being an ANR is not a shape-invariant property. It is an invariant of local shape (which Ferry, Quinn, Hughes and their collaborators do touch upon in their works) and indeed Quinn once wrote an expository paper on "Local algebraic topology". I don't think these "local" developments have ever been of interest for (mainstream) algebraic topology, but they have very good applications in geometric topology so are usually associated with the latter.
This area of geometric topology, where ANRs and topological manifolds naturally belong, has been steadily falling out of fashion with younger generations (since the 80s I would say, not 60s), apparently because it's tough enough, but not nearly as attractive for an outsider as knots, say. That might as well be a problem of the generations rather than a "flaw" in ANRs.
Sergey Melikhov
I think the answer has more to do with the psychology of mathematicians as a culture than with actual mathematical facts.
I was not alive during the period where ANRs were mentioned in the topology literature but I've read quite a few early topology papers and also noticed before the 60's people couldn't seem to not mention them, and afterwards they were almost never mentioned.
I think this is mostly due to the more formal side of algebraic topology, with model categories. With the terminology cofibration one could largely avoid talking about ANRs and regular neighborhoods. You of course could continue to talk about those things but if you're attempting to write something short and concise with as few confusing side-roads as possible, you would omit it.
So fairly quickly people realized they didn't need to talk about ANRs. I think this kind of thing happens fairly often in mathematics, especially when the definition of a concept maybe slightly misses the mark of what you're aiming for, or if it isn't quite as general as you really need. Terminology like this cycles in and out of mathematics fairly frequently.
You could frame this in terms of the long-term survivability of a mathematical concept -- math verbiage evolution. The flaw in ANRs is they did not anticipate that point-set foundations would become less of a focus of topology, that the field would move on and become more scaleable.
Ryan Budney
$\begingroup$ Does "more scaleable" mean more axiomatic? But I don't know any nontrivial model category where all objects, or all (co)fibrant objects are ANRs. (One problem is that the cone over a non-compact space is non-metrizable.) So I don't see how any talk about ANRs (or even regular neighborhoods) could be made implicit by cofibrations. If your point is that using ANRs looks dated in some AT textbooks, it's a question of presentation. The essential feature of ANRs is, of course, that they include topological manifolds and are similar enough to them, but more easily manageable. $\endgroup$
– Sergey Melikhov
$\begingroup$ More scaleable meaning applicable in situations where you're not dealing with topological spaces -- applicable in a wider-variety of contexts. It isn't a question of being "dated" or not, it's a question of breadth of applicability. $\endgroup$
– Ryan Budney
$\begingroup$ Also, by "topology literature" and "early topology papers" you probably mean algebraic topology? Things like Freedman's proof of the topological 4D Poincare Conjecture, Quinn's proof of the Annulus Conjecture, and Edwards and Cannon's proof of Milnor's Double Suspension Conjecture are very much about ANRs. For the record, these include 2 Annals papers from 1979, and a 1986 Fields medal; a further 1975 Annals paper mentions "ANRs" in its title (it's the main ingredient of West's proof that ANRs have finite types). $\endgroup$
$\begingroup$ Breadth of applicability is very good; I'm all for model categories (and homotopy type theory). I just don't see what this all has to do with ANRs (and topological manifolds). As you explain, ANRs are not really needed to do homotopy theory; on the other hand, model categories haven't yet helped anyone to do ANRs (and hence topological manifolds), AFAIK. $\endgroup$
How about ANR homology manifolds?
See http://www.maths.ed.ac.uk/~aar/homology/tophom.pdf for an important article on the subject.
If I understand correctly, people expect (or know?) that these ANR homology manifolds have transitive homeomorphism groups. The possible local models are indexed by the integers, and the value 0 corresponds to $\mathbb R^n$, i.e., to the notion of topological manifold.
$\begingroup$ In this context, I think ANR acts as a placeholder niceness assumption. Yes, it is expected but not known that they are homogeneous. And, yes, one expects that their local homeomorphism type is determined by two integers, the dimension and Quinn invariant (in $1+8\mathbb Z$, 1 corresponds to $\mathbb R^n$). The substitute Bing-Borsuk conjecture is that homogeneous ANRs are built from these charts. You could take that as saying that general ANRs are not interesting. $\endgroup$
– Ben Wieland
It could be that favorable properties of ANRs have already made their contribution by helping prove foundational results. For example, Milnor's result that certain function spaces have the homotopy type of a CW-complex relies on such properties of ANRs; see ON SPACES HAVING THE HOMOTOPY TYPE OF A CW-COMPLEX. From this perspective, it seems odd to say "ANRs are (and have always been) irrelevant as long as homotopy-invariant properties of spaces homotopy equivalent to CW-complexes are concerned" because closure under formation of function spaces is one of the key selling-points of this class of spaces.
In other words, homotopy theorists often examine space-level constructions and try to catalog their attending homotopy coherences in order to build a homotopically robust theory. These space-level constructions then require some powerful point-set topology, leading perhaps to the usefulness of ANRs.
So my guess is that the modern study is pretty content to use and abstract the usual space-level operations (pushouts, pullbacks, smash, loops), but that there may be other operations of interest, in which case, ANRs may again have something to say in homotopy theory.
John Wiltshire-Gordon
On almost periodicity of solutions of second-order differential equations involving reflection of the argument
Peiguang Wang1,
Dhaou Lassoued2,
Syed Abbas3,
Akbar Zada4 &
Tongxing Li5,6
Advances in Difference Equations volume 2019, Article number: 4 (2019)
We study almost periodic solutions for a class of nonlinear second-order differential equations involving reflection of the argument. We establish existence results of almost periodic solutions as critical points by a variational approach. We also prove structure results on the set of strong almost periodic solutions, existence results of weak almost periodic solutions, and a density result on the almost periodic forcing term for which the equation possesses usual almost periodic solutions.
The study of existence, uniqueness, and stability of periodic and almost periodic solutions has become one of the most attractive topics in the qualitative theory of ordinary and functional differential equations for its significance in the physical sciences, mathematical biology, control theory, and other fields; see, for instance, [3, 8, 11, 19, 20, 28] and the references cited therein. Indeed, the almost periodic functions are closely connected with harmonic analysis, differential equations, and dynamical systems; cf. Corduneanu [12] and Fink [14]. These functions are basically generalizations of continuous periodic and quasi-periodic functions. Almost periodic functions are further generalized by many mathematicians in various ways; see Šarkovskii [26].
On the other hand, differential equations involving reflection of the argument have numerous applications in the study of stability of differential-difference equations. Such equations show very interesting properties by themselves, and so many authors have worked on this category of equations. Wiener and Aftabizadeh [29] initiated the analysis of boundary value problems involving reflection of the argument. Later on, Gupta [15,16,17] considered boundary value problems for this class of equations. Aftabizadeh et al. [1] studied the existence of a unique bounded solution of the equation
$$x'(t)=f\bigl(t, x(t), x(-t)\bigr),\quad t\in\mathbb{R}. $$
They proved that \(t\mapsto x(t)\) is almost periodic by assuming the existence of bounded solutions. Further results were extended and improved by several authors; see, for instance, the papers by Hai [18], O'Regan [22], Piao [23, 24], Piao and Sun [25], and Zima [30]. In particular, Piao [23, 24] investigated the existence and uniqueness of periodic, almost periodic, and pseudo almost periodic solutions of the equations
$$x'(t)+ax(t)+bx(-t)=g(t), \quad b\neq0, t\in\mathbb{R,} $$
$$x'(t)+ax(t)+bx(-t)= f\bigl(t, x(t), x(-t)\bigr),\quad b\neq0, t\in \mathbb{R}, $$
whereas Piao and Sun [25] studied the existence and uniqueness of Besicovitch almost periodic solutions for a class of second-order differential equations involving reflection of the argument.
In the sequel, the linear space \(\mathbb{R}^{n}\) is endowed with its standard inner product \(x\cdot y:=\sum_{k=1}^{n}x_{k} y_{k}\) and \(\vert \cdot\vert\) denotes the associated Euclidean norm. For a function \(f:\mathbb{R}^{n}\times\mathbb{R}^{n}\rightarrow\mathbb{R}\), \((X,Y)\mapsto f(X,Y)\), we consider the second-order differential equation with reflection of the argument
$$ u''(t)=D_{1}f\bigl(u(t),u(-t) \bigr)+D_{2} f\bigl(u(-t),u(t)\bigr)+e(t), \quad t\in\mathbb{R}, $$
where \(D_{1}\) and \(D_{2}\) denote the (partial) differential with respect to X and Y, respectively, and \(e:\mathbb{R}\rightarrow\mathbb{R}^{n}\) is an almost periodic forcing term. Equation (1.1) appears as an Euler–Lagrange equation.
By a strong almost periodic solution of equation (1.1) we mean a function \(u:\mathbb{R}\rightarrow\mathbb{R}^{n}\) which is twice differentiable (in ordinary sense) such that u, \(u'\), and \(u''\) are almost periodic in the sense of Bohr [9] and u satisfies (1.1) for all \(t\in\mathbb{R}\). This solution is also called \({\mathcal {C}}^{2}\)-almost periodic in some earlier work.
A weak almost periodic solution of equation (1.1) is a function \(u:\mathbb{R}\rightarrow\mathbb{R}^{n}\) which is almost periodic in the sense of Besicovitch [4] and possesses first-order and second-order generalized derivatives such that u satisfies (1.1) for all \(t\in\mathbb{R}\) and the difference between the two members of equation (1.1) has a quadratic mean value equal to zero. It is natural that a strong almost periodic solution is also a weak almost periodic solution.
The variational method was used for the study of ordinary and functional differential equations; see, for instance, [3, 5,6,7] and the references cited therein. By using a variational method in the mean, we investigate almost periodic solutions for equation (1.1). The almost periodic solutions of (1.1) are characterized as critical points of functionals having the following form:
$$ u\mapsto\lim_{T\to\infty}\frac{1}{2T} \int_{-T}^{T} \biggl(\frac {1}{2} \bigl\vert u'(t) \bigr\vert ^{2}+f\bigl(u(t),u(-t)\bigr)+e(t)\cdot u(t) \biggr)\,dt $$
on the Banach space of almost periodic functions.
This paper is organized as follows. Section 2 presents the considered notation for the various function spaces and auxiliary assumptions. In Sect. 3, we develop variational principles to study the almost periodic solutions of (1.1) and critical points of functionals defined on spaces of almost periodic functions. In Sect. 4, we establish some results about the structure of the set of strong almost periodic solutions of (1.1). Finally, in Sect. 5, we establish an existence result of weak almost periodic solutions of (1.1) by using the techniques in the spirit of the direct methods of calculus of variations, and a result on the density of the almost periodic forcing term for which (1.1) possesses a strong almost periodic solution.
Notation and preliminaries
First, we review some facts about Bohr almost periodic and Besicovitch almost periodic functions. For more details on almost periodic functions, we refer the reader to the monographs [4, 9, 12, 14, 21].
Let \(AP^{0}(\mathbb{R}^{n})\) be the space of the almost periodic functions from \(\mathbb{R}\) into \(\mathbb{R}^{n}\) in the sense of Bohr [9], endowed with the norm
$$\Vert u \Vert _{\infty}=\sup \bigl\{ \bigl\vert u(t) \bigr\vert : t\in \mathbb{R} \bigr\} . $$
It is easy to see that the space \(AP^{0}(\mathbb{R}^{n})\) is a Banach space [9] endowed with the above norm.
Let \(\mathbb{N}\) be the set of all nonnegative integers. For \(1 \leq k \in \mathbb{N}\), \(AP^{k}(\mathbb{R}^{n})\) stands for the space of functions \(u\in\mathcal{C}^{k}(\mathbb{R},\mathbb{R}^{n})\cap AP^{0}(\mathbb{R}^{n})\) such that \(u^{(j)}:=\frac{d^{j} u}{dt^{j}}\in AP^{0}(\mathbb{R}^{n})\) for all \(j=1,\ldots,k\). It is a Banach space endowed with the norm
$$\Vert u \Vert _{\mathcal{C}^{k}}= \Vert u \Vert _{\infty}+\sum _{j=1}^{k} \bigl\Vert u^{(j)} \bigr\Vert _{\infty}. $$
Every almost periodic function u possesses a mean value
$$\mathcal{M}\{u\}=\mathcal{M}\bigl\{ u(t)\bigr\} _{t}:=\lim _{T\to\infty}\frac {1}{2T} \int_{-T}^{T}u(t)\,dt. $$
For \(\lambda\in\mathbb{R}\), \(a(u,\lambda):=\mathcal{M} \{ u(t)e^{-i\lambda t} \}_{t}\) is the Fourier–Bohr coefficient of u associated to λ. We denote by \(\varLambda(u):= \{\lambda\in \mathbb{R}: a(u,\lambda)\neq0 \}\) the set of exponents of u. We use the notation \(\operatorname{mod}(u)\) for the module of u which is the additive group generated by \(\varLambda(u)\).
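For concreteness (this worked example is ours and is not taken from the paper), consider the quasi-periodic function \(u(t)=\cos t+\cos(\sqrt{2}\,t)\). Then
$$\mathcal{M}\{u\}=0, \qquad a(u,\lambda)=\mathcal{M} \bigl\{ u(t)e^{-i\lambda t} \bigr\} _{t}= \textstyle\begin{cases} \frac{1}{2}, & \lambda\in\{\pm1,\pm\sqrt{2}\}, \\ 0, & \mbox{otherwise}, \end{cases} $$
so that \(\varLambda(u)=\{\pm1,\pm\sqrt{2}\}\) and \(\operatorname{mod}(u)=\mathbb{Z}+\sqrt{2}\,\mathbb{Z}\).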
For \(p \in[1,\infty)\), \(B^{p}(\mathbb{R}^{n})\) is the completion of \(AP^{0}(\mathbb{R}^{n})\) in \(L^{p}_{\mathrm{loc}}(\mathbb{R},\mathbb{R}^{n})\) (Lebesgue space) with respect to the norm
$$\|u\|_{p} :=\mathcal{M} \bigl\{ \vert u \vert ^{p} \bigr\} ^{\frac{1}{p}}. $$
For \(p=2\), \(B^{2}(\mathbb{R}^{n})\) is a Hilbert space and its norm \(\|\cdot \|_{2}\) is associated to the inner product \((u\mid v):=\mathcal{M} \{ u\cdot v \}\). The elements of these spaces \(B^{p}(\mathbb{R}^{n})\) are called Besicovitch almost periodic functions, cf. [4].
Now, we recall the definitions of some Sobolev-like spaces adapted to almost periodicity, introduced by Blot [7]. Following Vo-Khac [27], the generalized derivative of \(u\in B^{2}(\mathbb{R}^{n})\) (when it exists) is \(\nabla u\in B^{2}(\mathbb{R}^{n})\) such that
$$\lim_{\tau\to0}\mathcal{M} \biggl\{ \biggl\vert \nabla u(t)- \frac{u (t+\tau )-u (t )}{\tau} \biggr\vert ^{2} \biggr\} _{t} =0. $$
The space \(B^{1,2}(\mathbb{R}^{n})\) is the collection of all functions \(u\in B^{2}(\mathbb{R}^{n})\) such that ∇u exists in \(B^{2}(\mathbb {R}^{n})\), and the space \(B^{2,2}(\mathbb{R}^{n})\) is the space of \(u\in B^{1,2}(\mathbb {R}^{n})\) such that \(\nabla^{2} u=\nabla(\nabla u)\) exists in \(B^{2}(\mathbb {R}^{n})\). It is easy to verify that the above-mentioned spaces are Hilbert spaces with the respective norms
$$\|u\|_{1,2}:= \bigl(\|u\|_{2}^{2}+\|\nabla u \|_{2}^{2} \bigr)^{\frac{1}{2}} \quad \text{and}\quad \|u \|_{2,2}:= \bigl(\|u\|_{2}^{2}+\|\nabla u \|_{2}^{2}+\|\nabla^{2} u\| _{2}^{2} \bigr)^{\frac{1}{2}}. $$
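As a simple illustration of the generalized derivative (this remark is ours, consistent with [7]): if \(u\in AP^{1}(\mathbb{R}^{n})\), then, writing \(\frac{u(t+\tau)-u(t)}{\tau}=\frac{1}{\tau}\int_{0}^{\tau}u'(t+s)\,ds\),
$$\mathcal{M} \biggl\{ \biggl\vert u'(t)-\frac{u(t+\tau)-u(t)}{\tau} \biggr\vert ^{2} \biggr\} _{t}\leq\sup_{t\in\mathbb{R}}\sup_{0< \vert s \vert \leq \vert \tau \vert } \bigl\vert u'(t+s)-u'(t) \bigr\vert ^{2}\rightarrow0 \quad\mbox{as } \tau\to0 $$
by the uniform continuity of \(u'\); hence \(AP^{1}(\mathbb{R}^{n})\subset B^{1,2}(\mathbb{R}^{n})\) with \(\nabla u=u'\).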
For the function \(f:\mathbb{R}^{n}\times\mathbb{R}^{n}\longrightarrow\mathbb {R}\), \((x,y)\mapsto f(x,y)\) of equation (1.1), we give the following hypotheses:
(H1):
\(f\in\mathcal{C}^{1}(\mathbb{R}^{n}\times\mathbb {R}^{n},\mathbb{R})\);
(H2):
\(\vert Df(X)-Df(Y) \vert \leq a\cdot \vert X-Y \vert \) for some constant \(a>0\) and for all \(X,Y\in\mathbb {R}^{n}\times\mathbb{R}^{n}\);
(H3):
f is a convex function on \(\mathbb{R}^{n}\times\mathbb{R}^{n}\);
(H4):
\(f(x,y)\geq c \vert \zeta \vert ^{2}+d\) for two numbers \(c>0\) and \(d\in\mathbb{R}\) and for all \((x,y)\in\mathbb {R}^{n}\times\mathbb{R}^{n}\), where \(\zeta=x\mbox{ or }y\).
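As a simple illustration (this example is ours, not from the paper), take \(n=1\) and fix \(0<b<1\); then
$$f(x,y)=\frac{1}{2}x^{2}+\frac{1}{2}y^{2}+bxy, \qquad D_{1}f(x,y)=x+by, \qquad D_{2}f(x,y)=y+bx $$
satisfies (H1), and (H2) holds with \(a=1+b\). Moreover, \(f(x,y)=\frac{1-b}{2}(x^{2}+y^{2})+\frac{b}{2}(x+y)^{2}\) is a sum of convex functions, so (H3) holds, and the same identity gives \(f(x,y)\geq\frac{1-b}{2}(x^{2}+y^{2})\), so (H4) holds with \(c=\frac{1-b}{2}\) and \(d=0\). For this choice, equation (1.1) reads \(u''(t)=2u(t)+2bu(-t)+e(t)\).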
Variational principles
We begin this section by establishing two lemmas which contain general properties of almost periodic functions.
Lemma 3.1
If \(u\in AP^{0}(\mathbb{R}^{n})\), then \([t\mapsto u(-t) ]\in AP^{0}(\mathbb{R}^{n})\). Furthermore, if τ is an ϵ-translation number of \(u(t)\), then τ is also an ϵ-translation number of \(u(-t)\), and \(\operatorname{mod}(u(t))=\operatorname{mod}(u(-t))\).
The proof can be completed by using Bohr's definition [9, p. 32]. □
Lemma 3.2
If \(u\in B^{p}(\mathbb{R}^{n})\), then the following assertions hold.
(1) \(\mathcal{M} \{u(t) \}_{t}=\mathcal{M} \{u(-t) \}_{t}\).
(2) \([t\mapsto u(-t) ]\in B^{p}(\mathbb{R}^{n})\).
The relation
$$\mathcal{M} \bigl\{ u(-t) \bigr\} _{t}=\lim_{T\to\infty} \frac{1}{2T} \int _{-T}^{T}u(-t)\,dt=\lim_{T\to\infty} \frac{1}{2T} \int_{T}^{-T}-u(s)\,ds= \mathcal{M} \bigl\{ u(t) \bigr\} _{t} $$
gives assertion (1). For assertion (2), note that if \((u_{m})_{m}\) is a sequence in \(AP^{0}(\mathbb{R}^{n})\) such that \(\lim_{m\to \infty}\|u-u_{m}\|_{p}=0\), then using Lemma 3.1 and the facts that \((u_{m}(-t))_{m}\) is a sequence in \(AP^{0}(\mathbb{R}^{n})\) and \(\|u-u_{m}\|_{p}= \| u(-t)-u_{m}(-t)\|_{p}\), we obtain
$$\lim_{m\to\infty} \bigl\Vert u(-t)-u_{m}(-t) \bigr\Vert _{p}=0, $$
which implies that \([t\mapsto u(-t) ]\in B^{p}(\mathbb{R}^{n})\). The proof is complete. □
Lemma 3.3
Under condition (H1), the functional \(J_{0}:AP^{1}(\mathbb{R}^{n})\rightarrow\mathbb{R}\) defined by
$$J_{0}(u):=\mathcal{M} \biggl\{ \frac{1}{2} \bigl\vert u'(t) \bigr\vert ^{2}+f \bigl(u (t ),u (-t ) \bigr)+e(t) \cdot u(t) \biggr\} _{t} $$
is of class \(\mathcal{C}^{1}\), and for all \(u,v\in AP^{0}(\mathbb{R}^{n})\),
$$\begin{aligned} DJ_{0}(u)\cdot v =&\mathcal{M}\bigl\{ u'(t)\cdot v'(t)+D_{1}f \bigl(u (t ),u (-t ) \bigr)\cdot v(t) \\ &{}+D_{2}f \bigl(u (t ),u (-t ) \bigr) \cdot v(-t)+e(t)\cdot v(t) \bigr\} _{t}. \end{aligned}$$
We consider the operator \(Q_{0}:AP^{1}(\mathbb{R}^{n})\rightarrow\mathbb{R}\) defined by \(Q_{0}(u):=\mathcal{M} \{\frac{1}{2} \vert u' \vert ^{2} \}\). The mapping \(q: \mathbb{R}^{n} \rightarrow\mathbb{R}\), \(q(x)=\frac{1}{2} \vert x \vert ^{2}\), is of class \({\mathcal {C}}^{1}\), so the Nemytskiĭ operator \({\mathcal {N}}^{0}_{q} :AP^{0}(\mathbb{R}^{n}) \rightarrow AP^{0}(\mathbb {R})\), \({\mathcal {N}}^{0}_{q}(\phi):= [t\mapsto\frac{1}{2} \vert \phi (t) \vert ^{2}]\), is of class \({\mathcal {C}}^{1}\), cf. [5]. The operator \(\frac{d}{dt}: AP^{1}(\mathbb{R}^{n}) \rightarrow AP^{0}(\mathbb {R}^{n})\) defined by \(\frac{d}{dt}(u):= u'\) is linear continuous, therefore, it is of class \({\mathcal {C}}^{1}\). The functional \({\mathcal {M}}^{0}: AP^{0}(\mathbb{R}) \rightarrow\mathbb {R}\) defined by \({\mathcal {M}}^{0}(\phi):={\mathcal {M}}^{0}_{t}\{\phi(t)\}\) is linear continuous, and hence it is of class \({\mathcal {C}}^{1}\).
Since \(Q_{0}= {\mathcal {M}}^{0} \circ{\mathcal {N}}^{0}_{q} \circ\frac{d}{dt}\), \(Q_{0}\) is of class \({\mathcal {C}}^{1}\) as composition of \({\mathcal {C}}^{1}\)-mappings. Hence, by the chain rule, we have \(DQ_{0}(u)v=\mathcal{M} \{u' \cdot v' \}\).
Furthermore, the operator \(\varTheta_{0}:AP^{1}(\mathbb{R}^{n})\rightarrow \mathbb{R}\) defined by \(\varTheta_{0}(u):=\mathcal{M} \{e \cdot u \} \) is linear continuous, so it is of class \({\mathcal {C}}^{1}\) and its differential is given by \(D\varTheta _{0}(u)v=\mathcal{M} \{e \cdot v \}\).
We consider the operator \(\varPhi_{0}:AP^{1}(\mathbb{R}^{n})\rightarrow\mathbb {R}\) defined by \(\varPhi_{0}(u):=\mathcal{M} \{f (u (t ),u (-t ) ) \}_{t}\). It is not difficult to observe that the operator \(L_{0}:AP^{0}(\mathbb {R}^{n})\rightarrow AP^{0}(\mathbb{R}^{n})\times AP^{0}(\mathbb{R}^{n})\) defined by \(L_{0}(u)(t):=(u(t),u(-t))\) is linear. Both components of \(L_{0}\) are continuous and hence \(L_{0}\) is continuous. Therefore, \(L_{0}\) is of class \(\mathcal{C}^{1}\) and \(DL_{0}(u)v=L_{0}(v)\) for all \(u,v\in AP^{0}(\mathbb{R}^{n})\).
Now, under assumption (H1), the Nemytskiĭ operator \(\mathcal {N}_{f}^{0}:AP^{0}(\mathbb{R}^{n}\times\mathbb{R}^{n})\rightarrow AP^{0}(\mathbb {R})\) defined by \(\mathcal{N}_{f}^{0}(U)(t):=f (U(t) )\) is of class \(\mathcal{C}^{1}\) (see [6] for details). Moreover, for all \(U,V\in (AP^{0}(\mathbb{R}^{n}) )^{2}\), \(D\mathcal{N}_{f}^{0}(U)\cdot V=Df(U)\cdot V\).
Note that the linear operator \(\mathcal{M}_{0}:AP^{0}(\mathbb{R})\rightarrow \mathbb{R}\) defined by \(\mathcal{M}_{0}(u):=\mathcal{M} \{u(t) \}_{t}\) is continuous. It is of class \(\mathcal{C}^{1}\) and thus \(D\mathcal {M}_{0}(\phi)\psi=\mathcal{M}(\psi)\) for all \(\phi,\psi\in AP^{0}(\mathbb {R})\). Further, the linear operator \(in_{0}:AP^{1}(\mathbb{R}^{n})\rightarrow AP^{0}(\mathbb{R}^{n})\), \(in_{0}(u):=u\) is continuous, and consequently it is of class \(\mathcal{C}^{1}\) and so \(Din_{0}(u)v=in_{0}(v)\). Since \(\varPhi _{0}=\mathcal{M}_{0}\circ\mathcal{N}_{f}^{0}\circ L_{0}\circ in_{0}\), \(\varPhi_{0}\) is of class \(\mathcal{C}^{1}\) as a composition of \(\mathcal{C}^{1}\) operators. Using the chain rule, for all \(u,v\in AP^{1}(\mathbb{R}^{n})\),
$$\bigl(D\varPhi_{0}(u)\cdot v \bigr) (t)=\mathcal{M} \bigl\{ D_{1}f \bigl(u (t ),u (-t ) \bigr)\cdot v(t)+ D_{2}f \bigl(u (t ),u (-t ) \bigr)\cdot v(-t) \bigr\} _{t}. $$
By virtue of \(J_{0}=Q_{0}+\varPhi_{0}+\varTheta_{0}\), \(J_{0}\) is of class \(\mathcal {C}^{1}\) as a sum of three \(\mathcal{C}^{1}\) functionals. Therefore, for all \(u,v\in AP^{1}(\mathbb{R}^{n})\), we have (3.1). This completes the proof. □
Lemma 3.4
Assume that assumptions (H1) and (H2) are satisfied. Then the Nemytskiĭ operator \(\mathcal{N}^{1}_{f}:B^{2}(\mathbb{R}^{n}\times\mathbb{R}^{n})\rightarrow B^{1}(\mathbb{R})\) defined by \(\mathcal{N}^{1}_{f}(U)(t):=f (U(t) )\) is well defined and is of class \(\mathcal{C}^{1}\), and \(D\mathcal{N}^{1}_{f}(U) \cdot V=Df(U)\cdot V\) for all \(U, V \in B^{2}(\mathbb{R}^{n}\times\mathbb{R}^{n})\).
It suffices to remark that if (H1) and (H2) hold, then, for all \(X\in\mathbb{R}^{n}\times\mathbb{R}^{n}\),
$$\bigl\vert Df(X) \bigr\vert \leq \bigl\vert Df(X)-Df(0) \bigr\vert + \bigl\vert Df(0) \bigr\vert \leq a \vert X \vert + \bigl\vert Df(0) \bigr\vert . $$
Using the mean value theorem (see [2, p. 144]), we deduce that, for all \(X\in\mathbb{R}^{n}\times\mathbb{R}^{n}\),
$$\begin{aligned} \bigl\vert f(X) \bigr\vert \leq& \bigl\vert f(X)-f(0) \bigr\vert + \bigl\vert f(0)\bigr\vert \\ \leq& \sup_{\xi\in]0, X[} \bigl\vert Df(\xi) \bigr\vert \vert X \vert+ \bigl\vert f(0) \bigr\vert \\ \leq& \bigl(a \vert X \vert + \bigl\vert Df(0) \bigr\vert \bigr) \vert X \vert+ \bigl\vert f(0) \bigr\vert \\ \leq& a \vert X \vert ^{2} + \bigl\vert Df(0) \bigr\vert \vert X \vert+ \bigl\vert f(0) \bigr\vert \\ \leq& a \vert X \vert ^{2} + \frac{1}{2} \bigl\vert Df(0) \bigr\vert ^{2} +\frac{1}{2} \vert X \vert^{2} + \bigl\vert f(0) \bigr\vert \\ =& \biggl(a+\frac{1}{2} \biggr) \vert X \vert^{2} + \biggl( \frac{1}{2} \bigl\vert Df(0) \bigr\vert ^{2} + \bigl\vert f(0) \bigr\vert \biggr). \end{aligned}$$
Now, arguing as in [7, Theorem 2], we obtain the result. The proof is complete. □
Proposition 3.5
Under condition (H1), the following assertions are equivalent.
(1) u is a critical point of \(J_{0}\) on \(AP^{1}(\mathbb{R}^{n})\).
(2) u is a strong almost periodic solution of (1.1).
$$q(t):=D_{1}f \bigl(u (t ),u (-t ) \bigr)+D_{2}f \bigl(u (-t ),u (t ) \bigr)+e(t). $$
We know that \(q\in AP^{0}(\mathbb{R}^{n})\) for \(u\in AP^{0}(\mathbb{R}^{n})\). Let us first assume assertion (1). Since the mean value is invariant by reflection of the argument, we have
$$\mathcal{M} \bigl\{ D_{2}f \bigl(u (t ),u (-t ) \bigr) \cdot v(-t) \bigr\} _{t}=\mathcal{M} \bigl\{ D_{2}f \bigl(u (-t ),u (t ) \bigr) \cdot v(t) \bigr\} _{t}. $$
Hence, by Lemma 3.3, for all \(v\in AP^{1}(\mathbb{R}^{n})\), we get \(0 =\mathcal{M} \{u'\cdot v'+q\cdot v \}\). Finally, by using the same reasoning as in the proof of [5, Theorem 1], we obtain \(u\in AP^{2}(\mathbb{R}^{n})\) and \(u''=q\), which is exactly (1.1).
Conversely, if u is a strong almost periodic solution of (1.1), then we have \(u''=q\). Hence, for all \(v\in AP^{1}(\mathbb{R}^{n})\), we obtain
$$DJ_{0}(u) \cdot v=\mathcal{M} \bigl\{ u' \cdot v'+q \cdot v \bigr\} =\mathcal{M} \biggl\{ \frac{d}{dt} \bigl(u' \cdot v \bigr) \biggr\} =0. $$
This completes the proof. □
If conditions (H1) and (H2) are fulfilled, then the functional \(J_{1}:B^{1,2}(\mathbb{R}^{n})\rightarrow\mathbb{R}\) defined by
$$J_{1}(u):=\mathcal{M} \biggl\{ \frac{1}{2} \bigl\vert \nabla u(t) \bigr\vert ^{2}+f \bigl(u (t ),u (-t ) \bigr)+e(t) \cdot u(t) \biggr\} _{t} $$
is of class \(\mathcal{C}^{1}\). Moreover, for all \(u,v\in B^{1,2}(\mathbb{R}^{n})\),
$$\begin{aligned} DJ_{1}(u) \cdot v =&\mathcal{M}\bigl\{ \nabla u(t) \cdot\nabla v(t)+D_{1}f \bigl(u (t ),u (-t ) \bigr) \cdot v(t) \\ &{}+D_{2}f \bigl(u (t ),u (-t ) \bigr) \cdot v(-t)+e(t) \cdot v(t) \bigr\} _{t}. \end{aligned}$$
We consider the operator \(Q_{1}:B^{1,2}(\mathbb{R}^{n})\rightarrow\mathbb {R}\) defined by \(Q_{1}(u):=\mathcal{M} \{\frac{1}{2} \vert \nabla u \vert ^{2} \}\). The mapping \(q: \mathbb{R}^{n} \rightarrow\mathbb{R}\), \(q(x)=\frac{1}{2} \vert x \vert ^{2}\), is of class \({\mathcal {C}}^{1}\). Since \(Dq(x)=x\) satisfies conditions of [13, Theorem 2.6], the Nemytskiĭ operator \({\mathcal {N}}_{q}: B^{2}(\mathbb{R}^{n}) \rightarrow B^{1}(\mathbb {R})\) defined by \({\mathcal {N}}_{q}(v):= [t\mapsto\frac{1}{2} \vert v(t) \vert ^{2}]\) is of class \({\mathcal {C}}^{1}\) and \(D{\mathcal {N}}_{q}(v) \cdot h=[t\mapsto v(t) \cdot h(t)]\) for all \(v, h \in B^{2}(\mathbb{R}^{n})\).
Since the derivation operator \(\nabla: B^{1,2}(\mathbb{R}^{n})\rightarrow B^{2}(\mathbb{R})\) and the operator \({\mathcal {M}} : B^{1}(\mathbb{R}) \rightarrow\mathbb{R}\) are linear continuous, ∇ and \({\mathcal {M}}\) are of class \({\mathcal {C}}^{1}\). Therefore, \(Q_{1}={\mathcal {M}} \circ {\mathcal {N}}_{q} \circ\nabla\) is of class \({\mathcal {C}}^{1}\) as a composition of \({\mathcal {C}}^{1}\)-mappings. Moreover, using the chain rule, we have \(DQ_{1}(u)\cdot v=\mathcal{M} \{\nabla u \cdot\nabla v \}\) for all \(u, v \in B^{1,2}(\mathbb{R}^{n})\).
Now, the operator \(\varTheta_{1} :B^{1,2}(\mathbb{R}^{n})\rightarrow\mathbb {R}\) defined by \(\varTheta_{1}(u):=\mathcal{M} \{e \cdot u \}\) is linear continuous, and thus it is of class \(\mathcal{C}^{1}\) and its differential is given by \(D\varTheta_{1}(u)v=\mathcal{M} \{e \cdot v \}\).
Let us consider the operator \(\varPhi_{1}:B^{1,2}(\mathbb{R}^{n})\rightarrow \mathbb{R}\) defined by \(\varPhi_{1}(u):=\mathcal{M} \{f (u (t ),u (-t ) ) \}_{t}\). Note that the linear operator \(L_{1}:B^{2}(\mathbb{R}^{n})\rightarrow B^{2}(\mathbb{R}^{n})\times B^{2}(\mathbb{R}^{n})\) defined by \(L_{1}(u)(t):=(u(t),u(-t))\) is continuous and so it is of class \(\mathcal{C}^{1}\). Moreover, for all \(u,v\in B^{2}(\mathbb{R}^{n})\), we have \(DL_{1}(u)v=L_{1}(v)\).
Under assumptions (H1) and (H2), by virtue of Lemma 3.4, the Nemytskiĭ operator \(\mathcal{N}^{1}_{f}:B^{2}(\mathbb{R}^{n}\times \mathbb{R}^{n})\rightarrow B^{1}(\mathbb{R})\) defined by \(\mathcal {N}^{1}_{f}(U)(t):=f (U(t) )\) is of class \(\mathcal{C}^{1}\) and for all \(U,V\in B^{2}(\mathbb{R}^{n}\times\mathbb{R}^{n})\), \(D\mathcal {N}^{1}_{f}(U) \cdot V=Df(U) \cdot V\).
The continuous linear operator \(\mathcal{M}_{1}:B^{1}(\mathbb{R})\rightarrow \mathbb{R}\) defined by \(\mathcal{M}_{1}(u):=\mathcal{M} \{u(t) \} _{t}\) is of class \(\mathcal{C}^{1}\) and for all \(\phi,\psi\in B^{1}(\mathbb {R})\), \(D\mathcal{M}_{1}(\phi)\psi=\mathcal{M}(\psi)\). Besides, the linear operator \(in_{1}:B^{1,2}(\mathbb{R}^{n})\rightarrow B^{2}(\mathbb {R}^{n})\), \(in_{1}(u)=u\) is of class \(\mathcal{C}^{1}\) and \(Din_{1}(u)v=in_{1}(v)\).
Since \(\varPhi_{1}=\mathcal{M}_{1}\circ\mathcal{N}^{1}_{f}\circ L_{1}\circ in_{1}\), \(\varPhi_{1}\) is of class \(\mathcal{C}^{1}\) as it is a composition of \(\mathcal{C}^{1}\) operators. Hence, by the chain rule, for all \(u,v\in B^{1,2}(\mathbb{R}^{n})\),
$$\bigl(D\varPhi_{1}(u)\cdot v \bigr) (t)=\mathcal{M} \bigl\{ D_{1}f \bigl(u (t ),u (-t ) \bigr)\cdot v(t)+D_{2}f \bigl(u (t ),u (-t ) \bigr) \cdot v(-t) \bigr\} _{t}. $$
By virtue of \(J_{1}=Q_{1}+\varPhi_{1}+\varTheta_{1}\), \(J_{1}\) is of class \(\mathcal {C}^{1}\) as a sum of three \(\mathcal{C}^{1}\) functionals. Thus, for all \(u,v\in B^{1,2}(\mathbb{R}^{n})\), we obtain (3.2). The proof is complete. □
Proposition 3.7. Under conditions (H1) and (H2), the following assertions are equivalent.
(1) u is a critical point of \(J_{1}\) on \(B^{1,2}(\mathbb{R}^{n})\).
(2) u is a weak almost periodic solution of (1.1).
$$p(t):=D_{1}f \bigl(u (t ),u (-t ) \bigr)+D_{2}f \bigl(u (-t ),u (t ) \bigr)+e(t). $$
It is well known that \(p\in B^{2}(\mathbb{R}^{n})\) if \(u\in B^{2}(\mathbb {R}^{n})\). Now if we assume that \(u\in B^{1,2}(\mathbb{R}^{n})\) is a critical point of \(J_{1}\), then the condition \(DJ_{1}(u)=0\) can be written as \(\mathcal{M} \{\nabla u\cdot\nabla v \}=-\mathcal{M} \{ p\cdot v \}\) for all \(v\in B^{1,2}(\mathbb{R}^{n})\). Hence, using [7, Proposition 10], the last condition implies that \(\nabla u\in B^{1,2}(\mathbb{R}^{n})\), i.e., \(u\in B^{2,2}(\mathbb{R}^{n})\) and \(\nabla^{2} u=p\), which exactly means that u is a weak almost periodic solution of (1.1).
Conversely, assume that the assertion (2) is true. Then \(\nabla u\in B^{1,2}(\mathbb{R}^{n})\). Using the fact that \(\mathcal{M} \{\nabla w \}=0\) for all \(w\in B^{1,2}(\mathbb{R}^{n})\) (see [7, Proposition 3] for details) and [7, Proposition 9], we have, for all \(h\in AP^{1}(\mathbb{R}^{n})\),
$$\begin{aligned} 0 =&\mathcal{M} \bigl\{ \nabla (\nabla u \cdot h ) \bigr\} \\ =& \mathcal{M} \bigl\{ \nabla^{2} u \cdot h \bigr\} +\mathcal{M} \bigl\{ \nabla u \cdot h' \bigr\} \\ =& \mathcal{M} \{p \cdot h \}+\mathcal{M} \bigl\{ \nabla u \cdot h' \bigr\} \\ =&DJ_{1}(u) \cdot h. \end{aligned}$$
Since \(AP^{1}(\mathbb{R}^{n})\) is dense in \(B^{1,2}(\mathbb{R}^{n})\), we have \(DJ_{1}(u) \cdot h=0\) for all \(h\in B^{1,2}(\mathbb{R}^{n})\). Therefore, \(DJ_{1}(u)=0\), which proves our claim. This completes the proof. □
Structure results on \(AP^{0}(\mathbb{R}^{n})\)
In this section, we give some structure results on the set of strong almost periodic solutions of equation (1.1). The main tool is the variational structure of the problem.
Under assumptions (H1) and (H3), the following assertions hold.
(1) The set of the strong almost periodic solutions of (1.1) is a convex closed subset of \(AP^{1}(\mathbb{R}^{n})\).
(2) If \(u_{1}\) is a \(T_{1}\) periodic solution of (1.1), \(u_{2}\) is a \(T_{2}\) periodic solution of (1.1), and \(T_{1}/T_{2}\) is not rational, then \((1-\theta)u_{1}+\theta u_{2}\) is a strong almost periodic but nonperiodic solution of (1.1) for all \(\theta\in(0,1)\).
Since f is convex and is of class \(\mathcal{C}^{1}\), the operator \(J_{0}\) is convex and is of class \(\mathcal{C}^{1}\) on \(AP^{1}(\mathbb{R}^{n})\). Therefore,
$$\bigl\{ u\in AP^{1}\bigl(\mathbb{R}^{n}\bigr): J_{0}(u)=\inf J_{0} \bigl(AP^{1}\bigl(\mathbb {R}^{n}\bigr) \bigr) \bigr\} = \bigl\{ u\in AP^{1}\bigl( \mathbb{R}^{n}\bigr): DJ_{0}(u)=0 \bigr\} $$
is closed and convex, and hence assertion (1) becomes a consequence of Proposition 3.5. The assertion (2) is a straightforward consequence of (1). The proof is complete. □
Under assumptions (H1) and (H3), if \(e=0\), then the following assertions hold.
(1) If u is a strong almost periodic solution of (1.1) and \(T\in(0,\infty)\) satisfies \(a (u,2\pi/T )\neq0\), then there exists a nonconstant T periodic solution of (1.1).
(2) If u is a strong almost periodic solution of (1.1), then \(\mathcal{M} \{u \}\) is a constant solution of (1.1).
Define \(C_{T,\nu}(u)(t):=\frac{1}{\nu}\sum_{k=0}^{\nu-1}u(t+kT)\) for all \(\nu\in\mathbb{N}^{\ast}\), where u is a strong almost periodic solution of (1.1). According to the Besicovitch theorem [4, p. 144], there exists a T periodic continuous function denoted by \(u_{T}\) such that
$$\lim_{\nu\to\infty} \bigl\Vert C_{T,\nu}(u)-u_{T} \bigr\Vert _{\infty}=0. $$
Thus we can easily verify that
$$\lim_{\nu\to\infty} \bigl\Vert C_{T,\nu}(u)-u_{T} \bigr\Vert _{\mathcal{C}^{1}}=0. $$
Since \(e=0\), \(t\mapsto u(t+kT)\) is a strong almost periodic solution. Furthermore, since \(C_{T,\nu}(u)\) is a convex combination of strong almost periodic solutions of (1.1), \(C_{T,\nu}(u)\) is also a strong almost periodic solution of (1.1), and hence \(u_{T}\) is also a strong almost periodic solution by the closedness of the set of strong almost periodic solutions. Thus \(u_{T}\) is a T periodic solution of (1.1). Now, using a straightforward calculation, we can easily observe that \(a (C_{T,\nu}(u),\frac{2\pi}{T} )=a (u,\frac{2\pi}{T} )\), and consequently \(a (u_{T},\frac{2\pi}{T} )=a (u,\frac {2\pi}{T} )\neq0\); hence \(u_{T}\) is not constant, which proves assertion (1).
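For completeness, here is one way to carry out this straightforward calculation; it assumes the usual definition of the Fourier–Bohr coefficient, \(a(u,\lambda):=\mathcal{M} \{u(t)e^{-i\lambda t} \}_{t}\), and the invariance of the mean value under translations:
$$\begin{aligned} a \biggl(C_{T,\nu}(u),\frac{2\pi}{T} \biggr) =&\frac{1}{\nu}\sum_{k=0}^{\nu-1}\mathcal{M} \bigl\{ u(t+kT)e^{-i\frac{2\pi}{T}t} \bigr\} _{t} =\frac{1}{\nu}\sum_{k=0}^{\nu-1}\mathcal{M} \bigl\{ u(s)e^{-i\frac{2\pi}{T}(s-kT)} \bigr\} _{s} \\ =&\frac{1}{\nu}\sum_{k=0}^{\nu-1}e^{i2\pi k}\,a \biggl(u,\frac{2\pi}{T} \biggr) =a \biggl(u,\frac{2\pi}{T} \biggr), \end{aligned}$$
since \(e^{i2\pi k}=1\) for every integer k.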
To prove assertion (2), it suffices to choose \(T\in(0,\infty)\) such that \(\frac{2\pi}{T} (\mathbb{Z}- \{0 \} )\cap\varLambda (u)=\emptyset\). So all the Fourier–Bohr coefficients of \(u_{T}\) are zero except (perhaps) the mean value of \(u_{T}\) which is equal to \(\mathcal {M} \{u \}\). This completes the proof. □
Existence results
In this section, we study the weak almost periodic solutions of equation (1.1). In the previous section, we used a variational viewpoint, but here the Hilbert structure of \(B^{2}(\mathbb{R}^{n})\) permits us to obtain an existence theorem by using direct methods of the calculus of variations. Finally, in Theorem 5.2, we give a result of density of the almost periodic forcing terms for which equation (1.1) possesses usual almost periodic solutions.
Theorem 5.1. Under assumptions (H1)–(H4), for each \(e\in B^{2}(\mathbb{R}^{n})\), there exists a \(u\in B^{2,2}(\mathbb{R}^{n})\) which is a weak almost periodic solution of (1.1). Moreover, the set of the weak almost periodic solutions of (1.1) is a convex set.
Using Lemma 3.6, under assumptions (H1) and (H2), the functional \(J_{1}\) is of class \(\mathcal{C}^{1}\). It follows from (H3) that \(J_{1}\) is a convex functional. Since the mean value is invariant by reflection, assumption (H4) implies that, for all \(u\in B^{1,2}(\mathbb{R}^{n})\),
$$J_{1}(u) \geq\frac{1}{2}\|\nabla u\|_{2}^{2}+c\|u \|_{2}^{2}-\|u\|_{2}\|e\|_{2} \geq \alpha\|u\|_{1,2}^{2}-\|u\|_{2}\|e\|_{2}, $$
where \(\alpha:=\min \{\frac{1}{2},c \}\). Consequently, \(J_{1}\) is coercive on \(B^{1,2}(\mathbb{R}^{n})\), i.e., \(J_{1}(u)\to\infty\) as \(\|u\|_{1,2}\to\infty\), and so (see [10, p. 46]) there exists a \(u\in B^{1,2}(\mathbb{R}^{n})\) such that \(J_{1}(u)=\inf J_{1} (B^{1,2}(\mathbb{R}^{n}) )\). Therefore, we conclude that \(DJ_{1}(u)=0\) and u is a weak almost periodic solution of (1.1) by using Proposition 3.7. Hence, the existence is proved.
On the basis of Lemma 3.6, the set of the weak almost periodic solutions of (1.1) is equal to the set \(\{u\in B^{1,2}(\mathbb {R}^{n}) : DJ_{1}(u)=0 \}\). Since \(J_{1}\) is convex, this set is also equal to \(\{u\in B^{1,2}(\mathbb{R}^{n}) : J_{1}(u)= \inf J_{1} (B^{1,2}(\mathbb{R}^{n}) ) \}\) which is a convex set. Thus, the set of the weak almost periodic solutions of (1.1) is convex. The proof is complete. □
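As a remark on the coercivity step used in the proof above, the estimate can be written out explicitly; it only uses \(\|u\|_{2}\leq\|u\|_{1,2}\), which we take to follow from the definition of the norm on \(B^{1,2}(\mathbb{R}^{n})\):
$$\alpha\|u\|_{1,2}^{2}-\|u\|_{2}\|e\|_{2}\geq\alpha\|u\|_{1,2}^{2}-\|u\|_{1,2}\|e\|_{2}=\|u\|_{1,2} \bigl(\alpha\|u\|_{1,2}-\|e\|_{2} \bigr)\longrightarrow\infty \quad\text{as } \|u\|_{1,2}\rightarrow\infty. $$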
Theorem 5.2. Assume that (H1)–(H4) hold. Then, for each \(e\in AP^{0}(\mathbb {R}^{n})\) and for each \(\epsilon>0\), there exist an \(e_{\epsilon}\in AP^{0}(\mathbb{R}^{n})\) and a \(u_{\epsilon}\in AP^{2} (\mathbb {R}^{n})\) such that \(\|e-e_{\epsilon}\|_{2}<\epsilon\) and
$$u''_{\epsilon}(t)=D_{1}f \bigl(u_{\epsilon}(t),u_{\epsilon}(-t)\bigr)+D_{2} f \bigl(u_{\epsilon}(-t),u_{\epsilon}(t)\bigr)+e_{\epsilon}(t). $$
Consider the operator \(\varGamma:B^{2,2}(\mathbb{R}^{n})\rightarrow B^{2}(\mathbb{R}^{n})\) defined by
$$\varGamma(u):=\nabla^{2}u-D_{1}f \bigl(u(t),u(-t) \bigr)-D_{2}f \bigl(u(-t),u(t) \bigr). $$
Under (H1) and (H2), the operators
$$\textstyle\begin{cases} \varGamma_{1}:B^{2,2}(\mathbb{R}^{n})\rightarrow B^{2}(\mathbb{R}^{n}), \\ \varGamma_{1}(u)(t):=D_{1}f (u(t),u(-t) ), \end{cases} $$
$$\textstyle\begin{cases} \varGamma_{2}:B^{2,2}(\mathbb{R}^{n})\rightarrow B^{2}(\mathbb{R}^{n}), \\ \varGamma_{2}(u)(t):=D_{2}f (u(-t),u(t) ), \end{cases} $$
are continuous (cf. [7, Theorem 1]). Since the operator \(\nabla ^{2}:B^{2,2}(\mathbb{R}^{n})\rightarrow B^{2}(\mathbb{R}^{n})\) is continuous, Γ is continuous.
From Theorem 5.1, we know that \(\varGamma (B^{2,2}(\mathbb {R}^{n}) )=B^{2}(\mathbb{R}^{n})\), and so \(AP^{0}(\mathbb {R}^{n})\subset\varGamma (B^{2,2}(\mathbb{R}^{n}) )\). Let \(e\in AP^{0}(\mathbb{R}^{n})\). Then \(e\in\varGamma ( B^{2,2}(\mathbb {R}^{n}) )\), and thus there exists a \(u\in B^{2,2}(\mathbb {R}^{n})\) such that \(\varGamma(u)=e\). Since \(AP^{2}(\mathbb{R}^{n})\) is dense in \(B^{2,2}(\mathbb{R}^{n})\), for each \(\epsilon\in(0,\infty)\), there exists a \(u_{\epsilon}\in AP^{2}(\mathbb{R}^{n})\) such that \(\| u_{\epsilon}-u\|_{2,2}<\epsilon\). An application of continuity of Γ implies that \(\|\varGamma(u_{\epsilon})-e\|_{2}<\epsilon\). Taking into account that \(\varGamma(u_{\epsilon})\in AP^{0}(\mathbb{R}^{n})\), let \(e_{\epsilon}:=\varGamma(u_{\epsilon})\). Then \(e_{\epsilon}\) and \(u_{\epsilon}\) satisfy the desired results. This completes the proof. □
Aftabizadeh, A.R., Huang, Y.K., Wiener, J.: Bounded solutions for differential equations with reflection of the argument. J. Math. Anal. Appl. 135, 31–37 (1988)
Alexeev, V.M., Tihomirov, V.M., Fomin, S.V.: Commande Optimale, French edn. MIR, Moscow (1982)
Ayachi, M., Lassoued, D.: On the existence of Besicovitch almost periodic solutions for a class of neutral delay differential equations. Facta Univ., Ser. Math. Inform. 29, 131–144 (2014)
Besicovitch, A.S.: Almost Periodic Functions. Cambridge University Press, Cambridge (1932)
Blot, J.: Calculus of variations in mean and convex Lagrangians. J. Math. Anal. Appl. 134, 312–321 (1988)
Blot, J.: Une approche variationnelle des orbites quasi-périodiques des systèmes hamiltoniens. Ann. Sci. Math. Qué. 13, 7–32 (1990) (French)
Blot, J.: Oscillations presque-périodiques forcées d'équations d'Euler–Lagrange. Bull. Soc. Math. Fr. 122, 285–304 (1994) (French)
Blot, J., Lassoued, D.: Bumps of potentials and almost periodic oscillations. Afr. Diaspora J. Math. 12, 122–133 (2011)
Bohr, H.: Almost Periodic Functions. Chelsea, New York (1956)
Brézis, H.: Analyse Fonctionnelle. Théorie et Applications. Masson, Paris (1983) (French)
Buşe, C., Lassoued, D., Nguyen, T.L., Saierli, O.: Exponential stability and uniform boundedness of solutions for nonautonomous periodic abstract Cauchy problems. An evolution semigroup approach. Integral Equ. Oper. Theory 74, 345–362 (2012)
Corduneanu, C.: Almost Periodic Functions, 2nd English edn. Chelsea, New York (1989)
de Figueiredo, D.G.: The Ekeland Variational Principle with Applications and Detours, Tata Institute of Fundamental Research, Bombay. Springer, Berlin (1989)
Fink, A.M.: Almost Periodic Differential Equations. Lecture Notes in Mathematics. Springer, Berlin (1974)
Gupta, C.P.: Boundary value problems for differential equations in Hilbert spaces involving reflection of the argument. J. Math. Anal. Appl. 128, 375–388 (1987)
Gupta, C.P.: Existence and uniqueness theorems for boundary value problems involving reflection of the argument. Nonlinear Anal. 11, 1075–1083 (1987)
Gupta, C.P.: Two-point boundary value problems involving reflection of the argument. Int. J. Math. Math. Sci. 10, 361–371 (1987)
Hai, D.D.: Two point boundary value problem for differential equations with reflection of argument. J. Math. Anal. Appl. 144, 313–321 (1989)
Lassoued, D.: Exponential dichotomy of nonautonomous periodic systems in terms of the boundedness of certain periodic Cauchy problems. Electron. J. Differ. Equ. 2013, 89 (2013)
Lassoued, D.: New aspects of nonautonomous discrete systems stability. Appl. Math. Inf. Sci. 9, 1693–1698 (2015)
Levitan, B.M., Zhikov, V.V.: Almost Periodic Functions and Differential Equations. Cambridge University Press, Cambridge (1982)
O'Regan, D.: Existence results for differential equations with reflection of the argument. J. Aust. Math. Soc. A 57, 237–260 (1994)
Piao, D.: Periodic and almost periodic solutions for differential equations with reflection of the argument. Nonlinear Anal. 57, 633–637 (2004)
Piao, D.: Pseudo almost periodic solutions for differential equations involving reflection of the argument. J. Korean Math. Soc. 41, 747–754 (2004)
Piao, D., Sun, J.: Besicovitch almost periodic solutions for a class of second order differential equations involving reflection of the argument. Electron. J. Qual. Theory Differ. Equ. 2014, 41 (2014)
Šarkovskii, A.N.: Functional-differential equations with a finite group of argument transformations. In: Asymptotic Behavior of Solutions of Functional-Differential Equations, pp. 118–142, 157. Akad. Nauk Ukrain. SSR, Inst. Mat., Kiev (1978)
Vo-Khac, K.: Étude des fonctions quasi-stationnaires et de leurs applications aux équations différentielles opérationnelles. Bull. Soc. Math. France Mém. 6 (1966) (French)
Wang, Y., Zada, A., Ahmad, N., Lassoued, D., Li, T.: Uniform exponential stability of discrete evolution families on space of p-periodic sequences. Abstr. Appl. Anal. 2014, Article ID 784289 (2014)
Wiener, J., Aftabizadeh, A.R.: Boundary value problems for differential equations with reflection of the argument. Int. J. Math. Math. Sci. 8, 151–163 (1985)
Zima, M.: On positive solutions of functional-differential equations in Banach spaces. J. Inequal. Appl. 6, 359–371 (2001)
The authors express their sincere gratitude to the editors for the careful reading of the original manuscript and useful comments that helped to improve the presentation of the results and accentuate important details.
Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
This research is supported by NNSF of P.R. China (Grant Nos. 11771115, 11271106, and 61503171), CPSF (Grant No. 2015M582091), NSF of Shandong Province (Grant No. ZR2016JL021), KRDP of Shandong Province (Grant No. 2017CXGC0701), DSRF of Linyi University (Grant No. LYDX2015BS001), and the AMEP of Linyi University, P.R. China.
College of Mathematics and Information Science, Hebei University, Baoding, P.R. China
Peiguang Wang
Département de Mathématiques, Faculté des Sciences de Gabès, Université de Gabès, Gabès, Tunisia
Dhaou Lassoued
School of Basic Sciences, Indian Institute of Technology Mandi, Mandi, India
Syed Abbas
Department of Mathematics, University of Peshawar, Peshawar, Pakistan
Akbar Zada
LinDa Institute of Shandong Provincial Key Laboratory of Network Based Intelligent Computing, Linyi University, Linyi, P.R. China
Tongxing Li
School of Information Science and Engineering, Linyi University, Linyi, P.R. China
All five authors contributed equally to this work. They all read and approved the final version of the manuscript.
Correspondence to Tongxing Li.
Wang, P., Lassoued, D., Abbas, S. et al. On almost periodicity of solutions of second-order differential equations involving reflection of the argument. Adv Differ Equ 2019, 4 (2019). https://doi.org/10.1186/s13662-018-1938-7
Accepted: 17 December 2018
Almost periodic solution
Second-order differential equation
Reflection of the argument
Variational principle
[Submitted on 5 Dec 2022 (v1), last revised 7 Dec 2022 (this version, v2)]
Title: Multiple Perturbation Attack: Attack Pixelwise Under Different $\ell_p$-norms For Better Adversarial Performance
Authors: Ngoc N. Tran, Anh Tuan Bui, Dinh Phung, Trung Le
Abstract: Adversarial machine learning has been both a major concern and a hot topic recently, especially with the ubiquitous use of deep neural networks in the current landscape. Adversarial attacks and defenses are usually likened to a cat-and-mouse game in which defenders and attackers evolve over time. On one hand, the goal is to develop strong and robust deep networks that are resistant to malicious actors. On the other hand, in order to achieve that, we need to devise even stronger adversarial attacks to challenge these defense models. Most existing attacks employ a single $\ell_p$ distance (commonly, $p\in\{1,2,\infty\}$) to define the concept of closeness and perform steepest gradient ascent w.r.t. this $p$-norm to update all pixels in an adversarial example in the same way. These $\ell_p$ attacks each have their own pros and cons; and there is no single attack that can successfully break through defense models that are robust against multiple $\ell_p$ norms simultaneously. Motivated by these observations, we come up with a natural approach: combining various $\ell_p$ gradient projections on a pixel level to achieve a joint adversarial perturbation. Specifically, we learn how to perturb each pixel to maximize the attack performance, while maintaining the overall visual imperceptibility of adversarial examples. Finally, through various experiments with standardized benchmarks, we show that our method outperforms most current strong attacks across state-of-the-art defense mechanisms, while keeping the adversarial examples visually clean.
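As a toy illustration of the idea sketched in this abstract (and not the authors' actual method or code), the snippet below blends the ℓ_∞ and ℓ_2 steepest-ascent directions with a per-pixel weight map before clipping to a simple ℓ_∞ budget; all names, parameters and the random data are ours.

```python
import numpy as np

def blended_lp_step(x, grad, weights, step=0.01, eps=8 / 255):
    """Toy per-pixel blend of l_inf and l_2 ascent directions (illustration only).

    x       : clean image with values in [0, 1]
    grad    : gradient of an attack loss w.r.t. x (same shape as x)
    weights : per-pixel blend in [0, 1]; 1 -> pure l_inf direction, 0 -> pure l_2
    """
    d_inf = np.sign(grad)                         # steepest ascent under the l_inf norm
    d_l2 = grad / (np.linalg.norm(grad) + 1e-12)  # steepest ascent under the l_2 norm
    direction = weights * d_inf + (1.0 - weights) * d_l2
    x_adv = x + step * direction
    x_adv = np.clip(x_adv, x - eps, x + eps)      # stay inside an l_inf ball around x
    return np.clip(x_adv, 0.0, 1.0)               # stay a valid image

# Random stand-ins for an image, its loss gradient and a (learned) weight map
rng = np.random.default_rng(0)
x = rng.random((3, 32, 32))
grad = rng.normal(size=x.shape)
weights = rng.random(x.shape)
x_adv = blended_lp_step(x, grad, weights)
```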
Comments: 18 pages, 8 figures, 7 tables
Subjects: Computer Vision and Pattern Recognition (cs.CV); Cryptography and Security (cs.CR); Machine Learning (cs.LG)
Cite as: arXiv:2212.03069 [cs.CV]
(or arXiv:2212.03069v2 [cs.CV] for this version)
From: Ngoc Tran
[v1] Mon, 5 Dec 2022 15:38:37 UTC (14,092 KB)
[v2] Wed, 7 Dec 2022 18:30:33 UTC (14,092 KB)
foo.castr: visualising the future AI workforce
Julio Amador Diaz Lopez1,2,
Miguel Molina-Solana1 ORCID: orcid.org/0000-0001-5688-2039 &
Mark T. Kennedy1,2
Big Data Analytics volume 3, Article number: 9 (2018)
The organization of companies and their HR departments is being hugely affected by recent advancements in computational power and Artificial Intelligence, and this trend is likely to rise dramatically in the next few years. This work presents foo.castr, a tool we are developing to visualise, communicate and facilitate the understanding of the impact of these advancements on the future of the workforce. It builds upon the idea that particular tasks within job descriptions will progressively be taken over by computers, forcing human jobs to be reshaped. In its current version, foo.castr presents three different scenarios to help HR departments plan for potential changes and disruptions brought by the adoption of Artificial Intelligence.
In their widely cited paper on how susceptible jobs are to computerisation, Frey and Osborne [1] used data from the US Department of Labor's O*NET database to predict that 47% of jobs will be at high risk of automation as technologies for computerising work continue to develop as expected (though others dramatically reduce such an impact [2]). With astute timing, World Economic Forum founder Klaus Schwab [3] argued that such computerisation would come especially from applications of artificial intelligence (AI) to cognitive work, and that they would bring social and economic change on a scale worthy of comparison to the industrial revolution. In short order, the social and economic impact of AI has become a frequent topic of books offering hopeful to dystopian views as seen in Life 3.0: Being Human in the Age of Artificial Intelligence [4] and Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy [5], respectively.
In Economics, we observe several developments that combine to bring rigour and consensus to anticipating how AI will affect the relationship between capital and labour. For instance, Autor et al. [6] observed a polarisation of labour markets featuring growth in lower skilled service jobs and growth in wages for highly educated and skilled workers doing so-called cognitive tasks, generally requiring some tertiary education or equivalent training.
Beaudry et al. [7], more recently, observed a decline in demand for cognitive tasks that began in 2001 and intensified after the Great Recession of 2008; notably, this finding reverses a long history of increasing demand for cognitive tasks and a positive relationship between new technologies and both average wages and productivity [8]. As a side point, Beaudry et al. [7] also observed increased inequality, a trend they link to polarised labour markets and declining demand for cognitive tasks; similarly, Acemoglu and Restrepo [9] suggest the rapid spread of physical robots for manufacturing could lead to increased inequality.
In fact, both scholarly and practitioner analyses of automation's effects on demand for labour are shifting in level of analysis from jobs to the set of tasks that make up jobs. For example, Acemoglu and Autor [9] focused on tasks and task replacing technologies rather than skills associated with low or high skilled jobs, effectively shifting focus from the familiar robots-for-jobs substitution scenario to a tools-for-tasks substitution in which tasks become targets for tool development. Similarly, in operations consulting, we find a call for focusing on "activities that can be automated rather than entire occupations" [10]. Following such an approach, consultants at AlphaBeta predicted that AI's impact on jobs will be far gentler than the Frey and Osborne projection.
Following this shift to analyse tools-for-tasks substitution instead of robots-for-jobs, this manuscript presents an approach to forecasting AI's impact on labour that proceeds from organisational data rather than from a national survey asking jobholders what they do —as is the case with the O*NET data from the US Department of Labor. Whereas a national survey lends itself to gross predictions of future employment, our view is that such predictions will benefit from analytical case studies of different kinds of organisations. Therefore, our approach uses a comprehensive HR data set for a subset of employees of a large bank (more than 50k employees in total).
As we will explain below, the dataset includes the structure of the organisation, headcount by job, and the tasks associated with jobs as detailed in job descriptions. We argue that working from organisation data offers two significant benefits compared to using data from a national survey of jobholders. First, for the focal organisation, future demand for labour is forecast not from what is typical for all companies, but from the organisation's own data about how tasks map to jobs. Second, focusing on a subset of a single organisation allows us to validate our mapping of tasks to jobs with company staff who know the jobs and tasks well.
To present our approach, we briefly review the literature and developments that inform our tools-for-tasks approach to forecasting the likely impact of AI on cognitive work. Next, we explain the build-up of our model from data collection to pre-processing and to modelling. Finally, we describe foo.castr and show some generated visualisations to convey model results and engage stakeholders seeking to anticipate the likely impact of AI on cognitive work.
Innovation and the future of organisations
Forecasting how AI-based automation of cognitive work will change organisations is complicated due to the fact that it mixes elements of incremental and radical innovations. In the literature on technology and innovation, scholars distinguish performance-enhancing technologies as radical versus incremental by examining their impact on organisational routines and structures [11]. Incremental innovations mainly preserve existing structures and routines while radical innovations introduce discontinuities in their evolution. These are also called sustaining versus disruptive innovations. Using this distinction, AI-based automation is incremental to the extent that it reproduces existing routines, but it is radical to the extent that it cuts the demand for human labour to levels that eventually force organisational restructuring.
The incremental and radical aspects of AI-based automation underlie the tools-for-tasks versus robots-for-jobs approaches, mentioned above, to forecasting the future size and shape of organisations. Although we take the tools-for-tasks approach because it provides direct and rigorous guidance to specific organisations seeking to forecast the size and shape of their future workforces, we believe the economic effects of AI-based automation will eventually be radical, as software-based robots come with declining marginal costs [12] and increasing returns to investment in developing AI-based alternatives to human labour [13].
For an example that briefly explains what we mean by tools-for-tasks versus robots-for-jobs, consider the co-evolution of personal computers, office automation software, and reductions in demand for administrative and middle management work. The spread of word processors, email and networked calendars has cut and changed the work of administrative assistants (formerly mostly called secretaries), and spreadsheets and databases cut and changed the work of reporting and analysis done by office workers formerly referred to as middle managers. As these technologies developed to create opportunities for business process outsourcing, they changed organisations radically, but not so much by introducing robots that eliminated whole jobs.
To explain how this logic underlies our model, we briefly describe the tools and tasks of our tools-for-tasks approach to modelling and visualising the likely effects of AI on an organisation's workforce.
In recent years, research breakthroughs have unleashed a cascade of complementary technologies now propelling commercialisation of new technologies in three broad areas: (1) so-called "big data" technologies enabled by larger storage, faster networks, and new database architectures capable of organising and accessing both structured and unstructured data; (2) so-called "business analytics" capable of automating real-time operational decisions in areas such as advertising planning, pricing, and supply chain management; and (3) "decision making", meaning AI for replicating human decisions on classification tasks or for helping humans cut high-dimensional problem spaces down to patterns they can interpret and use in decisions.
To map these technologies to features of tasks, we use the following four categories as a simplifying framework of these technologies:
Data Wrangling (DW): Technologies for curating, collecting, cleaning, storing, and serving data for reporting and analysis. Examples include developing a dataset useful for understanding customers, modelling a supply chain, or supporting the management of a complex process.
Dynamic Optimisation (DO): Technologies that enable automation of real-time operational decisions to manage complex processes and flows of goods and information. Examples include setting prices, determining orders needed to economise on inventory while ensuring smooth flows in a supply chain, or recommending products that retail buyers are likely to want.
Supervised Learning (SL): Technologies that apply machine learning and deep learning techniques to replicate or support decisions featuring classification tasks. Examples of classification include 'decisions' such as whether to approve a loan, whether a biopsied tissue is benign or malignant, how to translate natural language, or what to reply in a conversation.
Unsupervised Learning (USL): Technologies that assist humans in creative or abductive inferences that reduce data to patterns that are interpretable and actionable for purposes such as opportunity identification or risk mitigation. Examples include segmenting customers to aid risk mitigation or opportunity identification; spotting patterns associated with risks such as financial crimes; and suggesting product designs based on feature sets that customers are already combining in relatively inconvenient ways.
To forecast how these technologies will affect the future size and shape of an organisation's workforce, we need task-level data on the work performed in different jobs. Ideally, the tools-for-tasks approach calls for data that is generally available from organisations so we can aggregate forecasts to build more general estimates of economic impact. In practice, this means seeking data that offers not only comprehensive coverage of all an organisation's jobs, but also fine-grained detail about the tasks associated with each job.
After considering various approaches to managing the trade-offs between comprehensive coverage of jobs and intensive task detail for each job, we chose to base our analyses on a data set comprising HR data on jobs and employees and the text of job descriptions; we call this HRJD (Human Resources Job Description) data. Although job descriptions vary in the extent to which they provide up-to-date and detailed information on key tasks, attrition and the practical demands of hiring combine to keep HRJD data comprehensive and up to date.
Additionally, substantially all medium- and large-sized organisations keep HR databases and job descriptions; hence, using HRJD holds the promise of collecting a library of organisational datasets that could be used, in time, to develop estimates of economy-wide effects of automation that are based on fine-grained data and analysis.
Modelling the impact of automation on the workforce
In our current study, we feed our model with data from a real global bank. This section then begins by describing our data source. Then, we turn to outline the steps taken to produce the initial state matrix needed to feed the model. Finally, we describe the algorithm used to produce displacement of jobs due to automation through time and the technologies used to visualise such changes.
As noted before, we would like to highlight that the view that AI and automation will simply replace humans is quite restricted. It reflects the belief that any AI or automation development has the sole purpose of mimicking human intelligence. What if, instead, AI and automation support and enhance human skills? Changes in the workforce will undoubtedly happen (and so may work-related wealth inequalities grow), but the prospects are much less fearful. Our tool actually accounts for all these scenarios, ranging from the more extreme (e.g. humans being replaced) to the more realistic (evolving workforces).
For our research, we used data from a global banking firm kindly made available by our industrial partners at Imperial Business Analytics. The data, in CSV format and stored at the Imperial Business Analytics data warehouse, contained information about 17,205 jobs for 55,482 employees within the bank. From the data we could obtain several other pieces of information: the job title associated with a textual description of tasks and the hours that the employee should dedicate to them, the position and hierarchical level of such a job within the organisation, and the department that job belongs to (e.g. Human Resources or Capital Markets). In total, we identified 348 different jobs within 10 departments and 8 different positions or levels. The following graphs (Figs. 1 and 2) illustrate the proportion of jobs at each department within the organisation and the number of people in each level of the organisation.
Number of employees within a specific department in the dataset from a global bank
Number of employees within a specific level (from our banking dataset)
Data preprocessing
With the final goal of understanding and visualising the likely effect automation will have on a workforce, we mapped the four categories of tools described above to features of tasks. Even if neither exhaustive nor entirely precise, such a mapping allows us to use scenarios to calculate and visualise the fraction of human work, in person-hours, that these tools could take on as they are fully adopted. Specifically, we map the tools to tasks as follows:
Repetitive tasks ↦ {Data Wrangling, Dynamic Optimisation}
Research related tasks ↦ {Dynamic Optimisation, Supervised Learning}
Standardized tasks ↦ {Data Wrangling, Supervised Learning}
Analysis related tasks ↦ {Supervised Learning, Unsupervised Learning}
At the same time, each task was associated with a set of action keywords that enabled us to match responsibilities in job descriptions to a specific technology (see Appendix for the complete list of keywords). It is important to underscore that, even if at first glance tasks do not uniquely match one technology, the action keywords allowed us to uniquely match a task category to a technology category.
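To make the keyword matching concrete, a minimal sketch is given below. In the study itself the 348 job descriptions were coded manually (as described next), so this is only an illustration of how the action-keyword lists of the Appendix could be applied automatically; the abbreviated keyword sets and all function names are ours, and the category-to-technology map follows the Appendix.

```python
import re

# Abbreviated action-keyword lists; the full lists are given in the Appendix.
TASK_KEYWORDS = {
    "repetitive": {"compiled", "filed", "scheduled", "processed", "recorded"},
    "research": {"experimented", "diagnosed", "investigated", "surveyed"},
    "analysis": {"estimated", "modelled", "forecast", "analysed", "audited"},
    "standardized": {"installed", "maintained", "debugged", "assembled"},
}

# Task category -> AI-related technologies, as mapped in the Appendix.
TASK_TO_TECH = {
    "repetitive": ("DW", "DO"),
    "research": ("DO", "USL"),
    "standardized": ("DW", "SL"),
    "analysis": ("SL", "USL"),
}

def match_task_categories(responsibility):
    """Return the task categories whose action keywords occur in a responsibility line."""
    words = set(re.findall(r"[a-z-]+", responsibility.lower()))
    return [category for category, keywords in TASK_KEYWORDS.items() if words & keywords]

print(match_task_categories("Compiled and filed weekly compliance reports"))  # ['repetitive']
```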
Having established a map between technologies and tasks, we manually checked the 348 job descriptions and, for each task of a specific job, the percentage indicating the proportion to which it could be done with one of the technologies described above. With this information, we calculated the percentage of replacement each technology will have on every department or level within the organisation. It is these percentages that we use as input for our model. The following charts (Figs. 3 and 4 respectively) show the percentage to which each department and level within the organisation is likely to be impacted by a particular technology.
Percentage of tasks to be carried out by AI-related technologies by department within the organisation; e.g., for the Insurance department, 22.24% of the tasks performed now by humans will be done by Data Wrangling-related technologies, 2.88% will be performed with Dynamic Optimisation related technologies, 10.14% with Supervised Learning technologies, and 6.02% with Unsupervised Learning technologies
Percentage of tasks to be carried out by AI-related technologies by level within the organisation; e.g., 50% of the tasks performed now by humans in the level P05 will be done by Data Wrangling-related technologies, and 15% with Unsupervised Learning technologies
Finally, we calculated the proportion of jobs that have the potential to be displaced by AI-related technologies. We performed this calculation by department and level within the organisation. The data obtained from the pre-processing stage was stored in the matrix AI (a D×4 matrix, where D stands for the number of departments within the organisation and 4 for the number of AI-related technologies).
Data modelling
The procedure we use to calculate changes associated with the workforce through time begins by assuming there are three main parameters that determine the feasibility of each of the technologies described above:
Scientific availability (\(S^{i}_{t}\)): This parameter indicates whether the scientific foundations of technology i exist at a given time t. We define \(S_{t}^{i} \in [0,1]\)
Commercial availability (\(C^{i}_{t}\)): This parameter indicates whether there are commercial vendors for technology i at time t. We define \(C_{t}^{i} \in [0,1]\)
Willingness to adopt (\(W^{i}_{t}\)): This parameter indicates whether a given firm will be willing to adopt technology i at time t. We define \(W_{t}^{i} \in [0,1]\)
As before, i={DW,DO,SL,USL} and t={1,2,...,T}. Notice that we do not assume to know the exact values of each of the parameters above. These parameters are introduced into the model by the user to produce the visualisations.
Next, we define the adoption of any of the technologies described before at time t as follows:
$$ adoption^{i}_{t} = S^{i}_{t} \times W^{i}_{t} \times C^{i}_{t} $$
Moreover, we define ADOPT to be the 4×T matrix with entries \({ADOPT}_{i,t} = adoption^{i}_{t}\). Having defined ADOPT, we proceed to calculate the proportion of tasks in a specific department that have the potential to be displaced at time t. This is given by:
$$ R = AI \times ADOPT $$
Where R is a D×T matrix that we call the replacement matrix. Specifically, each row of R stands for department d in the organisation, each of the columns of R represent a time period t and each entry (d,t) in R is the potential rate of replacement of tasks in a given department at a specific period of time. Notice that entry (d,t) in R is between zero and one. Therefore, we use the following to calculate the rate of persistence (i.e. the percentage of workers that potentially can remain in the organisation) of the workforce at time t:
$$ P_{d, t} = \left(1 - R_{d,t}\right) $$
Having calculated the rate of persistence, we proceed to calculate the actual proportion of tasks within the organisation that will remain by department at each period t. From our job description data, we know the proportion of workers performing certain tasks in each department within the organisation. This is given by init, which is a D×1 vector. Therefore, the proportion of workers remaining in the organisation by department at every period t will be given by:
$$ W = init \odot P $$
Finally, the proportion of the work that potentially will be done by AI is given by:
$$ AIW = init \odot AI $$
Where ⊙ represents element-wise multiplication. The same procedure was used to calculate the proportional impact of AI-related technologies by level within the organisation.
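A compact numpy sketch of the computation just described may help fix ideas. Shapes follow the text (D departments, four technologies, T periods), but the numerical values below are placeholders of our own, not values from the bank dataset:

```python
import numpy as np

D, T = 10, 10                            # departments and time periods

# AI[d, i]: fraction of department d's tasks replaceable by technology i (placeholder values)
rng = np.random.default_rng(1)
AI = rng.uniform(0.0, 0.2, size=(D, 4))

# Scientific availability, commercial availability and willingness to adopt per technology
S = np.ones((4, T))
C = np.ones((4, T))
Wa = np.ones((4, T))                     # willingness to adopt

ADOPT = S * C * Wa                       # adoption^i_t = S^i_t * C^i_t * W^i_t   (4 x T)
R = AI @ ADOPT                           # replacement matrix                     (D x T)
P = 1.0 - R                              # persistence of the human workforce     (D x T)

init = np.full((D, 1), 1.0 / D)          # initial share of tasks per department  (D x 1)
W = init * P                             # remaining human work over time         (D x T)
AIW = init * AI                          # work taken on by each technology       (D x 4)
```

The element-wise products in the last two lines rely on numpy broadcasting of the D×1 vector init across the columns of P and AI.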
Having described the modelling choices, we now describe the technologies behind foo.castr, the visualisation tool we have implemented to present the results of the model under different sets of parameters.
foo.castr: visualising the impact of AI automation on the workforce
The use of visualisation tools to convey information is ubiquitous in modern science. Humans are biologically and socially shaped to quickly decipher visual cues conveying a great amount of complex knowledge. In this context, visual tools provide a very valuable mechanism to 1) explore the data, and 2) communicate knowledge to others [14]. Visual data exploration can then be understood as an evolving hypothesis-generation process [15], in which hypotheses can be validated or rejected on a visual basis, and new ones can be introduced.
Large scale visualisation has recently arisen as a promising area of development, with the aim of providing not only a bigger display canvas, but also a social space for collaborative and interactive data exploration. At Imperial College London, we are fortunate enough to have available one such environment —the Data Observatory— in which visualisations for Bitcoin transactions [16] and large graphs [17] have been developed, to name a few.
With this rationale in mind, we have designed foo.castr as a visual tool that can be presented in a large visualisation studio such as the Data Observatory. It enables users to visualise the impact of AI automation on the workforce of an organisation. In particular, foo.castr allows users to:
Visualise the potential displacement of tasks by departments/levels within an organisation. Specifically, these views make use of Sankey diagrams to display projections of how AI-related technologies could replace labour through time.
Create scenarios by changing assumptions of \(S^{i}_{t}\), \(C^{i}_{t}\) and \(W^{i}_{t}\).
Visualise rates of adoption of AI-related technologies in a given period of time, given \(S^{i}_{t}\), \(C^{i}_{t}\) and \(W^{i}_{t}\) together with a brief description of the scenario they might represent.
The tool was built in Python 2.7 and made use of web technologies such as HTML, CSS and JavaScript to generate and display the visualisations. In order to load the data and serve the visualisation, we made use of Flask. In our systems, foo.castr is hosted on an Apache server on a Linux Ubuntu virtual machine.
foo.castr is currently a proprietary software from the authors of this manuscript and will be commercially available in the near future through consultancy agreements with interested parties.
In particular, the data was first preprocessed by manually mapping tasks to AI-related technologies (see "Data preprocessing" section) and then modelled using pandas and numpy (see "Data modelling" section). Once we obtain the matrices W and AIW, these are re-formatted into JSON to represent a directed graph. To be specific, at every period t a node in the graph g represents a department/level d within the organisation or an AI-related technology i. The nodes are connected to other(s) at period t+1. The strength of an edge indicates the proportion of node i∈g in period t that has gone to either node i∈g or another node j∈g in period t+1. Formatting the data in this way allowed us to make use of d3.js to produce the visualisation, and more concretely the sankey implementation in d3.js.
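To make the graph format concrete, the sketch below builds a d3-sankey style node/link structure and serves it with Flask. It is a simplified illustration under our own naming (the real foo.castr also includes cross-flows from departments to technologies), with placeholder labels and shares:

```python
from flask import Flask, jsonify

app = Flask(__name__)

def sankey_graph(labels, shares_by_period):
    """Build a d3-sankey style graph: one node per label and period, and links
    weighted by the share attached to each label in the following period."""
    nodes, links = [], []
    periods = len(shares_by_period)
    for t in range(periods):
        for label in labels:
            nodes.append({"name": "{} (t={})".format(label, t)})
    for t in range(periods - 1):
        for i, _ in enumerate(labels):
            links.append({
                "source": t * len(labels) + i,
                "target": (t + 1) * len(labels) + i,
                "value": shares_by_period[t + 1][i],
            })
    return {"nodes": nodes, "links": links}

# Placeholder data: two departments and one AI-related technology over three periods
LABELS = ["Retail Banking", "Insurance", "Supervised Learning"]
SHARES = [[0.50, 0.40, 0.10], [0.45, 0.35, 0.20], [0.40, 0.30, 0.30]]

@app.route("/graph")
def graph():
    return jsonify(sankey_graph(LABELS, SHARES))

if __name__ == "__main__":
    app.run()
```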
Within the framework, we defined three views. The first one (Fig. 5) graphs adoption curves that represent scenarios by depicting the assumptions made for the variables \(S^{i}_{t}\), \(C^{i}_{t}\) and \(W^{i}_{t}\). The second one (Figs. 6 and 7, the latter showing a detail) depicts the potential flow of tasks by department/level to AI technology, and vice versa, by period of time. The third one (Fig. 8) shows changes in the proportion of departments/levels and the use of AI technologies from period t to period t+1.
Panel I of foo.castr: It provides a brief description of the current scenario and a visual representation of how \(S^{i}_{t}\) and \(C^{i}_{t}\) have been modelled in it. The figure shows the scenario The rise of replicants (successful adoption of supervised learning related technologies but unsuccessful adoption of unsupervised learning related technologies)
Panel II of foo.castr: flow of jobs by department to AI technology by period of time. Every rectangle in the leftmost column coloured in shades of gray, blue and green represents a department within the organisation; the length of the rectangle represents its proportional size within the organisation. Every rectangle in the leftmost column coloured in shades of orange represents an AI-related technology, and its length its proportional use in the organisation. Strings flowing from the leftmost section to the next section to the right represent the proportion of tasks within a department of the organisation that get replaced with AI-related technologies. Subsequent strings flowing to the right represent tasks replaced with AI-related technologies at different points in time. The scenario used is The rise of replicants, and we can see how towards the end of the time span the shift towards AI-related technologies clearly grows
Detail from the left part of Fig. 6. Those departments are the ones depicted in Fig. 1
Panel III of foo.castr: change in the proportion of departments and use of AI technologies by period of time. Every rectangle in the leftmost column coloured in shades of gray, blue and green represents a department within the organisation; the length of the rectangle represents its proportional size within the organisation. Every rectangle in the leftmost column coloured in shades of orange represents an AI-related technology, and its length its proportional use in the organisation. Unlike the previous figure, the strings flowing from left to right represent the evolution of the share of tasks done either by a department of the organisation or by AI-related technologies. The scenario used is The rise of replicants, and we can see how towards the end of the time span the proportion of tasks replaced by AI-related technologies grows
In the current version of foo.castr, we have also designed three different predefined scenarios (described below) that assume different adoption variables \(S^{i}_{t}\), \(C^{i}_{t}\) and \(W^{i}_{t}\); these values are all saved as JSON files for later reproducibility, and some of them are shown in Tables 1, 2, 3, 4, 5 and 6.
Table 1 Values of scientific availability \(S^{i}_{t}\) for i=SL,USL and t=0,…,9 under the "AI winter, again" scenario
Table 2 Values of commercial availability \(C^{i}_{t}\) for i=SL,USL and t=0,…,9 under the "AI winter, again" scenario
Table 3 Values of scientific availability \(S^{i}_{t}\) for i=SL,USL and t=0,…,9 under the "Rise of replicants" scenario
Table 4 Values of commercial availability \(C^{i}_{t}\) for i=SL,USL and t=0,…,9 under the "Rise of replicants" scenario
Table 5 Values of scientific availability \(S^{i}_{t}\) for i=SL,USL and t=0,…,9 under the "Symbiots spread" scenario
Table 6 Values of commercial availability \(C^{i}_{t}\) for i=SL,USL and t=0,…,9 under the "Symbiots spread" scenario
To the best of our knowledge, foo.castr is the first tool that aims at providing a detailed view on the impact of AI and Automation on the workforce. Several generalist tools are available to create dashboards and to visualise data (e.g. Tableau, Excel, Spotfire, or PowerBI), but none is particularly aimed at understanding this impact.
Three illustrative scenarios for evaluating adoption of AI-related technologies
We designed three scenarios that represent different rates of success in the scientific development and commercial adoption of Supervised and Unsupervised Learning technologies. These scenarios represent three different rates of change for the variables \(S^{i}_{t}\) and \(C^{i}_{t}\) over a period of t=1,...,10 and their implications for the workforce of our industrial partner. For ease of exposition, we will assume \(W^{i}_{t}=1\) for all t, and that Data Wrangling and Dynamic Optimisation related technologies are now well developed scientifically and quickly adopted; i.e., \(S^{j}_{t}=C^{j}_{t}=W^{j}_{t}=1\) for all t and j={DW,DO}. Supervised and Unsupervised Learning related technologies, by contrast, lag in their scientific and commercial availability and, hence, willingness to adopt.
The three predefined scenarios are then:
"AI winter, again" This scenario assumes there are some scientific developments in AI related technologies at the moment, but they do not translate into commercial products that will be adopted by the general public. Tables 1 and 2 respectively describe \(S^{i}_{t}\) and \(C^{i}_{t}\) for supervised and unsupervised learning related technologies.
"The rise of replicants" In this scenario we assume there is a wealth of training data available to enable supervised learning related technologies, but scientific breakthroughs in unsupervised learning have stalled commercial availability and, thus, adoption of unsupervised learning related technologies. Tables 3 and 4 respectively describe \(S^{i}_{t}\) and \(C^{i}_{t}\) for supervised and unsupervised learning related technologies.
"Symbiots spread" This scenario encodes that scientific breakthroughs in unsupervised learning technologies enable an explosion on the adoption of all AI related technologies. Tables 5 and 6 respectively describe \(S^{i}_{t}\) and \(C^{i}_{t}\) for Supervised and Unsupervised Learning related technologies.
foo.castr was first presented to Imperial Business Analytics industrial partners at a workshop at the Data Science Institute in November 2017. The aim of the workshop was to provide an overview of how their workforce will change given the adoption of AI related technologies. This information was of particular interest to HR decision makers, as they will have to cope with the hiring, layoffs, retraining and, most importantly, reshuffling of their workforce.
After several additional events, we have been able to assess the plausibility and usefulness of foo.castr's predictions in interactive workshops with executives from large organisations operating in banking, business advisory services, legal services, retail, and telecommunications. In addition, we have presented the model to executives from diverse companies visiting Imperial College's Data Science Institute. In these workshops and presentations, we have seen evidence that employing predefined scenarios is effective for helping organisational decision-makers understand how workforce forecasts depend on the pace of adoption for the four categories of tools described above.
In particular, the scenarios allow participants to relate workforce forecasts to divergent estimates for the pace of technology adoption, and they stimulate conversation about how prepared a focal organisation is to keep pace with technology development and adoption. As important as the tools-for-tasks mapping is, our experience suggests decision makers see greater uncertainty in (1) the pace at which tools will be effective and available for performing tasks previously done by humans, and (2) organisational readiness to keep pace with the availability of these tools.
This work has presented foo.castr, a tools-for-tasks framework for facilitating group interactive modelling and visualising of the impact that AI and Automation are likely to have on the future size and shape of a workforce. Whether using scenarios or controls for varying key model parameters, our experiences using foo.castr in workshops suggest that scenario-based visualisation is a powerful approach to helping executive teams get to grips with the complex problem of understanding not only the future size and shape of their workforces, but also what they will have to do to keep pace with changes.
Based on our experience, our current research is focused on assessing whether dynamic visualisations of future workforce forecasts are made more or less understandable or convincing by enriching the scenario specification to include a larger set of parameters for relating the tools of AI and data science to the tasks performed by people in today's organisations. Additionally, simplifying submission of new organisational datasets (HRJD data) will ease the job of collecting a growing library of workforce forecasts, and as mentioned above, such a library is a firm foundation for bottom-up aggregation of organisational forecasts into a rigorous economy-wide estimate of AI's impact on demand for labour.
This section includes lists of verbs related to four core competencies. Note that words within the lists are not exclusive and they are not intended to be exhaustive. These words are then attributed to one AI-related technology to proceed with the categorisation of jobs described above. Notice that competencies may be related to one or more AI-related technologies and that the disambiguation of the category assigned to a particular job description was made by the coder.
Repetitive: furthered, administered, compiled, categorized, distributed, catalogued, familiarize, responded, corresponded, reviewed, adapted, approved, validated, cared-for, screened, incorporated, submitted, registered, prepared, recorded, ensured, coded, filed, organized, operated, assessed, classified, charted, clarified, logged, assisted, advocated, executed, verified, collected, set-up, scheduled, expedited, answered, supported, ordered, processed, purchased, generated, systematized, reserved, demonstrated, facilitated, supplied, arranged, insured, obtained, updated, aided, generated, provided.
Research: experimented, diagnosed, inspected, summarized, formulated, examined, modified, conducted, interpreted, solved, searched, invented, tested, located, calculated, determined, clarified, gathered, extracted, compared, surveyed, measured, organized, researched, detected, investigated, computed, interviewed, collected, explored.
Analysis: determined, estimated, appraised, developed, managed, resolved, spearheaded, utilized, transformed, reconciled, modelled, retrieved, integrated, shaped, assessed, surpassed, solved, critique, balanced, corrected, measured, audited, restored, reduced, researched, projected, monitored, marketed, engineered, programmed, analysed, reduced, identified, planned, evaluated, forecast, conserved.
Standardized: converted, installed, administered, calculated, facilitated, clarified, maintained, conserved, planned, adjusted, restored, scheduled, operated, tested, followed, rectified, overhauled, built, remodelled, fortified, simplified, regulated, displayed, replaced, repaired, upgraded, printed, assembled, specialized, specialized, studied, adapted, allocated, computed, debugged, constructed.
Relationship between competencies and AI-related technologies
Repetitive: Data Wrangling, Dynamic Optimization.
Research: Dynamic Optimization, Unsupervised Learning.
Standardised: Data Wrangling, Supervised Learning.
Analysis: Supervised Learning, Unsupervised Learning.
Frey CB, Osborne MA. The future of employment: How susceptible are jobs to computerisation?Technol Forecast Soc Chang. 2017; 114:254–80. https://doi.org/10.1016/j.techfore.2016.08.019.
Arntz M, Gregory T, Zierahn U. Revisiting the risk of automation. Econ Lett. 2017; 159:157–60. https://doi.org/10.1016/j.econlet.2017.07.001.
Schwab K. The Fourth Industrial Revolution. Foreign Aff. 2015. https://www.foreignaffairs.com/articles/2015-12-12/fourth-industrial-revolution.
Tegmark M. Life 3.0: Being Human in the Age of Artificial Intelligence. Allen Lane; 2017.
O'Neil C. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Penguin Random House USA; 2016. p. 259.
Autor DH, Katz LF, Kearney MS. The Polarization of the U.S. Labor Market. Am Econ Assoc Pap Proc. 2006; 2:189–94.
Beaudry P, Green DA, Sand BM. The Great Reversal in the Demand for Skill and Cognitive Tasks. J Labor Econ. 2016; 34(S1):199–247. https://doi.org/10.1086/682347.
Acemoglu D. Technical Change, Inequality, and the Labor Market. J Econ Lit. 2002; 40(1):7–72. https://doi.org/10.1257/0022051026976.
Acemoglu D, Restrepo P. The Race Between Machine and Man: Implications of Technology for Growth, Factor Shares and Employment. 2016. https://doi.org/10.3386/w22252. http://www.nber.org/papers/w22252.pdf.
Chui M, Manyika J, Miremadi M. How Many of Your Daily Tasks Could Be Automated? 2015. https://hbr.org/2015/12/how-many-of-your-daily-tasks-could-be-automated. Accessed 16 Feb 2018.
Tushman ML, Anderson P. Technological Discontinuities and Organizational Environments. Adm Sci Q. 1986; 31(3):439–65.
Brynjolfsson E, McAfee A. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York, London: WW Norton & Company; 2014.
Arthur WB. Increasing Returns and Path Dependence in the Economy. Ann Arbor: University of Michigan Press; 1994.
Fayyad U, Grinstein GG, Wierse A. Information Visualization in Data Mining and Knowledge Discovery. San Francisco: Morgan Kaufmann Publishers Inc.; 2002.
Keim DA. Visual exploration of large data sets. Commun ACM. 2001; 44(8):38–44. https://doi.org/10.1145/381641.381656.
McGinn D, Birch D, Akroyd D, Molina-Solana M, Guo Y, Knottenbelt WJ. Visualizing Dynamic Bitcoin Transaction Patterns. Big Data. 2016; 4(2):109–19. https://doi.org/10.1089/big.2015.0056.
Molina-Solana M, Birch D, Guo Y. Improving data exploration in graphs with fuzzy logic and large-scale visualisation. Appl Soft Comput. 2017; 53:227–35. https://doi.org/10.1016/j.asoc.2016.12.044.
The authors would like to thank 1) the global banking firm who kindly shared their HR data with the Imperial Business Analytics Centre; 2) Imperial College Business School students Chu Wang and Prithviraaj Shetty who kindly performed the labelling of tasks; and 3) Imperial's Data Science Institute for allowing us to use their infrastructure.
Miguel Molina-Solana is funded by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 743623.
Data is not available as they belong to a partner company who kindly shared it with us for this research.
Data Science Institute, Imperial College London, London, UK
Julio Amador Diaz Lopez, Miguel Molina-Solana & Mark T. Kennedy
Business School, Imperial College London, London, UK
Julio Amador Diaz Lopez & Mark T. Kennedy
Julio Amador Diaz Lopez
Miguel Molina-Solana
Mark T. Kennedy
All authors have contributed equally to this work. All authors read and approved the final manuscript.
Correspondence to Miguel Molina-Solana.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Amador Diaz Lopez, J., Molina-Solana, M. & Kennedy, M. foo.castr: visualising the future AI workforce. Big Data Anal 3, 9 (2018). https://doi.org/10.1186/s41044-018-0034-z
Received: 10 May 2018
On the Remarkable Formula for Spectral Distance of Block Southeast Submatrix
Alimohammad Nazari
Atiyeh Nezami
Arak University, Iran
10.22072/wala.2018.87428.1174
This paper presents a remarkable formula for the spectral distance from a given block normal matrix $G_{D_0} = \begin{pmatrix} A & B \\ C & D_0 \end{pmatrix}$ to the set of block normal matrices $G_{D}$ (identical to $G_{D_0}$ except that block $D_0$ is replaced by block $D$), in which $A \in \mathbb{C}^{n\times n}$ is invertible, $B \in \mathbb{C}^{n\times m}$, $C \in \mathbb{C}^{m\times n}$ and $D \in \mathbb{C}^{m\times m}$ with $\mathrm{Rank}\{G_D\} < n+m-1$,
and the eigenvalues of the matrix $\mathcal{M} = D - C A^{-1} B$ are given as $z_1, z_2, \cdots, z_{m}$, where $|z_1|\ge |z_2|\ge \cdots \ge |z_{m-1}|\ge |z_m|$.
Finally, an explicit formula is proven for the spectral distance between $G_D$ and $G_{D_0}$, which is expressed in terms of the two last eigenvalues of $\mathcal{M}$.
Normal matrix
Distance norm
Autumn - Winter
Nazari, A., & Nezami, A. (2018). On the Remarkable Formula for Spectral Distance of Block Southeast Submatrix. Wavelet and Linear Algebra, 5(2), 15-20. doi: 10.22072/wala.2018.87428.1174
pulver: an R package for parallel ultra-rapid p-value computation for linear regression interaction terms
Sophie Molnos (ORCID: orcid.org/0000-0001-9256-3588)1,2,3,
Clemens Baumbach1,2,3,
Simone Wahl1,2,3,
Martina Müller-Nurasyid4,5,6,7,
Konstantin Strauch5,6,
Rui Wang-Sattler1,2,
Melanie Waldenberger1,2,
Thomas Meitinger8,9,
Jerzy Adamski3,10,11,
Gabi Kastenmüller12,13,
Karsten Suhre12,14,
Annette Peters1,2,3,
Harald Grallert1,2,3,
Fabian J. Theis15,16 &
Christian Gieger1,2,3
Genome-wide association studies allow us to understand the genetics of complex diseases. Human metabolism provides information about the disease-causing mechanisms, so it is usual to investigate the associations between genetic variants and metabolite levels. However, only considering genetic variants and their effects on one trait ignores the possible interplay between different "omics" layers. Existing tools only consider single-nucleotide polymorphism (SNP)–SNP interactions, and no practical tool is available for large-scale investigations of the interactions between pairs of arbitrary quantitative variables.
We developed an R package called pulver to compute p-values for the interaction term in a very large number of linear regression models. Comparisons based on simulated data showed that pulver is much faster than the existing tools. This is achieved by using the correlation coefficient to test the null hypothesis, which avoids the costly computation of matrix inversions. Additional speedups come from rearranging the order in which the different "omics" layers are iterated over and from implementing the algorithm in the fast programming language C++. Furthermore, we applied our algorithm to data from the German KORA study to investigate a real-world problem involving the interplay among DNA methylation, genetic variants, and metabolite levels.
The pulver package is a convenient and rapid tool for screening huge numbers of linear regression models for significant interaction terms in arbitrary pairs of quantitative variables. pulver is written in R and C++, and can be downloaded freely from CRAN at https://cran.r-project.org/web/packages/pulver/.
Hundreds of genetic variants associated with complex human diseases and traits have been identified by genome-wide association studies (GWAS) [1,2,3,4]. However, most GWAS only considered univariate models with one outcome and one independent variable, thereby ignoring possible interactions between different quantitative "omics" data [5], such as DNA methylation, genetic variations, mRNA levels, or protein levels. For example, studies have observed associations between specific epigenetic-genetic interactions and a phenotype [6,7,8]. The lack of publications analyzing genome-wide interactions may result from the high computational cost of running linear regressions for all possible pairs of "omics" data. Understanding the interplay among different "omics" layers can provide important insights into biological pathways that underlie health and disease [9].
Previous interaction analyses in genome-wide studies mainly considered interactions between single-nucleotide polymorphisms (SNPs), which led to the development of several rapid analysis tools. For example, BiForce [10] is a stand-alone Java program that integrates bitwise computing with multithreaded parallelization; SPHINX [11] is a framework for genome-wide association mapping that finds SNPs and SNP–SNP interactions using a piecewise linear model; and epiGPU [12] calculates contingency table-based approximate tests using consumer-level graphics cards.
Several rapid programs are also available for calculating linear regressions without interaction terms. For example, OmicABEL [13] efficiently exploits the structure of the data but does not allow the inclusion of an interaction term. The R package MatrixEQTL [14] computes linear regressions very quickly based on matrix operations. This package also allows for testing for interaction between a set of independent variables and one fixed covariate. However, interactions between arbitrary pairs of quantitative covariates would require iteration over covariates, which is quite inefficient.
Thus, our R package called pulver is the first tool to allow the user to compute p-values for interaction terms in huge numbers of linear regressions in a practical amount of time. The acronym pulver denotes parallel ultra-rapid p-value computation for linear regression interaction terms.
We benchmarked the performance of our implemented method using simulated data. Furthermore, we applied our algorithm to "omics" data from the Cooperative Health Research in the Region of Augsburg (KORA) F4 study (DNA methylation, genetic variants, and metabolite levels).
KORA comprises a series of independent population-based epidemiological surveys and follow-up studies of participants living in the region of Augsburg, Southern Germany [15].
Access to the KORA data can be requested via the KORA.Passt System (https://helmholtz-muenchen.managed-otrs.com/otrs/customer.pl).
pulver computes p-values for the interaction term in a series of multiple linear regression models defined by covariate matrices X and Z and an outcome matrix Y, all containing continuous data, e.g., metabolite levels, mRNA, or proteomics data. All matrices must have the same number of rows, i.e., observations. For efficiency reasons, pulver does not adjust for additional covariates; instead, the residuals of the phenotype adjusted for those other parameters should be used as the outcome.
Linear regression analysis
For every combination of columns x, y, and z from matrices X , Y, and Z, pulver fits the following multiple linear regression model:
$$ y={\beta}_0+{\beta}_1\ x+{\beta}_2\ z+{\beta}_3\ xz+\varepsilon, \varepsilon \sim i.i.d.N\left(0,{\sigma}^2\right), $$
where y is the outcome variable, x and z are covariates, and xz is the interaction (product) of covariates x and z. All variables are quantitative. We need to test the null hypothesis β3 = 0 against the alternative hypothesis β3 ≠ 0. In particular, we are not interested in estimating the coefficients β1 and β2, which allows us to take a computational shortcut. By centering and orthogonalizing the variables, we can reduce the multiple linear regression problem to a simple linear regression without intercept. Thus, we can compute the Student's t-test statistic for the coefficient β3 as a function of the Pearson's correlation coefficient between y and the orthogonalized xz: \( t=r\sqrt{DF/\left(1-{r}^2\right)} \), where DF is the number of degrees of freedom. See Additional file 1 for a more detailed derivation.
By computing the t-statistic based on the correlation coefficient, which has a very simple expression in the simplified model, we avoid fitting the entire model including estimating the coefficients β 1 and β 2. This is much more efficient because we are actually only interested in the interaction term.
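To make this shortcut concrete, the following minimal R sketch (ours, not taken from the pulver source; the orthogonalization is implemented here by residualizing both y and xz on x and z, which is one way to realize the procedure described in Additional file 1) checks numerically that the correlation-based t-statistic coincides with the interaction-term t-statistic reported by R's lm:

```r
## Minimal check of the correlation shortcut against lm (illustrative only)
set.seed(1)
n  <- 200
x  <- rnorm(n); z <- rnorm(n)
y  <- 0.5 * x - 0.3 * z + 0.2 * x * z + rnorm(n)
xz <- x * z

## Reference: interaction t-statistic from the full model
t_lm <- summary(lm(y ~ x + z + xz))$coefficients["xz", "t value"]

## Shortcut: residualize y and xz on (1, x, z), then use the correlation
ry  <- resid(lm(y  ~ x + z))
rxz <- resid(lm(xz ~ x + z))
r   <- cor(ry, rxz)
DF  <- n - 4                      # observations minus the 4 estimated coefficients
t_cor <- r * sqrt(DF / (1 - r^2))

all.equal(t_lm, t_cor)            # TRUE up to numerical precision
```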
Avoiding redundant computations
Despite the computational shortcut, even more time can be saved by employing a sophisticated arrangement of the computations. The naïve approach would iterate through three nested for-loops, with one for each matrix, where all computations occur in the innermost loop. However, Fig. 1 shows that some computations can be moved out of the innermost loop to avoid redundant computations.
Pseudo-code of the pulverize function
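Figure 1 itself is not reproduced here. As a purely illustrative stand-in (this is not pulver's C++ core, which precomputes considerably more than shown), the R sketch below conveys the idea: the orthogonalization of the product x*z, which depends only on the pair (x, z), is hoisted out of the innermost loop over the outcome columns:

```r
## Naive but exact reference version of the triple loop (illustration only)
pulverize_sketch <- function(X, Y, Z, threshold = 1) {
  n <- nrow(Y)
  hits <- list()
  for (j in seq_len(ncol(Z))) {                            # outer loop
    for (i in seq_len(ncol(X))) {                          # middle loop
      w <- resid(lm(X[, i] * Z[, j] ~ X[, i] + Z[, j]))    # done once per (i, j)
      for (k in seq_len(ncol(Y))) {                        # innermost loop
        ry <- resid(lm(Y[, k] ~ X[, i] + Z[, j]))          # pulver avoids this step as well
        r  <- cor(ry, w)
        t  <- r * sqrt((n - 4) / (1 - r^2))
        p  <- 2 * pt(abs(t), df = n - 4, lower.tail = FALSE)
        if (p <= threshold)
          hits[[length(hits) + 1]] <- data.frame(x = i, z = j, y = k, pvalue = p)
      }
    }
  }
  do.call(rbind, hits)
}

## Tiny demo
set.seed(1); n <- 50
X <- matrix(rnorm(n * 3), n); Y <- matrix(rnorm(n * 4), n); Z <- matrix(rnorm(n * 2), n)
head(pulverize_sketch(X, Y, Z))
```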
Programming language and general information about the program
We implemented the algorithm in an R package [16] called pulver. Due to speed considerations, the core of the algorithm was implemented in C++. We used R version 3.3.1 and compiled the C++ code with gcc compiler version 4.4.7. To integrate C++ into R, we used the R package Rcpp [17] (version 0.12.7).
To determine whether C/Fortran could improve the performance compared to that of C++, we also implemented the algorithm using a combination of C and Fortran via R's C interface.
We used OpenMP version 3.0 [18] to parallelize the middle loop. To minimize the amount of time required to coordinate parallel tasks, we inverted the order of matrices X and Z so that the middle loop could run over more variables than the outer loop, thereby maximizing the amount of work per thread.
To improve efficiency, the program does not allow covariates other than x and z. If additional covariates are required, the outcome y must be replaced by the residuals from the regression of y on the additional covariates. Missing values in the input matrices are replaced by the respective column mean.
Our pulver package can be used as a screening tool for scenarios where the number of models (number of variables in matrix X × number of variables in matrix Y × number of variables in matrix Z) is too large for conventional tools. By specifying a p-value threshold, the results can be limited to models with interaction term p-values below the threshold, thereby reducing the size of the output greatly. After the initial screening process, additional model characteristics for the significant models, e.g., effect estimates and standard errors, can be obtained with traditional methods such as R's lm function.
The user can access pulver's functionality via two functions: pulverize and pulverize_all. The pulverize function expects three numeric matrices and returns a table with p-values for models whose interaction term p-values lie below the (optionally specified) p-value threshold. The wrapper function pulverize_all expects the names of files containing the X, Y, and Z matrices, calls pulverize to perform the actual computation, and returns a table in the same format as pulverize. The pulverize_all function is particularly useful if the matrices are too large to be loaded at the same time because of computer memory restrictions. Thus, pulverize_all takes as input lists of file names containing the submatrices X, Y, and Z, iterates through these lists, and loads the submatrices in turn before calling pulverize.
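A minimal usage sketch is shown below; the three-matrix call follows the description above, but the name of the threshold argument is an assumption on our part (check ?pulverize for the exact signature of the installed version):

```r
# install.packages("pulver")
library(pulver)

set.seed(42)
n <- 100
X <- matrix(rnorm(n * 50),  n, 50,  dimnames = list(NULL, paste0("snp", 1:50)))
Y <- matrix(rnorm(n * 20),  n, 20,  dimnames = list(NULL, paste0("met", 1:20)))
Z <- matrix(rnorm(n * 200), n, 200, dimnames = list(NULL, paste0("cpg", 1:200)))

## Table of (x, y, z) combinations whose interaction-term p-value lies below the
## threshold; "pvalue_threshold" is the assumed name of that argument.
res <- pulverize(X, Y, Z, pvalue_threshold = 1e-4)
head(res)
```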
Comparisons with other R tools for running linear regressions
As illustrated in Fig. 2, the inputs for the interaction analysis can be vectors or matrices. Compared to other R tools such as lm and MatrixEQTL, pulver is currently the only available option for users who want all the inputs to be matrices. It is possible to adapt other tools to all-matrix inputs, but the resulting code is not optimized for this use and will be too slow for practical purposes.
Comparison of different input types handled by the R tools lm, MatrixEQTL, and pulver for computation of the linear regression with interaction term. The braces depict the dimensions of the matrices. R's built-in function lm can only compute the linear regression with interaction term using one variable with n observations per call. The R package MatrixEQTL is able to compute simultaneously the linear regression for each of the p1 variables of the outcome matrix Y and the interaction term of the matrix X (with p2 variables) and the vector Z. In contrast, pulver in addition iterates through the p3 variables of the matrix Z and finally computes the linear regression for each column of matrices Y, X and Z, where p1, p2, and p3 are natural numbers.
To benchmark the performance of pulver against other tools, we simulated X, Y, and Z matrices with different numbers of observations and variables.
We also applied pulver to real data from the KORA study.
Performance comparison using simulated data
No other tool is specialized for the type of interaction analysis described above, so we compared the speed of our R package pulver with that of R's built-in lm function and the R package MatrixEQTL [14] (version 2.1.1) (also see Fig. 2).
To ensure a fair comparison, we did not use the parallelization feature of pulverize because it is not available in R's lm function or MatrixEQTL. However, parallelization is possible and it leads to significant speedups, although sublinear. For benchmarking purposes, each scenario was run 200 times using the R package microbenchmark (version 1.4–2.1, https://CRAN.R-project.org/package=microbenchmark) and the results were filtered with a p-value threshold of 0.05.
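A small-scale version of such a benchmark can be set up as sketched below; the matrix sizes, the naive lm baseline, and the number of repetitions are our own choices and do not reproduce the exact scenarios of the paper:

```r
library(pulver)
library(microbenchmark)

set.seed(1)
n <- 100
X <- matrix(rnorm(n * 10), n)
Y <- matrix(rnorm(n * 20), n)
Z <- matrix(rnorm(n * 5),  n)

## Naive baseline: one lm() call per (x, y, z) combination
naive_lm <- function(X, Y, Z) {
  for (i in seq_len(ncol(X))) for (j in seq_len(ncol(Z))) for (k in seq_len(ncol(Y)))
    summary(lm(Y[, k] ~ X[, i] * Z[, j]))$coefficients[4, 4]   # interaction p-value
  invisible(NULL)
}

microbenchmark(
  pulver = pulverize(X, Y, Z),
  lm     = naive_lm(X, Y, Z),
  times  = 10                    # the paper used 200 repetitions per scenario
)
```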
Figure 3 shows that pulver performed better than the alternatives in all the benchmarks. Note that the benchmark results obtained for the lm function were so slow that they could not be included in the chart.
Mean run times and standard deviations for interaction analysis using R's lm function, MatrixEQTL, and pulver. The execution times are in milliseconds. We fitted a line through the time points for each package. R's lm function was very inefficient for this type of interaction analysis, and only the first two points are shown for every benchmark. Four panels (a-d) are shown. In panels a, b and d, the number of columns of one input matrix is varied from 10 to 10,000 while the other two matrices are fixed at 10 and 20 columns and the number of observations is set to 100 (in panel d the varied matrix is Z). In panel c, the number of observations is varied from 10 to 10,000 while the number of columns of each matrix is fixed at 10.
In particular, for the benchmark where the number of variables in matrix Z was varied (see Fig. 3d), pulver outperformed the other methods by several orders of magnitude, and the results obtained by MatrixEQTL could not be included in the chart. The poor performance of MatrixEQTL is because it can only handle one Z variable, which forced us to repeatedly call MatrixEQTL for every variable in the Z matrix. This type of iteration is known to be slow in R. The good performance of pulver with benchmark d is particularly notable because this benchmark reflects the intended use case for pulver where all input matrices contain many variables.
Applying pulver to the analysis of real-world data
Metabolites are small molecules in blood whose concentrations can reflect the health status of humans [19]. Therefore, it is useful to investigate the potential effects of genetic and epigenetic factors on the concentrations of metabolites.
DNA methylation denotes the attachment of a methyl group to a DNA base. Methylation occurs mostly on the cytosine nucleotides preceding a guanine nucleotide, which are also called cytosine-phosphate-guanine (CpG) sites [20]. DNA methylation was measured using the Illumina InfiniumHumanMethylation450 BeadChip platform, which quantifies the relative methylation of CpG sites [21].
DNA methylation was measured in whole blood, so it was based on a mixture of different cell types. We employed the method described by Houseman et al. [22] and adjusted for the different proportions of cell types. Thus, CpG sites were represented by their residuals after regressing on age, sex, body mass index (BMI), Houseman variables, and the first 20 principal components from a principal component analysis of the control probes of the 450K Illumina arrays. The control probes were used to adjust for technical confounding: the principal components were derived from the positive control probes, which serve as quality controls for the different data preparation and measurement steps.
Furthermore, to avoid false positives, all CpG sites listed by Chen et al. [23] as cross-reactive probes were removed. Cross-reactive probes bind to repetitive sequences or co-hybridize with alternate sequences that are highly homologous to the intended targets, which could lead to false signals.
In the KORA F4 study, genotyping was performed using the Affymetrix Axiom chip [24]. Genotyped SNPs were imputed with IMPUTE v2.3.0 using the 1000 Genomes reference panel.
Metabolite concentrations were measured using two different platforms: Biocrates (151 metabolites) and Metabolon (406 metabolites). Biocrates uses a kit-based, targeted, quantitative electrospray ionization (liquid chromatography) tandem mass spectrometry (ESI-(LC-)MS/MS) method. A detailed description of the data was provided previously by Illig et al. [25]. Metabolon uses non-targeted, semi-quantitative liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) and GC-MS methods. The data were previously described in Suhre et al. [26].
Metabolites were represented by their Box–Cox transformed residuals after regressing on age, sex, and BMI. We used the R package car [27] to compute the Box–Cox transforms.
Initially, there were 345,372 CpG sites, 9,143,401 SNPs (coded as values between 0 and 2 according to an additive genetic model), and 557 metabolites in the dataset. Analyzing the complete data would have taken a very long time even with pulver.
Thus, to estimate the time required to analyze the whole dataset, we ran scenarios using all CpG sites, all metabolites, and different numbers of SNPs (100, 1000, 2000, 4000, and 5000), and extrapolated the runtime that would be required to analyze all SNPs. Due to time limitations, we ran each of the scenarios defined above only once. The estimated runtime required to analyze the complete dataset by parallelizing the work across 40 processors was 1.5 years.
Thus, we decided to only select SNPs that had previously known significant associations with at least one metabolite [1, 25]. We determined whether these signals became even stronger after adding an interaction effect between DNA methylation and SNPs.
To avoid an excessive number of false positives, the SNPs were also required to have a minor allele frequency greater than 0.05. We applied these filters separately to the Biocrates and Metabolon data. After filtering, we had 345,372 CpG sites, 117 SNPs, and 16 metabolites for Biocrates, with 345,372 CpG sites, 6406 SNPs, and 376 metabolites for Metabolon.
We were only interested in associations that remained significant after adjusting for multiple testing, so we used a p-value threshold of \( \frac{0.05}{345372 \times 117 \times 16 + 345372 \times 6406 \times 376} = 6.01 \times 10^{-14} \) according to Bonferroni correction.
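The quoted threshold can be reproduced with a one-line R check:

```r
n_tests <- 345372 * 117 * 16 + 345372 * 6406 * 376
0.05 / n_tests
#> approximately 6.01e-14
```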
We found 27 significant associations for metabolites from the Biocrates platform (p-values ranging from 1.28 × 10−29 to 5.17 × 10−14) and 286 significant associations for metabolites from the Metabolon platform (p-values ranging from 1.15 × 10−42 to 3.73 × 10−14). All of the significant associations involved the metabolite butyrylcarnitine as well as SNPs and CpG sites on chromosome 12 in close proximity to the ACADS gene (see Fig. 4a and b). Figure 4c shows one of the significant results (SNP rs10840791, CpG site cg21892295, and metabolite butyrylcarnitine) to illustrate how the inclusion of an interaction term in the model increased the adjusted coefficient of determination, R2 (calculated using the summary.lm function in R).
Regional plot with significant associations among SNPs (circles), CpGs (squares), and butyrylcarnitine for the Biocrates platform (a) and Metabolon platform (b). Interactions between SNPs and CpGs are visualized by lines connecting SNPs and CpGs. c Comparison of the adjusted coefficient of determination in the models with and without the interaction term. d Scatterplot of CpG site cg21892295 and metabolite butyrylcarnitine. Genotypes are color-coded
The ACADS gene encodes the enzyme Acyl-CoA dehydrogenase, which uses butyrylcarnitine as a substrate [25], and previous studies have shown that SNPs and CpGs in this gene region are independently associated with butyrylcarnitine [1, 4, 25].
In the case where interaction terms need to be calculated for arbitrary pairs of variables, pulver performs far better than its competitors. The time savings are achieved by avoiding redundant calculations. Thus, computationally expensive p-values are only computed at the very end and only for results below a significance threshold determined using the (computationally cheap) Pearson's correlation coefficient. To maximize the speedup, we recommend always specifying a p-value threshold and using pulver as a filter to find models with significant or near-significant interaction terms. If a p-value threshold is not specified, the time savings will be suboptimal and the number of results will be very high.
Thus, we recommend using a p-value threshold that adjusts for multiple testing, such as the Bonferroni correction, i.e. \( \frac{0.05}{\mathrm{number}\ \mathrm{of}\ \mathrm{tests}} \), where the number of tests = number of columns in X × number of columns in Y × number of columns in Z.
The core algorithm of pulver was implemented in two languages, namely C++ and C/Fortran, to examine performance differences due to the programming language. However, comparing the two implementations of pulver revealed no striking differences. Thus, we continued to use the C++ version, as it offers useful functions such as those of the C++ Standard Library algorithms [28].
The package imputes missing values using the respective column means. If this behaviour is not desired, we recommend using other, more sophisticated methods, such as the mice package in R [29], to handle missing values before applying pulver.
pulver was developed as a screening tool to efficiently identify associations between the outcome, such as metabolite levels, and the interaction between two quantitative variables, such as a CpG-SNP interaction. Once significant associations are identified, other information regarding the fitted models, such as slope coefficients, standard errors, or residuals, can be computed in a second step using traditional tools.
Our pulver package is currently the fastest implementation available for calculating p-values for the interaction term of two quantitative variables given a huge number of linear regression models. Pulver is part of a processing pipeline focused on interaction terms in linear regression models and its main value is allowing users to conduct comprehensive screenings that are beyond the capabilities of existing tools.
Project name: pulver.
Project home page: https://cran.r-project.org/web/packages/pulver/index.html
Operating system(s): Platform independent.
Programming language: R, C++.
Other requirements: R 3.3.0 or higher.
License: GNU GPL.
Any restrictions to use by non-academics: None.
GWAS:
Genome-wide association studies
SNP:
Single-nucleotide polymorphism
Shin SY, Fauman EB, Petersen AK, Krumsiek J, Santos R, Huang J, Arnold M, Erte I, Forgetta V, Yang TP, et al. An atlas of genetic influences on human blood metabolites. Nat Genet. 2014;46(6):543–50.
Kettunen J, Tukiainen T, Sarin AP, Ortega-Alonso A, Tikkanen E, Lyytikainen LP, Kangas AJ, Soininen P, Wurtz P, Silander K, et al. Genome-wide association study identifies multiple loci influencing human serum metabolite levels. Nat Genet. 2012;44(3):269–76.
Draisma HH, Pool R, Kobl M, Jansen R, Petersen A-K, Vaarhorst AA, Yet I, Haller T, Demirkan A, Esko T. Genome-wide association study identifies novel genetic variants contributing to variation in blood metabolite levels. Nat Commun. 2015;6
Petersen AK, Zeilinger S, Kastenmuller G, Romisch-Margl W, Brugger M, Peters A, Meisinger C, Strauch K, Hengstenberg C, Pagel P, et al. Epigenetics meets metabolomics: an epigenome-wide association study with blood serum metabolic traits. Hum Mol Genet. 2014;23(2):534–45.
Maturana E, Pineda S, Brand A, Steen K, Malats N. Toward the integration of Omics data in epidemiological studies: still a "long and winding road". Genet Epidemiol. 2016;40(7):558–69.
Heyn H, Sayols S, Moutinho C, Vidal E, Sanchez-Mut JV, Stefansson OA, Nadal E, Moran S, Eyfjord JE, Gonzalez-Suarez E. Linkage of DNA methylation quantitative trait loci to human cancer risk. Cell Rep. 2014;7(2):331–8.
Ma Y, Follis JL, Smith CE, Tanaka T, Manichaikul AW, Chu AY, Samieri C, Zhou X, Guan W, Wang L. Interaction of methylation-related genetic variants with circulating fatty acids on plasma lipids: a meta-analysis of 7 studies and methylation analysis of 3 studies in the Cohorts for Heart and Aging Research in Genomic Epidemiology consortium. Am J Clin Nutr. 2016;103(2):567–78.
Bell CG, Finer S, Lindgren CM, Wilson GA, Rakyan VK, Teschendorff AE, Akan P, Stupka E, Down TA, Prokopenko I, et al. Integrated genetic and epigenetic analysis identifies haplotype-specific methylation in the FTO type 2 diabetes and obesity susceptibility locus. PLoS One. 2010;5(11):e14040.
Krumsiek J, Bartel J, Theis FJ. Computational approaches for systems metabolomics. Curr Opin Biotechnol. 2016;39:198–206.
Gyenesei A, Moody J, Laiho A, Semple CA, Haley CS, Wei WH. BiForce Toolbox: powerful high-throughput computational analysis of gene-gene interactions in genome-wide association studies. Nucleic Acids Res. 2012;40(Web Server issue):W628–32.
Lee S, Lozano A, Kambadur P, Xing EP. An Efficient Nonlinear Regression Approach for Genome-wide Detection of Marginal and Interacting Genetic Variations. J Comput Biol. 2016;23(5):372–89.
Hemani G, Theocharidis A, Wei W, Haley C. EpiGPU: exhaustive pairwise epistasis scans parallelized on consumer level graphics cards. Bioinformatics. 2011;27(11):1462–5.
Fabregat-Traver D, Sharapov S, Hayward C, Rudan I, Campbell H, Aulchenko Y, Bientinesi P. High-Performance Mixed Models Based Genome-Wide Association Analysis with omicABEL software. F1000Research. 2014;3:200.
Shabalin AA. Matrix eQTL: ultra fast eQTL analysis via large matrix operations. Bioinformatics. 2012;28(10):1353–8.
Wichmann H-E, Gieger C, Illig T, group MKs. KORA-gen-resource for population genetics, controls and a broad spectrum of disease phenotypes. Das Gesundheitswesen. 2005;67(S 01):26–30.
R Core Team. R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2015. https://www.r-project.org/.
Eddelbuettel D, François R, Allaire J, Chambers J, Bates D, Ushey K. Rcpp: Seamless R and C++ integration. J Stat Softw. 2011;40(8):1–18.
OpenMP Architecture Review Board. OpenMP Application Program Interface, Version 3.0; 2008.
Kastenmüller G, Raffler J, Gieger C, Suhre K. Genetics of human metabolism: an update. Hum Mol Genet. 2015;24(R1):R93–R101.
Jones PA. Functions of DNA methylation: islands, start sites, gene bodies and beyond. Nat Rev Genet. 2012;13(7):484–92.
Bibikova M, Barnes B, Tsan C, Ho V, Klotzle B, Le JM, Delano D, Zhang L, Schroth GP, Gunderson KL, et al. High density DNA methylation array with single CpG site resolution. Genomics. 2011;98(4):288–95.
Houseman EA, Accomando WP, Koestler DC, Christensen BC, Marsit CJ, Nelson HH, Wiencke JK, Kelsey KT. DNA methylation arrays as surrogate measures of cell mixture distribution. BMC Bioinformatics. 2012;13:86.
Chen YA, Lemire M, Choufani S, Butcher DT, Grafodatskaya D, Zanke BW, Gallinger S, Hudson TJ, Weksberg R. Discovery of cross-reactive probes and polymorphic CpGs in the Illumina Infinium HumanMethylation450 microarray. Epigenetics : official journal of the DNA Methylation Society. 2013;8(2):203–9.
Livshits G, Macgregor AJ, Gieger C, Malkin I, Moayyeri A, Grallert H, Emeny RT, Spector T, Kastenmüller G, Williams FM. An omics investigation into chronic widespread musculoskeletal pain reveals epiandrosterone sulfate as a potential biomarker. Pain. 2015;156(10):1845.
Illig T, Gieger C, Zhai G, Romisch-Margl W, Wang-Sattler R, Prehn C, Altmaier E, Kastenmuller G, Kato BS, Mewes HW, et al. A genome-wide perspective of genetic variation in human metabolism. Nat Genet. 2010;42(2):137–41.
Suhre K, Shin SY, Petersen AK, Mohney RP, Meredith D, Wagele B, Altmaier E, Deloukas P, Erdmann J, Grundberg E, et al. Human metabolic individuality in biomedical and pharmaceutical research. Nature. 2011;477(7362):54–60.
Fox J, Weisberg S. An R Companion to Applied Regression. 2nd ed. Sage; 2011.
Stroustrup B. Programming: Principles and Practice Using C++. Pearson Education; 2014.
Buuren S, Groothuis-Oudshoorn K. mice: Multivariate imputation by chained equations in R. J Stat Softw. 2011;45(3).
We thank all of the participants in the KORA F4 study, everyone involved with the generation of the data, and the two anonymous reviewers for comments.
The KORA study was initiated and financed by the Helmholtz Zentrum München – German Research Center for Environmental Health, which is funded by the German Federal Ministry of Education and Research (BMBF) and by the State of Bavaria. Furthermore, KORA research was supported within the Munich Center of Health Sciences (MC-Health), Ludwig-Maximilians-Universität, as part of LMUinnovativ.
pulver can be downloaded from CRAN at https://cran.r-project.org/web/packages/pulver/.
The data used in the simulations were generated by the create_input_files function found in testing.R.
Research Unit of Molecular Epidemiology, Helmholtz Zentrum München, Neuherberg, Germany
Sophie Molnos, Clemens Baumbach, Simone Wahl, Rui Wang-Sattler, Melanie Waldenberger, Annette Peters, Harald Grallert & Christian Gieger
Institute of Epidemiology II, Helmholtz Zentrum München, Neuherberg, Germany
German Center for Diabetes Research (DZD), Neuherberg, Germany
Sophie Molnos, Clemens Baumbach, Simone Wahl, Jerzy Adamski, Annette Peters, Harald Grallert & Christian Gieger
Department of Medicine I, University Hospital Grosshadern, Ludwig-Maximilians-Universität, Munich, Germany
Martina Müller-Nurasyid
Institute of Genetic Epidemiology, Helmholtz Zentrum München, Neuherberg, Germany
Martina Müller-Nurasyid & Konstantin Strauch
Chair of Genetic Epidemiology, IBE, Faculty of Medicine, LMU Munich, Munich, Germany
DZHK (German Centre for Cardiovascular Research), Partner Site Munich Heart Alliance, Munich, Germany
Institute of Human Genetics, Helmholtz Zentrum München, Neuherberg, Germany
Thomas Meitinger
Institute of Human Genetics, Technische Universität München, Munich, Germany
Genome Analysis Center, Helmholtz Zentrum München, Neuherberg, Germany
Jerzy Adamski
Institute of Experimental Genetics, Technical University of Munich, Freising-Weihenstephan, Germany
Institute of Bioinformatics and Systems Biology, Helmholtz Zentrum München, Neuherberg, Germany
Gabi Kastenmüller & Karsten Suhre
Department of Twins Research and Genetic Epidemiology, Kings College, London, UK
Gabi Kastenmüller
Department of Biophysics and Physiology, Weill Cornell Medical College in Qatar, Doha, Qatar
Karsten Suhre
Institute of Computational Biology, Helmholtz Zentrum München, Neuherberg, Germany
Fabian J. Theis
Department of Mathematics, Technische Universitat München, Garching, Germany
Sophie Molnos
Clemens Baumbach
Simone Wahl
Konstantin Strauch
Rui Wang-Sattler
Melanie Waldenberger
Annette Peters
Harald Grallert
Christian Gieger
SM and CG designed the study. SM and CB wrote the pulver software and conducted computational benchmarking. SM, CB, SW, MN, KS, RW, MW, TM, JA, GK, KS, AP, HG, FJT, and CG contributed to the data acquisition or data analysis and interpretation of results. SM wrote the manuscript. SM, CB, SW, MN, KS, RW,MW, TM, JA, GK, KS, AP, HG, FJT, and CG contributed to the review, editing, and final approval of the manuscript.
Correspondence to Sophie Molnos.
The KORA study was approved by the local ethics committee ("Bayerische Landesärztekammer", reference number: 06068).
All KORA participants gave their signed informed consent.
Additional file
Theory underlying pulver. This file describes the derivation showing that the t-value, i.e. the beta estimate divided by its standard error, can be computed from the correlation coefficient. (PDF 426 kb)
Molnos, S., Baumbach, C., Wahl, S. et al. pulver: an R package for parallel ultra-rapid p-value computation for linear regression interaction terms. BMC Bioinformatics 18, 429 (2017). https://doi.org/10.1186/s12859-017-1838-y
Received: 23 March 2017
Linear regression interaction term
SNP–CpG interaction
Results and data
Numerical Study of the Ohmic Heating Process Applied to Different Food Particles
Rafael da Silveira Borahel* | Rejane de Césaro Oliveski | Ligia Damasceno Ferreira Marczak
Mechanical Engineering Graduate Program, University of Vale do Rio dos Sinos, São Leopoldo, RS, Brazil
Chemical Engineering Department, Federal University of Rio Grande do Sul, Porto Alegre, RS, Brazil
[email protected]
In the present work, some aspects related to the ohmic heating (OH) technology are investigated using Ansys Fluent. The aims of this study are: (i) implementing a mathematical and numerical model capable of reproducing the OH process, (ii) evaluating the influence of the electrical voltage on the transient heating, (iii) investigating the importance of the ohmic cell diameter and (iv) of the electrode diameter regarding the heating generated. The computational domain represents a cylindrical ohmic cell used to heat pieces of carrot, meat and potato within an aqueous NaCl solution. Results about the role of the electrical voltage show that the most intense heating was obtained with the highest voltage tested. While about 2 min are needed for the temperature of the carrot particle to reach about 335 K using 50 V, approximately 9 min (about 4.5 times as long) are necessary for the particle to reach the same temperature using 20 V. Temperature differences for the same particle are also observed when the importance of the electrode diameter is analyzed. In this case, the highest difference occurs for the meat particle (11.94 K at 138 s); thus it can be stated that the heating generated was influenced by the electrode size.
food engineering, thermal processing of food, emerging technologies of food processing, volumetric heat generation, ohmic heating, computational fluid dynamics (CFD), Ansys fluent code
In the processing of food products, the unit operations responsible for the thermal treatment of the product are considered among the most important of the process, since they directly affect the quality of the food to be consumed. Among the emerging technologies that can be used for heat treatment, the ohmic heating (OH) technology is noteworthy. OH, also known as Joule heating, is a thermal process in which an electric current is conducted through a food in order to heat it by converting electrical energy into thermal energy. In other words, ohmic heating is a technology of internal energy generation [1].
Due to the innovative character of the OH technology, where the heating of the processed foods is given by the internal energy generated, OH possesses many advantages. According to Ruan et al. [2], this technology, unlike the conventional heating methods, is suitable to promote the thermal treatment of solid foods, especially those which are in the form of particles dispersed in a liquid medium. During the OH of such mixtures, when the electrical conductivity of the particles is almost equal to that of the liquid, similar heating rates can be obtained for both phases of the mixture, producing a homogeneous heating. Furthermore, ohmic heating is also appropriate to promote the heating of highly viscous liquids, since the heating produced by this technology does not depend on the convective heat transfer coefficients, which, in this case, are extremely low [3]. For energy purposes, Sakr and Liu [4] comment that the OH technology can be coupled to thermal energy storage (TES) systems based on latent heat, such as the systems studied by Abdulmunem and Jalil [5], Benlekkam et al. [6] and Buonomo et al. [7].
Although the industrial use of OH technology is not restricted only to the processes of the food industry, it cannot be denied that its main application is related to this industrial sector. For this reason, over the past three decades several studies have investigated the use of this technology to promote the thermal treatment of various food products, especially those where there are solid particles immersed in a liquid solution. De Alwis and Fryer [8] were pioneers in the numerical solution of problems involving this technology. In their study, they used the finite element method to analyze a generic heating process conducted in a rectangular ohmic cell. Despite the few computational resources available at the time, the numerical results showed a good agreement with experimental results previously obtained. Moreover, the results also indicated that the heating rate observed in a generic particle depends on three factors: particle size, particle orientation and the ratio between particle and fluid medium electrical conductivity. Similar observations about the importance of the electrical conductivity in the OH process were made by Sastry and Palaniappan [9] and Salengke and Sastry [10]. In both studies, greater heating rates were observed in the cases where the electrical conductivity of the particles was greater than that of the liquid. Interesting results about the role of the relative volume fraction of the phases influencing the heating rates are also presented by Sastry and Palaniappan [9]. Their results suggest that a more pronounced heating is achieved when the particle concentration is higher. Consequently, the particle volume fraction should be considered a key factor in the OH process.
Although the ohmic heating technology has attracted increasing attention recently, many questions about the parameters that govern the ohmic heating remain unanswered. Therefore, in order to obtain a better understanding of this technology, the aims of this study are: (i) implementing a mathematical and numerical model capable of reproducing the OH process applied to food particles, (ii) evaluating the influence of the electrical voltage on the transient heating process, (iii) investigating the importance of the ohmic cell diameter and (iv) of the electrode diameter regarding the heating generated.
For the sake of clarity, the numerical methodology employed is presented separately in five items: computational domain, mathematical and numerical model, initial and boundary conditions, simulations parameters and model validation.
2.1 Computational domain
In the present study, four different cylindrical ohmic cells were numerically investigated using CFD (Computational Fluid Dynamics) techniques. The diameters of the cells studied are 16.5, 20.5, 24.5 and 28.5 mm. All the ohmic cells have 77 mm in total length (W), while the computational domain has only 76 mm in length (L); this difference is associated with the thickness of the electrodes, which is not considered in the computational domain. The computational domain adopted is axisymmetric and comprises four different foods: carrot, meat, potato and an aqueous sodium chloride solution. In order to represent the food particles, three 10 mm squares, with a distance of 10 mm between them, were used. Further details about the ohmic cell analyzed can be seen in Figure 1, where the dashed lines indicate the computational domain adopted:
Figure 1. Schematic representation of the ohmic cell studied
Similar to other studies carried out using CFD techniques, in this work an extensive literature search was conducted in order to obtain the physical properties of the materials that make up the computational domain. Thus, values of density for all materials heated were taken from Pitzer and Peiper [11], Murakami [12] and Çengel and Ghajar [13]; values of thermal conductivity were obtained from Ozbek and Phillips [14], Sablani and Rahman [15] and mainly Çengel and Ghajar [13]; values of specific heat were obtained from Chen [16], Rao and Rizvi [17] and [13] again, while values of dynamic viscosity for the fluid medium were given by Kestin et al. [18]. Regarding the electric conductivity, the mathematical functions used by Shim et al. [19] to represent the behavior of this property with the temperature were adapted and used here as polynomial equations of 5th order; this procedure was adopted simply to facilitate the numerical implementation of the variation of this property with temperature. Therefore, only the electrical conductivity was modeled considering the variations caused by the temperature. Further details about the physical properties adopted are presented in Table 1.
Table 1. Physical properties of the materials/food modeled. Only part of the original table survived extraction: the column headers were Density (kg.m-3), Specific Heat (kJ.kg-1.K-1), Thermal Conductivity (W.m-1.K-1), Electrical Conductivity (S.m-1) and, for the aqueous solution of NaCl, Dynamic Viscosity (Pa.s); the constant property values could not be recovered. The temperature-dependent electrical conductivity correlations (T in K, σ in S.m-1) are:
Carrot: σ(T) = 3.247389x10^-9 T^5 - 5.560101x10^-6 T^4 + 3.796416x10^-3 T^3 - 1.292028 T^2 + 219.1637 T - 1.48239x10^4
Meat: σ(T) = 0.031 T - 6.7701 a
Potato: σ(T) = 9.4082x10^-9 T^5 - 1.596173x10^-5 T^4 + 1.080201x10^-2 T^3 - 3.644712 T^2 + 613.1243 T - 4.113914x10^4
Aqueous solution of NaCl: σ(T) = 0.1018 T - 24.977 b
a Valid only when 293 K < T < 370 K
b Valid only when 290 K < T < 380 K
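As a quick plausibility check of the correlations retained above (written in R for consistency with the code examples earlier in this document; the food-to-correlation assignment follows the reconstruction of Table 1), evaluating them at 330 K gives values close to those quoted in Section 3 (3.46, 0.55 and 0.45 S.m-1 for meat, potato and carrot, respectively):

```r
sigma_carrot <- function(T) 3.247389e-9*T^5 - 5.560101e-6*T^4 + 3.796416e-3*T^3 -
  1.292028*T^2 + 219.1637*T - 1.48239e4
sigma_meat   <- function(T) 0.031*T - 6.7701
sigma_potato <- function(T) 9.4082e-9*T^5 - 1.596173e-5*T^4 + 1.080201e-2*T^3 -
  3.644712*T^2 + 613.1243*T - 4.113914e4
sigma_nacl   <- function(T) 0.1018*T - 24.977

sapply(list(carrot = sigma_carrot, meat = sigma_meat,
            potato = sigma_potato, NaCl = sigma_nacl), function(f) f(330))
#> roughly: carrot ~0.4, meat 3.46, potato ~0.6, NaCl ~8.6 S/m
#> (small deviations from the text reflect the rounded polynomial coefficients)
```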
2.2 Mathematical and numerical model
The governing equations of the problem investigated are the conservative equations of continuity, energy and momentum, Eqns. (1), (2) and (3), respectively [20]:
$\frac{\partial \rho }{\partial t}+\nabla .(\rho \vec{U})=0$ (1)
$\frac{\partial (\rho h)}{\partial t}+\nabla .(\rho \vec{U}h)=\nabla .(k\nabla T)+S$ (2)
$\frac{\partial (\rho \vec{U})}{\partial t}+\nabla .(\rho \vec{U}\vec{U})=-\nabla p+\nabla (\mu \nabla \vec{U})+\rho g+\vec{F}$ (3)
where ρ is the density (kg.m-3), t is the time (s), ∇ is the nabla operator, $\vec{U}$ is the velocity vector (m.s-1), h is the specific enthalpy (J.kg-1), k is the thermal conductivity (W.m-1.K-1), T is the temperature (K), S is the energy source term (W.m-3), p is the pressure (Pa), µ is the dynamic viscosity (Pa.s), g is gravity acceleration (m.s-2) and $\vec{F}$ is the momentum source term (N.m-3).
The hypotheses suggested by Zhang and Fryer [21], where the velocity field is not considered and the heat transfer by conduction is the main thermal mechanism, are adopted here. With these hypotheses, both regions (particles and liquid) of the computational domain can be treated as solid. Therefore, the convective term in Eq. (2) can be omitted, so that the Eq. (2) takes the following form [21]:
$\frac{\partial (\rho h)}{\partial t}=\nabla .(k\nabla T)+S$ (4)
According to Shim et al. [19], the energy source term (S) is responsible for the conversion of the electrical energy into heat. In a conductive material, this source is given by a simple equation involving the electrical conductivity of the medium and the voltage applied. Thus, using the User Defined Functions (UDFs), the energy source term (S) implemented in the ANSYS FLUENT has the following form:
$S={{\sigma }_{(T)}}{{\left| \nabla V \right|}^{2}}$ (5)
where σ is the electrical conductivity (S.m-1) as a function of temperature and V is the voltage (V).
Regarding the electrical field distribution, the Laplace's equation presented by De Alwis and Fryer [8] provides this variable at any location of the ohmic cell. In other words, the electrical field distribution at any point of the computational domain is obtained solving the Eq. (6).
$\nabla (\sigma .\nabla V)=0$ (6)
2.3 Initial and boundary conditions
In a numerical simulation performed by CFD, initial and boundary conditions are necessary to solve the governing equations. The correct choice of these conditions is a critical step in numerical studies, since the physical consistency of the results obtained depends on the adopted conditions. In the present work, only one initial condition was adopted (temperature when t = 0), while two types of boundary conditions were used to solve the governing equations: electrical boundary conditions and thermal boundary conditions.
2.3.1 Initial condition
$T(\forall x,\forall y,t=0)={{T}_{0}}$ (7)
where To is equal to 293 K.
2.3.2 Electrical boundary conditions
$V(x=0, \forall y, \forall t)=0 \mathrm{V}\\ [valid for the left electrode]$ (8)
$V(x=L,\forall y,\forall t)=20,\text{ 30, 40 or 50 V}\\ [valid for the right electrode]$ (9)
$\nabla V(\forall x, \forall y, \forall t) \cdot\left.\vec{n}\right|_{\text { wall }}=0\\ [valid for the particles surfaces]$ (10)
where $\vec{n}$ is the normal vector.
2.3.3 Thermal boundary conditions
$k \nabla T(\forall x, \forall y, \forall t) \cdot\left.\vec{n}\right|_{\text { wall }}=0\\ [valid for the electrodes and walls of the ohmic cell]$ (11)
2.4 Simulation parameters
In the present study, the Finite Volume Method is employed to solve the governing equations. All numerical simulations necessary to achieve the objectives proposed were carried out using the commercial software ANSYS FLUENT 16.1. In this software, a rectangular grid was used to represent the computational domain modeled. The most appropriate grid was chosen after a grid independence test, where three grids with different numbers of elements were tested for each ohmic cell analyzed, totaling 12 grids tested. All grids tested are rectangular, two-dimensional and refined near the food particles. The results of the test performed did not show significant differences between the grids tested. Therefore, in order to reduce the computational workload, the less refined grids were used in all simulations performed. Thus, the results presented here were obtained using four different computational grids: a computational grid with 7,000 elements used to represent the smallest ohmic cell (diameter of 16.5 mm); a computational grid with 8,100 elements used to represent the ohmic cell with diameter of 20.5 mm; a computational grid with 10,200 elements used to represent the ohmic cell with diameter of 24.5 mm and a computational grid with 11,580 elements used to represent the biggest ohmic cell (diameter of 28.5 mm). Figure 2 shows the computational grid used in the ohmic cell with 24.5 mm in diameter.
Figure 2. Grid used to represent numerically the ohmic cell with diameter of 24.5mm
Regarding the time step, a value of 0.01 s with a maximum of 1000 iterations per time step was used here. This value was chosen after a careful examination of the preliminary results obtained with other values (0.001, 0.002 and 0.005 s). Although the adopted value is relatively higher than the others, the results obtained using this value did not differ strongly from the other results. Thus, the value adopted (0.01s) proved to be most appropriate for this study, since through its use a considerable reduction in the total time of simulation is reached without compromising the results quality. Another important factor that interferes directly in the quality of the results obtained, as well as in the total processing time, is the convergence criteria adopted. In the present work, different convergence criteria were prescribed for the governing equations; a convergence criterion of 10-8 was used for the energy equation and a criterion of 10-5 was adopted for the continuity and velocity components equations (which are solved by the software even with the hypothesis of null velocity field).
2.5 Model validation
In order to verify whether the mathematical and numerical model implemented is appropriate for the study carried out, the study performed by Shim et al. [19] was reproduced for validation purposes. The problem studied by Shim et al. [19] is very similar to that investigated in the present study using the ohmic cell with diameter of 24.5 mm. The numerical validation was performed by means of two different approaches: one qualitative and the other quantitative. While the qualitative validation comprises a simple visual comparison of the temperature fields obtained in both numerical studies (Shim et al. [19] and the present study), the quantitative validation involves the comparison between the temperature obtained numerically in the geometrical center of each food and the values found experimentally by Shim et al. [19].
Figure 3 presents the temperature fields obtained by Shim et al. [19] and the temperature fields obtained in the present work. As can be seen, the results are very similar, with a pronounced heating region around the particles, especially above and below them. This behavior is also observed by De Alwis and Fryer [8], who simulated two different problems: a generic food particle immersed in (i) a liquid with a higher electrical conductivity and (ii) a liquid with a lower electrical conductivity. In the problems where the electrical conductivity of the particles was lower than that of the liquid, De Alwis and Fryer [8] reported the appearance of hot spots adjacent to the top and bottom of the particles, which is related to the high electric current density in these regions. As the liquid medium is more conductive than the particles, a smaller resistance to electron passage may be associated with the liquid regions, so that a higher current density is produced and a more pronounced heating is generated in the regions located around the particles. Therefore, the temperature distribution obtained in the present work is consistent with the theoretical background and other results available in the literature.
Figure 3. Temperature fields of the ohmic cell studied by Shim et al. [19] and the present work
Results of the quantitative validation are illustrated in Figure 4, where graph A shows the temperature in the geometric center of the carrot particle, graph B the temperature in the meat particle and graph C the temperature in the potato particle. As can be seen, there is great similarity between the two numerical results. Moreover, the results obtained in the present work show good agreement with the experimental results presented by Shim et al. [19], especially in the first 200 seconds. After this instant, in the meat particle, a considerable divergence between the numerical results and the experimental results is detected. The divergence observed only for this particle may be associated with a main reason: the difficulty faced by Shim et al. [19] in maintaining the thermocouple correctly positioned in the meat particle. According to the authors, the meat tissues lost firmness during the heating, which made it impossible to correctly position the thermocouple in the meat particle; consequently, errors of unknown magnitude are associated with the experimental results of the meat particle. Although the main reason for the observed divergence was identified, it should be noted that other empirical factors may also affect the experimental and numerical analyses of the ohmic heating process applied to meat particles. According to Zell et al. [22], the orientation of the muscle fibers, for example, plays a fundamental role in the ohmic heating process, since this factor affects the electrical conductivity of the meat and its heating. Thus, considering the results obtained here for the particles studied (carrot, meat and potato), as well as all the difficulties reported in the literature regarding the ohmic heating of meat particles, it can be stated that the mathematical and numerical model implemented is suitable for the numerical study of the ohmic heating process applied to food particles.
Figure 4. Temperature vs. time for experimental and numerical results of Shim et al. [19] and present work
3. Results and Discussions
After validating the implemented model, the numerical simulations used to analyze the effect of the voltage on the heating process were conducted using the cell diameter of 24.5 mm. Temperature measurements, at different instants, were performed in the geometric center of the food particles, which were heated up to a temperature limit (370 K). Since there is no information about the physical properties of the food particles above this temperature, this limit had to be imposed. The results obtained for different voltages are shown in Figure 5 (a-c), which presents the temperature plotted over time for the carrot particle [Figure 5 (a)], meat particle [Figure 5 (b)] and potato particle [Figure 5 (c)].
Figure 5. Temperature vs. time considering the voltage, for: (a) carrot, (b) meat and (c) potato particle
As can be seen, the use of higher values of voltage provided higher heating rates. Comparing the temperature profiles in the carrot particle [Figure 5 (a)], it can be seen that reaching about 335 K requires about 2 min of heating using 50 V and about 9 min (roughly 4.5 times as long) using 20 V. Thus, the most intense heating was obtained with the highest voltage tested (50 V). Although high heating rates were obtained with the highest value of voltage tested (50 V), it should be noted that this voltage is inappropriate for industrial processing purposes. As high production yields are targeted by the food industrial sector, even higher voltage values are required so that a solid-liquid mixture can be quickly sterilized, which does not occur when a voltage of 50 V is used. Similar results regarding the effect of the voltage in the ohmic heating process can be found in the literature, as in the studies conducted by Piette et al. [23] (cooking of meat sausages by OH) and Sarkis et al. [24] (heating of blueberry pulp by OH). Although neither study aimed to analyze the effect of the voltage on the heating rates generated, an attentive analysis of the results presented shows that the highest heating rates were obtained when the highest voltages were adopted, exactly the same behavior observed here. Therefore, the voltage applied plays a key role in the ohmic heating process.
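The strong effect of the voltage is consistent with the quadratic dependence of the source term of Eq. (5) on the applied voltage. The rough R sketch below (kept in R only for consistency with the earlier code examples in this document; it is not the Fluent model) integrates a lumped energy balance for the NaCl solution alone, assuming a uniform field |∇V| ≈ V/L and representative values of ρ and cp; it underestimates the simulated times, since the less conductive particles are ignored, but reproduces the V² scaling between 20 V and 50 V:

```r
sigma_nacl <- function(T) 0.1018 * T - 24.977   # S/m, valid for 290 K < T < 380 K
L   <- 0.076      # distance between the electrodes, m
rho <- 1000       # kg/m^3  (assumed representative value)
cp  <- 4000       # J/(kg K) (assumed representative value)

time_to_reach <- function(V, T0 = 293, T_target = 335, dt = 0.1) {
  T <- T0; t <- 0
  while (T < T_target) {
    S <- sigma_nacl(T) * (V / L)^2              # volumetric Joule source, W/m^3
    T <- T + dt * S / (rho * cp)                # lumped energy balance dT/dt = S/(rho*cp)
    t <- t + dt
  }
  t / 60                                        # minutes
}

time_to_reach(50)   # about 1 min
time_to_reach(20)   # about 6 min, i.e. (50/20)^2 = 6.25 times longer
```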
In order to analyze the uniformity of the OH, the temperature profiles obtained in the different food particles, for a fixed voltage, are presented in Figure 6 (a-b).
Figure 6. Temperature profiles in the different food particles for: (a) 20 V and (b) 50 V
For 20 V [Figure 6 (a)], a very homogeneous heating can be seen, with tiny differences in the temperatures. However, using 50 V [Figure 6 (b)], the temperature profiles are similar, but a difference in the meat particle temperature can be seen, which is related to its higher electrical conductivity when compared to the other particles (at 330 K, the electrical conductivities of meat, potato and carrot are 3.46, 0.55 and 0.45 S.m-1, respectively). As the electrical conductivity is an increasing function of temperature, a more pronounced heating intensifies its variation, especially when the function that governs its growth increases steeply, as for the meat particle. Since the higher voltage applied (50 V) provided a more intense heating, the electrical conductivity of the particles varied with greater intensity in this case, which caused the meat particle to present values of electrical conductivity substantially higher than those associated with the other foods. In addition to the applied voltage, higher values of electrical conductivity also allow the generation of higher heating rates, as reported in the literature [8-10]. Thus, because it is the most conductive particle in the medium, the meat particle underwent a more intense heating than the other particles, which gave rise to an undesirable temperature gradient in the solid-liquid mixture.
Two numerical studies were conducted to investigate the role of the ohmic cell diameter (D) in the heating process. In the first study, the electrode has the same size as the ohmic cell diameter, as presented in the shaded areas of Figure 1. Four diameter values were used in the simulations (16.5, 20.5, 24.5 and 28.5 mm), with the applied voltage fixed at 50 V. Since the temperature profiles were not strongly influenced by the diameter, only the two extreme diameters are plotted in Figure 7 (a-c).
Figure 7. Temperature profiles in the different food particles for the smallest (16.5 mm) and the biggest (28.5 mm) diameters tested, for: (a) carrot, (b) meat and (c) potato
While Figure 7 (a) presents the temperature profiles for diameters of 16.5 and 28.5 mm for the carrot particle, Figure 7 (b) presents the profiles for the meat particle and Figure 7 (c) for the potato particle. As can be observed, the profiles of the meat particle are practically the same, with a slightly more intense heating for the larger diameter (28.5 mm). Tiny differences are observed for carrot and potato, which increase with time. Unlike the meat particle, these foods (carrot and potato) were heated more intensely with the smallest diameter tested, which indicates that the heating rate obtained may be associated with several factors besides the ohmic cell diameter. Thereby, it is possible to conclude that the heating generated was not governed by the ohmic cell diameter alone, but by multiple factors.
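A plausible, simplified explanation for the weak diameter dependence (a back-of-the-envelope argument, not extracted from the simulations) is that with electrodes spanning the whole cross-section the nominal field V/Z, and hence the local heat source σ(V/Z)², does not depend on the diameter at all; only the total power drawn changes. The cell length and mixture conductivity below are assumed values used only for illustration.

```python
# With electrodes spanning the full cross-section, the nominal field E = V/Z
# and hence the volumetric heating Q = sigma*E^2 are independent of diameter;
# only the total dissipated power P = Q * volume grows with D.
# All numbers below are illustrative assumptions, not the paper's geometry.

import math

V = 50.0      # applied voltage, V
Z = 0.10      # assumed cell length, m
SIGMA = 0.55  # representative mixture conductivity, S/m (assumption)

for d_mm in (16.5, 28.5):
    d = d_mm / 1000.0
    area = math.pi * d**2 / 4.0
    e_field = V / Z
    q_vol = SIGMA * e_field**2   # W/m^3, identical for both diameters
    power = q_vol * area * Z     # total power, W
    print(f"D = {d_mm} mm: Q = {q_vol:.0f} W/m^3, total P = {power:.1f} W")

# The per-unit-volume heating is unchanged, which is consistent with the nearly
# identical temperature profiles; the residual differences come from convection
# and from how the particles perturb the local field.
```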
In the second study, the ohmic cell diameter was kept fixed at 24.5 mm, while two different electrode sizes were tested: 10 and 24.5 mm. Figure 8 (a-c) shows the temperature profiles of the different food particles for the two electrode sizes. As observed, the profiles are similar, with differences that increase as time passes. The largest difference occurs for the meat particle (11.94 K at 138 s). Thus, based on these results, it can be stated that the heating generated was influenced by the electrode size, with the larger electrode providing the higher heating rates.
Figure 8. Temperature profiles for the two electrode sizes
Although it is a factor of possible relevance for the OH technology, little is known about the influence of the spatial position of the particles on the heating generated. Therefore, assuming the same spatial arrangement of Figure 1 for two different diameters (16.5 and 28.5 mm), the role of the particle positions was investigated through simulations in which all the particles were composed of a single food.
The results obtained are shown in Figure 9 (a-c): Figure 9 (a) presents the temperature profiles of the carrot particles, Figure 9 (b) of the meat particles and Figure 9 (c) of the potato particles.
Figure 9. Temperature profiles for the problems where all the particles were composed of the same food, as: (a) carrot, (b) meat and (c) potato
As can be seen, the lowest temperatures are associated with the central particle in all the cases studied, especially when the largest diameter is adopted, which is consistent with the behavior observed in Figure 7 (b). While the largest temperature difference observed for the problems involving the smallest diameter tested (16.5 mm) was 1.1 K at 138 s (between carrot particles 1 and 2), the largest difference for the problems involving the biggest diameter (28.5 mm) was 3.9 K at 138 s (between meat particles 1 and 2). In other words, the ohmic cell diameter and the spatial positioning of the particles may influence the temperature distribution inside the equipment, but not significantly at this voltage (50 V).
In the present work, some aspects related to the ohmic heating (OH) technology were investigated using ANSYS Fluent 16.1. The computational domain adopted is axisymmetric and represents a cylindrical ohmic cell used to heat pieces of carrot, meat and potato immersed in an aqueous NaCl solution. From the results obtained, the following conclusions can be drawn:
The results obtained for the model validation showed good agreement with experimental and numerical results available in the literature; thus, the mathematical and numerical model is appropriate for the numerical study of the OH process.
The most intense heating was obtained with the highest voltage tested (50 V); hence, the applied voltage plays a key role in the ohmic heating process.
Although the highest heating rates were obtained at 50 V, it should be noted that this voltage is still inappropriate (the heating is too slow) for industrial processing purposes.
The results also suggest that the ohmic cell diameter, the electrode size and the spatial positioning of the particles influence the heating generated, although not significantly in the case of the last aspect.
The authors acknowledge the financial support of CAPES – Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (Coordination for the Improvement of Higher Education Personnel) and UNISINOS – Universidade do Vale do Rio dos Sinos (University of Vale do Rio dos Sinos).
ohmic cell diameter, m
momentum source term, N.m-3
gravitational acceleration, m.s-2
specific enthalpy, J.kg-1
thermal conductivity, W.m-1.K-1
computational domain length, m
pressure, Pa
energy source term, W.m-3
temperature, K
time, s
velocity, m.s-1
voltage, V
ohmic cell total length, m
density, kg.m-3
electrical conductivity, S.m-1
dynamic viscosity, kg.m-1.s-1
[1] Mercali, G., Jaeschke, D., Tessaro, I., Marczak, L. (2012). Study of vitamin C degradation in acerola pulp during ohmic and conventional heat treatment. LWT – Food Science and Technology, 47(1): 91-95. http://dx.doi.org/10.1016/j.lwt.2011.12.030
[2] Ruan, R., Ye, X., Chen, P., Doona, C., Taub, I. (2001). Ohmic heating. In: Richardson, P.S. (eds), Thermal Technologies in Food Processing. Woodhead Publishing Limited, London.
[3] Fryer, P. (1995). Electrical resistance heating of foods. In: Gould, G.W. (eds) New Methods of Food Preservation. Blackie Academic & Professional, Glasgow.
[4] Sakr, M., Liu, S. (2014). A comprehensive review on applications of ohmic heating (OH). Renewable and Sustainable Energy Reviews, 39: 262-269. https://doi.org/10.1016/j.rser.2014.07.061
[5] Abdulmunem, A., Jalil, J. (2018). Indoor investigation and numerical analysis of PV cells temperature regulation using coupled PCM/fins. International Journal of Heat and Technology, 36(4): 1212-1222. https://doi.org/10.18280/ijht.360408
[6] Benlekkam, M., Nehari, D., Madani, H. (2018). The thermal impact of the fin tilt angle and its orientation on performance of PV cell using PCM. International Journal of Heat and Technology, 36(3): 919-926. https://doi.org/10.18280/ijht.360319
[7] Buonomo, B., Pasqua, A., Ercole, D., Manca, O. (2018). Entropy generation analysis of parallel plate channels for latent heat thermal energy storages. TECNICA ITALIANA – Italian Journal of Engineering Science, 61(1): 42-48. https://doi.org/10.18280/ti-ijes.620106
[8] De Alwis, P., Fryer, P. (1990). A finite-element analysis of heat generation and transfer during ohmic heating of food. Chemical Engineering Science, 45(6): 1547-1559. http://dx.doi.org/10.1016/0009-2509(90)80006-z
[9] Sastry, S.K., Palaniappan, S. (1992). Mathematical modeling and experimental studies on ohmic heating of liquid-particle mixtures in a static heater. Journal of Food Process Engineering, 15(4): 241-261. http://dx.doi.org/10.1111/j.1745-4530.1992.tb00155.x
[10] Salengke, S., Sastry, S.K. (2007). Experimental investigation of ohmic heating of solid-liquid mixtures under worst-case heating scenarios. Journal of Food Engineering, 83(3): 324-336. http://dx.doi.org/10.1016/j.jfoodeng.2007.02.060
[11] Pitzer, K., Peiper, J. (1984). Thermodynamic properties of aqueous sodium chloride solutions. Journal of Physical and Chemical Reference Data, 13(1): 1-102. http://dx.doi.org/10.1063/1.555709
[12] Murakami, E. (1997). The thermal properties of potatoes and carrots as affected by thermal processing. Journal of Food Process Engineering, 20(5): 415-432. http://dx.doi.org/10.1111/j.1745-4530.1997.tb00431.x
[13] Çengel, Y., Ghajar, A. (2009). Heat and Mass Transfer: Fundamentals and Applications. McGraw-Hill.
[14] Ozbek, H., Phillips, S. (1980). Thermal conductivity of aqueous sodium chloride solutions from 20 to 330 °C. Journal of Chemical and Engineering Data, 25(3): 263-267. http://dx.doi.org/10.1021/je60086a001
[15] Sablani, S., Rahman, M. (2003). Using neural networks to predict thermal conductivity of food as a function of moisture content, temperature and apparent porosity. Food Research International, 36(6): 617-623. http://dx.doi.org/10.1016/s0963-9969(03)00012-7
[16] Chen, C. (1982). Specific heat capacities of aqueous sodium chloride solutions at high pressures. Journal of Chemical and Engineering Data, 27(3): 356-358. http://dx.doi.org/10.1021/je00029a038
[17] Rao, M., Rizvi, S. (1994). Engineering Properties of Food. Marcel Dekker.
[18] Kestin, J., Khalifa, H., Correia, R. (1981). Tables of the dynamic and kinematic viscosity of aqueous NaCl solutions in the temperature range 20-150 °C and the pressure range 0.1-35 MPa. Journal of Physical and Chemical Reference Data, 10(1): 71-88. http://dx.doi.org/10.1063/1.555641
[19] Shim, J., Lee, S., Jun, S. (2010). Modeling of ohmic heating patterns of multiphase food products using computational fluid dynamics codes. Journal of Food Engineering, 99(2): 136-141. http://dx.doi.org/10.1016/j.jfoodeng.2010.02.009
[20] Ansys Fluent Theory Guide 14.0. (2013). Ansys Inc., Canonsburg, USA.
[21] Zhang, L., Fryer, P. (1993). Models for the electrical heating of solid-liquid food mixtures. Chemical Engineering Science, 48(4): 633-642. http://dx.doi.org/10.1016/0009-2509(93)80132-a
[22] Zell, M., Lyng, J., Cronin, D., Morgan, D. (2009). Ohmic heating of meats: electrical conductivities of whole meat and processed meat ingredients. Meat Science, 83(3): 563-570. https://doi.org/10.1016/j.meatsci.2009.07.005
[23] Piette, G., Buteau, M., Halleux, D., Chiu, L., Raymond, Y., Ramaswamy, H., Dostie, M. (2004). Ohmic cooking of processed meats and its effects on product quality. Journal of Food Science, 69(2): 71-78. https://doi.org/10.1111/j.1365-2621.2004.tb15512.x
[24] Sarkis, J., Jaeschke, D., Tessaro, I., Marczak, L. (2013). Effects of ohmic and conventional heating on anthocyanin degradation during the processing of blueberry pulp. LWT – Food Science and Technology, 51(1): 79-85. http://dx.doi.org/10.1016/j.lwt.2012.10.024
Short-Circuit Admittance Parameters
We can represent the generalized coupling network by the π-network shown with dotted lines in Figure 1. It is simpler to work with admittances when we encounter a coupling network in the form of a π-network, which is the dual of a T-network. Although the resulting short-circuit admittance parameters (y-parameters) are not often used in circuit analysis, we derive these parameters of a generalized four-terminal coupling network as an introduction to the hybrid parameters.
Figure 1 Determining the short-circuit admittance parameters
To prevent source E2 from affecting the measurement of the input admittance, we set the right-hand switch in Figure 1 so that the ammeter measuring I2 shorts the output terminals (port 2). Similarly, when we apply E2 to the output terminals, we set the left-hand switch to connect the ammeter measuring I1 directly across the input terminals (port 1), thus shorting them.
With E1 applied to the input terminals, we can use the readings of the meters in the input circuit to find an I/V ratio that represents the input admittance of the network with the output terminals short-circuited. The reading on the ammeter measuring I2 indicates that a portion of the input current is coupled through to the output.
Looking back into the short-circuited output terminals, we see a Norton-equivalent source containing a dependent current source controlled by the voltage applied to the input terminals. To find the parameter that relates I2 to E1, we simply divide I2 by E1, giving a quantity in siemens.
By reversing the position of the switches, we can find similar parameters for the output terminals. Thus, there are four short-circuit admittance parameters for any four-terminal, two-port network:
Short-circuit input admittance:
\[\begin{matrix}{{\text{y}}_{\text{11}}}\text{=}\frac{{{\text{I}}_{\text{1}}}}{{{\text{E}}_{\text{1}}}}\left( with\text{ }{{\text{E}}_{\text{2}}}=0 \right) & {} & \left( 1 \right) \\\end{matrix}\]
Short-circuit reverse-transfer admittance:
\[\begin{matrix}{{\text{y}}_{\text{12}}}\text{=}\frac{{{\text{I}}_{\text{1}}}}{{{\text{E}}_{\text{2}}}}\left( with\text{ }{{\text{E}}_{\text{1}}}=0 \right) & {} & \left( 2 \right) \\\end{matrix}\]
Short-circuit forward-transfer admittance:
\[\begin{matrix}{{\text{y}}_{21}}\text{=}\frac{{{\text{I}}_{\text{2}}}}{{{\text{E}}_{\text{1}}}}\left( with\text{ }{{\text{E}}_{\text{2}}}=0 \right) & {} & \left( 3 \right) \\\end{matrix}\]
Short-circuit output admittance:
\[\begin{matrix}{{\text{y}}_{\text{22}}}\text{=}\frac{{{\text{I}}_{\text{2}}}}{{{\text{E}}_{\text{2}}}}\left( with\text{ }{{\text{E}}_{\text{1}}}=0 \right) & {} & \left( 4 \right) \\\end{matrix}\]
Since a Norton-equivalent current source has to be in parallel with the input admittance, the equivalent circuit has two paths for I1 to follow between the input terminals. Writing Kirchhoff's current-law equation for I1 gives
$\begin{matrix}{{\text{y}}_{\text{11}}}{{\text{E}}_{\text{1}}}\text{+}{{\text{y}}_{\text{12}}}{{\text{E}}_{\text{2}}}\text{=}{{\text{I}}_{\text{1}}} & {} & \left( 5 \right) \\\end{matrix}$
Similarly, for the output terminals (port 2),
\[\begin{matrix}{{\text{y}}_{\text{21}}}{{\text{E}}_{\text{1}}}\text{+}{{\text{y}}_{\text{22}}}{{\text{E}}_{\text{2}}}\text{=}{{\text{I}}_{2}} & {} & \left( 6 \right) \\\end{matrix}\]
Figure 2 shows the y-parameter equivalent circuit that satisfies Equations 5 and 6.
Figure 2 y-parameter equivalent circuit
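Equations 5 and 6 are simply a 2-by-2 linear system, I = Y·E, so the equivalent circuit of Figure 2 can be exercised numerically. In the sketch below the admittance values are arbitrary illustrative numbers, not parameters of any particular network.

```python
# Equations 5 and 6 in matrix form: [I1, I2]^T = Y [E1, E2]^T.
# The y-parameter values below are arbitrary illustrative numbers (in siemens).

import numpy as np

Y = np.array([[0.030, -0.010],    # [y11, y12]
              [-0.010, 0.025]])   # [y21, y22]

E = np.array([10.0, 0.0])         # E1 = 10 V, output shorted (E2 = 0)
I = Y @ E
print(f"I1 = {I[0]:.3f} A, I2 = {I[1]:.3f} A")

# With E2 = 0, I1/E1 reproduces y11 and I2/E1 reproduces y21, which is exactly
# how the short-circuit measurements in Figure 1 are defined.
print(f"I1/E1 = {I[0]/E[0]:.3f} S (= y11),  I2/E1 = {I[1]/E[0]:.3f} S (= y21)")
```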
We can now find the y-parameters for the π-network shown with dotted lines in Figure 1. When the ammeter measuring I2 shorts the output terminals,
${{\text{I}}_{\text{1}}}\text{=}{{\text{E}}_{\text{1}}}\left( {{\text{Y}}_{\text{p}}}\text{+}{{\text{Y}}_{\text{m}}} \right)$
\[\begin{matrix}{{\text{Y}}_{\text{p}}}\text{+}{{\text{Y}}_{\text{m}}}\text{=}\frac{{{\text{I}}_{\text{1}}}}{{{\text{E}}_{\text{1}}}}\text{=}{{\text{y}}_{\text{11}}} & {} & \left( 7 \right) \\\end{matrix}\]
Similarly,
\[\begin{matrix}{{\text{Y}}_{\text{s}}}\text{+}{{\text{Y}}_{\text{m}}}\text{=}\frac{{{\text{I}}_{\text{2}}}}{{{\text{E}}_{\text{2}}}}\text{=}{{\text{y}}_{22}} & {} & \left( 8 \right) \\\end{matrix}\]
When the meter measuring I1 is connected directly across the input terminals, there can be no voltage drop across Yp and, hence, no current through Yp. Therefore, all the current through Ym must flow through the ammeter in the input circuit. As a result, I1 has the same magnitude as the current through Ym. But the current through Ym is caused by E2 and has the opposite direction to I1, as shown in Figure 1. Hence,
${{\text{I}}_{\text{1}}}\text{=-}{{\text{Y}}_{\text{m}}}{{\text{E}}_{\text{2}}}$
\[\begin{matrix}{{\text{Y}}_{\text{m}}}\text{=-}\frac{{{\text{I}}_{\text{1}}}}{{{\text{E}}_{\text{2}}}}\text{=-}{{\text{y}}_{\text{12}}} & {} & \left( 9 \right) \\\end{matrix}\]
\[\begin{matrix}{{\text{Y}}_{\text{m}}}\text{=-}\frac{{{\text{I}}_{\text{2}}}}{{{\text{E}}_{\text{1}}}}\text{=-}{{\text{y}}_{\text{21}}} & {} & \left( 10 \right) \\\end{matrix}\]
Rearranging Equations 7 to 10 gives the equivalent π-network in terms of the y-parameters of the coupling network:
\[\begin{align}& \begin{matrix}{{Y}_{m}}=-{{y}_{12}}=-{{y}_{21}} & {} & \left( 11 \right) \\\end{matrix} \\& \begin{matrix}{{Y}_{p}}={{y}_{11}}+{{y}_{12}} & {} & \left( 12 \right) \\\end{matrix} \\& \begin{matrix}{{Y}_{s}}={{y}_{22}}+{{y}_{21}} & {} & \left( 13 \right) \\\end{matrix} \\\end{align}\]
The short-circuit admittance parameters of a two-port network can be determined by representing the network by its equivalent π-network.
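Equations 7 to 13 amount to a simple two-way conversion between the π-network elements and the y-parameters, which can be written out directly. The element values used below are made up for illustration, and the reciprocity condition y12 = y21 is assumed, as it holds for any passive π-network.

```python
# Conversion between pi-network elements (Yp, Ym, Ys) and y-parameters,
# following Equations 7-13. Element values are illustrative only.

def pi_to_y(Yp: float, Ym: float, Ys: float):
    """Equations 7-10: y-parameters of a pi-network (all values in siemens)."""
    return {"y11": Yp + Ym, "y12": -Ym, "y21": -Ym, "y22": Ys + Ym}

def y_to_pi(y11: float, y12: float, y21: float, y22: float):
    """Equations 11-13: pi-network equivalent of a reciprocal two-port."""
    assert abs(y12 - y21) < 1e-12, "pi-equivalent assumes a reciprocal network"
    return {"Ym": -y12, "Yp": y11 + y12, "Ys": y22 + y21}

y = pi_to_y(Yp=0.020, Ym=0.010, Ys=0.015)   # siemens
print(y)                                    # {'y11': 0.03, 'y12': -0.01, ...}
print(y_to_pi(**y))                         # recovers Yp, Ym, Ys
```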
Why is the magnetic field circular?
According to relativity, if the magnetic field is just an electric field viewed from a different frame of reference, why is the magnetic field around a current-carrying wire circular?
electromagnetism special-relativity
$\begingroup$ I don't know enough to provide you with why the magnetic field is circular, but I don't think your terminology is quite correct. The magnetic field is certainly not the electric field viewed from another Galilean frame. The magnetic field arises when the electric field is transformed under relativity. The exact transformation in your example above will have to come from someone else. $\endgroup$ – Reid Erdwien Mar 21 '15 at 5:16
$\begingroup$ why do you think it is circular? $\endgroup$ – Skaperen Mar 23 '15 at 9:23
$\begingroup$ what if the wires are charged but no current is flowing? $\endgroup$ – Skaperen Mar 23 '15 at 9:25
$\begingroup$ If I assume the essence of the question is why the magnetic field force lines do not originate from anywhere? That's a question that just cannot be explained from "credible and/or official sources". In other words: there is no answer yet. $\endgroup$ – Leon Sprenger Mar 24 '15 at 19:04
$\begingroup$ @Skaperen: If the wires are charged with no current flowing there is no magnetic field. $\endgroup$ – Leon Sprenger Mar 25 '15 at 16:07
Your statement is not really true, since if you only have a magnetic field in one frame of reference, then it can never be viewed as just an electric field in another frame of reference. And vice-versa.
As described here, the magnetic field can be defined as (e.g. in Jackson's Classical Electrodynamics) the field that is responsible for the Lorentz force $q\vec{v} \times \vec{B}$. Since in the example you show, the force would always be directed radially for charges moving parallel to a current carrying wire, then the field must circulate around the wire.
The reason that the force associated with the magnetic field is radial in such circumstances is one of the set-piece arguments in most textbooks that deal with these things, but arises from the requirement that a charge that is radially stationary with respect to the wire in one frame of reference is also stationary in any other frame of reference moving parallel to the wire. It goes something like this:
Consider the electric/magnetic fields due to a current carrying wire in the stationary frame and a frame moving uniformly, but parallel to the wire.
In the stationary frame, the wire is overall neutral, so there can only be a magnetic field. In the moving frame there is some transformed magnetic field and an electric field radial to the wire, caused by a difference in length contraction for the positive and negative charges in the wire, which due to the current flow, must be moving in opposite directions in the stationary frame.
This electric field in the moving frame clearly exerts a radial force on any test charge originally at rest with respect to the wire in the stationary frame. But, given that there is no radial force or acceleration in the stationary frame, there also cannot be a net radial force on the charge when it is in the moving frame either. The force that counteracts the radial electric field in the moving frame is the Lorentz force due to a mystery (B-)field. As the Lorentz force due to the mysterious (B-)field is observed to be both proportional to and perpendicular to the velocity, then it is natural to define it in terms of a vector product. And in that case, in order to act radially for a charge at any point around the wire, the B-field must circulate around the wire.
$\begingroup$ Thanks. Why does B, according to the Biot-Savart law, have a cross product? The magnetic field has a different direction from the magnetic force. The electric field and force have the same direction, but this is not true for the magnetic field and force. I am trying to understand the logic behind the cross product in magnetism $\endgroup$ – user50322 Mar 24 '15 at 16:41
$\begingroup$ @user50322 The Lorentz force is defined in that way. I suppose what you need to think about is if you wanted to define an intrinsic field that produced a radial force on a particle moving parallel to the wire, what else could it be but circulating around the wire. It must be a vector and it must produce a force perpendicular to the velocity for all positions around the wire. $\endgroup$ – Rob Jeffries Mar 24 '15 at 19:59
$\begingroup$ Same direction with magnetic force. Like electric field and electric force. $\endgroup$ – user50322 Mar 24 '15 at 22:16
$\begingroup$ @user50322 I understand (I think) your comment; but then how do you incorporate the fact that the magnitude of the force is proportional to the velocity? The force due to the electric field doesn't depend on the particle velocity. That is how they differ. In addition there are no sources or sinks of B-field, which precludes a radial field. $\endgroup$ – Rob Jeffries Mar 24 '15 at 22:19
$\begingroup$ So can we say the magnetic field doesn't have a physical meaning of its own, and we have no way to define it other than in this way? That is, we define it like that because we cannot make the definition any other way. Right? Can we say that? $\endgroup$ – user50322 Mar 24 '15 at 22:47
According to relativity, If magnetic field is just an electric field viewed from a different frame of reference
It is true that a pure electrostatic field in an inertial reference frame (IRF) will be observed as a mix of electric and magnetic fields in some relatively moving IRFs.
However, in the general (time varying) case, it is not possible to find an IRF in which the magnetic field vanishes.
why is the magnetic field around the wire circular?
Consider the field of an isolated point charge at rest; a purely radial, static electric field.
From a relatively moving IRF, there is a magnetic field component in addition to the electric field. This magnetic field is perpendicular to the velocity vector and electric field in the rest frame and is given by
$$\mathbf {{B}_{\bot}}'= \gamma \left(-\frac{1}{c^2} \mathbf{ v} \times \mathbf {E} \right)$$
A little reflection on the above should convince you that, looking at the charge along the direction of motion, the magnetic field lines form circles centered on the charge.
The extension to a line of charge is straightforward.
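As a quick numerical check of the direction of the transformed field (a sketch only; it evaluates the quoted transformation at points in the plane through the charge perpendicular to the motion, and the numerical values are illustrative), one can verify that B′ is perpendicular to both the velocity and the radial electric field, i.e. tangent to circles centred on the line of motion:

```python
# Numerical check that B' = -gamma (v x E)/c^2 circulates around the velocity
# vector of a point charge. Rest-frame E is the Coulomb field; the charge moves
# along +x in the primed frame. SI units; charge and speed are illustrative.

import numpy as np

C = 299_792_458.0
K = 8.9875517923e9          # Coulomb constant
q = 1.602e-19               # charge (C)
v = np.array([0.5 * C, 0.0, 0.0])
gamma = 1.0 / np.sqrt(1.0 - np.dot(v, v) / C**2)

def coulomb_E(r):
    return K * q * r / np.linalg.norm(r) ** 3

# Points on a circle of radius 1 m around the x-axis (in the y-z plane)
for phi in np.linspace(0.0, 2.0 * np.pi, 4, endpoint=False):
    r = np.array([0.0, np.cos(phi), np.sin(phi)])
    B = -gamma * np.cross(v, coulomb_E(r)) / C**2
    # B should be tangent to the circle: perpendicular to both v and r
    print(f"phi={phi:4.2f}  B.v={np.dot(B, v):+.1e}  B.r={np.dot(B, r):+.1e}")
```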
$\begingroup$ Why does B have to be equal to v x E? Why does B have to be circular? Don't have the same direction with the magnetic(in fact electric) force? Thanks... $\endgroup$ – user50322 Mar 23 '15 at 23:07
$\begingroup$ @user50322, what do you mean by "why?". If you are asking why nature is the way it is and not some other way, I don't have an answer (and there may not be one). $\endgroup$ – Alfred Centauri Mar 23 '15 at 23:15
$\begingroup$ I mean "how". How does B equal to v x E? $\endgroup$ – user50322 Mar 23 '15 at 23:17
$\begingroup$ @user50322, this is how the fields transform between relatively moving inertial reference frames according to special relativity. This is not something that can be explained in a comment or an answer for that matter. Simply put, Maxwell's equations, the equations that correctly describe classical electromagnetism, are relativistically covariant. $\endgroup$ – Alfred Centauri Mar 23 '15 at 23:33
Your question consists actually of two parts, I will answer them one-by-one:
Why is the magnetic field circular?
Any vector field $\vec F$ can be decomposed into a rotational part and a divergent part, according to the Helmholtz decomposition theorem
$$\vec F = - \vec \nabla \Phi + \vec \nabla \times \vec A $$
This is a purely mathematical statement and has nothing yet to do with physics. Physics comes into play when considering Maxwell's second equation
$$\vec \nabla \cdot \vec B = 0$$ which means that $\vec B$ is divergence-free or source-free. Applying the decomposition theorem, we can calculate
$$\vec \nabla \cdot \vec B = \vec \nabla \cdot (-\vec \nabla \Phi + \vec \nabla \times \vec A) = -\nabla^2 \Phi,$$ because the divergence of a curl ($\vec \nabla \cdot (\vec \nabla \times \vec A)$) always vanishes. Setting $\vec \nabla \cdot \vec B = 0$ therefore forces $\nabla^2 \Phi = 0$ everywhere, and for fields that fall off at infinity this means $\Phi$ contributes nothing, so $\vec B = \vec \nabla \times \vec A$ must be a purely rotational field. The vanishing of the divergence of a curl follows from math alone and is not a physical model.
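The identity doing the work here, that the divergence of a curl vanishes for any smooth vector field, is easy to confirm symbolically; the short check below uses sympy and is only a verification of the math, not part of the physics argument:

```python
# Symbolic check that div(curl A) = 0 for an arbitrary smooth vector field A,
# the identity that makes a divergence-free B purely rotational.
from sympy import symbols, Function, diff, simplify

x, y, z = symbols("x y z")
Ax, Ay, Az = (Function(name)(x, y, z) for name in ("A_x", "A_y", "A_z"))

# Components of curl A
cx = diff(Az, y) - diff(Ay, z)
cy = diff(Ax, z) - diff(Az, x)
cz = diff(Ay, x) - diff(Ax, y)

# div(curl A): the mixed partial derivatives cancel pairwise
print(simplify(diff(cx, x) + diff(cy, y) + diff(cz, z)))  # prints 0
```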
the second thing is:
How are magnetic and electric field related?
This now follows from Maxwell's equations 3 and 4, the other two equations that we didn't use until now. It is these that imply relativity and the transformations that my fellow posters mentioned.
Concluding
So really the circularity of the B-field has nothing to do with relativity. But it's relativity that allows us to transform between the two, no matter what their geometries are.
$\begingroup$ Yes, but B is related to v x r (according to Biot-Savart). I understand why F is perpendicular to v (the Lorentz force); it is because of energy conservation. But I don't understand why B and the current must be perpendicular (the Biot-Savart law) $\endgroup$ – user50322 Mar 25 '15 at 10:10
$\begingroup$ The current is provided by the movement of the electrons. $\vec v \sim \vec j$ in metals. $\endgroup$ – AtmosphericPrisonEscape Mar 25 '15 at 12:36
Given a certain four-current $J^\mu = (c \varrho, \vec{j})$, that is a charge density $\varrho$ and current density $\vec{j}$. the four-potential $A^\mu = (\Phi / c, \vec{A})$ is given by: $$ A^\mu(\vec{r},t) \propto \int \frac{j^\mu(\vec{r}\ ', t_r)}{|\vec{r}-\vec{r}\ '|} d^3r'$$ with $t_r = t - \frac{|\vec{r}-\vec{r}\ '|}{c}$ if one takes the retarded solution. Therefore the B-Field is: \begin{align} \vec{B} = \nabla \times \vec{A} & \propto \int \frac{\vec{j}(\vec{r}\ ', t_r) \times (\vec{r}-\vec{r}\ ')}{|\vec{r}-\vec{r}\ '|^3} d^3r' + \int \frac{\nabla \times \vec{j}(\vec{r}\ ', t_r)}{|\vec{r}-\vec{r}\ '|} d^3r' \\ & = \int \frac{\vec{j}(\vec{r}\ ', t_r) \times (\vec{r}-\vec{r}\ ')}{|\vec{r}-\vec{r}\ '|^3} d^3r' + \int \frac{\frac{\partial \vec{j}(\vec{r}\ ', t_r)}{\partial t_r} \times (\vec{r}-\vec{r}\ ')}{c|\vec{r}-\vec{r}\ '|^2} d^3r' \end{align}
The first term is what you would call "circular", since $\vec{j}(\vec{r}\ ', t_r) \times (\vec{r}-\vec{r}\ ')$ always points in a direction perpendicular to the current density and to the point of interest (relative to the current density). The second term is zero if the current is stationary, that is, if it is not time dependent. For example, this is the case if one looks at the magnetic field induced by a wire carrying a constant current.
So in general, the magnetic field is not circular around a wire.
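For the stationary case, the first (Biot-Savart) term can be integrated numerically for a long straight wire to see the circular pattern explicitly. The sketch below is illustrative only: the wire length, discretisation and current are arbitrary choices, and the result is compared with the analytic infinite-wire value mu0*I/(2*pi*s).

```python
# Numerical Biot-Savart integration for a straight wire along the z-axis:
# B(r) = (mu0 I / 4 pi) * sum dl x (r - r') / |r - r'|^3.
# Wire length and discretisation are arbitrary; at a distance s from a long
# wire the field approaches mu0 I / (2 pi s), directed azimuthally.

import numpy as np

MU0 = 4e-7 * np.pi
I = 1.0                                   # current (A), illustrative

zs = np.linspace(-50.0, 50.0, 20001)      # wire segments along z (m)
dz = zs[1] - zs[0]
dl = np.array([0.0, 0.0, dz])

def biot_savart(r):
    src = np.column_stack([np.zeros_like(zs), np.zeros_like(zs), zs])
    sep = r - src
    dist = np.linalg.norm(sep, axis=1)
    dB = MU0 * I / (4 * np.pi) * np.cross(dl, sep) / dist[:, None] ** 3
    return dB.sum(axis=0)

r = np.array([0.05, 0.0, 0.0])            # 5 cm from the wire, on the x-axis
B = biot_savart(r)
print("B =", B, " |B| =", np.linalg.norm(B))
print("analytic mu0*I/(2*pi*s) =", MU0 * I / (2 * np.pi * 0.05))
# The result points along +y, i.e. tangent to a circle around the wire.
```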
$\begingroup$ Do you accept Maxwell's equations? If so, my answer above shows you why. $\endgroup$ – image Mar 25 '15 at 14:47
since magnetic field in a wire carrying current is due to the movement of electrons, I assume that the magnetic field of a single isolated moving electron, or any other charged particle, can also be deduced by the right hand thumb rule i.e. the field will be much like Saturn's ring around it, where we assume saturn to be a charged particle, and its ring as its field. Now my question is: How is this field deduced. Why is it circular? Is it the result of superposition, if any? Secondly, I know that magnetic force exerted on a moving charge consists of a cross product of B and v. But I think that this force is the result of interaction between the charge's own magnetic field and the applied field, B. My question is: How does this interaction exactly happen? How does it result in this force? Most importantly, what causes this force to be perp to both B and v (I need an explanation other than that it is just the result of cross product)? I raised a similar question elsewhere but didn't get much help. I'd be grateful if somebody explains.
Reference https://www.physicsforums.com/threads/why-is-the-magnetic-field-of-a-wire-circular.180845/
According to relativity, if the magnetic field is just an electric field viewed from a different frame of reference...
Relativity doesn't quite say this. Take a look at Minkowski's Space and Time: "In the description of the field caused by the electron itself, then it will appear that the division of the field into electric and magnetic forces is a relative one with respect to the time-axis assumed; the two forces considered together can most vividly be described by a certain analogy to the force-screw in mechanics; the analogy is, however, imperfect". Also see Jackson's Classical Electrodynamics section 11.10 where he says "one should properly speak of the electromagnetic field Fμv rather than E or B separately". The field of the electron is the electromagnetic field, and it has a "screw" nature, which you can trace back to Maxwell. When you have two charged particles with no initial motion, you see linear electric force only. When you throw one past the other, you also see rotational magnetic force, as per positronium. IMHO the electron's electromagnetic field isn't totally unlike the frame-dragged gravitomagnetic field, and you can depict it by thinking "spinor" and combining radial electric field lines with concentric magnetic field lines, a bit like Maxwell's convergence + curl sketch on page 7 of this paper:
Why is the magnetic field around the wire circular?
Because it's rotationally symmetrical. Think of the wire as a column of electrons interleaved with a column of protons. Their electromagnetic fields cancel. But when you turn on the current and move the electrons, the fields don't quite cancel any more. The residual field has a cylindrical disposition, such that a charged particle thrown past the wire will loop around the magnetic field lines. We call it a magnetic field, but it's just one aspect of the greater whole that is the electromagnetic field.
Not that everyone likes to talk about using the drugs. People don't necessarily want to reveal how they get their edge and there is stigma around people trying to become smarter than their biology dictates, says Lawler. Another factor is undoubtedly the risks associated with ingesting substances bought on the internet and the confusing legal statuses of some. Phenylpiracetam, for example, is a prescription drug in Russia. It isn't illegal to buy in the US, but the man-made chemical exists in a no man's land where it is neither approved nor outlawed for human consumption, notes Lawler.
My first impression of ~1g around 12:30PM was that while I do not feel like running around, within an hour I did feel like the brain fog was lighter than before. The effect wasn't dramatic, so I can't be very confident. Operationalizing brain fog for an experiment might be hard: it doesn't necessarily feel like I would do better on dual n-back. I took 2 smaller doses 3 and 6 hours later, to no further effect. Over the following weeks and months, I continued to randomly alternate between potassium & non-potassium days. I noticed no effects other than sleep problems.
One claim was partially verified in passing by Eliezer Yudkowsky (Supplementing potassium (citrate) hasn't helped me much, but works dramatically for Anna, Kevin, and Vassar…About the same as drinking a cup of coffee - i.e., it works as a perker-upper, somehow. I'm not sure, since it doesn't do anything for me except possibly mitigate foot cramps.)
There is no official data on their usage, but nootropics as well as other smart drugs appear popular in the Silicon Valley. "I would say that most tech companies will have at least one person on something," says Noehr. It is a hotbed of interest because it is a mentally competitive environment, says Jesse Lawler, a LA based software developer and nootropics enthusiast who produces the podcast Smart Drug Smarts. "They really see this as translating into dollars." But Silicon Valley types also do care about safely enhancing their most prized asset – their brains – which can give nootropics an added appeal, he says.
Tuesday: I went to bed at 1am, and first woke up at 6am, and I wrote down a dream; the lucid dreaming book I was reading advised that waking up in the morning and then going back for a short nap often causes lucid dreams, so I tried that - and wound up waking up at 10am with no dreams at all. Oops. I take a pill, but the whole day I don't feel so hot, although my conversation and arguments seem as cogent as ever. I'm also having a terrible time focusing on any actual work. At 8 I take another; I'm behind on too many things, and it looks like I need an all-nighter to catch up. The dose is no good; at 11, I still feel like at 8, possibly worse, and I take another along with the choline+piracetam (which makes a total of 600mg for the day). Come 12:30, and I disconsolately note that I don't seem any better, although I still seem to understand the IQ essays I am reading. I wonder if this is tolerance to modafinil, or perhaps sleep catching up to me? Possibly it's just that I don't remember what the quasi-light-headedness of modafinil felt like. I feel this sort of zombie-like state without change to 4am, so it must be doing something, when I give up and go to bed, getting up at 7:30 without too much trouble. Some N-backing at 9am gives me some low scores but also some pretty high scores (38/43/66/40/24/67/60/71/54 or ▂▂▆▂▁▆▅▇▄), which suggests I can perform normally if I concentrate. I take another pill and am fine the rest of the day, going to bed at 1am as usual.
NGF may sound intriguing, but the price is a dealbreaker: at suggested doses of 1-100μg (NGF dosing in humans for benefits is, shall we say, not an exact science), and a cost from sketchy suppliers of $1210/100μg/$470/500μg/$750/1000μg/$1000/1000μg/$1030/1000μg/$235/20μg. (Levi-Montalcini was presumably able to divert some of her lab's production.) A year's supply then would be comically expensive: at the lowest doses of 1-10μg using the cheapest sellers (for something one is dumping into one's eyes?), it could cost anywhere up to $10,000.
One of the most widely known classes of smart drugs on the market, Racetams, have a long history of use and a lot of evidence of their effectiveness. They hasten the chemical exchange between brain cells, directly benefiting our mental clarity and learning process. They are generally not controlled substances and can be purchased without a prescription in a lot of locations globally.
For illustration, consider amphetamines, Ritalin, and modafinil, all of which have been proposed as cognitive enhancers of attention. These drugs exhibit some positive effects on cognition, especially among individuals with lower baseline abilities. However, individuals of normal or above-average cognitive ability often show negligible improvements or even decrements in performance following drug treatment (for details, see de Jongh, Bolt, Schermer, & Olivier, 2008). For instance, Randall, Shneerson, and File (2005) found that modafinil improved performance only among individuals with lower IQ, not among those with higher IQ. [See also Finke et al 2010 on visual attention.] Farah, Haimm, Sankoorikal, & Chatterjee 2009 found a similar nonlinear relationship of dose to response for amphetamines in a remote-associates task, with low-performing individuals showing enhanced performance but high-performing individuals showing reduced performance. Such ∩-shaped dose-response curves are quite common (see Cools & Robbins, 2004)
The term "smart pills" refers to miniature electronic devices that are shaped and designed in the mold of pharmaceutical capsules but perform highly advanced functions such as sensing, imaging and drug delivery. They may include biosensors or image, pH or chemical sensors. Once they are swallowed, they travel along the gastrointestinal tract to capture information that is otherwise difficult to obtain, and then are easily eliminated from the system. Their classification as ingestible sensors makes them distinct from implantable or wearable sensors.
The benefits that they offer are gradually becoming more clearly understood, and those who use them now have the potential to get ahead of the curve when it comes to learning, information recall, mental clarity, and focus. Everyone is different, however, so take some time to learn what works for you and what doesn't and build a stack that helps you perform at your best.
Taken together, the available results are mixed, with slightly more null results than overall positive findings of enhancement and evidence of impairment in one reversal learning task. As the effect sizes listed in Table 5 show, the effects when found are generally substantial. When drug effects were assessed as a function of placebo performance, genotype, or self-reported impulsivity, enhancement was found to be greatest for participants who performed most poorly on placebo, had a COMT genotype associated with poorer executive function, or reported being impulsive in their everyday lives. In sum, the effects of stimulants on cognitive control are not robust, but MPH and d-AMP appear to enhance cognitive control in some tasks for some people, especially those less likely to perform well on cognitive control tasks.
After trying out 2 6lb packs between 12 September & 25 November 2012, and 20 March & 20 August 2013, I have given up on flaxseed meal. They did not seem to go bad in the refrigerator or freezer, and tasted OK, but I had difficulty working them into my usual recipes: it doesn't combine well with hot or cold oatmeal, and when I tried using flaxseed meal in soups I learned flaxseed is a thickener which can give soup the consistency of snot. It's easier to use fish oil on a daily basis.
Nature magazine conducted a poll asking its readers about their cognitive-enhancement practices and their attitudes toward cognitive enhancement. Hundreds of college faculty and other professionals responded, and approximately one fifth reported using drugs for cognitive enhancement, with Ritalin being the most frequently named (Maher, 2008). However, the nature of the sample—readers choosing to answer a poll on cognitive enhancement—is not representative of the academic or general population, making the results of the poll difficult to interpret. By analogy, a poll on Vermont vacations, asking whether people vacation in Vermont, what they think about Vermont, and what they do if and when they visit, would undoubtedly not yield an accurate estimate of the fraction of the population that takes its vacations in Vermont.
The research literature, while copious, is messy and varied: methodologies and devices vary substantially, sample sizes are tiny, the study designs vary from paper to paper, metrics are sometimes comically limited (one study measured speed of finishing a RAPM IQ test but not scores), blinding is rare and unclear how successful, etc. Relevant papers include Chung et al 2012, Rojas & Gonzalez-Lima 2013, & Gonzalez-Lima & Barrett 2014. Another Longecity user ran a self-experiment, with some design advice from me, where he performed a few cognitive tests over several periods of LLLT usage (the blocks turned out to be ABBA), using his father and towels to try to blind himself as to condition. I analyzed his data, and his scores did seem to improve, but his scores improved so much in the last part of the self-experiment I found myself dubious as to what was going on - possibly a failure of randomness given too few blocks and an temporal exogenous factor in the last quarter which was responsible for the improvement.
The flanker task is designed to tax cognitive control by requiring subjects to respond based on the identity of a target stimulus (H or S) and not the more numerous and visually salient stimuli that flank the target (as in a display such as HHHSHHH). Servan-Schreiber, Carter, Bruno, and Cohen (1998) administered the flanker task to subjects on placebo and d-AMP. They found an overall speeding of responses but, more importantly, an increase in accuracy that was disproportionate for the incongruent conditions, that is, the conditions in which the target and flankers did not match and cognitive control was needed.
The amphetamine mix branded Adderall is terribly expensive to obtain even compared to modafinil, due to its tight regulation (a lower schedule than modafinil), popularity in college as a study drug, and reportedly moves by its manufacturer to exploit its privileged position as a licensed amphetamine maker to extract more consumer surplus. I paid roughly $4 a pill but could have paid up to $10. Good stimulant hygiene involves recovery periods to avoid one's body adapting to eliminate the stimulating effects, so even if Adderall was the answer to all my woes, I would not be using it more than 2 or 3 times a week. Assuming 50 uses a year (for specific projects, let's say, and not ordinary aimless usage), that's a cool $200 a year. My general belief was that Adderall would be too much of a stimulant for me, as I am amphetamine-naive and Adderall has a bad reputation for letting one waste time on unimportant things. We could say my prediction was 50% that Adderall would be useful and worth investigating further. The experiment was pretty simple: blind randomized pills, 10 placebo & 10 active. I took notes on how productive I was and the next day guessed whether it was placebo or Adderall before breaking the seal and finding out. I didn't do any formal statistics for it, much less a power calculation, so let's try to be conservative by penalizing the information quality heavily and assume it had 25%. So $\frac{200 - 0}{\ln 1.05} \times 0.50 \times 0.25 = 512$! The experiment probably used up no more than an hour or two total.
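For what it's worth, the value-of-information arithmetic in that last sentence can be reproduced directly; the reading of the factors below (a $200/year benefit discounted as a perpetuity at 5%, times the 50% prior, times the 25% information-quality penalty) is my interpretation of the passage.

```python
# Reproduces the expected-value-of-information estimate quoted above:
# (annual benefit / ln(annual discount factor)) * P(useful) * information quality.

from math import log

annual_benefit = 200.0      # $/year if Adderall turned out to be worthwhile
discount = log(1.05)        # 5% annual discounting, as a continuous rate
p_useful = 0.50             # prior probability the drug helps
info_quality = 0.25         # penalty for the informal, underpowered design

value_of_experiment = annual_benefit / discount * p_useful * info_quality
print(round(value_of_experiment))   # ~512
```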
Most people would describe school as a place where they go to learn, so learning is an especially relevant cognitive process for students to enhance. Even outside of school, however, learning plays a role in most activities, and the ability to enhance the retention of information would be of value in many different occupational and recreational contexts.
The prefrontal cortex at the front of the brain is the zone that produces such representations, and it is the focus of Arnsten's work. "The way the prefrontal cortex creates these representations is by having pyramidal cells – they're actually shaped like little pyramids – exciting each other. They keep each other firing, even when there's no information coming in from the environment to stimulate the circuits," she explains.
One of the most popular legal stimulants in the world, nicotine is often conflated with the harmful effects of tobacco; considered on its own, it has performance & possibly health benefits. Nicotine is widely available at moderate prices as long-acting nicotine patches, gums, lozenges, and suspended in water for vaping. While intended for smoking cessation, there is no reason one cannot use a nicotine patch or nicotine gum for its stimulant effects.
Price discrimination is aided by barriers such as ignorance and oligopolies. An example of the former would be when I went to a Food Lion grocery store in search of spices, and noticed that there was a second selection of spices in the Hispanic/Latino ethnic food aisle, with unit prices perhaps a fourth of the regular McCormick-brand spices; I rather doubt that regular cinnamon varies that much in quality. An example of the latter would be using veterinary drugs on humans - any doctor to do so would probably be guilty of medical malpractice even if the drugs were manufactured in the same factories (as well they might be, considering economies of scale). Similarly, we can predict that whenever there is a veterinary drug which is chemically identical to a human drug, the veterinary drug will be much cheaper, regardless of actual manufacturing cost, than the human drug because pet owners do not value their pets more than themselves. Human drugs are ostensibly held to a higher standard than veterinary drugs; so if veterinary prices are higher, then there will be an arbitrage incentive to simply buy the cheaper human version and downgrade them to veterinary drugs.
"Cavin's enthusiasm and drive to help those who need it is unparalleled! He delivers the information in an easy to read manner, no PhD required from the reader. 🙂 Having lived through such trauma himself he has real empathy for other survivors and it shows in the writing. This is a great read for anyone who wants to increase the health of their brain, injury or otherwise! Read it!!!"
For 2 weeks, upon awakening I took close-up photographs of my right eye. Then I ordered two jars of Life-Extension Sea-Iodine (60x1mg) (1mg being an apparently safe dose), and when it arrived on 10 September 2012, I stopped the photography and began taking 1 iodine pill every other day. I noticed no ill effects (or benefits) after a few weeks and upped the dose to 1 pill daily. After the first jar of 60 pills was used up, I switched to the second jar, and began photography as before for 2 weeks. The photographs were uploaded, cropped by hand in Gimp, and shrunk to more reasonable dimensions; both sets are available in a Zip file.
Do note that this isn't an extensive list by any means, there are plenty more 'smart drugs' out there purported to help focus and concentration. Most (if not all) are restricted under the Psychoactive Substances Act, meaning they're largely illegal to sell. We strongly recommend against using these products off-label, as they can be dangerous both due to side effects and their lack of regulation on the grey/black market.
Dopaminergics are smart drug substances that affect levels of dopamine within the brain. Dopamine is a major neurotransmitter, responsible for the good feelings and biochemical positive feedback from behaviors for which our biology naturally rewards us: tasty food, sex, positive social relationships, etc. Use of dopaminergic smart drugs promotes attention and alertness by either increasing the efficacy of dopamine within the brain, or inhibiting the enzymes that break dopamine down. Examples of popular dopaminergic smart drug drugs include Yohimbe, selegiline and L-Tyrosine.
But where will it all stop? Ambitious parents may start giving mind-enhancing pills to their children. People go to all sorts of lengths to gain an educational advantage, and eventually success might be dependent on access to these mind-improving drugs. No major studies have been conducted on the long-term effects. Some neuroscientists fear that, over time, these memory-enhancing pills may cause people to store too much detail, cluttering the brain. Read more about smart drugs here.
Modafinil, sold under the name Provigil, is a stimulant that some have dubbed the "genius pill." It is a wakefulness-promoting agent (modafinil) and glutamate activators (ampakine). Originally developed as a treatment for narcolepsy and other sleep disorders, physicians are now prescribing it "off-label" to cellists, judges, airline pilots, and scientists to enhance attention, memory and learning. According to Scientific American, "scientific efforts over the past century [to boost intelligence] have revealed a few promising chemicals, but only modafinil has passed rigorous tests of cognitive enhancement." A stimulant, it is a controlled substance with limited availability in the U.S.
Rabiner et al. (2009), 2007 survey: one public and one private university, undergraduates (N = 3,390); prevalence: 8.9% (while in college), 5.4% (past 6 months); most common reasons endorsed: to concentrate better while studying, to be able to study longer, to feel less restless while studying; sources: 48% from a friend with a prescription; 19% purchased it from a friend with a prescription; 6% purchased it from a friend without a prescription
Lebowitz says that if you're purchasing supplements to improve your brain power, you're probably wasting your money. "There is nothing you can buy at your local health food store that will improve your thinking skills," Lebowitz says. So that turmeric latte you've been drinking everyday has no additional brain benefits compared to a regular cup of java.
How exactly – and if – nootropics work varies widely. Some may work, for example, by strengthening certain brain pathways for neurotransmitters like dopamine, which is involved in motivation, Barbour says. Others aim to boost blood flow – and therefore funnel nutrients – to the brain to support cell growth and regeneration. Others protect brain cells and connections from inflammation, which is believed to be a factor in conditions like Alzheimer's, Barbour explains. Still others boost metabolism or pack in vitamins that may help protect the brain and the rest of the nervous system, explains Dr. Anna Hohler, an associate professor of neurology at Boston University School of Medicine and a fellow of the American Academy of Neurology.
Other drugs, like cocaine, are used by bankers to manage their 18-hour workdays [81]. Unlike nootropics, dependency is very likely and not only mentally but also physically. Bankers and other professionals who take drugs to improve their productivity will become dependent. Almost always, the negative consequences outweigh any positive outcomes from using drugs.
Googling, you sometimes see correlational studies like Intake of Flavonoid-Rich Wine, Tea, and Chocolate by Elderly Men and Women Is Associated with Better Cognitive Test Performance; in this one, the correlated performance increase from eating chocolate was generally fairly modest (say, <10%), and the maximum effects were at 10g/day of what was probably milk chocolate, which generally has 10-40% chocolate liquor in it, suggesting any experiment use 1-4g. More interesting is the blind RCT experiment Consumption of cocoa flavanols results in acute improvements in mood and cognitive performance during sustained mental effort11, which found improvements at ~1g; the most dramatic improvement of the 4 tasks (on the Threes correct) saw a difference of 2 to 6 at the end of the hour of testing, while several of the other tests converged by the end or saw the controls winning (Sevens correct). Crews et al 2008 found no cognitive benefit, and an fMRI experiment found the change in brain oxygen levels it wanted but no improvement to reaction times.
On the other end of the spectrum is the nootropic stack, a practice where individuals create a cocktail or mixture of different smart drugs for daily intake. The mixture and its variety actually depend on the goals of the user. Many users have said that nootropic stacking is more effective for delivering improved cognitive function in comparison to single nootropics.
Each nootropic comes with a recommended amount to take. This is almost always based on a healthy adult male with an average weight and 'normal' metabolism. Nootropics (and many other drugs) are almost exclusively tested on healthy men. If you are a woman, older, smaller or in any other way not the 'average' man, always take into account that the quantity could be different for you.
Coconut oil was recommended by Pontus Granström on the Dual N-Back mailing list for boosting energy & mental clarity. It is fairly cheap (~$13 for 30 ounces) and tastes surprisingly good; it has a very bad reputation in some parts, but seems to be in the middle of a rehabilitation. Seth Robert's Buttermind experiment found no mental benefits to coconut oil (and benefits to eating butter), but I wonder.
Soldiers should never be treated like children; because then they will act like them. However, there's a reason why the 1SG is known as the Mother of the Company and the Platoon Sergeant is known as a Platoon Daddy. Because they run the day to day operations of the household, get the kids to school so to speak, and focus on the minutia of readiness and operational execution in all its glory. Officers forget they are the second link in the Chain of Command and a well operating duo of Team Leader and Squad Leader should be handling 85% of all Soldier issues, while the Platoon Sergeant handles the other 15% with 1SG. Platoon Leaders and Commanders should always be present; training, leading by example, focusing on culture building, tracking and supporting NCO's. They should be focused on big business sides of things, stepping in to administer punishment or award and reward performance. If an officer at any level is having to step into a Soldier's day to day lives, an NCO at some level is failing. Officers should be junior Officers and junior Enlisted right alongside their counterparts instead of eating their young and touting their "maturity" or status. If anything, Officers should be asking their NCO's where they should effect, assist, support or provide cover toward initiatives and plans that create consistency and controlled chaos for growth of individuals two levels up and one level down of operational capabilities at every echelon of command.
Schroeder, Mann-Koepke, Gualtieri, Eckerman, and Breese (1987) assessed the performance of subjects on placebo and MPH in a game that allowed subjects to switch between two different sectors seeking targets to shoot. They did not observe an effect of the drug on overall level of performance, but they did find fewer switches between sectors among subjects who took MPH, and perhaps because of this, these subjects did not develop a preference for the more fruitful sector.
I started with the 10g of Vitality Enhanced Blend, a sort of tan dust. Used 2 little-spoonfuls (dust tastes a fair bit like green/oolong tea dust) into the tea mug and then some boiling water. A minute of steeping and… bleh. Tastes sort of musty and sour. (I see why people recommended sweetening it with honey.) The effects? While I might've been more motivated - I hadn't had caffeine that day and was a tad under the weather, a feeling which seemed to go away perhaps half an hour after starting - I can't say I experienced any nausea or very noticeable effects. (At least the flavor is no longer quite so offensive.)
…The first time I took supplemental potassium (50% US RDA in a lot of water), it was like a brain fog lifted that I never knew I had, and I felt profoundly energized in a way that made me feel exercise was reasonable and prudent, which resulted in me and the roommate that had just supplemented potassium going for an hour long walk at 2AM. Experiences since then have not been quite so profound (which probably was so stark for me as I was likely fixing an acute deficiency), but I can still count on a moderately large amount of potassium to give me a solid, nearly side effect free performance boost for a few hours…I had been doing Bikram yoga on and off, and I think I wasn't keeping up the practice because I wasn't able to properly rehydrate myself.
Stimulants are drugs that accelerate the central nervous system (CNS) activity. They have the power to make us feel more awake, alert and focused, providing us with a needed energy boost. Unfortunately, this class encompasses a wide range of drugs, some which are known solely for their side-effects and addictive properties. This is the reason why many steer away from any stimulants, when in fact some greatly benefit our cognitive functioning and can help treat some brain-related impairments and health issues.
"I love this book! As someone that deals with an autoimmune condition, I deal with sever brain fog. I'm currently in school and this has had a very negative impact on my learning. I have been looking for something like this to help my brain function better. This book has me thinking clearer, and my memory has improved. I'm eating healthier and overall feeling much better. This book is very easy to follow and also has some great recipes included."
Although piracetam has a history of "relatively few side effects," it has fallen far short of its initial promise for treating any of the illnesses associated with cognitive decline, according to Lon Schneider, a professor of psychiatry and behavioral sciences at the Keck School of Medicine at the University of Southern California. "We don't use it at all and never have."
Take quarter at midnight, another quarter at 2 AM. Night runs reasonably well once I remember to eat a lot of food (I finish a big editing task I had put off for weeks), but the apathy kicks in early around 4 AM so I gave up and watched Scott Pilgrim vs. the World, finishing around 6 AM. I then read until it's time to go to a big shotgun club function, which occupies the rest of the morning and afternoon; I had nothing to do much of the time and napped very poorly on occasion. By the time we got back at 4 PM, the apathy was completely gone and I started some modafinil research with gusto (interrupted by going to see Puss in Boots). That night: Zeo recorded 8:30 of sleep, gap of about 1:50 in the recording, figure 10:10 total sleep; following night, 8:33; third night, 8:47; fourth, 8:20 (▇▁▁▁).
When I worked on the Bulletproof Diet book, I wanted to verify that the effects I was getting from Bulletproof Coffee were not coming from modafinil, so I stopped using it and measured my cognitive performance while I was off of it. What I found was that on Bulletproof Coffee and the Bulletproof Diet, my mental performance was almost identical to my performance on modafinil. I still travel with modafinil, and I'll take it on occasion, but while living a Bulletproof lifestyle I rarely feel the need.
|
CommonCrawl
|
Random reals and strongly meager sets
Adding a single Cohen real makes the set of reals from the ground model strong measure zero (see this question).
The notion of strong measure zero sets has its dual concept in the category branch -- strongly meager sets. A set $X\subseteq \mathbb{R}$ is strongly meager if for any null set $Y$ there exists $t\in \mathbb{R}$ such that $(t+X)\cap Y=\varnothing$. One can see duality of these notions due to Galvin-Mycielski-Solovay Theorem which states that a set $X\subseteq \mathbb{R}$ is strong measure zero if and only if for any meager set $Y$ there exists $t\in \mathbb{R}$ such that $(t+X)\cap Y=\varnothing$.
Random real forcing is dual to Cohen forcing in the sense of measure and category. Therefore it makes sense to ask, whether:
Is the set of ground model reals $\mathbb{R}\cap V$ strongly meager after adding a single random real?
I have heard that the answer is affirmative, but I have not been able to find any published proof. Note that $\mathbb{R}\cap V$ is meager after adding a random real (see this question).
m_korch
As I have written above, the affirmative answer itself was known to many people, including T. Bartoszyński. The following proof is due to T. Weiss (my advisor).
Proof. We follow closely the proof and notation of Lemma 3.2.42 from [1]. Let $A$ be a Borel measure zero set in $M[r]$, where $r$ is a random real over $M$. There exists a measure zero set $\dot{A}\subseteq 2^{\omega}\times 2^{\omega}$ coded in $M$ such that $\dot{A}_{r}=A$ (notation: $\dot{A}_{r}=\{y\colon \left<r,y\right>\in \dot{A}\}$).
Then $$\dot{A}\subseteq\bigcap_{m\in\omega}\bigcup_{n\geq m} [s_{n}]\times[t_{n}]$$ where $s_n, t_n\in 2^{<\omega}$, $\sum_{n=0}^{\infty}\frac{1}{2^{2|s_{n}|}}<\infty$ and we can assume that $|t_{n}|=|s_{n}|$ for any $n\in\omega$.
Let $z\in 2^{\omega}\cap M$ and $f\in\omega^\omega$ be increasing. Then $$\mu(\{x\colon\left<x,x_f+z\right>\in [s]\times[t]\})\leq \frac{2^{f^{-1}(|s|)}}{2^{|s|+|t|}}$$ (where $x_f\in 2^{\omega}$ such that $x_{f}(n)=x(f(n))$). By induction on length $|s_{n}|$ we define an increasing function $f_{A}\in\omega^{\omega}$ such that $$\sum_{n=0}^{\infty}\frac{2^{f_{A}^{-1}(|s_{n}|)}}{2^{2|s_{n}|}}<\infty.$$ It is easy to see that such function exists as for any $\varepsilon>0$ we can find $N_{\varepsilon}\in\omega$ such that $\sum_{n\geq N_{\varepsilon}}\frac{1}{2^{2|s_{n}|}}<\varepsilon$.
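For completeness, here is one way to see that such an $f_{A}$ exists; this is my own reconstruction of the standard argument, writing $f^{-1}(m)$ for $\#\{i\colon f(i)<m\}$ and assuming (after rearranging the cover, which does not change the limsup set) that the lengths $|s_{n}|$ are non-decreasing. Choose $N_{0}<N_{1}<\cdots$ with $$\sum_{n\geq N_{k}}\frac{1}{2^{2|s_{n}|}}<\frac{1}{4^{k}}\qquad\text{and set}\qquad f_{A}(k)=|s_{N_{k+1}}|+k.$$ Then for $N_{k}\leq n<N_{k+1}$ and $i\geq k$ we have $f_{A}(i)\geq |s_{N_{k+1}}|\geq |s_{n}|$, hence $f_{A}^{-1}(|s_{n}|)\leq k$ and $$\sum_{n\geq N_{0}}\frac{2^{f_{A}^{-1}(|s_{n}|)}}{2^{2|s_{n}|}}\;\leq\;\sum_{k=0}^{\infty}2^{k}\sum_{N_{k}\leq n<N_{k+1}}\frac{1}{2^{2|s_{n}|}}\;\leq\;\sum_{k=0}^{\infty}\frac{2^{k}}{4^{k}}\;<\;\infty,$$ while the finitely many terms with $n<N_{0}$ do not affect convergence.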
Notice also that $\left< x,x_{f}+z\right>\in [s]\times[t] $ if and only if $\left<x,x_{f}\right>\in[s]\times [t+z]$.
The set $$H_{z}=\{x\colon\left<x,x_{f_A}+z\right>\in\bigcap_{m\in\omega}\bigcup_{n\geq m} [s_{n}]\times[t_{n}]\}$$ has measure zero and is coded in $M$ for every $z\in 2^{\omega}\cap M$. Thus $r\notin H_{z}$ and $\left<r,r_{f_{A}}+z\right>\notin \dot{A}$. This implies that $r_{f_{A}}\notin A+z$ for every $z\in 2^{\omega}\cap M$, so $(2^{\omega}\cap M)+A\neq 2^{\omega}$ and so $2^{\omega}\cap M$ is strongly meager.
$\square$
[1] T. Bartoszyński, H. Judah, Set Theory: On the Structure of the Real Line, A K Peters, 1995
|
CommonCrawl
|
Is first-order logic more expressive than propositional logic with infinite statements?
I read that the difference between propositional logic and first-order logic is that in the latter, we can quantify over individual objects. However, if infinitely long statements are allowed, it appears to me that statements in first-order logic can be turned into statements in propositional logic by the following process:
Turn existential quantifiers into universal quantifiers: $\exists x .\varphi \left( x\right) \longrightarrow \neg \forall x .\neg \varphi \left( x\right)$
Expand universal quantifiers into (possibly infinite) conjunctions (if the quantifiers are nested, do this repeatedly from the outermost one): $\forall x\in \left\{ a, b,\ldots \right\} .\varphi \left( x\right) \longrightarrow \varphi \left( a\right) \wedge \varphi \left( b\right) \wedge \ldots $
Substitute predicates for their definitions, with the parameters instead of free variables: when $\varphi \left( x\right) : x = 10$, $\varphi \left( a\right) \longrightarrow a = 10$
Is this correct, and is propositional logic with infinite statements as expressive as first-order logic, or are there statements that can't be converted this way?
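To make the process more concrete, here is how I picture steps 1–3 applied to a nested formula, under the (simplifying) assumption that the domain of discourse is enumerated by constants $\left\{a_{0},a_{1},a_{2},\ldots\right\}$: $$\forall x\,\exists y\,\varphi(x,y)\;\longrightarrow\;\bigwedge_{i\in\omega}\neg\bigwedge_{j\in\omega}\neg\varphi(a_{i},a_{j})\;=\;\bigwedge_{i\in\omega}\bigvee_{j\in\omega}\varphi(a_{i},a_{j}),$$ after which each $\varphi(a_{i},a_{j})$ would be replaced by a propositional letter $p_{ij}$ as in step 3.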
Pkkm
Infinitary logic is generally stronger than first-order logic. It's unclear what exactly you're trying to do. Moreover, are you allowing infinite quantifier depth, or only finite one? Even more so, what does it mean $\forall x\in\{1,2,\ldots\}$? The natural numbers are not part of the logic. What if your domain of discourse is uncountable? – Asaf Karagila♦ Aug 11 '13 at 23:12
The $\left\{ 1,2,\ldots \right\}$ was supposed to be an example set. I'll edit the post and hopefully make this less confusing. – Pkkm Aug 11 '13 at 23:18
It's still ambiguous. Also, infinite as in countably infinite or any length? – Asaf Karagila♦ Aug 11 '13 at 23:28
I overlooked uncountability. If the set is uncountable, there can't be an infinite conjunction with predicates on all of its elements, so first-order logic is more expressive in this case, correct? – Pkkm Aug 11 '13 at 23:38
Why not? It seems to me that you really need to start by learning a lot more logic and at least some more set theory, before attempting this topic. – Asaf Karagila♦ Aug 11 '13 at 23:45
The answer to your question is a qualified no. Part of the reason is that we can't assume that every object in our model is named by an individual constant. So for instance, it could be that our model satisfies the sentence $\bigwedge_{i \in I} \neg P(c_i)$, where "$\bigwedge_{i \in I}$" indicates (possibly infinite) conjunction over index set $I$, and $c_i$ are all constants of the language, and yet this same model also satisfies the sentence $\exists x P(x)$. It's just that the object in our model which satisfies $P(x)$ is unnamed.
Of course, you're right that there is a strong analogy between quantifiers and infinite conjunctions/disjunctions in the following sense: if we require that every object in our domain is named by a constant, and if we allow for arbitrary conjuncts/disjuncts, then we can translate the quantified sentences into quantifier-free sentences using (possibly infinite) conjunctions/disjunctions. Logicians sometimes define substitutional quantifiers for this purpose: for instance, letting $\Sigma$ be a new substitutional quantifier, $\Sigma x \varphi(x)$ is true in a model just in case for some constant $c$, $\varphi(c)$ is true in that model, i.e. just in case $\bigvee_{i \in I} \varphi(c_i)$ is true in that model, where $I$ indexes the constants of $L$.
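In symbols, the clause for the substitutional quantifier simply restates the infinite disjunction: $$\mathcal{M}\models\Sigma x\,\varphi(x)\quad\Longleftrightarrow\quad\mathcal{M}\models\varphi(c_{i})\ \text{for some }i\in I\quad\Longleftrightarrow\quad\mathcal{M}\models\bigvee_{i\in I}\varphi(c_{i}).$$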
With that said, an infinitary propositional logic without quantifiers is not the same as a first-order logic with quantifiers. For one thing, in a propositional logic, you can only say $p$ is true or false. Your models aren't collections of objects with structure, but rather are simply truth value assignments for the proposition letters. So it's hard to say in what sense, if any, an infinitary propositional logic is the same as first-order logic without infinitary conjunctions/disjunctions. Their models don't even look alike.
Furthermore, even an infinitary predicate logic without quantifiers fails to be equivalent to first-order logic with quantifiers (but only finite conjunctions/disjunctions). The reason is simple: in first-order logic, there is no sentence which is true exactly when the domain is infinite. However, if the language you invoke has (at least) countably many constants $c_i$, then the sentence $\bigwedge_{\substack{i,j \in \omega \\ i \neq j}} c_i \neq c_j$ can only be true in infinite models. Hence, infinitary predicate logic without quantifiers is not compact, and so can't be equivalent to first-order logic.
Alex Kocurek
|
CommonCrawl
|
August 2004, Volume 4, Issue 3
Advances in Mathematical Biology
Guest Editors: Lansun Chen, Yang Kuang, Shigui Ruan and Glenn Webb
Asymptotic behavior of disease-free equilibriums of an age-structured predator-prey model with disease in the prey
Ovide Arino, Manuel Delgado and Mónica Molina-Becerra
Local stability and instability of the disease-free equilibriums of an age-structured predator-prey model with disease in the prey are examined. The basic idea is to apply the linearized stability principle and the theory of semigroups.
Ovide Arino, Manuel Delgado, Mónica Molina-Becerra. Asymptotic behavior of disease-free equilibriums of an age-structured predator-prey model with disease in the prey. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 501-515. doi: 10.3934/dcdsb.2004.4.501.
A stochastic model for the dynamics of a stage structured population
G. Buffoni, S. Pasquali and G. Gilioli
A stochastic model for the dynamics of a single species of a stage-structured population is presented. The model (in Lagrangian or Monte Carlo formulation) describes the life history of an individual assumed completely determined by the biological processes of development, mortality and reproduction. The dynamics of the overall population is obtained by the time evolution of the number of the individuals and of their physiological age. No other assumption is required on the structure of the biological cycle or on the initial conditions of the population. Both a linear and a nonlinear model have been implemented. The nonlinearity takes into account the feedback of the population size on the mortality rate of the offspring. For the linear case, i.e. when the population grows without any feedback dependent on the population size, the balance equations for the overall population density are written in the Eulerian formalism (equations of Von Foerster type in the deterministic case and of Fokker-Planck type in the stochastic case). The asymptotic solutions to these equations, for sufficiently large time, are in good agreement with the results of the numerical simulations of the Lagrangian model. As a case study the model is applied to simulate the dynamics of the greenhouse whitefly, Trialeurodes vaporariorum (Westwood), a highly polyphagous pest insect, on tomato host plants.
G. Buffoni, S. Pasquali, G. Gilioli. A stochastic model for the dynamics of a stage structured population. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 517-525. doi: 10.3934/dcdsb.2004.4.517.
Intraspecific interference and consumer-resource dynamics
Robert Stephen Cantrell, Chris Cosner and Shigui Ruan
In this paper we first consider a two consumer-one resource model in which one of the consumer species exhibits intraspecific feeding interference but there is no interspecific competition between the two consumer species. We assume that one consumer species exhibits Holling II functional response while the other consumer species exhibits Beddington-DeAngelis functional response. Using dynamical systems theory, it is shown that the two consumer species can coexist upon the single limiting resource in the sense of uniform persistence. Moreover, by constructing a Liapunov function it is shown that the system has a globally stable positive equilibrium. Second, we consider a model with an arbitrary number of consumers and one single limiting resource. By employing practical persistence techniques, it is shown that multiple consumer species can coexist upon a single resource as long as all consumers exhibit sufficiently strong conspecific interference, that is, each of them exhibits Beddington-DeAngelis functional response.
Robert Stephen Cantrell, Chris Cosner, Shigui Ruan. Intraspecific interference and consumer-resource dynamics. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 527-546. doi: 10.3934/dcdsb.2004.4.527.
Permanence of predator-prey system with stage structure
Jing-An Cui and Xinyu Song
We consider a periodic predator-prey system where the prey has a history that takes them through two stages, immature and mature. We provide a sufficient and necessary condition to guarantee the permanence of the system.
Jing-An Cui, Xinyu Song. Permanence of predator-prey system with stage structure. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 547-554. doi: 10.3934/dcdsb.2004.4.547.
A monotone-iterative method for finding periodic solutions of an impulsive competition system on tumor-normal cell interaction
Jiawei Dou, Lan-sun Chen and Kaitai Li
In this paper, a monotone-iterative scheme is established for finding positive periodic solutions of a competition model of tumor-normal cell interaction. The model describes the evolution of a population with normal and tumor cells in a periodically changing environment. This population is under periodical chemotherapeutic treatment. Competition among the two kinds of cells is considered. The mathematical problem involves a coupled system of Lotka-Volterra together with periodically pulsed conditions. The existence of positive periodic solutions is proved by the monotone iterative technique and in a special case, the uniqueness of a periodic solution is obtained by proving that any two periodic solutions have the same average. Moreover, we also show that the system is permanent under the conditions which guarantee the existence of the periodic solution. Some computer simulations are carried out to demonstrate the main results.
Jiawei Dou, Lan-sun Chen, Kaitai Li. A monotone-iterative method for finding periodic solutions of an impulsive competition system on tumor-normal cell interaction. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 555-562. doi: 10.3934/dcdsb.2004.4.555.
Periodic solutions of a class of nonautonomous discrete time semi-ratio-dependent predator-prey systems
Meng Fan and Qian Wang
In this paper, we establish sufficient criteria for the existence of positive periodic solutions for a class of discrete time semi-ratio-dependent predator-prey interaction models based on systems of nonautonomous difference equations. The approach involves the coincidence degree and its related continuation theorem as well as some a priori estimates.
Meng Fan, Qian Wang. Periodic solutions of a class of nonautonomous discrete time semi-ratio-dependent predator-prey systems. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 563-574. doi: 10.3934/dcdsb.2004.4.563.
The dynamics of public goods
Christoph Hauert, Nina Haiden and Karl Sigmund
We analyze the replicator equation for two games closely related with the social dilemma occurring in public goods situations. In one case, players can punish defectors in their group. In the other case, they can choose not to take part in the game. In both cases, interactions are not pairwise and payoffs non-linear. Nevertheless, the qualitative dynamics can be fully analyzed. The games offer potential solutions for the problem of the emergence of cooperation in sizeable groups of non-related individuals -- a basic question in evolutionary biology and economics.
Christoph Hauert, Nina Haiden, Karl Sigmund. The dynamics of public goods. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 575-587. doi: 10.3934/dcdsb.2004.4.575.
Optimal birth control problems for nonlinear age-structured population dynamics
Z.-R. He, M.-S. Wang and Z.-E. Ma
We study the least cost-size problem and the least cost-deviation problem for a nonlinear population model with age-dependence, which takes fertility rate as the control variable. The existence of a unique optimal control and the optimality conditions of first order are investigated by means of Ekeland's variational principle and normal cone technique. Our conclusion extends a known result in the literature.
Z.-R. He, M.-S. Wang, Z.-E. Ma. Optimal birth control problems for nonlinear age-structured population dynamics. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 589-594. doi: 10.3934/dcdsb.2004.4.589.
Impulsive vaccination of sir epidemic models with nonlinear incidence rates
Jing Hui and Lansun Chen
The impulsive vaccination strategies of the epidemic SIR models with nonlinear incidence rates $\beta I^{p}S^{q}$ are considered in this paper. Using the discrete dynamical system determined by the stroboscopic map, we obtain the exact periodic infection-free solution of the impulsive epidemic system and prove that the periodic infection-free solution is globally asymptotically stable. In order to apply vaccination pulses frequently enough so as to eradicate the disease, the threshold for the period of pulsing, i.e. $\tau _{max}$, is derived. Further, by bifurcation theory, we obtain a supercritical bifurcation at this threshold: when $\tau>\tau_{max}$ and $\tau$ is close to $\tau_{max}$, there is a stable positive periodic solution. Throughout the paper, we find that impulsive epidemiological models with nonlinear incidence rates $\beta I^{p}S^{q}$ show a much wider range of dynamical behaviors than do those with bilinear incidence rate $\beta SI$, and our paper extends the previous results. At the same time, theoretical results show that the pulse vaccination strategy is distinguished from the conventional strategies in leading to disease eradication at relatively low values of vaccination; therefore the impulsive vaccination strategy provides a more natural, more effective vaccination strategy.
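As a side illustration of the pulse-vaccination mechanism summarized in this abstract, a minimal numerical sketch follows. It is an illustration only, not the authors' code: the incidence exponents p and q, the vaccination fraction theta, the pulse period tau and all rate constants are invented for the example.

# Illustrative SIR model with nonlinear incidence beta*I^p*S^q and pulse
# vaccination: every tau time units a fraction theta of susceptibles is
# moved to the immune class. All parameter values are assumptions.
mu, beta, gamma = 0.02, 1.5, 0.3      # birth/death, transmission, recovery rates
p, q = 1.2, 1.0                       # incidence exponents
theta, tau = 0.6, 2.0                 # pulse vaccination fraction and period
dt, t_end = 1e-3, 200.0

S, I, R = 0.9, 0.1, 0.0
t, next_pulse = 0.0, tau
while t < t_end:
    if t >= next_pulse:               # impulsive vaccination event
        S, R = (1 - theta) * S, R + theta * S
        next_pulse += tau
    inc = beta * (I ** p) * (S ** q) if I > 0 else 0.0
    dS = mu - inc - mu * S            # Euler steps between pulses
    dI = inc - (gamma + mu) * I
    dR = gamma * I - mu * R
    S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
    t += dt

print("infective fraction at t =", t_end, "is", I)

Varying tau above and below a critical value in such a sketch is one way to visualize the transition between eradication and persistent infection that the abstract describes.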
Jing Hui, Lansun Chen. Impulsive vaccination of sir epidemic models with nonlinear incidence rates. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 595-605. doi: 10.3934/dcdsb.2004.4.595.
The impact of state feedback control on a predator-prey model with functional response
Haiying Jing and Zhaoyu Yang
In this paper, we study the impact of feedback control on a predator-prey model with functional response. It is proven that the position and number of positive equilibria and limit cycles, the parameter domain of stability, and the bifurcations of such a model can be changed by some feedback control which has the form $u=kx+h.$ The main results of this paper show that a constant control has a stronger impact on the properties of this model than a proportional state feedback.
Haiying Jing, Zhaoyu Yang. The impact of state feedback control on a predator-prey model with functional response. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 607-614. doi: 10.3934/dcdsb.2004.4.607.
A note on the stability analysis of pathogen-immune interaction dynamics
Tsuyoshi Kajiwara and Toru Sasaki
The stability analysis of the interior equilibria, whose components are all positive, of nonlinear ordinary differential equation models describing in vivo dynamics of infectious diseases is complicated in general. Liu, "Nonlinear oscillation in models of immune responses to persistent viruses, Theor. Popul. Biol. 52(1997), 224-230" and Murase, Sasaki and Kajiwara, "Stability analysis of pathogen-immune interaction dynamics (submitted)" proved the stability of the interior equilibria of such models using symbolic calculation software on computers. In this paper, proofs of the stability theorems given by Liu and by Murase et al. that do not use symbolic calculation software are presented. Simple algebraic manipulations, properties of determinants, and their derivatives are used. The details of the calculation given by symbolic calculation software can be seen clearly.
Tsuyoshi Kajiwara, Toru Sasaki. A note on the stability analysis of pathogen-immune interaction dynamics. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 615-622. doi: 10.3934/dcdsb.2004.4.615.
Two general models for the simulation of insect population dynamics
Dianmo Li, Zengxiang Gao, Zufei Ma, Baoyu Xie and Zhengjun Wang
Detailed studies of single species population dynamics are important for understanding population behaviour and the analysis of large complex ecosystems. Here we present two general models for simulating insect population dynamics: the distributed delay processes and Poisson process models. In the distributed delay processes model, the simulated population has the characteristic property that the time required for maturation from one stage of growth (instar) to another is directly related to ambient temperature. In this model the parameters DEL and K are significant to the simulated process. The discrete Poisson model deals with the individual development of a group of free entities with random forward movement. These two general component models can be used to simulate the population growth of many insects currently the subject of research interest. The application of the distributed delay processes to the dynamics of the cotton bollworm Helicoverpa armigera is presented. The results show that the simulation data fit the observed data quite well.
Dianmo Li, Zengxiang Gao, Zufei Ma, Baoyu Xie, Zhengjun Wang. Two general models for the simulation of insect population dynamics. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 623-628. doi: 10.3934/dcdsb.2004.4.623.
Allee effect and a catastrophe model of population dynamics
Dianmo Li, Zhen Zhang, Zufei Ma, Baoyu Xie and Rui Wang
Some assumptions of the Logistic Equation are frequently violated. We applied the Allee effect to the Logistic Equation so as to avoid these unrealistic assumptions. Following basic principles of Catastrophe theory, this new model is identical to a Fold catastrophe type model. An ecological interpretation of the results is provided.
Dianmo Li, Zhen Zhang, Zufei Ma, Baoyu Xie, Rui Wang. Allee effect and a catastrophe model of population dynamics. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 629-634. doi: 10.3934/dcdsb.2004.4.629.
Stability analysis for SIS epidemic models with vaccination and constant population size
Jianquan Li and Zhien Ma
This paper investigates two types of SIS epidemic model with vaccination and constant population size to determine the thresholds, equilibria, and stabilities. One of the SIS models is a system of delay differential equations, in which the period of immunity due to vaccination is a constant. The other is a system of ordinary differential equations, in which the loss of immunity due to vaccination is exponential in form. We find all of their thresholds respectively, and compare them. The disease-free equilibrium is globally asymptotically stable if the threshold is not greater than one; the endemic equilibrium is globally asymptotically stable if the threshold is greater than one.
Jianquan Li, Zhien Ma. Stability analysis for SIS epidemic models with vaccination and constant population size. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 635-642. doi: 10.3934/dcdsb.2004.4.635.
Global stability of an age-structured SIRS epidemic model with vaccination
Geni Gupur and Xue-Zhi Li
This paper focuses on the study of an age-structured SIRS epidemic model with a vaccination program. We first give the explicit expression of the reproductive number $ \mathcal{R}(\psi) $ in the presence of vaccine, and show that the infection-free steady state is locally asymptotically stable if $ \mathcal{R}(\psi)<1 $ and unstable if $ \mathcal{R}(\psi)>1 $. Second, we prove that the infection-free state is globally stable if the basic reproductive number $ \mathcal{R}_0 <1 $, and that an endemic equilibrium exists when the reproductive number $ \mathcal{R}(\psi)>1 $.
Geni Gupur, Xue-Zhi Li. Global stability of an age-structured SIRS epidemic model with vaccination. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 643-652. doi: 10.3934/dcdsb.2004.4.643.
Persistence and periodic solutions of a nonautonomous predator-prey diffusion with Holling III functional response and continuous delay
Zhijun Liu and Weidong Wang
A nonautonomous diffusion model with Holling III functional response and continuous time delay is considered in this paper, where all parameters are time dependent and the prey can diffuse between two patches of a heterogeneous environment with barriers between patches, but for the predator the diffusion does not involve a barrier between patches. It is shown that the system is persistent under any diffusion rate effect. Moreover, sufficient conditions that guarantee the existence of a positive periodic solution which is globally asymptotic stable are obtained.
Zhijun Liu, Weidong Wang. Persistence and periodic solutions of a nonautonomous predator-prey diffusion with Holling III functional response and continuous delay. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 653-662. doi: 10.3934/dcdsb.2004.4.653.
Global stability for a chemostat-type model with delayed nutrient recycling
Zhiqi Lu
In this paper, we consider the question of global stability of the positive equilibrium in a chemostat-type system with delayed nutrient recycling. By constructing a Liapunov function, we obtain a sufficient condition for the global stability of the positive equilibrium.
Zhiqi Lu. Global stability for a chemostat-type model with delayed nutrient recycling. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 663-670. doi: 10.3934/dcdsb.2004.4.663.
Asymptotic properties of a delayed SIR epidemic model with density dependent birth rate
Wanbiao Ma and Yasuhiro Takeuchi
In this paper, we consider a delayed $SIR$ epidemic model with a density-dependent birth process. For the model with a larger birth rate, we discuss the asymptotic property of its solutions. Furthermore, we also study the existence of Hopf bifurcation from the endemic equilibrium of the model and the local asymptotic stability of the endemic equilibrium.
Wanbiao Ma, Yasuhiro Takeuchi. Asymptotic properties of a delayed SIR epidemic model with density dependent birth rate. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 671-678. doi: 10.3934/dcdsb.2004.4.671.
Quantifying the danger for Parnassius nomion on Beijing Dongling mountain
Dianmo Li, Zufei Ma and Baoyu Xie
It is a major task of conservation biology research to explore the conditions necessary for species existence and the mechanisms of endangerment [1]. Presently, population viability analysis models mainly focus on a single species, and few of them take into account the influence of inter-species effects on the target species [2][3]. It is more difficult to apply traditional population viability analysis to insects than to birds or mammals. First, insects have complex life histories, small bodies and a vast number of species. For animals with body lengths between 10 m and 1 cm, the number of species increases by 100 times each time the body length is shortened by a factor of 10 [4]. Biologists are far from completely understanding insect species, or even their number, because it is very difficult to obtain the life parameters of wild insect populations. Second, biologists are accustomed to studying the key species of a community, which are often the topmost taxa in the food chain or the dominant species in the community; insect species are rarely found to play a key role independently in ecosystem maintenance or community succession. Last, many insect species become extinct before people know them well. An efficient and comprehensive approach is required to detect why the population of a given insect species is declining and what kind of protective strategies should be applied. In this paper, we have proposed a competition index for the Parnassius nomion species by combining the target species' population dynamics with a diversity index. The results show that changes in the competition index are able to detect the danger of a shrinking population.
Dianmo Li, Zufei Ma, Baoyu Xie. Quantifying the danger for Parnassius nomion on Beijing Dongling mountain. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 679-686. doi: 10.3934/dcdsb.2004.4.679.
Is there a sigmoid growth of Gause's Paramecium caudatum in constant environment
Dianmo Li, Zufei Ma and Baoyu Xie
Gause's experiments on Paramecium caudatum have been regarded as among the most accurate experiments in ecology. Although ecologists have hypothesized that the population dynamics can be approximated by the classical sigmoid curve, there are still some questions as to whether the analytical method is accurate enough in relation to the experimental data, and analytical results are therefore frequently met with doubt. In this study, we estimated some growth parameters based strictly on the life history of Paramecium caudatum and with a more flexible logistic model. Since the intrinsic growth rate values fell in different regions, the population dynamics were considered to follow a complex pattern.
Dianmo Li, Zufei Ma, Baoyu Xie. Is there a sigmoid growth of Gause's Paramecium caudatum in constant environment. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 687-694. doi: 10.3934/dcdsb.2004.4.687.
A continuous density Kolmogorov type model for a migrating fish stock
Kjartan G. Magnússon, Sven Th. Sigurdsson, Petro Babak, Stefán F. Gudmundsson and Eva Hlín Dereksdóttir
A continuous probability density model for the spatial distribution and migration pattern for a pelagic fish stock is described. The model is derived as the continuum limit of a random walk in the plane which leads to an advection-diffusion equation. The direction of the velocity vector is given by the gradient of a "comfort function" which incorporates factors such as temperature, food density, distance to spawning grounds, etc., which are believed to affect the behaviour of the capelin. An application to Barents Sea capelin is presented.
Kjartan G. Magnússon, Sven Th. Sigurdsson, Petro Babak, Stefán F. Gudmundsson, Eva Hlín Dereksdóttir. A continuous density Kolmogorov type model for a migrating fish stock. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 695-704. doi: 10.3934/dcdsb.2004.4.695.
Noise and productivity dependence of spatiotemporal pattern formation in a prey-predator system
H. Malchow, F.M. Hilker and S.V. Petrovskii
The spatiotemporal pattern formation in a prey-predator dynamics is studied numerically. External noise as well as the productivity of the prey population controls the emergence, symmetry and stability of, as well as the transitions between, structures. Diffusive Turing structures and invasion waves are presented as examples.
H. Malchow, F.M. Hilker, S.V. Petrovskii. Noise and productivity dependence of spatiotemporal pattern formation in a prey-predator system. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 705-711. doi: 10.3934/dcdsb.2004.4.705.
A general dynamical theory of foraging in animals
J. G. Ollason and N. Ren
This paper provides a minimally simple theory that accounts for the foraging behaviour of animals. It presents three separate systems of differential equations that predict the selection of diets from various types of food, and also the time-budgets of the occupancy of patches of food without, and with regeneration of food. The theory subsumes the whole of optimal foraging theory as one special case of foraging behaviour defined by the physiological requirements of animals. The theory explains foraging in terms of both the acquisition of food and the utilization of food in the maintenance of life.
J. G. Ollason, N. Ren. A general dynamical theory of foraging in animals. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 713-720. doi: 10.3934/dcdsb.2004.4.713.
The asymptotic behavior of a chemostat model
Zhipeng Qiu, Jun Yu and Yun Zou
In this paper, the chemostat model with stage structure and Beddington-DeAngelis functional response is studied. Sufficient conditions for uniform persistence of this model with delay are obtained via uniform persistence of infinite dimensional dynamical systems; for the model without delay, sufficient conditions for the global asymptotic stability of the positive equilibrium are presented.
Zhipeng Qiu, Jun Yu, Yun Zou. The asymptotic behavior of a chemostat model. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 721-727. doi: 10.3934/dcdsb.2004.4.721.
Stability analysis of a simplified model for the control of testosterone secretion
Hongshan Ren
In [1] a simplified model for the control of testosterone secretion is given by
$ \frac{dR}{dt}=f(T)-b_1R,\qquad\qquad\qquad\qquad $(*)
$ \frac{dT}{dt}=b_2R(t-\tau)-b_3T, $
where $R$ denotes the luteinizing hormone releasing hormone, $T$ denotes the hormone testosterone and the negative feedback function $f(T)$ is a positive monotonic decreasing differentiable function of $T$. The delay $\tau$ is associated with the blood circulation time in the body, and $b_1$, $b_2$ and $b_3$ are positive parameters. In this paper, developing the method given in [2], we establish necessary and sufficient conditions for the steady state of (*) to be asymptotic stable or linearly unstable.
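For illustration only, the delayed system (*) can be integrated with a simple Euler scheme that keeps a buffer of past values of $R$ for the term $R(t-\tau)$. The Hill-type choice of $f(T)$ and all numerical values below are assumptions made purely for this sketch and are not taken from [1] or from the paper.

# Euler integration of dR/dt = f(T) - b1*R, dT/dt = b2*R(t - tau) - b3*T,
# with a buffer storing past values of R to evaluate the delayed term.
def f(T, A=10.0, K=2.0, m=2):
    # an assumed positive, monotonically decreasing feedback function
    return A / (K + T ** m)

b1, b2, b3, tau = 0.1, 0.7, 0.5, 5.0   # illustrative parameter values
dt, t_end = 0.01, 400.0
n_delay = int(round(tau / dt))          # buffer length for R(t - tau)

R, T = 1.0, 1.0
R_hist = [R] * n_delay                  # constant initial history on [-tau, 0]

t = 0.0
while t < t_end:
    R_delayed = R_hist[0]               # approximately R(t - tau)
    dR = f(T) - b1 * R
    dT = b2 * R_delayed - b3 * T
    R, T = R + dt * dR, T + dt * dT
    R_hist = R_hist[1:] + [R]           # shift the buffer forward one step
    t += dt

print("T at t =", t_end, "is", T)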
Hongshan Ren. Stability analysis of a simplified model for the control of testosterone secretion. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 729-738. doi: 10.3934/dcdsb.2004.4.729.
The effect of local prevention in an SIS model with diffusion
Toru Sasaki
The effect of spatially partial prevention of infectious disease is considered as an application of population models in inhomogeneous environments. The area is divided into two rectangles, and the local contact rate between infectives and susceptibles is sufficiently reduced in one rectangle. The dynamics of the infection considered here is that described by an SIS model with diffusion. Then the problem can be reduced to a Fisher type equation, which has been fully studied by many authors, under some conditions. The steady states of the linearized equation are considered, and a Nagylaki type result for predicting whether the infection will become extinct over time or not is obtained. This result leads to some necessary conditions for the extinction of the infection.
Toru Sasaki. The effect of local prevention in an SIS model with diffusion. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 739-746. doi: 10.3934/dcdsb.2004.4.739.
Ratio-dependent predator-prey system with stage structure for prey
Xinyu Song, Liming Cai and U. Neumann
A ratio-dependent predator-prey model with stage structure for the prey is proposed and analyzed, which improves the assumption that each individual prey has the same ability to be captured by the predator. In this paper, the model equations are analyzed with regard to boundedness of solutions, the nature of equilibria, and permanence. We obtain conditions that determine the permanence of the populations. Furthermore, we establish necessary and sufficient conditions for the local stability of the positive equilibrium of the model. By the application of a comparison argument and by exploiting the monotonicity of one equation of the model, we obtain sufficient conditions for the global attractivity of the positive equilibrium.
Xinyu Song, Liming Cai, U. Neumann. Ratio-dependent predator-prey system with stage structure for prey. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 747-758. doi: 10.3934/dcdsb.2004.4.747.
Modelling and analysis of integrated pest management strategy
Sanyi Tang and Lansun Chen
Two impulsive models concerning integrated pest management (IPM) are proposed according to impulsive effects with fixed moments and unfixed moments, respectively. The first model has the potential to protect the natural enemies from extinction, but under some conditions may also lead to extinction of the pest. The second model is constructed according to the practices of IPM, that is, when the pest population reaches the economic injury level, a combination of biological, cultural, and chemical tactics that reduce pests to tolerable levels is used. By using an analytical method, we show that there exists an orbitally asymptotically stable periodic solution with a maximum value no larger than the given economic threshold. Further, the complete expression for the period of the periodic solution is given. Thus, the IPM strategy, justified here for the first time by mathematical models, is more effective than the classical method.
Sanyi Tang, Lansun Chen. Modelling and analysis of integrated pest management strategy. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 759-768. doi: 10.3934/dcdsb.2004.4.759.
Correspondence analysis of body form characteristics of Chinese ethnic groups
Feng-mei Tao, Lan-sun Chen and Li-xian Xia
In this paper, we introduce a method of stepwise correspondence analysis. The mathematical model, criterion of selecting variable and computational procedure of this method are given in the paper. Using this method, we study the relationship among 26 Chinese ethnic groups based on body form characteristics data.
Feng-mei Tao, Lan-sun Chen, Li-xian Xia. Correspondence analysis of body form characteristics of Chinese ethnic groups. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 769-776. doi: 10.3934/dcdsb.2004.4.769.
The mathematical method of studying the reproduction structure of weeds and its application to Bromus sterilis
Svend Christensen, Preben Klarskov Hansen, Guozheng Qi and Jihuai Wang
This article discusses the structure of weed reproduction incorporating the application of a mathematical model. This mathematical methodology enables the construction, testing and application of distribution models for the analysis of the structure of weed reproduction and weed ecology. The mathematical model was applied, at the individual level, to the weed species Bromus sterilis. The application of this method to the weed under competition resulted in an analysis of the overall reproduction structure of the weed, which follows approximately Gaussian distribution patterns, and an analysis of the shoots of the weed plant, which follow approximately sigmoid distribution patterns. It was also discovered that the mathematical distribution models, when applied under specific conditions, could effectively estimate the seed production and total number of shoots of a weed plant. On average, a weed plant has 3 shoots, with each shoot measuring 90 cm in height and being composed of 21 spikelets. Besides the estimation of the total shoots and seed production within the experimental field, one may also apply these mathematical distribution models to estimate the germination rate of the species within the experimental field in following years.
Svend Christensen, Preben Klarskov Hansen, Guozheng Qi, Jihuai Wang. The mathematical method of studying the reproduction structure of weeds and its application to Bromus sterilis. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 777-788. doi: 10.3934/dcdsb.2004.4.777.
Uniform persistence and periodic solution of chemostat-type model with antibiotic
Kaifa Wang and Aijun Fan
A system of functional differential equations is used to model the single microorganism in the chemostat environment with a periodic nutrient and antibiotic input. Based on the technique of Razumikhin, we obtain the sufficient condition for uniform persistence of the microbial population. For general periodic functional differential equations, we obtain a sufficient condition for the existence of periodic solution, therefore, the existence of positive periodic solution to the chemostat-type model is verified.
Kaifa Wang, Aijun Fan. Uniform persistence and periodic solution of chemostat-type model with antibiotic. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 789-795. doi: 10.3934/dcdsb.2004.4.789.
Population dispersal and disease spread
Wendi Wang
An epidemic model is studied to understand the effect of population dispersal on the spread of a disease in two patches. Under the assumption that the dispersal of infectious individuals is barred, it is found that dispersal of susceptible individuals may cause the spread of the disease in one patch even though the disease dies out in each isolated patch. For the case where the disease spreads in each isolated patch, it is shown that suitable dispersal of susceptible individuals can lead to the extinction of the disease in one patch.
Wendi Wang. Population dispersal and disease spread. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 797-804. doi: 10.3934/dcdsb.2004.4.797.
Identifiability of models for clinical trials with noncompliance
Tianfa Xie and Zhong-Zhan Zhang
In this article we focus on clinical trials in which the compliance is measured with random errors, and develop an error-in-variables model for the analysis of the clinical trials. With this model, we separate the efficacy of prescribed treatment from that of the compliance. With additional information correlated with compliance, we prove that the model is identifiable, and get estimators for the parameters of interest, including the parameter reflecting the efficacy of the treatment. Furthermore, we extend the model to stratified populations, and discuss the asymptotic properties of the estimators.
Tianfa Xie, Zhong-Zhan Zhang. Identifiability of models for clinical trials with noncompliance. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 805-811. doi: 10.3934/dcdsb.2004.4.805.
Testing increasing hazard rate for the progression time of dementia
C. Xiong, J.P. Miller, F. Gao, Y. Yan and J.C. Morris
In the longitudinal studies of certain diseases, subjects are assessed periodically. In fact, many Alzheimer's Disease Research Centers (ADRC) in the United States typically assess their subjects annually, resulting in grouped or interval censored data for the progression time from one stage of dementia to a more severe stage of dementia. This paper studies the likelihood ratio test for increasing hazard rate associated with the progression time of dementia based on grouped progression time data. We first give the maximum likelihood estimators (MLEs) for model parameters under the assumption that the hazard rate of the progression time is nondecreasing. We then present the likelihood ratio test for testing the null hypothesis that the hazard rate is constant against the alternative that it is increasing. Finally, the methodology is applied to the dementia progression time from the Consortium to Establish a Registry for Alzheimer's Disease (CERAD). The statistical methodology developed here, although specifically referred to the study of dementia in the paper, can be easily applied to other longitudinal medical studies in which the disease status is categorized according to the severity and the hazard rate associated with the transition time among disease stages is to be tested.
C. Xiong, J.P. Miller, F. Gao, Y. Yan, J.C. Morris. Testing increasing hazard rate for the progression time of dementia. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 813-821. doi: 10.3934/dcdsb.2004.4.813.
Periodic solutions of a discrete nonautonomous Lotka-Volterra predator-prey model with time delays
Rui Xu, M.A.J. Chaplain and F.A. Davidson
A discrete periodic two-species Lotka-Volterra predator-prey model with time delays is investigated. By using Gaines and Mawhin's continuation theorem of coincidence degree theory, a set of easily verifiable sufficient conditions are derived for the existence of positive periodic solutions of the model.
Rui Xu, M.A.J. Chaplain, F.A. Davidson. Periodic solutions of a discrete nonautonomous Lotka-Volterra predator-prey model with time delays. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 823-831. doi: 10.3934/dcdsb.2004.4.823.
Population dynamics of sea bass and young sea bass
Masahiro Yamaguchi, Yasuhiro Takeuchi and Wanbiao Ma
This paper considers population dynamics of sea bass and young sea bass, which are modeled by stage-structured delay-differential equations. It is shown that time delay can stabilize the dynamics. That is, as the time delay increases, the system becomes periodic and stable even if the system without time delay is chaotic.
Masahiro Yamaguchi, Yasuhiro Takeuchi, Wanbiao Ma. Population dynamics of sea bass and young sea bass. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 833-840. doi: 10.3934/dcdsb.2004.4.833.
Dynamics of a discrete age-structured SIS models
Yicang Zhou and Paolo Fergola
Age is an important factor in the dynamics of epidemic processes. Great attention has been paid to continuous age-structured epidemic models, while discrete epidemic models are still in their infancy. In this paper a discrete age-structured SIS epidemic model is formulated. The dynamical behavior of this model is studied. The basic reproductive number is defined, and a threshold for the persistence or extinction of the disease is found.
Yicang Zhou, Paolo Fergola. Dynamics of a discrete age-structured SIS models. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 841-850. doi: 10.3934/dcdsb.2004.4.841.
A simple delayed neural network with large capacity for associative memory
Jianhong Wu and Ruyuan Zhang
We consider periodic solutions of a system of difference equations with delay arising from a discrete neural network. We show that such a small network possesses a huge amount of stable periodic orbits with large domains of attraction if the delay is large, and thus the network has the potential large capacity for associative memory and for temporally periodic pattern recognition.
Jianhong Wu, Ruyuan Zhang. A simple delayed neural network with large capacity for associative memory. Discrete & Continuous Dynamical Systems - B, 2004, 4(3): 851-863. doi: 10.3934/dcdsb.2004.4.851.
|
CommonCrawl
|
Quantification of Long-Range Persistence in Geophysical Time Series: Conventional and Benchmark-Based Improvement Techniques
Annette Witt and Bruce D. Malamud
Surveys in Geophysics volume 34, pages 541–651 (2013)
Time series in the Earth Sciences are often characterized as self-affine long-range persistent, where the power spectral density, S, exhibits a power-law dependence on frequency, f, S(f) ~ f^(−β), with β the persistence strength. For modelling purposes, it is important to determine the strength of self-affine long-range persistence β as precisely as possible and to quantify the uncertainty of this estimate. After an extensive review and discussion of asymptotic and the more specific case of self-affine long-range persistence, we compare four common analysis techniques for quantifying self-affine long-range persistence: (a) rescaled range (R/S) analysis, (b) semivariogram analysis, (c) detrended fluctuation analysis, and (d) power spectral analysis. To evaluate these methods, we construct ensembles of synthetic self-affine noises and motions with different (1) time series lengths N = 64, 128, 256, …, 131,072, (2) modelled persistence strengths β_model = −1.0, −0.8, −0.6, …, 4.0, and (3) one-point probability distributions (Gaussian; log-normal: coefficient of variation c_v = 0.0 to 2.0; Levy: tail parameter a = 1.0 to 2.0) and evaluate the four techniques by statistically comparing their performance. Over 17,000 sets of parameters are produced, each characterizing a given process; for each process type, 100 realizations are created. The four techniques give the following results in terms of systematic error (bias = average performance test results for β over 100 realizations minus modelled β) and random error (standard deviation of measured β over 100 realizations): (1) Hurst rescaled range (R/S) analysis is not recommended for use due to large systematic errors. (2) Semivariogram analysis shows no systematic errors but large random errors for self-affine noises with 1.2 ≤ β ≤ 2.8. (3) Detrended fluctuation analysis is well suited for time series with thin-tailed probability distributions and for persistence strengths of β ≥ 0.0. (4) Spectral techniques perform the best of all four techniques: for self-affine noises with positive persistence (β ≥ 0.0) and symmetric one-point distributions, they have no systematic errors and, compared to the other three techniques, small random errors; for anti-persistent self-affine noises (β < 0.0) and asymmetric one-point probability distributions, spectral techniques have small systematic and random errors. For quantifying the strength of long-range persistence of a time series, benchmark-based improvements to the estimator predicated on the performance for self-affine noises with the same time series length and one-point probability distribution are proposed. This scheme adjusts for the systematic errors of the considered technique and results in realistic 95 % confidence intervals for the estimated strength of persistence. We finish this paper by quantifying long-range persistence (and corresponding uncertainties) of three geophysical time series (palaeotemperature, river discharge, and Auroral electrojet index), with the three representing three different types of probability distribution (Gaussian, log-normal, and Levy, respectively).
Time series can be found in many areas of the Earth Sciences and other disciplines. After obvious periodicities and trends have been removed from a time series, the stochastic component remains. This can be broadly broken up into two parts: (1) the statistical frequency-size distribution of values (how many values at a given size) and (2) the correlations between those values (how successive values cluster together, or the memory in the time series). In this paper, and because of their importance and use in the broad Earth Sciences, we will compare the strengths and weaknesses of commonly used measures for quantifying a frequently encountered type of memory, long-range persistence, also known as long-memory or long-range correlations.
This paper is organized as follows. In this introduction section we introduce long-range persistence and its importance in the Earth Sciences. We then provide in Sect. 2 a brief background to processes and time series and in Sect. 3 a more detailed background to long-range persistence. Section 4 describes the synthetic time series construction and presentation of the synthetic noises (with normal, log-normal, and Levy one-point probability distributions) that we will use for evaluating the strength of long-range persistence. This is followed in Sect. 5 (time domain techniques) and Sect. 6 (frequency-domain techniques) with a description of several prominent techniques (Hurst rescaled range analysis, semivariogram analysis, detrended fluctuation analysis, and power spectral analysis) for measuring the strength of long-range persistence. Section 7 presents the results of the performance analyses of the techniques, with in Sect. 8 a discussion of the results. In Sect. 9, benchmark-based improvements to the estimators for long-range dependence that are based on the techniques described in Sects. 5 and 6 are introduced. Section 10 is devoted to applying these tools to characterize the long-range persistence of three geophysical time series. These three time series—palaeotemperature, river discharge, and Auroral electrojet index—represent three different types of one-point probability distribution—Gaussian, log-normal, and Levy, respectively. Finally, Sect. 11 gives an overall summary and discussion.
After the paper's main text, five appendices give details of the construction of synthetic noises used in this paper and the fitting of power laws to data. Additionally, to accompany this paper, are four sets of electronic supplementary material: (1) 1,260 synthetic fractional noise examples and an R program for creating them, (2) an R program for the user to run the five types of long-range persistence analyses described in this paper, (3) an Excel spread sheet which includes detailed summary results of the performance tests applied here to 6,500 different sets of time series parameters, and a calibration spreadsheet/graph for the user to do benchmark-based improvement techniques, and (4) a PDF file with the 41 figures from this paper at high resolution.
We now introduce the idea of long-range persistence in the context of the Earth Sciences, with many of these ideas explored in more depth in later sections. Many time series in the Earth Sciences exhibit persistence (memory) where successive values are positively correlated; big values tend to follow big and small values follow small. The correlations are the statistical dependence of directly and distantly neighboured values in the time series. Besides correlations caused by periodic components, two types of correlations are often considered in the statistical modelling of time series: short-range (Priestley 1981; Box et al. 1994) and long-range (Beran 1994; Taqqu and Samorodnitsky 1992). Short-range correlations (persistence) are characterized by a decay in the autocorrelation function that is bounded by an exponential decay for large lags; in other words, a fixed number of preceding values influence the next value in the time series. In contrast, long-range correlated time series (of which a specific subclass is sometimes referred to as fractional noises or 1/f noises) are such that any given value is influenced by 'all' preceding values of the time series and are characterized by a power-law decay (exact or asymptotic) of the correlation between values as a function of the temporal distance (or lag) between them.
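Stated loosely, and anticipating the spectral exponent β used throughout this paper, the two regimes can be contrasted through the large-lag behaviour of the autocorrelation function C(k); for a stationary long-range persistent noise with 0 < β < 1 the standard asymptotic forms are $$C(k)\;\leq\;C_{0}\,e^{-k/\tau_{c}}\quad\text{(short-range)},\qquad C(k)\;\sim\;C_{0}\,k^{-(1-\beta)}\quad\text{(long-range)},\qquad k\to\infty,$$ where $\tau_{c}$ is a finite correlation time; the precise conditions are given in the more detailed treatment of long-range persistence below.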
This power-law decay of values can be better understood in the context of self-similarity and self-affinity. Mandelbrot (1967) introduced the idea of self-similarity (and subsequently fractals) in the context of the coast of Great Britain where the same approximate coastal shape is found at multiple scales. He found a power-law relationship between the total length of the coast as a function of the segment length, with the power-law exponent parameter called the fractal dimension. The concept of fractals to describe spatial objects has become widely used in the Earth Sciences (in addition to other disciplines). Mandelbrot and van Ness (1968) extended the idea of self-similarity in spatial objects to time series, calling the latter a self-affine fractal or a self-affine time series when appropriately rescaling the two axes produces a time series that is statistically similar.
In a self-affine time series, the strength of the variations at a given frequency varies as a power-law function of that frequency; thus, a large range of frequencies is involved. In other words, any given value in a time series is influenced by all other values preceding it, with the values themselves forming a self-similar pattern and the self-affine time series exhibiting, by definition, long-range persistence. The strength of the long-range correlations can be related to the fractal dimension (Voss 1985; Klinkenberg 1994) and influences the efficacy and appropriateness of the algorithms chosen to quantify long-range persistence.
Self-affine time series (long-range persistence) have been discussed and documented for many processes in the Earth Sciences. Examples include river run-off and precipitation (Hurst 1951; Mandelbrot and van Ness 1968; Montanari et al. 1996; Kantelhardt et al. 2003; Mudelsee 2007; Khaliq et al. 2009), atmospheric variability (Govindan et al. 2002), temperatures over short to very long time scales (Pelletier and Turcotte 1999; Fraedrich and Blender 2003), fluctuations of the North-Atlantic Oscillation index (Collette and Ausloos 2004), surface wind speeds (Govindan and Kantz 2004), the geomagnetic auroral electrojet index (Chapman et al. 2005), geomagnetic variability (Anh et al. 2007), and ozone records (Kiss et al. 2007).
Although long-range persistence has been shown to be a part of many geophysical records, physical explanations for this type of behaviour and geophysical models that describe this property appropriately are less common. In one example, Pelletier and Turcotte (1997) modelled long-range persistence found in climatological and hydrological time series with an advection–diffusion model of heat and water vapour in the atmosphere. In another example, Blender and Fraedrich (2003) modelled long-range persistent surface temperatures by coupled atmosphere–ocean models and found different persistence strengths for ocean and coastal areas. In a third example, Mudelsee (2007) proposed a hydrological model, where a superposition of short-range dependent processes with different model parameters results in a long-range persistent process; he modelled river discharge as the spatial aggregation of mutually independent reservoirs (which he assumed to be first-order autoregressive processes).
Long-range persistent behaviour occurs also in a few (but not in all) models of self-organized criticality (Bak et al. 1987; Turcotte 1999; Hergarten 2002; Kwapień and Drożdż 2012); as an example the Bak–Sneppen model (Bak and Sneppen 1993; Daerden and Vanderzande 1996) is a simple model of co-evolution between interacting species and has been used to describe evolutionary biological processes. The Bak–Sneppen model has also been extended to solar and geophysical phenomena such as X-ray bursts at the Sun's surface (Bershadskii and Sreenivasan 2003), solar flares (Meirelles et al. 2010), and for Earth's magnetic field reversals (Papa et al. 2012). Nagler and Claussen (2005) found that cellular automata models (i.e. grid-based models with simple nearest-neighbour rules of interaction) can also generate long-range persistent behaviour.
Physical explanations and models for long-range persistence are certainly a strong step forward in the published literature, rather than 'just' documentation of persistence (based on the statistical properties of measured data) itself. However, these physical explanations in the community are often confounded by the following: (1) a confusion of whether asymptotic or the more specific case of self-affine long-range persistence is being explored; (2) in the case of some models, such as 'toy' cellular automata models and some 'philosophical' models, a lack of sensitivity in the model itself, so that any output tends towards some sort of universal behaviour; and (3) sometimes non-rigorous and visual comparison of any model output (which itself is based on a simplification of the physical explanations) with 'reality'. As such, these physical explanations and models are welcome, but are often met with a bit of scepticism by peers in any given community (e.g., see Frigg 2003).
Long-range correlations are also generic to many chaotic systems (Manneville 1980; Procaccia and Schuster 1983; Geisel et al. 1985, 1987), for which a large class of models in the geosciences has been designed. Furthermore, over the last decade it has become clear that long-range correlations are not only important for describing the clustering of the time series values (i.e. big or small values clustering together), but are also one of the key parameters for describing the return times of and correlations between values in a series of extremes over a given threshold (Altmann and Kantz 2005; Bunde et al. 2005; Blender et al. 2008) and for characterizing the scaling of linear trends in short segments of the considered time series (Bunde and Lennartz 2012).
Most empirical studies of self-affinity and long-range persistence compare different techniques or discuss the minimal length of the time series to ensure reliable estimates of the strength of long-range dependence. There are few (e.g., Malamud and Turcotte 1999a; Velasco 2000) systematic studies on the influence of one-point probability distributions (e.g., normal vs. other distributions) on the performance of the estimators. As many time series in the geosciences have a one-point probability density that is heavily non-Gaussian, we will in this paper systematically examine different synthetic time series with varying strengths of long-range persistence and different statistical distributions. By doing so, we will repeat and review parts of what has been found previously, confirming and/or highlighting major issues, but also systematically examine non-Gaussian time series in a manner previously not done, particularly with respect to heavy-tailed frequency-size probability distributions. We will thus establish the degree of utility of common techniques used in the Earth Sciences for examining the presence or absence, and strength, of long-range persistence, by using synthetic time series with probability distributions and number of data values similar to those commonly found in the geosciences.
In this section we give a brief background to processes and time series, along with an introduction to three geophysical time series examples that we consider in this paper. Records of geophysical processes and realizations of their models can be represented by a time series, x t , t = 1, 2, …, N, with t denoting the time index of successive measurements of x t separated by a sampling interval Δ (including units), and N the number of observed data points. The (sample) mean \( \bar{x} \) and (sample) variance \( \sigma_{x}^{2} \) of a time series are as follows:
$$ \bar{x} = \frac{1}{N}\sum\limits_{t = 1}^{N} {x_{t} } ,\quad \sigma_{x}^{2} = \frac{1}{N}\sum\limits_{t = 1}^{N} {\left( {x_{t} - \bar{x}} \right)^{2} } . \quad (1) $$
The (sample) standard deviation σ x is the square root of the (sample) variance. A table of variables used in this paper is given in Table 1.
Table 1 Notation and abbreviations
We distinguish here between a process and a time series. An example of a stochastic process is a first-order autoregressive (AR(1)) process:
$$ x_{t} = \phi_{1} x_{t - 1} + \varepsilon_{t} \quad (2) $$
with ϕ 1 a constant (−1 < ϕ 1 < 1), ε t a white noise, and the value at time t (i.e. x t ) determined by the constant, white noise, and the value at time t–1 (i.e. x t–1). This is a very specific process given by Eq. (2). An example of a time series would be a realization of this process. We will discuss in more depth this AR(1) process in Sect. 3.1.
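As a concrete illustration (a minimal sketch of our own, not one of the supplementary R programs; the object names N, phi1, eps, and x are ours), a realization of the AR(1) process of Eq. (2) can be generated in R as follows:

# Minimal sketch: one realization of the AR(1) process of Eq. (2), x_t = phi1 * x_{t-1} + eps_t
set.seed(1)                            # for reproducibility
N    <- 1024                           # number of values in the realization
phi1 <- 0.8                            # the constant phi_1, with -1 < phi1 < 1
eps  <- rnorm(N, mean = 0, sd = 1)     # Gaussian white noise
x    <- numeric(N)
x[1] <- eps[1]
for (t in 2:N) x[t] <- phi1 * x[t - 1] + eps[t]
# R's built-in simulator gives an equivalent realization:
# x <- as.numeric(arima.sim(model = list(ar = phi1), n = N))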
We can also have other processes which are not described by a simple set of equations, for example, geoprocesses (e.g., climate dynamics, plate tectonics) or a large experimental set-up where the results of the experiment are data; the process in the latter case is the physical or computational interactions in the experiment. In the geosciences, often just a single or a very few realizations of a process are available (e.g., temperature records, recordings of seismicity), unless one does extensive model simulations, where hundreds to thousands of realizations of a given process might be created. Each realization of a process is called a time series. In the geosciences, with (often) just one time series, which is itself one realization of a process, we then attempt to infer from that single realization (the time series), properties of the process. The process can be considered to be the 'underlying' physical mechanism or equation or theory for a given system.
We now consider three diverse examples of time series from the Earth Sciences, which after presenting here, we will return to in Sect. 10 as geophysical examples to which we apply the long-range persistence techniques evaluated in this paper. The first time series (Fig. 1a) is the bi-decadal δ18O record of the Greenland Ice Sheet Project Two (GISP2) data (Stuiver et al. 1995) for the last 10,000 years (500 values at 20 year intervals) and shows the departure of the ratio of 18O to 16O isotopes in the core versus a standard, in parts per mil (parts per thousand or ‰). This measure is considered a proxy for Greenland air temperature (Stuiver et al. 1995). The second time series (Fig. 1b) is daily discharge from the Elkhorn River (USGS 2012) in Nebraska at Waterloo (USGS station 06800500) with a drainage area of 17,800 km2 and for the 73 year period 1 January 1929 to 30 December 2001. The third time series (Fig. 1c) is the geomagnetic auroral electrojet index (AE index) sampled per minute (Kyoto University 2012), both the original series (Fig. 1c) and the first difference (Fig. 1d), and quantifies variations of the auroral zone horizontal magnetic field activity (Davis and Sugiura 1966) of the Northern Hemisphere.
Three examples of geophysical time series exhibiting long-range persistence. a Bi-decadal oxygen isotope data set δ18O (proxy for palaeotemperature) from Greenland Ice Sheet Project Two (GISP2) for the last 10,000 years (Stuiver et al. 1995), with 500 values given at 20 year intervals. b Discharge of the Elkhorn river (at Waterloo, Nebraska, USA) sampled daily for the period from 01 January 1929 to 30 December 2001 (USGS 2012). c The geomagnetic auroral electrojet (AE) index sampled per minute for the 24 h period of 01 February 1978 (Kyoto University 2012). d The differenced AE index, \( \Delta x_{\text{AE}} (t) = x_{\text{AE}} (t) - x_{\text{AE}} (t - 1) \) from (c), with Δ = 1 min; note that the units of Δx AE are the units of x AE divided by minutes. To the right of each time series are given the normalized histograms of the data sets with best-fitting models for one-point probability densities, with those probabilities corresponding to (a) and (b) on a linear axis, and (d) the probability given on a logarithmic axis
For each of the three time series in Fig. 1a,b,d are given the data in time (left) and their respective probability densities and underlying probability distributions (right). Each time series is equally spaced in time, with respective temporal spacing as follows: palaeotemperature Δ = 20 years, river discharge Δ = 1 day, and AE index Δ = 1 min (minute). However, the visual appearance when the three time series are compared is different. These 'time impressions' rely on the statistical frequency-size distribution of values (how many values at a given size) and the correlation between those values (how successive values cluster together, or the memory in the time series).
Visual examination of the probability distributions (Fig. 1, right) of the three time series confirms that they capture what we see in the time series (left) and provides some insight into their statistical character. The distribution of values in the time series x temp (Fig. 1a) is broadly symmetric—with a mean value at about −34.8 [per mil] and with few extremes lower than −36 [per mil] or greater than −34 [per mil]. We see an underlying probability distribution that is symmetric, and most likely Gaussian.
The river discharge series shown in Fig. 1b consists of positive values 0 ≤ x discharge ≤ 2,656 m3 s−1. Note that two values are larger than 1,500 m3 s−1 and not shown on the graph. Its underlying probability distribution shown to the right is highly asymmetric; in other words, there are very few very large values (x discharge > 500 m3 s−1) and many smaller values, a distribution with a long tail of larger values on the right-hand side. This distribution can be approximated by a log-normal distribution.
The differenced AE index Δx AE series presented in Fig. 1d has values between −120 and 140 [nT min−1] and is approximately symmetric around zero. Despite its symmetry, its underlying probability distribution is different from the Gaussian-like distributed palaeotemperature series x temp presented in Fig. 1a. Here, the fraction of values in the centre and at the very tails of the distribution is larger, showing double-sided power-law behaviour of the probability distribution (Pinto et al. 2012). These probability densities can be approximated by a Levy probability distribution.
While correlations within each of the three types of geophysical time series given in Fig. 1 (left) are more difficult to compare visually, all three time series exhibit some persistence: large values tend to follow large ones, and small values tend to follow small ones. The relative ordering of small, medium, and large values creates clusters (or lack of clusters) which we can make some attempts to observe visually. The palaeotemperature series (Fig. 1a) appears to have small clusters, contrasting with the discharge series (Fig. 1b) and the differenced AE index series (Fig. 1d), which appear to have larger clusters. One might argue, although it is difficult to do this visually, that the latter two time series therefore exhibit a higher 'strength' of persistence. Measures for quantifying persistence strength will be introduced formally in Sect. 3.1. We can also look at the roughness or 'noisiness' of the time series. The palaeotemperature series (Fig. 1a) appears to have the most scatter followed by the river discharge (Fig. 1b) and the differenced AE index (Fig. 1d), although, again, it is difficult to compare these visually, between clearly very different types of time series. These considerations show that it is sometimes difficult to grasp the strength of persistence visually from the time series itself.
One method commonly used (e.g., Tukey 1977; Andrienko and Andrienko 2005) to examine correlations between pairs of values at lag τ for a given time series is to plot x t+τ on the y-axis and x t on the x-axis, in other words lagged scatter plots. In Fig. 2, we give lagged scatter plots of the three geophysical time series shown in Fig. 1, each shown for lag τ = 1 (with units depending on the respective units of each time series). The resultant graphs give a measure of the dependence on the preceding values, with overall positive correlation given by a positive diagonal line. The ellipse-shaped scatter plots in Fig. 2b,c indicate correlations, whereas the scatter in Fig. 2a,d indicates much less dependence of a given value on its preceding value (i.e. less correlation for a lag τ = 1). However, one could consider other lags (e.g., instead of a lag of 1 day for the discharge, one might consider a lag of 1 year) or consider a range of lags together, from short-range in time to long-range. More quantitative techniques for considering the strength of correlations (persistence) will be introduced in the next section (Sect. 3), where we formally define persistence and persistence strength.
Lagged scatter plots of the three geophysical time series shown in Fig. 1. a Bi-decadal oxygen isotope data set δ18O (proxy for palaeotemperature). b Discharge of the Elkhorn river. c The geomagnetic auroral electrojet (AE) index. d The differenced geomagnetic auroral electrojet index. For each time series from Fig. 1, on the y-axis are shown x t+1 values and on the x-axis x t , giving their dependence on the preceding values
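A lagged scatter plot of this kind is straightforward to produce; the sketch below is our own illustration (the function name lagged_scatter and its arguments are ours) and plots x t+τ against x t for a chosen lag τ:

# Minimal sketch: lagged scatter plot of x_{t+tau} versus x_t, for a time series x and lag tau
lagged_scatter <- function(x, tau = 1) {
  N <- length(x)
  plot(x[1:(N - tau)], x[(1 + tau):N],
       xlab = expression(x[t]), ylab = expression(x[t + tau]))
}
# Example usage, with the AR(1) realization x from the earlier sketch:
# lagged_scatter(x, tau = 1)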
Long-Range Persistence
In this section we first introduce a general quantitative description of correlations in the context of the autocorrelation function and with examples from short-range persistent models (Sect. 3.1). We then give a formal definition of long-range persistence along with a discussion of stationarity (Sect. 3.2), examples of long-range persistent time series and processes from the social and physical sciences (Sect. 3.3), a discussion of asymptotic long-range persistence versus self-affinity (Sect. 3.4), and a brief theoretical overview of white noise and Brownian motion (Sect. 3.5) and conclude with a discussion and overview of fractional noises and motions (Sect. 3.6).
As introduced in Sects. 1 and 2, correlations describe the statistical dependence of directly and distantly neighboured values in a process. These statistical dependencies can be assessed in many different ways, including joint probability distributions between neighbouring values that are separated by a given lag and quantitative measures for the strength of interdependence, such as mutual information (e.g., Shannon and Weaver 1949) or correlation coefficients (e.g., Matheron 1963). In the statistical modelling of time series (realizations of a process), two types of correlations (persistence) can be considered:
Short-range correlations where values are correlated to other values that are in a close temporal neighbourhood with one another, that is, values are correlated with one another at short lags in time (Priestley 1981; Box et al. 1994).
Long-range correlations where all or almost all values are correlated with one another, that is, values are correlated with one another at very long lags in time (Beran 1994; Taqqu and Samorodnitsky 1992).
Persistence is where large values tend to follow large ones, and small values tend to follow small ones, on average more of the time than if the time series were uncorrelated. This contrasts with anti-persistence, where large values tend to follow small ones and small values large ones. For both persistence and anti-persistence, one can have a strength that varies from weak to very strong. We will consider in this paper models (processes) for both persistence and anti-persistence.
One technique by which the persistence (or anti-persistence) of a time series can be quantified is the autocorrelation function. The autocorrelation function C(τ), for a given lag τ, is defined as follows (Box et al. 1994):
$$ C\left( \tau \right) = \frac{1}{{\sigma_{x}^{2} }}\frac{1}{N - \tau }\sum\limits_{t = 1}^{N - \tau } {(x_{t} - \bar{x})(x_{t + \tau } - \bar{x})} \quad (3) $$
where again \( \bar{x} \) is the sample mean, \( \sigma_{x}^{2} \) the sample variance (Eq. 1), and N the number of values in the time series. Here one multiplies a given value of the time series x t (mean removed) with the value x t+τ (mean removed), for τ steps later (the lag), sums them up, and then normalizes appropriately. The autocorrelation function of a process is the ensemble average of the autocorrelation function applied to each of many time series (realizations of the process).
For zero lag (τ = 0 in Eq. 3), and using the definition for variance (Eq. 1), the autocorrelation function is C(0) = 1.0. For processes considered in this paper, we find that as the lag, τ, increases, τ = 1, 2, …, (N − 1), the autocorrelation function C(τ) decreases and the correlation between x t+τ and x t decreases. Positive values of C(τ) indicate persistence, negative values indicate anti-persistence, and zero values indicate no correlation. Various statistical tests exist (e.g., the Q K statistic, Box and Pierce 1970) that take into account the sample size of the time series, and the values of C(τ) for those τ calculated, to determine the significance with which the hypothesis that the time series is uncorrelated can be rejected. A plot of C(τ) versus τ is known as a correlogram. A rapid decay of the correlogram indicates short-range correlations, and a slow decay indicates long-range correlations.
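The autocorrelation function of Eq. (3) can be coded directly; the sketch below is ours (the function name autocorr is an assumption) and uses the normalization of Eq. (3), which differs slightly from that of R's built-in acf() function (the latter effectively divides by N rather than by N − τ):

# Minimal sketch: autocorrelation function C(tau) as defined in Eq. (3), for lags 0..tau.max
autocorr <- function(x, tau.max) {
  N    <- length(x)
  xbar <- mean(x)
  s2   <- mean((x - xbar)^2)                       # sample variance as in Eq. (1)
  sapply(0:tau.max, function(tau)
    sum((x[1:(N - tau)] - xbar) * (x[(1 + tau):N] - xbar)) / ((N - tau) * s2))
}
# Correlogram of an AR(1) realization x, compared with the theoretical phi1^tau of Eq. (5):
# C <- autocorr(x, 70); plot(0:70, C); lines(0:70, phi1^(0:70), lty = 2)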
A number of fields use time series models based on short-range persistence (e.g., hydrology, Bras and Rodriguez-Iturbe 1993). As an illustration of the autocorrelation function, we will apply it to a short-range persistent model. Several empirical models have been used to generate time series with short-range correlations (persistence) (Thomas and Hugget 1980; Box et al. 1994). Here we use the AR(1) (autoregressive order 1) process introduced in Eq. (2). In Fig. 3 we give four realizations of an AR(1) process for four different values of the constant ϕ 1 = 0.0, 0.2, 0.4, 0.8. With increasing values of ϕ 1, the persistence (and clustering) becomes stronger, as evidenced by large values becoming more likely to follow large ones, and small values followed by small ones; we also observe for increasing ϕ 1 that the variance of the values in each realization increases. We apply the autocorrelation function C(τ) (Eq. 3) to each time series given in Fig. 3 and give the resulting correlograms in Fig. 4.
Realizations of short-range persistence autoregressive (AR(1)) processes from Eq. (2) with the parameter ϕ 1 changing from top to bottom as indicated in the figure panels. In each case, the white noise ε t used in Eq. (2) has mean 0 and standard deviation 1
Correlograms of four AR(1) time series. The autocorrelation function C(τ) in Eq. (3) is applied to the four AR(1) time series shown in Fig. 3 with the parameter ϕ 1 changing from top to bottom as indicated in the figure panels, for lags 0 ≤ τ ≤ 70 (unitless), with results shown in small circles. Also shown (dashed line) is the theoretical prediction for AR(1) process, \( C\left( \tau \right) = \phi_{1}^{\tau } \) (Eq. 5)
The absolute value of the autocorrelation function for short-range correlations is bounded by an exponential decay (Beran 1994):
$$ \left| {C\left( \tau \right)} \right| \le \kappa_{0} \exp \left( { - \kappa \tau } \right), \quad (4) $$
where κ 0 and κ are constants. For an AR(1) process (Eq. 2), if we let κ 0 = 1 and \( \exp \left( { - \kappa } \right) = \phi_{1} \) in Eq. (4), with −1 < ϕ 1 < 1 (a condition for the process to be stationary), then, at lag τ, the autocorrelation function of the AR(1) process can be shown to be (Box et al. 1994; Swan and Sandilands 1995):
$$ C\left( \tau \right) = \phi_{1}^{\tau } . \quad (5) $$
We plot this autocorrelation function of the AR(1) process (Eq. 5) in Fig. 4 (dashed lines) and find excellent agreement with each of the four realizations.
Other examples of empirical models for short-range persistence in time series include the moving average (MA) model and the combination of the AR and MA models to create the ARMA model. Reviews of many of these models are given in Box et al. (1994) and Chatfield (1996). There are many applications of short-range persistent models in the social and physical sciences, ranging from river flows (e.g., Salas 1993), and ecology (e.g., Ives et al. 2010) to telecommunication networks (e.g., Adas 1997).
As a further example of the autocorrelation function applied to time series, in Fig. 5, we show the correlogram of the three geophysical time series discussed in Sect. 2 (see Fig. 1). The autocorrelation functions shown in Fig. 5a (palaeotemperature) and Fig. 5b (river discharge) decay slowly to zero over dozens of lag values and thus indicate correlations. One potential indication of long-range (as opposed to short-range) correlations is this slow decay rate of the autocorrelation function. We will find later (Sect. 10) that these correlations are in fact long-range, but for the moment, visually, this conclusion cannot be made. The autocorrelation function of the river discharge time series shown in Fig. 5b shows additional periodic components which reflect the seasonal character of the time series. In Fig. 5c (differenced AE index) the autocorrelation function does not show correlations; in Sect. 10 we will evaluate whether there is any long-range anti-persistence in the time series, but again, visually, we cannot make this conclusion at this point. We now introduce more formally and generally long-range persistence.
Autocorrelation function of the three geophysical time series shown in Fig. 1, given as a function of increasing lag. a Bi-decadal oxygen isotope data set δ18O (proxy for palaeotemperature). b Discharge of the Elkhorn river. c The differenced geomagnetic auroral electrojet index
Formal Definition of Long-range Persistence
Long-range persistence is a common property of records of the variation of spatially or temporally aggregated variables (Beran 1994). In contrast to short-range persistent processes, a long-range persistent process exhibits a power-law scaling of the autocorrelation function (Eq. 3) such that (Beran 1994, p. 64)
$$ \left| {C(\tau )} \right|\sim \tau^{ - (1 - \beta )} ,\quad \tau \to \infty ,\quad - 1 < \beta < 1, \quad (6) $$
holds for large time lags τ. This is a formal definition of long-range persistence. The parameter β is the strength of long-range persistence, with β = 0 a process that has no long-range persistence between values, β > 0 long-range persistence, and β < 0 long-range anti-persistence. We will discuss the parameter β in more detail in Sect. 3.4. The autocorrelation function is, however, limited over the range with which it can evaluate the long-range persistence strength of a process (if it is long range), −1 < β < 1. We therefore turn to the spectral domain, for a definition which holds for a larger range of β.
In the spectral domain, the power spectral density, S, measures the frequency content of a process. Over many realizations, and for N very large, the average measured S at a given frequency will approach the actual process's power at that frequency. To avoid a detailed technical explanation here, we will discuss in depth the calculation of S, which is based on the Fourier transform, in Sect. 6. A process can be defined as long-range persistent if S (averaged over multiple realizations) scales asymptotically as a power law for frequencies close to the origin (f → 0) (Beran 1994):
$$ S\left( f \right)\sim f^{ - \beta } , \quad (7) $$
where the power-law exponent, β, measures the strength of persistence. Averaged over many realizations, the power spectral density of the process will approach a scatter-free power-law curve as the number of realizations increases to large numbers.
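As an informal illustration of Eq. (7) (the spectral techniques actually evaluated in this paper, log-periodogram regression and the Whittle estimator, are described in Sect. 6), a rough estimate of β can be read off from the slope of the raw periodogram on logarithmic axes. The sketch below is ours, and the function name beta_periodogram is an assumption:

# Minimal sketch: rough estimate of beta from the raw periodogram, S(f) ~ f^(-beta) (Eq. 7)
beta_periodogram <- function(x) {
  sp  <- spec.pgram(x, taper = 0, demean = TRUE, detrend = FALSE, plot = FALSE)
  fit <- lm(log(sp$spec) ~ log(sp$freq))   # least-squares fit in log-log coordinates
  -as.numeric(coef(fit)[2])                # beta is minus the fitted slope
}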
Another way to define long-range persistence is in terms of the square of the fluctuation function, F 2 (Peng et al. 1992):
$$ F^{2} \left( l \right) = \frac{1}{{\left[ {N/l} \right]}}\sum\limits_{i = 0}^{{\left[ {N/l} \right] - 1}} {\sigma^{2} \left[ {x_{il + 1} ,x_{il + 2} , \ldots ,x_{il + l} } \right]} \quad (8) $$
obtained by dividing the time series x t into non-overlapping segments of length l (l < N), and for each successive segment calculating the variance of the x t values, \( \sigma_{x}^{2} \), and then taking the mean, \( \overline{{\sigma_{x}^{2} }} \). The square brackets in \( \sigma^{2}\)[ ] indicate taking the variance over the terms in the bracket. The variables l and N are always integers. In the summation range, for the case that N/l is non-integer, we take the largest integer that is less than N/l, which is noted in Eq. (8) by [N/l]. For the cases of a long-range persistent time series with β > 1 the power-law shape of the power spectral density (Eq. 7) is equivalent to a power-law scaling of the fluctuation function (Peng et al. 1992):
$$ F\left( l \right)\sim l^{\alpha } , \quad (9) $$
with α ≠ 0.5. Equation (9) holds in the limit of large segment lengths l (and only for those time series with β > 1). The strength of long-range persistence, β, is related to the scaling parameter of the fluctuation function, α, as β = 2α + 1. To make this concept applicable for time series with a strength of long-range persistence β < 1, the aggregated series (also known as the running sum or integrated series, see Sect. 3.5) of the time series can be analysed, but this method works well only in the case of large number of values in the time series, N (Taqqu 1975; Mandelbrot 1999). When aggregating a time series with 'smaller' N, which is the case for most time series being examined in the Earth Sciences, then one must take care that the one-point probability distribution is quasi-symmetrical (e.g., Gaussian, Levy) (Mandelbrot and van Ness 1968; Samorodnitsky and Taqqu 1994).
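The fluctuation function of Eq. (8) can be coded as in the minimal sketch below (the function name fluctuation and the choice of segment lengths are ours), with the scaling exponent α of Eq. (9) obtained from a log-log fit and converted to β = 2α + 1 for a fractional motion:

# Minimal sketch: fluctuation function F(l) of Eq. (8), the square root of the mean variance
# of the values within the [N/l] non-overlapping segments of length l
fluctuation <- function(x, l) {
  N <- length(x)
  m <- floor(N / l)                                # number of whole segments, [N/l]
  v <- sapply(0:(m - 1), function(i) {
    seg <- x[(i * l + 1):(i * l + l)]
    mean((seg - mean(seg))^2)                      # segment variance as defined in Eq. (1)
  })
  sqrt(mean(v))
}
# For a fractional motion s (beta > 1): alpha is the log-log slope (Eq. 9), beta = 2 * alpha + 1
# ls <- 2^(3:10); Fl <- sapply(ls, fluctuation, x = s)
# alpha <- as.numeric(coef(lm(log(Fl) ~ log(ls)))[2])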
One important aspect of a time series is the stationarity of its underlying process (Witt et al. 1998). A process is said to be strictly stationary if all moments (e.g., mean value, \( \bar{x} \); variance, \( \sigma_{x}^{2} \); kurtosis) over multiple time series realizations do not change with time t and, in particular, do not depend on the length of the considered time series. Second-order or weak stationarity (Chatfield 1996) requires that the means, standard deviations, and autocorrelation functions for different sections of a time series—again taken over multiple realizations (i.e. the process) and for different section lengths—are approximately the same.
Long-Range Persistence in the Physical and Social Sciences
As discussed in the introduction (Sect. 1), long-range persistence has been quantified and explored for many geophysical time series and processes. However, it is an important and well-studied attribute for time series and processes in many other disciplines where persistence-displaying patterns have been identified, for example:
The 1/f behaviour of voltage and current amplitude fluctuations in electronic systems modelled as a superposition of thermal noises (Schottky 1918; Johnson 1925; van der Ziel 1950).
Trajectories of tracer particles in hydrodynamic flows (Solomon et al. 1993) and in granular material (Weeks et al. 2000).
Condensed matter physics (Kogan 2008).
Neurosciences (Linkenkaer-Hansen et al. 2001; Bédard et al. 2006).
Econophysics (Mantegna and Stanley 2000).
In biology, long-range persistence has been identified in:
Receptor systems (Bahar et al. 2001).
Human gait (Hausdorff et al. 1996; Delignieres and Torre 2009).
Human sensory motor control system (Cabrera and Milton 2002; Patzelt et al. 2007) and human eye movements during spoken language comprehension (Stephen et al. 2009).
Heart beat intervals (Kobayashi and Musha 1982; Peng et al. 1993a; Goldberger et al. 2002).
Swimming behaviour of parasites (Uppaluri et al. 2011).
Furthermore, long-range persistence is typical for musical pitch, rhythms, and loudness fluctuations (Voss and Clarke 1975; Jennings et al. 2004; Hennig et al. 2011; Levitin et al. 2012) and for dynamics on networks such as internet traffic (Leland et al. 1994; Willinger et al. 1997). Long-range dependence is an established concept in describing stock market prices (Lo 1991).
However, with the widespread identification of long-range persistence in physical and social systems has come a concern by those (Rangarajan and Ding 2000; Maraun et al. 2004; Gao et al. 2006; Rust et al. 2008) who believe that long-range persistence has often been incorrectly identified in time series, and who believe instead that many time series are in fact short-range persistent. One part of the confusion surrounding the issue of short-range versus long-range persistence is that of a frequent lack of knowledge as to the process involved that drives the persistence. This can take the form of lack of knowledge of underlying driving equations, physical process, or even a lack of understanding of the variables in the system being studied.
Another major issue, which we explore in more detail in the following section, is the semantics as to what we call long-range persistence. There are at least two ways of thinking about long-range persistence, which we will call asymptotic long-range persistence and self-affine long-range persistence. These are simply called 'long-range persistence' in much of the literature and interchanged without the reader knowing which is being addressed.
Asymptotic Long-Range Persistence Versus Self-Affinity
Asymptotic long-range persistence is the general case where the power-law scaling in Eq. (7) holds in the limit f → 0. Self-affine long-range persistence is the more specific case, where the scaling in Eq. (7) holds for all f, the power spectral density is now scale invariant, and we call this a self-affine time series. In Fig. 6, we have drawn five cartoon examples of the frequency-domain signature of time series, where power spectral density S (Eq. 7) is given as a function of frequency f, on logarithmic axes. Self-affine behaviour (i.e. power-law scaling over the entire frequency range) is presented by the black straight line (a perfect power-law dependence). The other four curves demonstrate very different examples of the power spectral densities scaling asymptotically with a power-law for small frequencies (i.e. f → 0). The orange dashed line demonstrates two scaling ranges and is characterized by two corresponding power-law exponents.
Cartoon sketch of power spectral densities of a self-affine and four other long-range persistent processes. Self-affine behaviour (i.e. power-law scaling over the entire frequency range) is presented by the black straight line (identified by equation and arrow). The other four examples (blue, red, orange, and green dashed lines) represent cartoon examples of power spectral densities that scale asymptotically with a power law for small frequencies, with the red dashed line (second from top) an asymptotic example superimposed by a periodicity, and the orange dashed line (third from top) demonstrating two scaling ranges that are characterized by two corresponding power-law exponents
In both the more general case of asymptotic long-range persistence (i.e. scaling only in the limit f → 0) and the less general case of self-affine time series (scaling for all f), positive exponents β in Eq. (7) represent positive (long-range) persistence and negative ones (β < 0) anti-persistence. For the specific case of self-affine long-range persistence, a value of β = 0 is an uncorrelated time series (e.g., a white noise), and a value of β = 1 is known also as a 1/f or pink or flicker noise (Schottky 1918; Mandelbrot and van Ness 1968; Keshner 1982; Bak et al. 1987). Various colour names are used to refer to different strengths of long-range persistence, with some confusion in both the grey (e.g., internet) and peer-reviewed literature as to (1) whether the names referred to for some specific strengths of persistence are for asymptotic long-range persistence or the more specific self-affine case and (2) the specific colour names used for a given strength of persistence. A general survey gives the following colour names for different strengths of long-range persistence († = generally accepted terms in established literature sources or standards, e.g., see ATIS 2000):
β = −2.0 violet, purple
β = −1.0 blue†
β = 0.0 white†
β = 1.0 pink†, flicker†
β = 2.0 brown†, red†
β > 2.0 black
Brown noise is the result of a Brownian motion process which we discuss further below and which we have referred to as simply 'Brownian motion' in this paper.
For the general asymptotic case (scaling in the limit f → 0), a value of β = 0 stands for short-range persistence (Beran 1994). This type of persistence is typical for such linear stochastic processes as moving average (MA) or autoregressive (AR) processes (Priestley 1981) and is also known under the names of blue, pink, or red noise (Hasselmann 1976; Kurths and Herzel 1987; Box et al. 1994). However, there is different usage of colour names by different authors in the literature as to the specific type of short-range persistence being referred to. In addition, colours like 'pink' and 'red' have one meaning for short-range persistence (e.g., any increase in power in the lower frequencies) and another for long-range (a strength of long-range persistence of β = 1 and 2, for pink and red, respectively). This has caused a bit of confusion between different groups of researchers in terms of false assumptions as to the specific kind of process (e.g., short-range vs. long-range) being explored based on the terminology used. We now discuss white noises and Brownian motion.
White Noises and Brownian Motions
A Gaussian white noise is a classic example of a stationary process, with a mean \( \bar{x} \) and a variance \( \sigma_{x}^{2} \) of the values specified. A realization of a Gaussian white noise is shown in Fig. 7a. In this time series, the values are uncorrelated with one another, with an equal likelihood at each time step of a value being larger or smaller than the preceding value. The autocorrelation function (Eq. 3) for a Gaussian white noise is C(τ) = 0 for all lags τ > 0. Other one-point probability distributions can also be considered. For example, in Fig. 7b,c, respectively, are given a realization of a log-normal and a Levy-distributed white noise. In Sect. 4 we will examine in more detail the Gaussian, log-normal, and Levy one-point probability distributions. These uncorrelated time series (white noises) will provide the basis for the construction of fractional noises and motions that we will use as benchmarks for this paper. Uncorrelated time series can also be created by many computer programs (e.g., Press et al. 1994), using 'random' functions, but care must be taken that the time series are truly uncorrelated and that the frequency-size distribution is specified. An example where these issues are discussed in the context of landslide time series is given by Witt et al. 2010.
Realizations of uncorrelated time series, time series length N = 1,024, and the following one-point probability distributions: a Gaussian; b log-normal (constructed with Box–Cox transform), c v = 0.5; c Levy, a = 1.5. Each time series has been normalized to have mean 0 and variance 1. In d is shown the aggregation (running sum, Eq. 10) of these three uncorrelated time series
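For illustration, uncorrelated noises with these three one-point probability distributions can be generated as in the sketch below. This is our own illustrative recipe, with object and function names ours; the constructions actually used for the benchmark noises in this paper are detailed in Appendices 1–4. The symmetric Levy variates are drawn here with the Chambers–Mallows–Stuck method:

# Minimal sketch: white (uncorrelated) noises with Gaussian, log-normal, and Levy distributions
set.seed(2)
N   <- 1024
std <- function(x) (x - mean(x)) / sd(x)        # sample rescaling to mean 0, variance 1
w_gauss <- std(rnorm(N))                        # Gaussian white noise
w_lnorm <- std(rlnorm(N, meanlog = 0,           # log-normal white noise
                      sdlog = sqrt(log(1 + 0.5^2))))   # c_v = 0.5 before rescaling
rlevy_sym <- function(n, a) {                   # symmetric Levy (alpha-stable) variates,
  U <- runif(n, -pi / 2, pi / 2)                # Chambers-Mallows-Stuck recipe; a = exponent
  W <- rexp(n)
  sin(a * U) / cos(U)^(1 / a) * (cos(U - a * U) / W)^((1 - a) / a)
}
w_levy <- std(rlevy_sym(N, a = 1.5))            # note: the Levy distribution has no finite
                                                # variance, so this is a sample rescaling only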
The classic example of a non-stationary process is a Brownian motion (Brown 1828; Wang and Uhlenbeck 1945), which is obtained by summing a Gaussian white noise with zero mean. Einstein (1905) showed that, for the motion of a molecule in a gas which follows a Brownian motion, the mean square displacement grows linearly with the time of observation. This corresponds to a scaling parameter of the fluctuation function (Eq. 9) of α = 0.5 and consequently to a strength of long-range persistence of β = 2. Therefore, the value β = 2 corresponds to Brownian motion and the theory of random walks (Brown 1828; Einstein 1905; Chandrasekhar 1943) and describes 'ordinary' diffusion. A Brownian motion is an example of a self-affine long-range persistent process that has a strength of persistence that is very strong. Persistence strength β with β ≠ 2 characterizes 'anomalous' diffusion with 1 < β < 2 related to subdiffusion and β > 2 to superdiffusion (Metzler and Klafter 2000; Klafter and Sokolov 2005).
A Brownian motion process is given by multiple realizations of the aggregated time series, s t :
$$ s_{t} = \sum\limits_{i = 1}^{t} {x_{i} } , \quad (10) $$
where x i is (in this case) our white (uncorrelated) noise, ε i . These aggregated series are also known as running sums, integrated series, or first profiles. The white noises illustrated in Fig. 7a,b,c have been summed to give the Brownian motions in Fig. 7d.
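In R, the aggregation of Eq. (10) is simply a cumulative sum; reusing the illustrative white noises w_gauss, w_lnorm, and w_levy from the sketch above:

# Minimal sketch: aggregated series (running sums) of Eq. (10)
s_gauss <- cumsum(w_gauss)    # Brownian motion built from the Gaussian white noise above
s_lnorm <- cumsum(w_lnorm)    # aggregated log-normal white noise
s_levy  <- cumsum(w_levy)     # aggregated Levy white noise (occasional large 'jumps')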
The standard deviation of a Brownian motion created from Gaussian or log-normal white noises, after t values, is given by
$$ \sigma [s_{t} ] = \sigma_{x} \,t^{0.5} , \quad (11) $$
where \( \sigma_{x} \) is the standard deviation of the white noise sequence. In Fig. 8a, we show the superposition of 20 Brownian motions, each created from a realization of a Gaussian white noise with mean zero and variance one. The fluctuations around zero grow with the time index of the aggregated time series. The relation from Eq. (11) is included in the figure, as the dashed line parabola, illustrating the drift of the Brownian motions. Brownian motions have no origin defined, and successive increments are uncorrelated. Shown in Fig. 8b,c, respectively, are the multiple realizations of aggregates for log-normal and Levy-distributed white noises. For aggregated log-normal white noises, the fluctuations scale, on average, following Eq. (11), but the same is not true for Levy noises, because a Levy noise has no defined variance (discussed in more depth in Sect. 4). The heavy tails of the Levy distribution in Fig. 7 lead in Fig. 8 to 'jumps' of the aggregated series.
Ensembles of 20 realizations of the running sums of the three different types of uncorrelated noises shown in Fig. 7. Shown are running sums with time series length N = 1,024, for the following one-point probability distributions: a Gaussian; b log-normal (constructed with Box–Cox transform), c v = 0.5; c Levy, a = 1.5. For (a) and (b), shown by the dashed line envelopes is ±t^0.5 (see Eq. (11)), the theoretical deviation with time of the ensemble of the running sum of these two uncorrelated processes
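The t^0.5 growth of the spread described by Eq. (11) is easy to verify numerically; a minimal sketch of our own (object names ours), generating an ensemble of 20 Brownian motions and comparing the across-ensemble standard deviation with the theoretical envelope, is:

# Minimal sketch: ensemble of 20 Brownian motions and the t^0.5 envelope of Eq. (11)
set.seed(3)
N   <- 1024
ens <- replicate(20, cumsum(rnorm(N)))          # N x 20 matrix, one Brownian motion per column
matplot(ens, type = "l", lty = 1, col = "grey", xlab = "t", ylab = expression(s[t]))
lines( sqrt(1:N), lty = 2)                      # theoretical envelope +t^0.5 (sigma_x = 1)
lines(-sqrt(1:N), lty = 2)                      # theoretical envelope -t^0.5
# The standard deviation across the ensemble at each t should be close to t^0.5:
# plot(1:N, apply(ens, 1, sd), log = "xy"); lines(1:N, sqrt(1:N), lty = 2)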
Fractional Noises and Fractional Motions
In the last section we considered white noises and Brownian motions. Here, we consider fractional noises and fractional motions. Applying our definition of (weak) stationarity given in Sect. 3.2, an asymptotic long-range persistent noise (scaling in the limit f → 0) is a (weakly) stationary time series if the strength of persistence β < 1 (Malamud and Turcotte 1999b). We will refer to these long-range persistent weakly stationary (β < 1) time series as fractional noises. For stronger values of long-range persistence (β > 1), the means and standard deviation are no longer defined since they now depend on the length of the series and the location in the time series. We will refer to these long-range persistent non-stationary (β > 1) time series as fractional motions. The value β = 1 represents a crossover value between (weakly) stationary and non-stationary processes, and between fractional noises and motions; this value is sometimes considered a fractional noise or motion, depending on the context. For very small values of the strength of long-range persistence (β < −1), the corresponding processes are unstable (Hosking 1981); these processes cannot be represented as AR models (generalization of the process in Eq. 2 to processes that incorporate more lags). In Sect. 4.2 we will construct and give examples of both fractional noises and motions, but intuitively, as the value of β increases, the contribution of the high-frequency (short-period) terms is reduced.
Just as previously we summed a Gaussian white noise with β = 0.0 to give a Brownian motion with β = 2.0 (Fig. 7), one can also sum fractional Gaussian noises (e.g., β = 0.7) to give fractional Brownian motions (e.g., β = 2.7), so that the running sum will result in a time series with β shifted by +2.0 (Malamud and Turcotte 1999a). This relationship is true for any symmetrical frequency-size distribution (e.g., the Gaussian) and long-range persistent time series. Analogous results hold for differencing a long-range persistent process (e.g., the first difference of a fractional motion with β = 1.5 will have a value of β = −0.5). However, for self-affine processes the aggregation and differencing results in processes that are asymptotic long-range persistent but not self-affine (Beran 1994), although our studies show that they are almost self-affine.
Another way of constructing long-range persistent processes is the superposition of short-memory processes with suitably distributed autocorrelation parameters (Granger 1980). This has been used to give a physical explanation of the Hurst phenomenon of long memory in river run-off (Mudelsee 2007). Eliazar and Klafter (2009) have applied two similar approaches, the stationary superposition model and the dissipative superposition model, to describe the dynamics of systems carrying heavy information traffic. The resultant processes are Levy distributed and long-range persistent.
Both the general case of asymptotic long-range persistence (e.g., temperature records, Eichner et al. 2003, see also Sects. 3.3 and 3.4 of this paper) and the more specific case of self-affine long-range persistence (many examples will be given in subsequent sections) are commonly identified in the Earth Sciences. In this paper, because self-affine time series are commonly found in the Earth Sciences and many other disciplines, and widely examined using a variety of techniques, we will restrict our analyses to them.
We will call the self-affine time series that we work with in this paper fractional noises. We have above classified fractional noises as a process that is asymptotic long-range persistent with β < 1, and fractional motions as those with β > 1. However, often in the literature, the term fractional noises or noises is used more generically, referring to an asymptotic long-range persistent time series with any value of β. We will try to take care to distinguish in this paper between fractional noises (β < 1) and motions (β > 1), but occasionally will use the more generic term 'noises' (or even sometimes 'fractional noises') to indicate the more general case (all β).
Several techniques and their associated estimators or measures for evaluating long-range persistence in a time series have been proposed. Most of them exploit the properties of long-range dependent time series as described in this section (in particular Eqs. 6, 7, 9). However, these techniques often do not perform hypothesis tests for or against long-range persistence (see Davies and Harte 1987 for an example where hypothesis tests are performed). Rather, all the techniques that will be discussed in this paper assume that the considered time series is long-range persistent and then proceed to determine the strength of persistence. In this paper, we propose to provide a more rigorous grounding for the quantification of self-affine long-range persistence in time series and will use both existing 'conventional' techniques and benchmark-based improvement techniques.
In examining some of the different techniques and measures for quantifying long-range persistence, we will distinguish between techniques in the time domain (Sect. 5) and the frequency domain (Sect. 6). Five techniques will be discussed in detail: (1) (time domain techniques) Hurst rescaled range (R/S) analysis, semivariogram analysis, and detrended fluctuation analysis; and (2) (spectral domain techniques) power spectral analysis using both log-linear regression and maximum likelihood. To measure the performance of these techniques, we will apply them to a suite of synthetic fractional noise time series, the construction of which we now describe (Sect. 4).
Synthetic Fractional Noises and Motions
In this section we will first describe common techniques for the construction of fractional noises and motions that are commonly found in the literature (Sect. 4.1), and then introduce the extensive fractional noises and motions that we use in this paper (Sect. 4.2). We will conclude with a brief presentation of the fractional noises and motions that we include in the supplementary material, both as text files and R programs (Sect. 4.3). Accompanying this section are Appendices 1–4 which give more detailed specifics as to construction of our synthetic fractional noises and motions.
Common Techniques for Constructing Fractional Noises and Motions
There are different approaches for creating long-range dependent time series with and without short-range correlations and also with and without distinct periodic components. In each case, however, the time series come from a model or process with known properties and defined strengths of persistence. We will use the subscript 'model' (e.g., β model) to indicate that the process has given properties, and thus, the realizations of this process can be used as 'benchmark' time series.
Three of the most commonly used models for constructing fractional noises are the following:
Self-affine fractional noises and motions (Schottky 1918; Dutta and Horn 1981; Geisel et al. 1987; Bak et al. 1987). These are popular in the physical sciences community and are constructed to have an exact power-law scaling of the power spectral density (i.e. Eq. (7) holds for all f). These are constructed by inverse Fourier filtering of a white noise (briefly explained in Sect. 4.2). In Appendices 1–4, we give a detailed description of how to create realizations of this model, as used in this paper. For this type of construction, the autocorrelation and fluctuation functions are not self-affine, and instead scale asymptotically (Eqs. (6) and (9) hold asymptotically for τ → ∞ and l → ∞, respectively).
Self-similar processes (Mandelbrot and van Ness 1968; Embrechts and Maejima 2002). These constructed noises exhibit an exact power-law scaling of the fluctuation function for Gaussian one-point probability distributions so that Eq. (9) holds for all l. They exhibit an asymptotic scaling of the power spectral density (i.e. Eq. (7) holds asymptotically for f → 0), and have an autocorrelation function that scales asymptotically with a power law (Eq. (6) holds for τ → ∞).
Fractionally differenced noises (Granger and Joyeux 1980; Hosking 1981). These are commonly used in the stochastic time series analysis community and are based on infinite-order moving average processes whose coefficients can be represented as binomial coefficients of fractal numbers. These fractional noises have an autocorrelation function, power spectral density, and fluctuation function which scale asymptotically with a power law (i.e. Eq. (6) as τ → ∞, Eq. (7) as f → 0, Eq. (9) as l → ∞).
There are a variety of more complex models for creating a time series with long-range persistence. These models depend on more parameters than just the strength of long-range persistence. We describe some of these models below.
Models which capture short- and long-range correlations (ARFIMA or FARIMA) (Granger and Joyeux 1980; Hosking 1981; Beran 1994; Taqqu 2003). These can be constructed as finite order moving average (MA) or autoregressive (AR) process with a fractional noise as input.
Models for time series which exhibit long-range persistence and 'seasonality' (i.e. cyclicity) (Porter-Hudak 1990) or 'periodicity' (Montanari et al. 1999). These are based on fractional differencing of noise elements which are lagged by multiples of the assumed seasonal period.
Generalized long-memory time series models (e.g., Brockwell 2005) where the stochastic processes have time-dependent parameters and these parameters are long-range dependent.
Models for long-memory process with asymmetric (e.g., log-normal) one-point probability distributions. Two examples of such models that describe long-range persistence have been done for (1) varve glacial data (Palma and Zevallos 2011) and (2) solar flare activity (Stanislavsky et al. 2009).
Models for deterministic nonlinear systems at the edge between regularity and chaos (onset of chaos, Schuster and Just 2005; intermittency, Manneville 1980), and dynamics in Hamiltonian systems (Geisel et al. 1987). In this model class it is very difficult to find examples with a broad variety and continuity of strengths of long-range dependence, and the long-range persistence is true for only certain values of the parameters.
Multifractals (Hentschel and Procaccia 1983; Halsey et al. 1986; Chhabra and Jensen 1989) which depend on a continuum of parameters.
Alternative constructs of stochastic fractals such as cartoon Brownian motion (Mandelbrot 1999) and Weierstrass–Mandelbrot functions (Mandelbrot 1977; Berry and Lewis 1980). These have three properties that make them unsuitable for the performance tests applied in our paper (Sects. 5 and 6): (1) a complicated one-point probability distribution, (2) non-equally spaced time series, and (3) multifractality.
Alternative approaches for constructing time series which are approximately self-similar and discussed by Koutsoyiannis (2002): multiple time scale fluctuations, symmetric moving averages, and disaggregation.
For this paper, the only models of long-range persistence considered are self-affine fractional noises and motions. These processes are constructed to model a given (1) strength of long-range dependence and (2) one-point probability distribution. As previously mentioned, these types of processes are discussed in detail in Schepers et al. (1992), Gallant et al. (1994), Bassingthwaighte and Raymond (1995), Mehrabi et al. (1997), Wen and Sinding-Larsen (1997), Pilgram and Kaplan (1998), Malamud and Turcotte (1999a), Heneghan and McDarby (2000), Weron (2001), Eke et al. (2002), Xu et al. (2005), and Franzke et al. (2012).
Self-affine fractional noises and motions are characterized by their strength of persistence and by their one-point probability distribution. In order to model time series with symmetric distributions, the generated fractional noises and motions should be constructed as realizations of linear stochastic processes and based on Gaussian or Levy-distributed white noises, resulting in fractional noises and motions with different persistence strengths which are also Gaussian or Levy distributed (Kolmogorov and Gnedenko 1954). In order to model time series with asymmetric distributions (e.g., log-normal), one first generates fractional Gaussian or Levy noises/motions, and then these need to be transformed. This is accomplished with either of the following:
Box–Cox transformation (Box and Cox 1964) which is applied to each element of the fractional Gaussian or Levy noise/motion, that is, one transforms x t to f(x t ), t = 1, 2, …, N (for details, see Appendix 3).
The Schreiber–Schmitz algorithm (Schreiber and Schmitz 1996), which is applied iteratively to the entire data series (for details, see Appendix 4).
Both of the above transformations change the one-point probability distribution of the fractional noise or motion being considered; the Box–Cox transform keeps the rank order of the elements, while the Schreiber–Schmitz algorithm maintains the linear correlations (i.e. the power spectral density). The Schreiber–Schmitz algorithm is well known and accepted in the physics and geophysics communities whereas, in the hydrology community, the Box–Cox transform is preferred since the resultant series appear visually more similar to river discharge series.
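As a rough illustration of the first approach only (the exact Box–Cox construction used for the benchmark noises is given in Appendix 3; the function name to_lognormal and its parameterization are ours), a fractional Gaussian noise g can be transformed element-wise into a log-normally distributed series with a chosen coefficient of variation c v:

# Rough sketch: element-wise transform of a fractional Gaussian noise g into a log-normally
# distributed series with coefficient of variation cv (rank order of the values is preserved;
# the linear correlations are modified somewhat, cf. the Schreiber-Schmitz alternative)
to_lognormal <- function(g, cv = 0.5) {
  sdlog <- sqrt(log(1 + cv^2))           # for a log-normal, cv^2 = exp(sdlog^2) - 1
  z <- (g - mean(g)) / sd(g)             # standardize the Gaussian input
  exp(sdlog * z)                         # inverse Box-Cox transform with lambda = 0
}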
Sets of Synthetic Fractional Noises and Motions Used in this Paper
To 'benchmark' the five estimation techniques described in Sects. 5 and 6, we have constructed time series of length N = 64, 128, 256, ..., 131,072 with Gaussian, log-normal, and Levy one-point probability distributions. Examples of these three theoretical distributions are given in Fig. 9, and the equations for their probability densities as well as the main properties are summarized in Table 2. These distributions were chosen for the following reasons:
Three one-point probability distributions which typically occur in time series. a Gaussian (normal) distribution with a mean value μ = 0.0 and standard deviation \(\sigma \) = 1.0. b Levy distribution (centred at x = 0.0) with exponents a = 1.6 and a = 1.2; for comparison Gaussian distribution (a = 2.0 with μ = 0.0 and \(\sigma \) = 2^0.5). c Log-normal distribution with different coefficients of variation: c v = 0.2, 0.5, 1.0, 2.0 and a mean value of μ = 1.0. In d are shown the Gaussian (μ = 0.0, \(\sigma \) = 2^0.5), Levy (a = 1.2, 1.6), and log-normal (c v = 0.5, μ = 1.0) distributions on logarithmic axes
Table 2 Table of one-point probability distributions and their properties used for the construction of fractional noises and motions
Gaussian distributions are symmetric, thin tailed, and the most commonly used basis for synthetic fractional noises in the literature; they are also the base for the derivation of fractional noises with other thin-tailed probability distributions.
Log-normal distributions are asymmetric, thin-tailed, but like many natural time series (e.g., river flow, sediment varve thicknesses) have only positive values.
Levy distributions are symmetric and heavy-tailed (i.e. the one-point probability distribution approaches a power law for large negative and positive values). Such heavy-tailed distributions are good approximations for the frequency-size statistics of a number of natural hazards (Malamud 2004). These include asteroid impacts (Chapman and Morrison 1994; Chapman 2004), earthquakes (Gutenberg and Richter 1954), forest fires (Malamud et al. 1998, 2005), landslides (Guzzetti et al. 2002; Malamud et al. 2004; Rossi et al. 2010), and volcanic eruptions (Pyle 2000). Floods (e.g., Malamud et al. 1996; Malamud and Turcotte 2006) have also been shown in many cases to follow power-law distributions.
The fractional noises and motions that we have constructed and used in our analyses are as follows:
One-point probability distributions: Gaussian, log-normal (coefficient of variation, \( c_{\text{v}} = \sigma_{x} /\bar{x} = 0.0,0.2,\, \ldots,\,2.0 \)), and (symmetric and centred) Levy distributions (exponent a = 1.0, 1.1, …, 2.0). The log-normal and Levy distributions reduce to Gaussian for c v = 0 and a = 2, respectively. The log-normal distributions were constructed using two different techniques, Box–Cox transform and Schreiber–Schmitz algorithm. The parameter c v is a measure of the skewness of a distribution, but only for asymmetric distributions such as the log-normal. One can compare the c v of one distribution to another, but only if both distributions belong to the same underlying statistical family.
Strengths of long-range persistence: −1.0 ≤ β model ≤ 4.0, step size of 0.2 (i.e. 26 successive values of β model).
Length of time series: The time series were realized 100 times for a given β model and constructed with N = 4,096 and then subdivided to also have N = 2,048, 1,024, and 512. These four time series lengths are focussed on in the main body of this paper. However, a further eight noise and motion lengths (N = 64, 128, 256, 8,192, 16,384, 32,768, 65,536, and 131,072) were also constructed, with results presented in the supplementary material.
For each set of 100 time series consisting of (distribution type, modelled persistence strength β model, time series length N), we applied three time domain and two frequency-domain techniques, introduced in Sects. 5 and 6, respectively, to obtain an estimate of the strength of long-range persistence. The time domain techniques applied are (1) Hurst rescaled range (R/S), (2) semivariogram, and (3) detrended fluctuation analysis. The frequency-domain techniques applied are (1) power spectral analysis using log-periodogram regression and (2) power spectral analysis using a maximum likelihood estimator (MLE), the Whittle estimator.
All fractional noises and motions with Gaussian or Levy one-point probability density have been constructed by inverse Fourier filtering of white noises (Appendices 1 and 2) (Theiler et al. 1992; Timmer and König 1995; Malamud and Turcotte 1999a), which for −1 ≤ β ≤ 1 and large N results in fractional noises with the same one-point probability distribution as the white noise. Inverse Fourier filtering requires the multiplication of the Fourier image of a white noise with a real-valued filter function (in our case a power law) followed by an inverse Fourier transform. The construction of synthetic log-normal distributed fractional noises and motions is more complicated because of the asymmetric one-point probability distribution (Venema et al. 2006). We put two approaches into action: (1) fractional Gaussian noises and motions were Box–Cox transformed (Appendix 3), and (2) an iterative algorithm (Schreiber–Schmitz algorithm, Appendix 4) was applied that allows us to prescribe the power spectral density and the one-point probability distribution. Realizations with 512 values each are presented for synthetic fractional Gaussian noises and motions (FGN, Fig. 10), synthetic fractional Levy noises and motions (FLevyN, Fig. 11), synthetic fractional log-normal noises and motions using the Box–Cox transform (FLNNa, Fig. 12), and synthetic fractional log-normal noises and motions using the Schreiber–Schmitz algorithm (FLNNb, Fig. 13). Note that all fractional noises and motions are normalized to have a mean value of zero and a standard deviation of one.
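For concreteness, a minimal R sketch of this spectral synthesis is given below. The function name is ours, N is assumed even, and only the Gaussian case is shown (a Levy white-noise generator would be substituted for rnorm for FLevyN); the exact recipes are those of Appendices 1 and 2 and are not reproduced here.

```r
# Hedged sketch of inverse Fourier filtering: multiply the Fourier image of a
# white noise by a power-law filter ~ f^(-beta/2) and transform back.
make_fractional_noise <- function(N, beta) {
  wn <- rnorm(N)                               # Gaussian white noise
  W  <- fft(wn)                                # discrete Fourier transform
  k  <- c(0, 1:(N / 2), (N / 2 - 1):1)         # frequency index of each Fourier coefficient
  filt <- c(0, k[-1]^(-beta / 2))              # power-law filter; zero-frequency (mean) term removed
  x <- Re(fft(W * filt, inverse = TRUE)) / N   # back to the time domain
  (x - mean(x)) / sd(x)                        # normalize to mean 0 and standard deviation 1
}
fgn <- make_fractional_noise(512, beta = 0.8)  # one realization with N = 512 elements
```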
Examples of synthetic fractional Gaussian noises and motions (FGN) (Sect. 4.2) (see Appendix 1) with different modelled strengths of long-range persistence, β model. The presented data series, which have N = 512 elements each, are normalized to have a mean value of zero and a standard deviation of one
Examples of synthetic fractional Levy noises and motions (FLevyN) (Sect. 4.2) (see Appendix 2) with different modelled strengths of long-range persistence, β model. The presented data series, which have N = 512 elements each, are normalized to have a mean value of zero and a standard deviation of one
Examples of synthetic fractional log-normal noises and motions (FLNNa) (Sect. 4.2) (constructed by Box–Cox transform (see Appendix 3)) with different modelled strengths of long-range persistence, β model. The presented data series have N = 512 elements each
Examples of synthetic fractional log-normal noises and motions (FLNNb) (Sect. 4.2) (constructed by Schreiber–Schmitz algorithm (see Appendix 4)) with different modelled strengths of long-range persistence, β model. The presented data series have N = 512 elements each. For β model = 2.0 and β model = 2.5 fluctuations are not apparent due to the y-axis having a much larger range than the fluctuations themselves
In Figs. 10, 11, 12, 13, each figure represents a different one-point probability distribution, and β (the strength of long-range persistence) increases from −1.0 to 2.5, reducing the contribution of the high-frequency (short-period) terms. For β < 0 (anti-persistence), the high-frequency contributions dominate over the low-frequency ones; adjacent values are thus anti-correlated relative to a white noise (β = 0). For these realizations of anti-persistent processes, a value larger than the mean tends to be followed by a value smaller than the mean. With β = 0 (white noise), high-frequency and low-frequency contributions are equal, resulting in an uncorrelated time series; adjacent values have no correlations with one another, and there is equal likelihood of a small or large value (relative to the mean) occurring. For β > 0, and as β gets larger, the low-frequency contributions increasingly dominate over the high-frequency ones; the adjacent values become more strongly correlated, and the time series profiles become increasingly smoothed. The strength of persistence increases, and a value larger than the mean tends to be followed by another value larger than the mean. As the persistence increases, the tendency for large to be followed by large (and small to be followed by small) becomes greater, manifesting itself in a clustering of large values and clustering of small values. In Sect. 5 we explore different techniques for measuring the strength of long-range persistence.
Fractional Noises and Motions: Description of Supplementary Material
As an aid to the reader, we provide the following in the supplementary material:
Sample fractional noises and motions in tab-delimited text files. A zipped file which contains three folders:
FGaussianNoise contains fractional Gaussian noises.
FLogNormalNoise contains fractional log-normal noises constructed using the Box–Cox transform.
FLevyNoise contains fractional Levy noises.
The folders FLogNormalNoise and FLevyNoise have further subfolders for coefficient of variation c v = 0.2, 0.5, 1.0 that characterizes the log-normal shape, or for the exponent a = 0.85, 1.50 that characterizes the shape of the heavy tails of Levy distributions. Each file is related to a certain strength of persistence, β, and to a certain parameter setting for the 1D probability distribution. The strength of persistence ranges from β = –1.0 to 3.0 with sampling steps of Δβ = 0.2. The parameters that characterize the fractional noise or motion are identified in the name of each file. Each file contains ten realizations of fractional noises with N = 4,096 elements each in accordance with the parameter settings. All fractional Gaussian and log-normal noises are constructed from the single set of ten Gaussian white noises, and all fractional Levy noises are constructed from the single set of ten white Levy noises. There are 126 files contained within all the subfolders, in other words 1,260 'short' (N = 4,096 values) fractional noises and motions.
R program. We give a commented R program that we use to create the synthetic noises and motions in this paper.
Time Domain Techniques for Measuring the Strength of Long-Range Persistence
There are a variety of time domain techniques for quantifying the strength of long-range persistence in self-affine time series. Here, we first discuss two broad frameworks within which these techniques are based (this introduction). We then discuss three techniques that are commonly used, each based on a scaling behaviour of the dispersion of values in the time domain as a function of different time length segments: (1) Hurst rescaled range (R/S) analysis (Sect. 5.1); (2) semivariogram analysis (Sect. 5.2); and (3) detrended fluctuation analysis (DFA) (Sect. 5.3). After this, we discuss (Sect. 5.4) other time domain techniques.
Time domain techniques typically exploit the way that the statistical properties of the original time series x t or the aggregated (summed) time series s t (Eq. 10) vary as a function of the length of different time series segments, l. A commonality to these techniques is that they are all based on either (A) the mean correlation strength of lagged elements as a function of the lag or (B) a power-law scaling of the dispersion of segments of the aggregated series as a function of the segment length l. We can broadly group these techniques into the following subclasses based on A (correlation strength) and B (scaling). We also note aggregation and non-aggregation of the original time series (□ = technique itself does not do any aggregation of the original time series, † = technique itself aggregates the original time series):
(A1): Autocorrelation function □ and (semi-)variogram analysis □. These evaluate the average dependence of lagged time series elements.
(B1): Methods which rely on the scaling of the variance of fractional noises and motions. These are called variable bandwidth methods, scaled windowed variance methods, or fluctuation analysis. The most common techniques in this class are Hurst rescaled range analysis (R/S)† (Hurst 1951) and detrended fluctuation analysis (DFA)† (Peng et al. 1994; Kantelhardt et al. 2001). We mention here three other, less commonly used techniques:
The roughness-length technique □, originally developed for use in the Earth Sciences (Malinverno 1990), is identical to DFA where linear fits are applied to the profile (called DFA1). In the roughness length, the 'roughness' is defined as the root-mean-squared value of the residual on a linear trend over the length of a given segment; since it is based on a 'topographic' profile, aggregation of the time series is not needed.
The detrended scaled windowed variance analysis† (Cannon et al. 1997) is similar to DFA1; the absolute values of the data from aggregated time series have been used in place of the variance, and the corresponding dependence on the segment length is studied.
Higuchi's method□ (Higuchi 1988) evaluates the scaling relationship between the mean normalized curve length of the coarse-grained time series (i.e. values x kt are considered for a fixed value of k and t = 1, 2, …, N/k) and the chosen sampling step (here k).
(B2): Dispersional analysis □ (Bassingthwaighte and Raymond 1995) analyses the scaling of the variance of a time series that is coarse grained (averages of segments of equal length are considered) as a function of the segment length. This is very similar to relative dispersion analysis □ (Schepers et al. 1992) which describes the scaling of the standard deviation divided by the mean.
(B3): Average extreme value analysis □ (Malamud and Turcotte 1999a) examines the mean value of the extremes (minimum, maximum) as a function of segment length.
Although some techniques involve aggregation of the original time series as part of the technique itself, and other techniques involve no aggregation of the time series, any of the techniques can be applied to an aggregated (or first differenced) time series, as long as the time series has a symmetrical one-point probability distribution. We saw in Sect. 3.6 that if one begins with a time series that has a symmetric one-point probability distribution and a given β, then aggregation or the first difference of the original time series results in a new time series with β shifted by +2 (aggregation) or −2 (first difference). However, care must be taken not to confuse aggregation of the original time series 'before' a technique has been applied (pre-processing the data) with aggregation that is done as a standard part of the technique itself. Some of the techniques above are generally effective (for the time series considered) only over a given range of strengths of long-range persistence (Malamud and Turcotte 1999a; Kantelhardt et al. 2001):
autocorrelation (−1 ≤ β ≤ 1) (Sect. 3.1).
Hurst rescaled range analysis (R/S) (−1 ≤ β ≤ 1) (Sect. 5.1).
semivariogram analysis (1 ≤ β ≤ 3) (Sect. 5.2).
detrended fluctuation analysis (DFA) (all β) (Sect. 5.3).
[frequency-domain technique]: power spectral analysis (all β) (Sect. 6).
We will in Sect. 7 explore further the ranges for all of these techniques except the first one (autocorrelation). One can always aggregate (or first difference) a time series to 'place' it into a specific range of β where a given technique is effective, but as discussed above only if that time series has a one-point probability distribution that is (close to) symmetrical. Therefore, as part of pre-processing, a time series should not be aggregated (or differenced) if it is, for example, log-normal distributed. The aggregation of time series has resulted in confusion for some scientists who have aggregated a time series first, when it was not appropriate, and whose estimated persistence strength was consequently in error by +2 or −2. In the next three sections (Sects. 5.1–5.3) we introduce the most common time domain techniques in more detail.
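In R, the two pre-processing operations are one-liners; in the sketch below, x is assumed to be a realization with a symmetric one-point probability distribution (e.g. the fractional Gaussian noise generated above).

```r
x <- make_fractional_noise(4096, beta = 0.8)
s <- cumsum(x)   # aggregation (running sum): persistence strength shifts from beta to beta + 2
d <- diff(x)     # first difference: persistence strength shifts from beta to beta - 2
```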
Hurst Rescaled Range (R/S) Analysis
Historically, the first approach to the quantification of long-range persistence in a time series was developed by Hurst (1951), who spent his life studying the hydrology of the Nile River, in particular the record of floods and droughts. He considered a river flow as a time series and determined the storage limits in an idealized reservoir. To better understand his empirical data, he introduced rescaled range (R/S) analysis. The concept was developed at a time (1) when computers were in their early stages so that calculations had to be done manually and (2) before fractional noises or motions were introduced. Much of Hurst's work inspired later studies by Mandelbrot and others into self-affine time series (e.g., Mandelbrot and Van Ness 1968; Mandelbrot and Wallis 1968, 1969a, b, c). The use of Hurst (R/S) analysis (and variations of it) is still popular and often applied (e.g., human coordination, Chen et al. 1997; neural spike trains, Teich et al. 1997; plasma edge fluctuations, Carreras et al. 1998; earthquakes, Yebang and Burton 2006; rainfall, Salomão et al. 2009).
The Hurst (R/S) analysis first takes the original time series x t , t = 1, 2, …, N, and aggregates it using the running sum (Eq. 10) to give s t . This series is then divided into non-overlapping segments of length l (l < N). The mth segment contains the time series elements \( {s_{{(m - 1)l + t^{\prime}}} } \), t′ = 1, 2, …, l. The range R m,l is used to describe the dispersion of these values, looking at the maximum and minimum s t values within each segment m of length l, and is defined as:
$$ R_{m,l} = \hbox{max} \left[ {s_{{\left( {m - 1} \right)l + 1}} ,s_{{\left( {m - 1} \right)l + 2}} , \ldots ,s_{{\left( {m - 1} \right)l + l}} } \right] - \hbox{min} \left[ {s_{{\left( {m - 1} \right)l + 1}} ,s_{{\left( {m - 1} \right)l + 2}} , \ldots ,s_{{\left( {m - 1} \right)l + l}} } \right]. $$
For each segment m of length l, the variance of the original x t values in that segment is computed giving the standard deviation used in the (R/S) analysis:
$$ S_{m,l} \equiv \sigma_{x} \left[ {x_{{\left( {m - 1} \right)l + 1}} ,x_{{\left( {m - 1} \right)l + 2}} , \ldots ,x_{{\left( {m - 1} \right)l + l}} } \right]. $$
The square brackets \( \sigma_{x} \)[ ] indicate taking the standard deviation over the terms in the bracket. Mean values of the range R m,l and the standard deviation S m,l for segments of length l are determined:
$$ R_{l} = \bar{R}_{m,l} = \frac{1}{{\left[ {N/l} \right]}}\sum\limits_{m = 1}^{{\left[ {N/l} \right]}} {R_{m,l} } \quad{\text{and}}\quad S_{l} = \bar{S}_{m,l} = \frac{1}{{\left[ {N/l} \right]}}\sum\limits_{m = 1}^{{\left[ {N/l} \right]}} {S_{m,l} } $$
where as we did in Eq. (8), if N/l is non-integer, we take the largest integer less than N/l, noted here by [N/l]. For a fractional noise, the ratio, R l /S l , exhibits a power-law scaling as a function of segment length l, with a power-law exponent called the Hurst exponent, Hu:
$$ \left( {\frac{{R_{l} }}{{S_{l} }}} \right)\sim \left( \frac{l}{2} \right)^{Hu} . $$
Although in the literature it is common to denote the Hurst exponent with the symbol H, we use Hu here to avoid confusion with the Hausdorff exponent (also commonly called H, but which we will denote by Ha and introduce in Sect. 5.2). Rescaled range analysis is illustrated for a fractional log-normal noise with β model = 1.0 in Fig. 14a, where we have plotted (R/S) as a function of (l), on logarithmic axes. The Hurst exponent Hu is related to the strength of long-range persistence β as β = 2Hu−1 (Malamud and Turcotte 1999a).
Long-range dependence analysis of a fractional log-normal noise with a persistence strength of β model = 1.0, a coefficient of variation of c v = 0.5 and N = 4,096 elements. The panels represent a Hurst rescaled range (R/S) analysis, b semivariogram analysis, c detrended fluctuation analysis (DFAk with polynomials of order k applied to the profile), d power spectral analysis. All graphs are shown on logarithmic axes. Best-fit power laws are presented by dashed lines, shifted upwards slightly in the y-direction, and the corresponding exponents for each technique (Hu, Ha, α, and β PS) are given in the legend of the corresponding panel. The corresponding β are calculated from equations presented in Sect. 5: \( \beta_{\text{Hu}} = 2Hu - 1,\beta_{\text{Ha}} = 2Ha + 1,\;{\text{and}}\;\beta_{\text{DFA}} = 2\alpha - 1 \)
In this paper, the Hurst exponent Hu is derived by computing the rescaled range for segment lengths l = 8, 9, 10, 11, 12, 13, 14, 15, [2^4.0], [2^4.1], [2^4.2], [2^4.3], …, [N/4], where the square bracket symbol [ ] denotes rounding down to the closest integer and N is the length of the time series. The power-law exponent Hu from Eq. (15) is estimated by linear regression of log(R l /S l ) versus log(l/2). The errors here (fluctuations around the best-fit line) are multiplicative and, therefore, we use linear regression of the log-transformed data (vs. ordinary nonlinear regression of the data itself) as an unbiased estimate of the power-law exponent. In Appendix 5 we discuss the choice of fitting technique used along with simulations of the resultant bias when different techniques are considered. In addition to Hurst (R/S), for three other techniques used in this paper (semivariogram, detrended fluctuation, and power spectral analyses), we estimate the best-fit power law to a given set of measured data by using a linear regression of the log-transformed data.
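A minimal R sketch of this procedure is given below; the function name rs_beta and the exact construction of the segment-length set (8 to 15, then rounded powers 2^4.0, 2^4.1, … up to N/4) are our reading of the description above, not the supplementary R program.

```r
# Hedged sketch of rescaled range (R/S) analysis and the regression of Eq. 15.
rs_beta <- function(x) {
  N <- length(x)
  s <- cumsum(x)                                           # aggregated (running-sum) series
  lens <- unique(c(8:15, floor(2^seq(4, log2(N / 4), by = 0.1))))
  rs <- sapply(lens, function(l) {
    rs_m <- sapply(seq_len(floor(N / l)), function(m) {
      idx <- ((m - 1) * l + 1):(m * l)
      c(R = max(s[idx]) - min(s[idx]),                     # range of the aggregated values
        S = sd(x[idx]))                                    # standard deviation of the original values
    })
    mean(rs_m["R", ]) / mean(rs_m["S", ])                  # R_l / S_l (means taken separately)
  })
  Hu <- unname(coef(lm(log(rs) ~ log(lens / 2)))[2])       # slope of log(R/S) vs log(l/2), Eq. 15
  c(Hu = Hu, beta = 2 * Hu - 1)                            # beta_Hu = 2 Hu - 1
}
```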
Hurst (R/S) analysis has been examined in many investigations (e.g., Bassingthwaighte and Raymond 1994, 1995; Taqqu et al. 1995; Caccia et al. 1997; Cannon et al. 1997; Pilgram and Kaplan 1998; Malamud and Turcotte 1999a; Weron 2001; Eke et al. 2002; Mielniczuk and Wojdyłło 2007; Boutahar 2009). Through these studies, it has become apparent that rescaled range analysis can lead to significantly biased results. In order to diminish this problem, several modifications have been proposed, including the following:
Anis–Lloyd correction (Anis and Lloyd 1976) is a correction term for Hu (see Eq. 15) that compensates for the bias caused by small values of the time series length N. It is optimized for white noises (β = 0).
Lo's correction (Lo 1991) which incorporates the autocovariance.
Detrending (Caccia et al. 1997).
Bias correction (Mielniczuk and Wojdyłło 2007).
We will quantify the bias using rescaled range analyses, under a variety of conditions, in our results (Sect. 7).
Semivariogram Analysis
In Sect. 3 we discussed that, in the case of a stationary fractional noise (−1 < β < 1), there is a power-law dependence of the autocorrelation function on lag, C(τ) ~ τ^(−ν) (Eq. 6), with power-law coefficient ν = 1 − β. However, it is difficult to use the autocorrelation function for estimating the strength of long-range dependence β. This is because there are a considerable number of negative values for the autocorrelation function C, and therefore, a linear regression of the logarithm of the autocorrelation function C(τ) versus the logarithm of the lag τ is not possible. Finding the best-fit power-law function for C(τ) as a function of τ comes with some technical difficulties (particularly compared to linear regression) such as how to choose good initial values for ν, and choosing appropriate weights and convergence criteria for the nonlinear regression. Because our focus is on methods that avoid such technical difficulties, we did not use the autocorrelation function to gain information about β.
For non-stationary fractional time series, in other words, fractional motions (β > 1), it is inappropriate to use the autocorrelation function, because C(τ) (Eq. 3) has the mean, \( \bar{x} \), in its definition. An alternative way to measure long-range correlations is the semivariogram (Matheron 1963). The semivariogram, γ(τ), is given by
$$ \gamma \left( \tau \right) = \frac{1}{{2\left( {N - \tau } \right)}}\sum\limits_{t = 1}^{N - \tau } {\left( {x_{t + \tau } - x_{t} } \right)^{2} } , $$
where τ is the time lag between two values. Note that neither the sample mean, \( \bar{x} \), nor the sample variance, \( \sigma_{x}^{2} \), is used in defining the semivariogram. For a fractional motion (β > 1), the semivariogram, γ(τ), scales with τ, the lag,
$$ \gamma \left( \tau \right)\sim \tau^{ 2Ha} , $$
where Ha is the Hausdorff exponent and Ha = (β − 1)/2 (Burrough 1981; Burrough 1983; Mark and Aronson 1984). The Hausdorff exponent, Ha, is a measure of the strength of long-range persistence for fractional motions for which 0 ≤ Ha ≤ 1. Semivariogram analysis is illustrated for a fractional log-normal motion with β model = 1.0 in Fig. 14b.
Semivariogram analysis is widely applied in the geoscientific and ecologic communities; examples include the following:
Landscapes (Burrough 1981).
Soil variations (Burrough 1983).
Rock joint profiles (Huang et al. 1992).
Advective transport (Neuman 1995).
Evaluation of different management systems on crop performance (Eghball and Varvel 1997).
In this paper, we have chosen for our semivariogram analysis values for lag τ that are the same as those used for lengths l in (R/S) analysis, as described in the previous section. This is done to facilitate comparison between the different techniques. The Hausdorff exponent, Ha, is the power-law exponent in Eq. (17) and derived by linear regression of the logarithm of the semivariogram, log(γ(τ)), versus the logarithm of the lag, log(τ) (see Appendix 5 for discussion of the type of technique used for power-law fitting). General discussions of methods used to estimate Ha and other persistence measures for time series have been given by Schepers et al. (1992) and Schmittbuhl et al. (1995).
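A corresponding R sketch for the semivariogram is given below (function name ours; the lag set mirrors the segment lengths used for R/S, as stated above).

```r
# Hedged sketch of the semivariogram (definition above) and the regression giving Ha (Eq. 17).
semivariogram_beta <- function(x) {
  N <- length(x)
  lags <- unique(c(8:15, floor(2^seq(4, log2(N / 4), by = 0.1))))
  gam <- sapply(lags, function(tau)
    sum((x[(tau + 1):N] - x[1:(N - tau)])^2) / (2 * (N - tau)))
  Ha <- unname(coef(lm(log(gam) ~ log(lags)))[2]) / 2      # gamma ~ tau^(2 Ha)
  c(Ha = Ha, beta = 2 * Ha + 1)                            # beta_Ha = 2 Ha + 1
}
```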
Detrended Fluctuation Analysis (DFA)
Detrended fluctuation analysis, like (R/S) analysis, is based on examining the aggregate (running sum, Eq. 10) of the time series as a function of segment length and was introduced as fluctuation analysis by Peng et al. (1994) for studying long-term correlations in DNA sequences. Kantelhardt et al. (2001) improved on this technique by generalizing the function through which the trend is modelled from linear to polynomial functions. Detrended fluctuation analysis is very popular and has been applied to characterize long-term correlations for time series in many different disciplines. Examples include the following:
DNA sequences (Peng et al. 1993b, 1994).
Solar radio astronomy (Kurths et al. 1995).
Heart rate variability (Peng et al. 1993a; Penzel et al. 2003).
River run-off series (Koscielny-Bunde et al. 2006).
Long-term weather records and simulations (Fraedrich and Blender 2003).
Fluctuation analysis (Sect. 3.3) is based on analyses of the original time series x t and exploits the scaling properties of the fluctuation function (Eq. 9). Detrended fluctuation analysis is based on analyses of the aggregate (running sum) s t , and the idea is that there is a trend superimposed on a given self-affine fractional noise or motion that must be taken out (i.e. the signal should be detrended). For each segment, this trend is modelled as the best-fitting polynomial function with a given degree k. Then, the values in the mth segment with length l, \( s_{{\left( {m - 1} \right)l + t^{\prime } }} ,\, t^{\prime } = 1,\;2, \ldots ,l \), are detrended by subtracting the best-fit polynomial function for that segment, \( p[k]_{{\left( {m - 1} \right)l + t^{\prime } }}, \, t^{\prime } = 1,2, \ldots ,l \). The detrended values are \( \tilde{s}_{{\left( {m - 1} \right)l + t^{\prime } }} = s_{{\left( {m - 1} \right)l + t^{\prime } }} - p[k]_{{\left( {m - 1} \right)l + t^{\prime } }}, \, t^{\prime } = 1,2, \ldots ,l, \) and the square of the fluctuation of the detrended segments of length l is evaluated in terms of their mean variance; similar to Eq. (8) this gives:
$$ F_{\text{DFA}}^{2} \left( l \right) = \frac{1}{[N/l]}\sum\limits_{i = 0}^{[N/l] - 1} {\sigma^{2} \left[ {\tilde{s}_{il + 1} ,\tilde{s}_{il + 2} , \ldots ,\tilde{s}_{il + l} } \right]} . $$
For Gaussian-distributed fractional noises and motions, the fluctuation function, F DFA, has been mathematically shown (Taqqu et al. 1995) to scale with the length of the segments, l, as
$$ F_{\text{DFA}}^{{2}} \left( l \right)\sim \left( l \right)^{2\alpha } , $$
if the following conditions are fulfilled: (1) the segment length l and the time series length N go to infinity, (2) the quotient l/N goes to zero, and (3) the polynomial order of detrending is k = 1 (i.e. linear trends are subtracted). Hence, if the fluctuation is averaged over all segments and if this averaged fluctuation is considered as a function of the segment length l, for large segment lengths l the fluctuation approaches a power-law function with a power-law scaling coefficient of α. Taqqu et al. (1995) further showed that the power-law exponent in Eq. (19) is equivalent to (β + 1), so that
$$ \alpha = \left( {\beta + 1} \right)/2. $$
The outcome of detrended fluctuation analysis depends on the degree of the polynomial that models the underlying trend. If polynomials of order k are considered, then the resultant estimate of the long-range dependence is called DFAk (e.g., DFA1, DFA2, and DFA3). Detrended fluctuation analysis (DFA1 to DFA4) is illustrated for a fractional log-normal noise with β model = 1.0 in Fig. 14c.
Several authors have discussed potential limitations of detrended fluctuation analysis when applied to observational data that have attributes additional to that of just a 'pure' fractional noise or motion and a superimposed polynomial trend. For example, Hu et al. (2001) showed that an underlying linear, periodic, or power-law trend in the signal leads to a crossover behaviour (i.e. two scaling regimes with different exponents) in the scaling of the fluctuation function. Chen et al. (2002) discussed properties of detrended fluctuation analysis for different types of non-stationarity. In other studies, Chen et al. (2005) studied the effects on detrended fluctuation analysis of nonlinear filtering of the time series.
Guerrero and Smith (2005) have proposed a maximum likelihood estimator that provides confidence intervals for the estimated strength of long-range persistence. Marković and Koch (2005) demonstrated that periodic trend removal is an important prerequisite for detrended fluctuation analysis studies. Gao et al. (2006) and Maraun et al. (2004) have discussed the misinterpretation of detrended fluctuation analysis results and how to avoid pitfalls in the assessment of long-range persistence. Kantelhardt et al. (2003) have generalized the concept of detrended fluctuation analysis such that multifractal properties of time series can be studied. Detrended moving average (DMA) analysis is very similar to detrended fluctuation analysis, but the underlying trends are not assumed to be polynomial.
Within this paper, we restrict our studies to DFA2; in other words, quadratic trends are removed. Further, we have applied the same set of segment lengths as for Hurst rescaled range analysis (R/S): l = 8, 9, 10, 11, 12, 13, 14, 15, [2^4.0], [2^4.1], [2^4.2], [2^4.3], …, [N/4], where [ ] denotes rounding down to the closest integer and N is the length of the time series. This set of segment lengths was chosen carefully and optimized for DFA2, by balancing the number of segment lengths to be (1) as high as possible to have a precise estimate for β DFA and (2) as few as possible to have low computational costs. To further explore the segment length set chosen, we contrasted analyses using our chosen set (l = 8, 9, 10, 11, 12, 13, 14, 15, [2^4.0], [2^4.1], [2^4.2], [2^4.3], …, [N/4]) versus a 'complete' set (l = 3, 4, 5, …, N/4). We applied DFA2, using these two sets of segment lengths, on a fractional noise with strength of long-range persistence β = 0.5 and time series lengths N = 512, 1,024, 2,048, or 4,096. We found that the random error of the results from DFA2 using the segment length set chosen was as small as for the complete set of segment lengths. In our final analyses, ordinary linear regression (see Appendix 5) has been applied to the associated values of \( \log (F_{\text{DFA}}^{2}) \) versus log(l), and the slope of the best-fit linear model gives 2α, from which we obtain the strength of long-range persistence (β DFA = 2α − 1).
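A minimal R sketch of DFA2 as described above is given below (function name ours; lm with an orthogonal quadratic polynomial is used for the segment-wise detrending, which is one of several equivalent implementations).

```r
# Hedged sketch of detrended fluctuation analysis with quadratic detrending (DFA2);
# the fluctuation function scales as F^2 ~ l^(2 alpha) (Eq. 19), with alpha = (beta + 1)/2.
dfa2_beta <- function(x) {
  N <- length(x)
  s <- cumsum(x)                                           # profile (aggregated series)
  lens <- unique(c(8:15, floor(2^seq(4, log2(N / 4), by = 0.1))))
  F2 <- sapply(lens, function(l) {
    mean(sapply(seq_len(floor(N / l)), function(m) {
      idx <- ((m - 1) * l + 1):(m * l)
      mean(resid(lm(s[idx] ~ poly(1:l, 2)))^2)             # variance of the detrended segment
    }))                                                    # mean over all segments
  })
  alpha <- unname(coef(lm(log(F2) ~ log(lens)))[2]) / 2    # slope of log(F^2) vs log(l) is 2 alpha
  c(alpha = alpha, beta = 2 * alpha - 1)
}
```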
Other Time Domain Techniques for Examining Long-Range Persistence
Here we discuss two other time domain methods that can be used to examine long-range persistence: (1) first-return and multi-return probability and (2) fractal geometry.
First-return and multi-return probability methods. The timings of threshold crossings are another feature sensitive to the strength of long-range dependence. The first-return probability method (Hansen et al. 1994) considers a given 'height' of the y-axis, which we will call h. It is based on the probability, conditional on starting at h, of exceeding h after a time τ (with no other crossing between t and t + τ). This probability scales with the lag τ as a power law. Alternatively, a multi-return probability (Schmittbuhl et al. 1995) can be studied (crossings between t and t + τ are allowed), which also results in a power-law scaling as a function of the lag τ. Both power-law exponents are related to the strength of long-range persistence, β. These return probability methods work for the stationary case, that is, –1 < β < 1, and for thin-tailed one-point probability distributions. For heavy-tailed, one-point probability distributions, the power-law exponent depends also on the tail parameter.
Fractal geometry methods. These techniques are based on describing the fractal geometry (fractal dimension) of the graph of a fractional noise. By definition, a self-affine, long-range persistent time series (fractional noises and motions) has self-affine fractal geometry, with fractal dimensions constrained between D = 1.0 (a straight line) and 2.0 (space filling time series) (Mandelbrot 1985). The oldest of fractal geometry methods is the divider/ruler method (Mandelbrot 1967; Cox and Wang 1993) that measures the length of the graph of a fractal curve either at different resolutions or by walking a given length stick along the curve. The evaluated curve length depends on the resolution/stick length, and the shorter the length of the stick used, the longer the curve. The resultant power-law relationship of curve length as a function of stick length results in a power-law exponent which is the fractal dimension D or the strength of persistence β, respectively. However, appropriate care must be taken, as the vertical and horizontal coordinates can scale differently (e.g., different types of units). See Voss (1985) and Malamud and Turcotte (1999a) for discussion. After appropriately adjusting the vertical and horizontal coordinates of the time series, other fractal dimensions that are determined directly using geometric methods include the box counting dimension, the correlation dimension (Grassberger and Procaccia 1983; Osborne and Provenzale 1989), and the Kaplan–Yorke dimension (Kaplan and Yorke 1979; Wolf et al. 1985). Note that the application of different types of fractal dimensions to a time series leads to quantitatively different results: for instance, for a fractional motion (1 < β < 3), the divider/ruler dimension is D divider/ruler = (5 – β)/2 (Brown 1987; De Santis 1997), while the correlation dimension is D corr = 2/(β – 1) (Theiler 1991), so one must be careful about 'which' dimension is being referred to. It might be necessary to embed the time series into a higher-dimensional space (Takens 1981) in order to extract the dimension of the time series, which in this context is the dimension of the attractor of the system from which the time series was measured. A number of the fractal dimension estimate techniques that have been discussed in this paragraph require very long and stationary time series.
We have in this section explored time domain techniques for measuring the strength of long-range persistence. The major relationships between β and other power-law scaling exponents (autocorrelation, rescaled range, semivariogram, and fluctuation function) are summarized in Table 3. We will now consider frequency-domain techniques.
Table 3 Table of scaling exponents
Frequency-domain Techniques for Measuring the Strength of Long-Range Persistence: Power Spectral Analysis
It is common in the Earth Sciences and other disciplines to examine the strength of long-range persistence in self-affine time series by first transforming the data from the time domain into the frequency (spectral) domain, using techniques such as the Fourier, Hilbert, or wavelet transforms. Here we will use the Fourier transform with two methods of estimation.
The Fourier Transform and Power Spectral Density
The Fourier transformation X k , k = 1, 2, …, N, of an equally spaced time series x t , t = 1, 2, …, N, results in an equivalent representation of that time series in the frequency domain. It is defined as:
$$ X_{k} = \Delta \sum\limits_{t = 1}^{N} {x_{t} } e^{2\pi itk/N} ,\quad k = 1,2, \ldots ,N , $$
where Δ is the length of the sampling interval (including units) between successive x t and i is the square root of −1. The resultant Fourier coefficients X k are complex numbers. They are symmetric in the sense that X k is the complex conjugate of X N−k . The Fourier coefficients X k , k = 1, 2, …, N, are associated with frequencies f k = k/(NΔ).
The linear correlations of x t will be represented by the periodogram S (Priestley 1981):
$$ S_{k} = \frac{{2\left| {X_{k} } \right|^{2} }}{N\Delta },\quad k = 1,2, \ldots ,\frac{N}{2} , $$
with the complex coefficients X k resulting from the discrete Fourier transform (Eq. 21) and | | denoting the modulus. The periodogram is a frequently used estimator of the power spectral density of the underlying process; in this paper we will not distinguish between the terms 'power spectral density' and 'periodogram' and will use both synonymously. By using fast Fourier transform (FFT) implementations such as the Cooley–Tukey algorithm (Cooley and Tukey 1965), the power spectral density S can be computed with little computational cost.
For a fractional (self-affine) noise, the power spectral density, S k , has a power-law dependence on the frequency for all f k (Beran 1994)
$$ S_{k} \sim f_{k}^{ - \beta } ,\quad k = 1,2, \ldots ,\frac{N}{2}. $$
This is the same as Eq. (7) but for all f, not just the limit as f → 0. The graph of S vs f is also known as the periodogram (and sometimes called a spectral plot).
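A minimal R sketch of Eqs. (21) to (23) is given below; the function name is ours, and the sampling interval Δ is assumed to be 1 unless specified.

```r
# Hedged sketch of the periodogram (Eq. 22) for frequencies f_k = k/(N Delta), k = 1, ..., N/2.
periodogram <- function(x, Delta = 1) {
  N <- length(x)
  X <- Delta * fft(x)                            # discrete Fourier transform (Eq. 21)
  k <- 1:(N / 2)
  list(f = k / (N * Delta),                      # frequencies
       S = 2 * Mod(X[k + 1])^2 / (N * Delta))    # power spectral density S_k (Eq. 22)
}
pg <- periodogram(fgn)                           # for a fractional noise, S ~ f^(-beta) (Eq. 23)
```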
Detrending and Windowing
The discrete Fourier transform as defined in Eq. (21) is designed for 'circular' time series (i.e. the last and first values in the time series 'follow' one another) (Percival and Walden 1993). In order to reduce non-desirable effects on the Fourier coefficients caused by the large values of the absolute difference of the first and the last time series element, |x N – x 1|, which typically occurs for non-stationary time series and in particular for fractional motions (β > 1), detrending and windowing can be carried out. One example of these non-desirable effects is spectral domain leakage (for a comprehensive discussion, see Priestley 1981; Percival and Walden 1993). Leakage is a term used to describe power associated with frequencies that are non-integer k in Eq. (22) becoming distributed not only to their own bin, but also 'leaking' into other bins. The resultant leakage can seriously bias the resultant power spectral density distribution. To reduce this leakage we will both detrend and window the original time series before doing a Fourier analysis.
Many statistical packages and books recommend removing the trend (detrending) and removing the mean of a time series before performing a Fourier analysis. The mean of a time series can be set equal to 0 and the variance normalized to 1; this will not affect the shape of the resulting power spectral density and hence the estimate of β. However, detrending is controversial and, therefore, care should be taken. One way of detrending (which we use here before applying Fourier analysis) is to take the best-fit straight line to the time series and subtract it from all the values. Another way of detrending is to connect a line from the first point to the last point and subtract this line from the time series, forcing x 1 = x N . If a time series shows a clear linear trend, where the series appears to be closely scattered around a straight line, the trend can be safely removed without affecting any but the lowest frequencies in the power spectrum. However, if there is no clear trend, detrending can cause the statistics of the periodogram (in particular the slope) to change.
Windowing (also called tapering, weighting, shading, and fading) involves multiplying the N values of a time series, x t , t = 1, 2, …, N, by the N values of the 'window', w t , t = 1, 2, …, N, before computing the Fourier transform. If w t = 1 for all t, then w t is a rectangular window and the original series is left unmodified. The window is normally constructed to change gradually from zero to a maximum to zero as t goes from 1 to N. Many books discuss the mechanics of how and which windows to use, including Press et al. (1994) and Smith and Smith (1995). We apply a commonly used window, the Welch window:
$$ w_{t} = 1 - \left( {\frac{t - (N/2)}{N/2}} \right)^{2} ,\quad t = 1,2, \ldots ,N. $$
An example of the Welch window applied to a fractional log-normal noise with a coefficient of variation of c v = 0.5 and β model = 2.5 is given in Fig. 15. In Fig. 15a we show the original time series and in Fig. 15b the Welch window (grey area) and the time series after normalization (subtracting out the mean and dividing by the variance, to give mean 0 and variance 1) and application of the Welch window.
Pre-processing of a time series and the effect of windowing. a The original time series, a fractional log-normal noise with a coefficient of variation of c v = 0.5 and β model = 2.5. Also shown (horizontal dashed line) is the mean of the values. b Time series shown in (a) after normalizing (to sample mean \( \bar{x} = 0 \) and sample standard deviation \(\sigma \) x = 1) and application of a Welch window (grey area) (Eq. 24). We then apply power spectral analysis to both (a) and (b). In (c) are shown the power spectral densities as a function of frequency for the original time series and in (d) the same for the normalized and windowed time series, both on logarithmic axes. For both periodograms are given the best-fit power-law exponents: (c) original time series β PS = 1.86; (d) time series with Welch window applied: β PS = 2.43. The overall shapes of the two periodograms are very similar, while the individual values differ
The Fourier coefficients (Eq. 21) are then given by:
$$ X_{k} = \Delta \sum\limits_{t = 1}^{N} {w_{t} x_{t} } e^{2\pi itk/N} ,\quad k = 1,2, \ldots ,N. $$
Windowing significantly reduces the leakage when Fourier transforms are carried out on self-affine time series, particularly for those with high positive β values (i.e. above β = 2). See Percival and Walden (1993) for a discussion of windowing, and Malamud and Turcotte (1999a) for a discussion of windowing applied to fractional noises and motions.
The variance of x t will be different from the variance of (w t x t ); this will affect the total power (variance) in the periodogram, and the amplitude of the power spectral density function will be shifted. One remedy is to normalize the time series x t so it has a mean of 0, calculate the Fourier coefficients X k based on (Eq. 25), and then calculate the final S k using
$$ S_{k} = \frac{1}{{W^{2} }}\left[ {\frac{{2\left| {X_{k} } \right|^{2} }}{N\Delta }} \right],\quad k = 1,2, \ldots ,\frac{N}{2} $$
(26a)
$$ W^{2} = \frac{1}{N}\sum\limits_{t = 1}^{N} {\left( {w_{t} } \right)^{2} } . $$
(26b)
This will normalize the variance of (w t x t ) such that it now has the variance of the original unwindowed time series x t .
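The detrending, Welch windowing, and renormalization steps of Eqs. (24) to (26) can be sketched in R as follows (function name ours; straight-line detrending is the first of the two options described above).

```r
# Hedged sketch: detrend, apply the Welch window (Eq. 24), Fourier transform (Eq. 25),
# and renormalize the periodogram by W^2 (Eqs. 26a, 26b).
windowed_periodogram <- function(x, Delta = 1) {
  N <- length(x)
  t <- 1:N
  x <- resid(lm(x ~ t))                          # subtract the best-fit straight line (mean becomes 0)
  w <- 1 - ((t - N / 2) / (N / 2))^2             # Welch window
  X <- Delta * fft(w * x)                        # Fourier transform of the windowed series
  W2 <- mean(w^2)                                # W^2 (Eq. 26b)
  k <- 1:(N / 2)
  list(f = k / (N * Delta),
       S = (2 * Mod(X[k + 1])^2 / (N * Delta)) / W2)   # Eq. 26a
}
```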
In the next two sections, we describe two techniques commonly found in the time series analysis literature for finding a best-fit power law to the power spectral density (in our case, the strength of long-range persistence β in Eq. 23) and will also present the result of the power spectral analysis applied to the windowed and unwindowed time series examples discussed above.
Estimators Based on Log-regression of the Power Spectral Densities
The strength of long-range persistence can be directly measured as a power-law decay of the power spectral density (Geweke and Porter-Hudak 1983). Robinson (1994, 1995) showed that the performance of this technique is similar for non-Gaussian and Gaussian distributed data series. However, in the case of non-Gaussian one-point probability distributions, the uncertainty of the estimate might become larger (depending on the distribution), compared to Gaussian distributions.
If the power spectral density S (Eqs. 22, 26a) is expected to scale over the entire frequency range (and not just for frequencies f → 0) with a power law, \( S(f)\sim f^{ - \beta } \), then the power-law coefficient, β, can be derived by (non-weighted) linear regression of the logarithm of the power spectral density, log(S), versus the logarithm of the frequency, log(f). Although this estimator appears simplistic (at least in comparison with the MLE estimator presented in the next section), it nevertheless has small biases in estimating β, along with tight confidence intervals, and is broadly applicable to time series with asymmetrical one-point probability distributions (Velasco 2000). In Appendix 5 we discuss in detail the use of ordinary linear regression of the log-transformed data versus nonlinear least-squares regression of the non-transformed data. Power spectral analysis, using linear regression of the log-transformed data, is illustrated for a fractional log-normal noise with β model = 1.0 in Fig. 14d; the corresponding estimator is called β PS(best-fit).
We return to the effect of windowing on spectral analysis and in Fig. 15c show the results of power spectral analysis applied to a realization of an original log-normal fractional motion (c v = 0.5, β model = 2.5) and in Fig. 15d on the windowed version of this realization (time series). The power spectral analysis of the unwindowed time series results in a best-fit power-law exponent (using linear regression of log(S) vs. log(f)) of β PS = 1.86, and for the windowed time series β PS = 2.43. The power spectral analysis of the windowed time series has significantly less bias than power spectral analysis of the unwindowed time series.
Above, we are using detrending and windowing to reduce the leakage in the Fourier domain. For the purposes of this paper, we are interested in finding the estimator for a 'single' realization of the process, that is, producing the power spectral densities for a given realization, and finding the best estimator for these (we will discuss this in Sect. 6.4). If one is more interested in the spectral densities of the process (i.e. the average over an ensemble of realizations), then other techniques are more appropriate. For example, some authors take a single realization and break it up into smaller segments, then compute the power spectral densities for each segment, and average over them, thus resulting in less scatter of the densities, but not covering the same frequency range as for the single realization considered as a whole (see for instance Pelletier and Turcotte 1999). Other versions include not breaking up the single realization into orthogonal segments, but rather non-orthogonal (overlapping) segments (e.g., Welch's Overlapped Segment Averaging technique, Mudelsee 2010). Another method includes taking a single realization of a process and binning the frequency range into octave-like frequency bands where linear regression is done for the mean of the logarithm of the power (per octave) versus the mean logarithm of the frequency in that band. Taqqu et al. (1995), however, have shown that this binning-based regression dramatically increases the uncertainties (random error) of the estimate of β.
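The resulting estimator β PS(best-fit) is then a single ordinary linear regression on logarithmic axes; a minimal R sketch (building on the periodogram sketches above) is:

```r
# Hedged sketch of the log-periodogram regression estimator: S ~ f^(-beta), so beta = -slope.
beta_ps_bestfit <- function(x) {
  pg <- windowed_periodogram(x)
  -unname(coef(lm(log(pg$S) ~ log(pg$f)))[2])
}
```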
Maximum Likelihood Estimators
Maximum likelihood estimators (MLEs) (Fisher 1912) have been developed for parametric models of the power spectral density or autocorrelation function (Fox and Taqqu 1986; Beran 1994). For Eq. (23), an MLE equation that depends on the parameters of the power spectral density is required, with maximum likelihood giving the best-fit estimators. These techniques assume Gaussian or Levy-distributed time series and, in particular, a one-point probability distribution that is symmetrical. Maximum likelihood estimators have the advantage, when compared with log-periodogram regression, that they not only output an estimate of the strength of long-range persistence, but also provide a confidence interval based on the Fisher information (the expected value of the observed information) of the estimated parameter. The Whittle estimator (Whittle 1952) is a maximum likelihood estimator for deriving the strength of long-range persistence from the power spectral density.
In our analyses, we applied an approximation of the Whittle maximum likelihood function (Beran 1994). This likelihood function L depends on the following:
The power spectral density, S k (Eqs. 22, 26a), versus the frequency f k (k = 1, 2, …, N/2) of the original time series x t (t = 1, 2, …, N).
The MLE model chosen; here, \( \tilde{S}_{{c,{\kern 1pt} \beta }} (f) = c\,f^{ - \beta } \) is used as a model for the power spectral density S k (k = 1, 2, …, N/2) and has two parameters: the strength of long-range persistence, β, and a factor c, both of which will be evaluated by the MLE.
The maximum likelihood function L, which evaluates our power-law model of the power spectral density, S c,β , has a dependence on the two parameters, c and β, and is given by Beran (1994):
$$ L\left( {c,\beta } \right) = 2\left( {\sum\limits_{j = 1}^{N/2} {\log \left( {\tilde{S}_{c,\beta } \left( {f_{j} } \right)} \right)} + \sum\limits_{j = 1}^{N/2} {\left( {S_{j} /\tilde{S}_{c,\beta } \left( {f_{j} } \right)} \right)} } \right). $$
The function L needs to be minimized as a function of the parameters c and β. In other words, L (Eq. 27) is calculated for one set of values for (c, β), and then for other pairs of (c, β) that are systematically chosen, and the minimum value of L is obtained. The corresponding β min is the estimated strength of long-range dependence β PS(Whittle). This function minimization is illustrated in Fig. 16a, where the maximum likelihood function, L (Eq. 27), is calculated for four realizations of a process created to have a log-normal probability distribution (c v = 0.5, Box–Cox transform), β model = 0.8, and four different time series lengths, N = 512, 1,024, 2,048, and 4,096. The value β where the minimum occurs is β PS(Whittle) = 0.74. As a lower bound of the random error \(\sigma \)(β PS(Whittle)), the Cramér–Rao bound (CRB) (Rao 1945, Cramér 1946) is obtained by evaluating the second derivative of the likelihood function L (Eq. 27):
$$ CRB\left( {\beta_{\text{PS(Whittle)}} } \right) = \left( {\frac{{{\text d}^{2} L}}{{\text d\beta^{2} }}\left( {\beta_{\text{PS(Whittle)}} } \right)} \right)^{ - 0.5} . $$
This is illustrated in Fig. 16b, where the CRB from Eq. (28) is calculated as a function of long-range persistence strength, β. The value at β PS(Whittle) allows for the calculation of the Cramér–Rao bound that is a lower bound for the standard deviation of the estimated strength of long-range dependence. We have discussed here the case of a best-fit power-law exponent using a MLE and the assumption that the original time series is self-affine (where Eq. (7) holds for all f). There are also MLE techniques (Geweke and Porter-Hudak 1983; Beran 1994; Guerrero and Smith 2005) for fitting power spectral densities when the time series shows asymptotic power-law behaviour (i.e. as f → 0).
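A minimal R sketch of the Whittle estimator of Eqs. (27) and (28) is given below. The function name, the search interval for β, the use of the unwindowed periodogram, and the closed-form profiling of the factor c (for fixed β the likelihood is minimized by c equal to the mean of S divided by the model) are our simplifications for illustration.

```r
# Hedged sketch of the Whittle estimator for the model S(f) = c f^(-beta).
beta_ps_whittle <- function(x) {
  pg <- periodogram(x)                                   # unwindowed periodogram (Eq. 22)
  L <- function(beta) {                                  # approximate Whittle likelihood (Eq. 27)
    model <- pg$f^(-beta)
    c_hat <- mean(pg$S / model)                          # optimal factor c for this beta
    2 * sum(log(c_hat * model) + pg$S / (c_hat * model))
  }
  beta_hat <- optimize(L, interval = c(-1, 4))$minimum   # minimize L over beta
  h <- 1e-3                                              # numerical second derivative for the CRB (Eq. 28)
  d2L <- (L(beta_hat + h) - 2 * L(beta_hat) + L(beta_hat - h)) / h^2
  c(beta = beta_hat, CRB = d2L^(-0.5))
}
```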
Whittle estimator and its corresponding maximum likelihood function. a Maximum likelihood function, L (Eq. 27), given as a function of persistence strength, β. The function L is based on the power spectral density of four realizations of a process created to have a log-normal probability distribution (c v = 0.5, Box–Cox transform), β model = 0.8, and four different time series lengths, N = 512, 1,024, 2,048, and 4,096. The value β where the minimum occurs is β PS(Whittle) = 0.74. b The second derivative, d2 L/dβ 2, of the maximum likelihood function (shown in a) is presented, a function of persistence strength, β. The value of d2 L/dβ 2 at β PS(Whittle) = 0.74 allows for the calculation of the Cramér–Rao bound (CRB) (Eq. 28) that is a lower bound for the standard error
Results of Performance Tests
We have been interested in how accurately the considered techniques measure the strength of long-range persistence in a time series. We have applied these techniques to many realizations of fractional noises and motions with well-defined properties, and after discussing systematic and random errors in the context of a specific example (Sect. 7.1) and confidence intervals (Sect. 7.2), we will present the overall results of our performance tests and the results of other studies (Sect. 7.3), along with reference to the supplementary material which contains all of our results. We will then give a brief summary description of the results of each performance test: Hurst rescaled range (R/S) analysis (Sect. 7.4), semivariogram analysis (Sect. 7.5), detrended fluctuation analysis (Sect. 7.6), and power spectral analysis (Sect. 7.7).
Systematic and Random Error
We now discuss systematic and random error in the context of an example of applying a given technique to our benchmark time series. We apply the fluctuation function (resulting from DFA2, see Sect. 5.3) to 1,000 realizations of fractional log-normal noises (coefficient of variation of c v = 0.5, time series length N = 1,024, β model = 0.8, Box–Cox transform construction). Ten examples of these are given in Fig. 17a, where we see that the ten DFA fluctuation functions are similar but not identical. For the 1,000 realizations, the normalized histogram of the resultant estimates of the strength of long-range persistence, β DFA, is given in Fig. 17b. We observe the normalized histogram can be well approximated by a Gaussian distribution with mean value \( \bar{\beta }_{\text{DFA}} \) and standard deviation σ(β DFA). These DFA performance test results from Fig. 17 can be considered in the context of systematic error (bias) and random error (standard deviation); in Sect. 7.2 we will also consider these DFA results in the context of confidence intervals.
Illustration of systematic and random errors using detrended fluctuation analysis. a Detrended fluctuation analysis with quadratic trend removed (DFA2) for ten realizations of fractional log-normal noises with a coefficient of variation of c v = 0.5 and N = 1,024 elements. The modelled strength of long-range persistence is β model = 0.8. b Normalized histogram of β DFA obtained from 1,000 realizations of fractional log-normal noises (same parameters as for a). The systematic error is the sample mean \( \bar{\beta }_{\text{DFA}} \) minus the persistence strength of the process, β model. The random error \(\sigma \)(β DFA) is given by the horizontal arrow
The systematic error in this DFA example is the difference between the modelled strength of persistence and the mean value of the Gaussian distribution, \( \bar{\beta }_{\text{DFA}} - \beta_{\text{model}} \). The systematic error of a particular technique in general is given by the bias:
$$ {\text{bias}} = \bar{\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}} - \beta_{\text{model}} . $$
The bias or systematic error depends not only on β model but also on the technique, the one-point probability distribution, and the time series length N.
The performance of a technique is further described by the random error of the considered technique. In our DFA example (Fig. 17) we have used the standard deviation σ x (β DFA) of the sample values around the mean for quantifying the fluctuations of β DFA. In this paper we will measure the random error of a technique by the standard deviation σ x (\({\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}}\)), which is called in the statistics literature the standard error of the estimator (Mudelsee 2010). The random error can be determined from many realizations of a process modelled to have a set of given parameters. If, however, just a single realization of the process is given, the random error σ x (\({\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}}\)) can be derived in various ways, such as bootstrapping and jackknifing (Efron and Tibshirani 1993; Mudelsee 2010), or in case of a maximum likelihood estimator by the Cramér–Rao bound (Rao 1945; Cramér 1946). In this paper we will, in most cases, calculate the random error from an ensemble of model realizations, but we will also consider Cramér–Rao bounds (Sect. 6.4) and apply a benchmark-based improvement technique (Sect. 9).
A good measure of the persistence strength should have both of the following properties: very small systematic error (i.e. a bias approaching zero) and small random error (i.e. deviations around \( \bar{\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}} \) which are small). One can use both the systematic and random error to come up with a measure for the total error, the root-mean-squared error (RMSE) which is given by (Mudelsee 2010):
$$ \begin{aligned} RMSE & = \left( {\left( {{\text{systematic}}\;{\text{error}}} \right)^{2} + \left( {{\text{random}}\,{\text{error}}} \right)^{2} } \right)^{0.5} \\ & = \left( {\left( {\bar{\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}} - \beta_{\text{model}} } \right)^{2} + \left( {\sigma_{x} \left( {\beta_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}} } \right)} \right)^{2} } \right)^{0.5} . \\ \end{aligned} $$
For a detailed discussion of bias, standard error, standard deviation, RMSE, and confidence intervals, see Chapter 3 of Mudelsee (2010).
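The bookkeeping for such a performance test can be sketched in R as follows; for brevity the sketch uses the fractional Gaussian generator and the DFA2 estimator from the sketches above (rather than the log-normal noises of Fig. 17), and the function name is ours.

```r
# Hedged sketch: bias (systematic error), random error, and RMSE of an estimator
# applied to an ensemble of realizations with a prescribed beta_model.
performance <- function(beta_model = 0.8, N = 1024, n_real = 100,
                        estimator = function(x) dfa2_beta(x)["beta"]) {
  betas <- replicate(n_real, estimator(make_fractional_noise(N, beta_model)))
  bias <- mean(betas) - beta_model                  # systematic error
  serr <- sd(betas)                                 # random error (standard error of the estimator)
  c(bias = bias, random_error = serr, RMSE = sqrt(bias^2 + serr^2))
}
```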
Realizations of a process created to have a given strength of long-range persistence and one-point probability distribution can be contrasted with the underlying behaviour of the process itself where the parameter of a process is β model, in other words the desired β for the process. This process has realizations (the time series) which will have a distribution of their 'true' β values because of the finite-size effect (Peng et al. 1993b). We then measure these with a given technique, which itself has its own error, giving \({\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}}\). We are assuming the systematic error that is discussed here is based on the realizations having a Gaussian distribution and that we can get some handle on their 'true' distribution. We are also assuming that the techniques we are using reflect this, in addition to the bias in the techniques themselves. We will never know (except theoretically, if we have closed form equations) the true value of β for each realization of the process, just the parameter that we designed it for (i.e. β model), unless the realizations are taken for an infinite number of values, in which case they will asymptote to the true value of β. In other words, there will always be a finite-size effect on individual realizations. Given this finite-size effect, we can never know the exact true β for each realization, but instead what we are measuring is a measure of the technique and the finite-size effect of going from process to realization (i.e. the synthetic noises and motions we have created). We will now discuss confidence intervals within the framework of our DFA example.
Returning to Fig. 17, with our example of DFA applied to a log-normal noise (c v = 0.5, N = 1,024, β model = 0.8), we find that approximately 95 % of the values of β DFA lie in the interval \( \left[ {\bar{\beta }_{\text{DFA}} - 1.96\;\sigma_{x} \left( {\beta_{{\,{\text{DFA}}}} } \right),\,\bar{\beta }_{\text{DFA}} + 1.96\;\sigma_{x} \left( {\beta_{{\,{\text{DFA}}}} } \right)} \right] \), in other words, the 95 % confidence interval. In general, a sufficient number of values is needed to make a valid estimate of the interval boundaries within which 95 % of the values lie; some authors take this as 1,000 values or more (Efron and Tibshirani 1993). However, if the values follow a Gaussian distribution, the confidence interval boundaries can be computed directly from \( \bar{\beta }_{{\,{\text{measured}}}} \pm 1.96\;\sigma_{x} \left( {\beta_{{\,{\text{measured}}}} } \right) \), and Efron and Tibshirani (1993) have determined that, for Gaussian-distributed values, confidence intervals can be constructed from just 100 realizations. We note that there are a number of different ways of constructing confidence intervals for β measured, both theoretical (e.g., based on knowledge of the one-point probability distribution) and empirical (e.g., actually examining how many values for a given set of realizations of a process lie in a given interval, such as 95 %). The latter is known as the empirical coverage and is discussed in detail, along with various methods for the construction of confidence intervals, by Mudelsee (2010), who also discusses the use of empirical coverage studies in the wider literature. Here we do not determine the empirical coverage, but rather take the approach of first evaluating the normality of a given set of realizations of β measured (relative to a given β model), and then using this assumed normality to calculate the theoretical confidence interval.
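The construction of the 95 % confidence interval, and the empirical check that roughly 95 % of the values fall inside it, can be sketched in R as follows; beta_hat again stands in for the 100 β DFA values and is not data from this study.

```r
# Sketch: Gaussian-based 95 % confidence interval from 100 estimates, plus an
# empirical check of the fraction of estimates lying inside the interval.
set.seed(7)
beta_hat <- rnorm(100, mean = 0.8, sd = 0.1)   # placeholder beta_DFA values
lower <- mean(beta_hat) - 1.96 * sd(beta_hat)
upper <- mean(beta_hat) + 1.96 * sd(beta_hat)
c(lower = lower, upper = upper)
mean(beta_hat >= lower & beta_hat <= upper)    # close to 0.95 for Gaussian values
```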
Because we would like to calculate confidence intervals for our performance test results, based on only 100 realizations, we first need to determine whether the values are Gaussian (or close to Gaussian) distributed. We begin with three types of process constructed with Gaussian, log-normal, and Levy-distributed time series, and β model = 1.0. For each one-point probability distribution, and for time series lengths N = 256, 1,024, 4,096, and 16,384, we create 10^5 realizations, in other words, overall, 3 × 4 × 10^5 realizations. For each process created and time series length, we perform three analyses: PS(best-fit) (Fig. 18), DFA (Fig. 19), and rescaled range (R/S) (Fig. 20). Shown in each figure, for the three types of processes (a: Gaussian, b: log-normal, c v = 0.5, c: Levy, a = 1.5), and each of the time series lengths, are the results (shown in grey dots) of 5,000 of the 10^5 realizations. We show, using box and whisker plots (coloured boxes and symbols), the mean, median, and percentiles of the values within each set of realizations, along with the best-fit Gaussian distributions (solid black line).
Distribution of the estimated strength of long-range persistence using power spectral analysis (β PS(best-fit)) applied to realizations of fractional noises created with β model = 1.0, time series lengths N = 256, 1,024, 4,096, and 16,384, and three types of one-point probability distributions: a fractional Gaussian noises (FGN), b fractional log-normal noises (FLNN) (coefficient of variation c v = 0.5), c fractional Levy noises (FLevyN) (tail parameter a = 1.5). For each probability distribution type, 10^5 realizations of time series are created for each time series length N. In each panel (a) to (c), and for each length of time series N, are given box and whisker plots and best-fit Gaussian distributions for the 10^5 analysis results of β PS(best-fit) for the 10^5 realizations. Also shown (grey dots) are 5,000 of the 10^5 realizations. Each of the box and whisker plots gives the mean of the β PS(best-fit) values (white circle), the median (horizontal line in middle of the box), 25 and 75 % (box upper and lower edges), 5 and 95 % (ends of the vertical lines, i.e. the whiskers), 1 and 99 % (upper and lower triangles), and the minimum and maximum values (upper and lower horizontal bars). In (d) is given the skewness g for each of the distributions from (a) to (c)
Distribution of the estimated strength of long-range persistence using detrended fluctuation analysis (β DFA) applied to realizations of fractional noises created with β model = 1.0, time series lengths N = 256, 1,024, 4,096, and 16,384, and three types of one-point probability distributions: a fractional Gaussian noises (FGN), b fractional log-normal noises (FLNN) (coefficient of variation c v = 0.5), c fractional Levy noises (FLevyN) (tail parameter a = 1.5). In (d) is given the skewness g for each of the distributions from (a) to (c). See Fig. 18 caption for further explanation
Distribution of the estimated strength of long-range persistence using Hurst rescaled range (R/S) analysis (β Hu) applied to realizations of fractional noises created with β model = 1.0, time series lengths N = 256, 1,024, 4,096, and 16,384, and three types of one-point probability distributions: a fractional Gaussian noises (FGN), b fractional log-normal noises (FLNN) (coefficient of variation c v = 0.5), c fractional Levy noises (FLevyN) (tail parameter a = 1.5). In (d) is given the skewness g for each of the distributions from (a) to (c). See Fig. 18 caption for further explanation
Visually, we see that for normal and log-normal noises (Figs. 18a,b, 19a,b, 20a,b), the realizations are reasonably close to a Gaussian distribution. For the Levy realization results (Figs. 18c, 19c, 20c), these are only approximately Gaussian, although they are reasonably symmetric. In Figs. 18d, 19d, 20d is given the skewness for each of the distributions from panels (a) to (c) in each figure. For the normal and log-normal results, and the four lengths of time series considered, the skewness g is small (DFA: |g| < 0.10, R/S: |g| < 0.15); for the Levy results, there are strong outliers in Fig. 19c (DFA) and Fig. 20c (R/S), resulting in large skew (DFA: |g| < 3; R/S: |g| < 0.8), although this is not the case for Fig. 18c (PS(best-fit)) where in Fig. 18d |g| < 0.15. A Shapiro–Wilk test of normality (Shapiro and Wilk 1965) on the different sets of realizations shows that for the smaller values of skewness, in many cases, a Gaussian distribution cannot be rejected at the 0.05 level, whereas for the larger values of skewness (FLevyN using DFA and R/S) it is rejected. Although we recognize that some of our results are only approximately Gaussian, we will use a value of 100 total realizations for a given process created and technique applied, to calculate confidence intervals based on \( \bar{\beta }_{{\,{\text{measured}}}} \pm 1.96\;\sigma_{x} \left( {\beta_{{\,{\text{measured}}}} } \right) \). The size of the 95 % confidence interval of the technique is 3.92 times the standard deviation (random error) of the technique.
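A minimal R sketch of this normality check (moment-based skewness plus a Shapiro–Wilk test) is given below; beta_hat is a hypothetical placeholder for the estimates of one process/technique combination. Note that shapiro.test() accepts at most 5,000 values, so very large ensembles would have to be subsampled.

```r
# Sketch: skewness g and Shapiro-Wilk normality test for a set of estimates,
# used to judge whether Gaussian-based confidence intervals are justified.
skewness_g <- function(x) mean((x - mean(x))^3) / sd(x)^3   # simple moment-based skewness
set.seed(3)
beta_hat <- rnorm(100, mean = 1.0, sd = 0.12)               # placeholder estimates
skewness_g(beta_hat)
shapiro.test(beta_hat)   # p > 0.05: a Gaussian distribution cannot be rejected
```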
Summary of Our Performance Test Results and Those of Other Studies
The benchmarks we carried out are extensive, as they are based on fractional noises and motions which differ in length, one-point probability distribution, and modelled strength of persistence. The performance of the different techniques has been studied here for its dependence on the modelled persistence strengths (26 different parameter values, β model = −1.0 to 4.0, step size 0.2), the noise and motion lengths (4 different parameters, N = 512, 1,024, 2,048, and 4,096), and the type of the one-point probability distribution (three different types: Gaussian, log-normal with two different types of construction, and Levy). These will be presented graphically in this section, with a further eight noise and motion lengths (N = 64, 128, 256, 8,192, 16,384, 32,768, 65,536, and 131,072) presented in the supplementary material (discussed further below in this section). Furthermore, in this section we present results for a fixed value of long-range dependence β model, while the parameters that characterize the corresponding one-point probability distribution are varied (11 values of the exponent of the Levy distribution, a = 1.0 to 2.0, step size 0.1; 21 different coefficients of variation for two different log-normal distribution construction types, c v = 0.0 to 2.0, step size 0.1). Overall, we have studied fractional noises and motions with about 17,000 different sets of characterizing parameters, of which the results for a subset (6,500 different sets of parameters) are included in the supplementary material. For each set of parameters, 100 realizations have been created, and their persistence strength has been evaluated by the five techniques described above.
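For orientation, the following R sketch enumerates a benchmark grid of this kind; it is illustrative only (the distribution parameters c v and a, and the additional lengths in the supplementary material, are not expanded here), and the names are not taken from the actual scripts of this study.

```r
# Sketch: enumerating a grid of benchmark parameter sets; 100 realizations
# would then be created and analysed for each row of the grid.
grid <- expand.grid(
  beta_model   = seq(-1.0, 4.0, by = 0.2),                 # 26 persistence strengths
  N            = c(512, 1024, 2048, 4096),                 # 4 series lengths
  distribution = c("Gaussian", "log-normal-BoxCox",
                   "log-normal-SchreiberSchmitz", "Levy")  # distribution types
)
nrow(grid)   # 416 parameter sets in this reduced illustration
```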
The results of these performance tests are presented in Figs. 21, 22, 23, 24, 25 where the measured strength of long-range persistence, \({\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}}\), is given as a function of the 'benchmark' modelled value, β model. Each of the panels in Figs. 21, 22, 23, 24, 25 shows mean values (diamonds) and confidence intervals (error bars) based on the 100 fractional noises and motions run for that particular distribution type, length of series, and modelled strength of persistence. The 95 % confidence intervals for each specific technique are \( \bar{\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}} \pm 1.96\;\sigma_{x} ( {\beta_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}} }) \), where the standard deviation \(\sigma \) x is based on the 100 realizations for a given process. The four colours used represent four fractional noise and motion lengths, N = 512, 1,024, 2,048, and 4,096. Also shown in each graph is a dashed diagonal line, which represents the bias-free case, \( \bar{\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}} = \beta_{\text{model}} \). Whereas Figs. 21, 22, 23, 24, 25 show the systematic and random error of \({\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}}\) as a dependence on β model, Fig. 26 gives the performance of \({\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}}\) as a function of the log-normal distribution coefficient of variation (c v = 0.0 to 2.0, step size 0.1), and Fig. 27 the performance of \({\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}}\) as a function of the Levy distribution tail parameter (a = 1.0 to 2.0, step size 0.1).
Performance of Hurst rescaled range (R/S) analysis (β Hu) applied to realizations of fractional noises and motions (Sect. 4.2) created with long-range persistence −1.0 ≤ β model ≤ 4.0 and time series lengths N = 512, 1,024, 2,048, and 4,096. Mean values (diamonds) and 95 % confidence intervals (error bars, based on ±1.96 σ x ) of β Hu are presented as a function of the long-range persistence strength β model. Different colours indicate different lengths N of the analysed time series as specified in the legend. The black dashed line indicates the bias-free case of β Hu = β model. The one-point probability distributions include the following: a fractional Gaussian noises and motions (FGN), b fractional Levy noises and motions (FLevyN) with tail parameter a = 1.5, c fractional log-normal noises and motions (FLNNa, constructed by Box–Cox transform of fractional Gaussian noises) with c v = 0.5, d fractional log-normal noises and motions (FLNNb, constructed by Schreiber–Schmitz algorithm) with c v = 0.5
Performance of semivariogram analysis (β Ha) applied to realizations of fractional noises and motions (Sect. 4.2) created with long-range persistence −1.0 ≤ β model ≤ 4.0 and time series lengths N = 512, 1,024, 2,048, and 4,096. Mean values (diamonds) and 95 % confidence intervals (error bars, based on ±1.96 σ x ) of β Ha are presented as a function of the long-range persistence strength β model. Different colours indicate different lengths N of the analysed time series as specified in the legend. The black dashed line indicates the bias-free case of β Ha = β model. The one-point probability distributions include the following: a fractional Gaussian noises and motions (FGN), b fractional Levy noises and motions (FLevyN) with tail parameter a = 1.5, c fractional log-normal noises and motions (FLNNa, constructed by Box–Cox transform of fractional Gaussian noises) with c v = 0.5, d fractional log-normal noises and motions (FLNNb constructed by Schreiber–Schmitz algorithm) with c v = 0.5
Performance of detrended fluctuation analysis (β DFA) applied to realizations of fractional noises and motions (Sect. 4.2) created with long-range persistence −1.0 ≤ β model ≤ 4.0 and time series lengths N = 512, 1,024, 2,048, and 4,096. We apply DFA2 here (quadratic trends removed). Mean values (diamonds) and 95 % confidence intervals (error bars, based on ±1.96 σ x ) of β DFA are presented as a function of the long-range persistence strength β model. Different colours indicate different lengths N of the analysed time series as specified in the legend. The black dashed line indicates the bias-free case of β DFA = β model. The one-point probability distributions include the following: a fractional Gaussian noises and motions (FGN), b fractional Levy noises and motions (FLevyN) with tail parameter a = 1.5, c fractional log-normal noises and motions (FLNNa, constructed by Box–Cox transform of fractional Gaussian noises) with c v = 0.5, d fractional log-normal noises and motions (FLNNb constructed by Schreiber–Schmitz algorithm) with c v = 0.5
Performance of power spectral analysis (β PS(best-fit)) applied to realizations of fractional noises and motions (Sect. 4.2) created with long-range persistence −1.0 ≤ β model ≤ 4.0 and time series lengths N = 512, 1,024, 2,048, and 4,096. Mean values (diamonds) and 95 % confidence intervals (error bars, based on ±1.96 σ x ) of β PS(best-fit) are presented as a function of the long-range persistence strength β model. Different colours indicate different lengths N of the analysed time series as specified in the legend. The black dashed line indicates the bias-free case of β PS(best-fit) = β model. The one-point probability distributions include the following: a fractional Gaussian noises and motions (FGN), b fractional Levy noises and motions (FLevyN) with tail parameter a = 1.5, c fractional log-normal noises and motions (FLNNa, constructed by Box–Cox transform of fractional Gaussian noises) with c v = 0.5, d fractional log-normal noises and motions (FLNNb constructed by Schreiber–Schmitz algorithm) with c v = 0.5
Performance of power spectral analysis (β PS(Whittle)) applied to realizations of fractional noises and motions (Sect. 4.2) created with long-range persistence −1.0 ≤ β model ≤ 4.0 and time series lengths N = 512, 1,024, 2,048, and 4,096. Mean values (diamonds) and 95 % confidence intervals (error bars, based on ±1.96 σ x ) of β PS(Whittle) are presented as a function of the long-range persistence strength β model. Different colours indicate different lengths N of the analysed time series as specified in the legend. The black dashed line indicates the bias-free case of β PS(Whittle) = β model. The one-point probability distributions include the following: a fractional Gaussian noises and motions (FGN), b fractional Levy noises and motions (FLevyN) with tail parameter a = 1.5, c fractional log-normal noises and motions (FLNNa, constructed by Box–Cox transform of fractional Gaussian noises) with c v = 0.5, d fractional log-normal noises and motions (FLNNb, constructed by Schreiber–Schmitz algorithm) with c v = 0.5
Performance of three techniques for evaluating long-range persistence, \({\beta }_{{[{\text{Hu,}}\,{\text{DFA,}}\,{\text{PS}}]}}\), applied to realizations of processes created to have fractional log-normal noises (c v = 0.0 to 2.0, Sect. 4.2) with strength of long-range persistence β model = 0.8 and time series lengths N = 512, 1,024, 2,048, and 4,096. The three techniques applied are: a Hurst rescaled range (R/S) analysis (β Hu), b detrended fluctuation analysis (β DFA), c power spectral analysis (β PS(best-fit)). We do not consider semivariogram analysis here as it is only appropriate to apply over the range of 1.0 < β < 3.0. Fractional log-normal noises are constructed using the Box–Cox transform (FLNNa) (left panels) and the Schreiber–Schmitz algorithm (FLNNb) (right panels). For each set of process parameters, 100 realizations are done. For each panel, mean values (diamonds) and 95 % confidence intervals (error bars, based on ±1.96 σ x ) of \({\beta }_{{[{\text{Hu,}}\,{\text{DFA,}}\,{\text{PS}}]}}\) are presented as a function of the coefficient of variation, c v = 0.0 to 2.0, step size 0.1. c v = 0.0 corresponds to symmetric one-point probability distributions (Gaussian distribution), while large values of c v correspond to highly asymmetric one-point probability distributions. Different colours indicate different lengths of the analysed time series (N = 512, 1,024, 2,048, 4,096) as specified in the legend. The black horizontal dashed line indicates the bias-free case of \({\beta }_{{[{\text{Hu,}}\,{\text{DFA,}}\,{\text{PS}}]}}\) = β model = 0.8
Performance of four techniques for evaluating long-range persistence, \({\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}}\), applied to realizations of processes created to have fractional Levy noises (tail parameter, a = 1.0 to 2.0) with strength of long-range persistence β model = 0.8 and time series lengths N = 512, 1,024, 2,048, and 4,096. The four techniques applied are: a Hurst rescaled range (R/S) analysis (β Hu), b semivariogram analysis (β Ha), c detrended fluctuation analysis (β DFA), d power spectral analysis (β PS(best-fit)). For each panel, mean values (diamonds) and 95 % confidence intervals (error bars, based on ±1.96 σ x ) of \({\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}}\) are presented as a function of the tail parameter a = 1.0 to 2.0, step size 0.1. A value of a = 2.0 corresponds to a Gaussian distribution, while values close to a = 1.0 correspond to very heavy tails of the one-point probability distribution of the fractional noise. Different colours indicate different lengths of the analysed time series (N = 512, 1,024, 2,048, 4,096) as specified in the legend. The black horizontal dashed line represents the bias-free case of \({\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}}\) = β model = 0.8
We give in Tables 4 and 5 a tabular overview, summarizing the ranges of the systematic error (\( {\text{bias}} = \bar{\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}} - \beta_{\text{model}} \)) and the random error (standard deviation of \({\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}}\), σ x (\({\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}}\))) for the five techniques when applied to fractional noises (Table 4) and fractional motions (Table 5). These two tables summarize results for three probability distributions (Gaussian, log-normal with c v = 0.5 and two types of construction, Levy with a = 1.5) and time series of length N = 4,096.
Table 4 Performance^a of five techniques^b that evaluate long-range persistence for self-affine fractional noises (i.e. −1.0 < β model < 1.0) with N = 4,096 elements and different one-point probability distributions
Table 5 Performance^a of five techniques^b that evaluate long-range persistence for self-affine fractional motions (i.e. 1.0 < β model < 3.0) with N = 4,096 elements and different one-point probability distributions
A first inspection of Figs. 21, 22, 23, 24, 25, 26, 27, and Tables 4 and 5 shows that different techniques perform very differently. These differences will be summarized, for each technique, in Sects. 7.4–7.7.
As a resource to the user, we include in the supplementary material the following:
An Excel spreadsheet of a subset of our results for all of our different analyses. For each set of 100 realizations of fractional noises or motions, defined by the parameters for which the process was designed (one-point probability distribution type, number of elements N, β model) and the technique applied, we give the mean \( \bar{\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}} \), systematic error (bias = \( \bar{\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}} - \beta_{\text{model}} \)), random error (standard deviation σ x (\({\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}}\))), and root-mean-squared error, \( RMSE = \left( \left( {\text{systematic}}\;{\text{error}} \right)^{2} + \left( {\text{random}}\;{\text{error}} \right)^{2} \right)^{0.5} \). In addition, for each set of 100 realizations, we give the minimum, 25 %, median, 75 %, and maximum \({\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}}\) (a minimal sketch of these per-set summary statistics is given after this list). The analyses applied include those discussed in this paper (Hurst rescaled range analysis, semivariogram analysis, detrended fluctuation analysis, power spectral analysis [best-fit], and power spectral analysis [Whittle]) and the discrete wavelet transform (DWT, results not discussed in this paper, but 'presented' in the supplementary material; see Appendix 6 for a discussion of the DWT applied). These analysis results are provided for 6,500 parameter combinations (out of the 17,000 examined for this paper). See also Sect. 9.5 where the supplementary Excel spreadsheet is described in more detail in the context of benchmark-based improved estimators for long-range persistence.
R programs. We give the set of R programs that we use to perform the tests.
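A minimal R sketch of the per-parameter-set summary statistics reported in the spreadsheet is given below; beta_hat is a hypothetical placeholder for the 100 estimates of one parameter set, and the names are illustrative rather than those of the actual spreadsheet columns.

```r
# Sketch: summary statistics for one set of 100 estimates beta_hat of a
# process designed to have persistence strength beta_model.
summarize_realizations <- function(beta_hat, beta_model) {
  bias <- mean(beta_hat) - beta_model
  sdev <- sd(beta_hat)
  c(mean = mean(beta_hat),
    bias = bias,
    random_error = sdev,
    RMSE = sqrt(bias^2 + sdev^2),
    quantile(beta_hat, probs = c(0, 0.25, 0.5, 0.75, 1)))
}
summarize_realizations(rnorm(100, mean = 0.78, sd = 0.11), beta_model = 0.8)  # placeholders
```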
Various other studies have simulated self-affine long-range persistent time series and examined the performance of the techniques applied to them. In Table 6 we review 12 of these studies (including this one), where for each study we give: (1) the type of fractional noise or motion used (the one-point probability distribution, the technique used to create the fractional noises and motions, and the fractional noise or motion length), (2) the technique used to evaluate the long-range persistence, and (3) any comments. Our study complements and extends existing studies in terms of the range of fractional noises and motions constructed (including the range of β model, the addition of Levy-distributed noises and motions, which are rarely studied but representative of heavy-tailed processes in nature, and a wide range of time series lengths) and the performance techniques used. For completeness, although our performance techniques are for self-affine noises and motions, in Table 7 we give a summary of 14 selected studies that simulate asymptotic long-range persistent time series to examine the performance of long-range dependence techniques. We now discuss each performance technique individually.
Table 6 Review of selected studies that simulate long-range persistent time series to examine the performance of techniques that quantify long-range dependence
Table 7 Review of selected papers that simulate asymptotic long-range persistent time series to examine the performance of techniques that quantify long-range dependence
Hurst Rescaled Range Analysis Results (β Hu)
Here we summarize, for Hurst rescaled range analysis (and we will do the same for the other techniques in the three subsequent sections), the following aspects of the performance results when applied to our fractional noises and motions: (a) range of theoretical applicability of the technique; (b) dependence on β model; (c) dependence on the one-point probability distribution; (d) a brief discussion; and (e) brief overall conclusions.
Range of theoretical applicability: As Hurst rescaled range analysis can be applied to stationary time series only, it is theoretically appropriate only for fractional noises, –1.0 < β model < 1.0.
Dependence on β model: The results of the Hurst rescaled range analysis are given in Fig. 21, where we see that the performance test results β Hu deviate strongly from the dashed diagonal line (β model = β Hu) and that only over (approximately) the range 0.0 < β model < 1.0 do the largest 95 % confidence intervals (for N = 512) intersect with some part of the bias-free case (β model = β Hu); as the number of elements N increases, the 95 % confidence intervals for β Hu decrease in size, and therefore there are fewer cases where the 95 % confidence intervals for β Hu overlap with β model. In terms of the bias, unbiased results are found only for fractional noises with a strength of persistence of β model ≈ 0.5. For less persistent noises, β model < 0.5, the strength of persistence is overestimated, and for more persistent noises, β model > 0.5, it is underestimated. Despite the poor general performance, the random errors (confidence intervals) of β Hu are rather small (Tables 4, 5).
Dependence on the one-point probability distribution: In Fig. 26a we see that at β model = 0.8 the systematic error (bias) increases with the asymmetry (c v = 0.0 to 2.0) of the one-point probability distribution while the random error (which is proportional to the 95 % confidence interval size) stays constant. In contrast (Fig. 27a), at β model = 0.8, both the systematic error (bias) and random error (confidence interval sizes) are very robust (they do not vary a lot) to changes of the tail parameter (a = 1.0 to 2.0) of the fractional noise.
Discussion: Our results presented in Figs. 21 and 26a show that the systematic error (bias) gets smaller as the time series length N grows from 512 to 4,096. If we consider a broader range of time series lengths (supplementary material), this can be seen more clearly. For example, for a FGN with β model = −0.8, our simulations give \( \bar{\beta }_{\text {Hu}} \) = −0.42 (N = 4,096), −0.45 (N = 8,192), −0.47 (N = 16,384), −0.49 (N = 32,768), −0.51 (N = 65,536), and −0.53 (N = 131,072); thus, the value of β model = −0.8 is approached only very slowly. The bias of Hurst rescaled range analysis is a finite-size effect; Bassingthwaighte and Raymond (1995) and Mehrabi et al. (1997) have shown for fractional Gaussian noises and motions that for very long sequences, the correct value of β model will be approached by β Hu.
Rescaled range (R/S) analysis brief conclusions: For most cases, it is inappropriate to use Hurst rescaled range (R/S) analysis for the types of self-affine fractional noises and motions (i.e. Gaussian, log-normal, and Levy distributed) considered in this paper, and correspondingly many of the time series found in the Earth Sciences.
Semivariogram Analysis Results (β Ha)
Range of theoretical applicability: The range of β Ha is the interval 1.0 < β model < 3.0, so semivariogram analysis is appropriate for fractional motions only.
Dependence on β model: Figure 22a,b,c and Tables 4 and 5 demonstrate that for fractional Gaussian noises (FGN), fractional Levy noises (FLevyN), and fractional log-normal noises constructed with the Box–Cox transform (FLNNa), unbiased results are found over much (but not all) of the interval 1.0 < β model < 3.0, with larger values of the bias at the interval borders; larger biases also occur for short time series. For persistence strength β model > 2.0 (more persistent than Brownian motion), semivariograms applied to realizations of log-normal noises and motions based on the Schreiber–Schmitz algorithm (Fig. 22d, FLNNb) result in values of β Ha ≈ 2.0, reflecting a failure of this algorithm for this particular setting of the parameters. Our simulations indicate that the Schreiber–Schmitz algorithm does not work for constructing series that are both asymmetric and non-stationary; thus, we cannot discuss the corresponding performance.
Dependence on the one-point probability distribution: For FGN, FLevyN, and FLNNa (Fig. 22), the confidence interval sizes depend on the strength of long-range persistence: they are small around β model ≈ 1.0, increase up to β model ≈ 2.5, and then decrease for larger values of the persistence strength. It appears plausible to increase the range of applicability of semivariogram analysis to fractional noises (−1.0 < β model < 1.0) by analysing their aggregated series, but only if the original series has a symmetric (or near-symmetric) probability distribution. In Fig. 27b, we see that at β model = 0.8 changes of the heavy-tail parameter of fractional Levy noises from a = 1.0 to 2.0 impact the systematic error (bias) in a complex way, while the random error remains almost constant and very large.
Discussion: Gallant et al. (1994), Wen and Sinding-Larsen (1997), and Malamud and Turcotte (1999a) have discussed the bias of Ha for time series and came to very similar conclusions. Wen and Sinding-Larsen (1997) pointed out (1) that longer lags τ lead to more accurate estimates of Ha (consequently, we have used here long lags up to N/4) and (2) that semivariogram analysis is applicable to incomplete (i.e. gap containing) measurement data. For time series that are incomplete (i.e. values in an otherwise equally spaced time series are missing), only lagged pairs of values which are not affected by the gaps are considered in the summation in Eq. 16.
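As an illustration of this gap handling, the following R sketch estimates β from the semivariogram of an equally spaced series that may contain NA values; it is a simplified stand-in for Eq. 16, and the conversion β = 2 Ha + 1 for self-affine series is assumed here.

```r
# Sketch: semivariogram-based estimate of beta for a series with possible
# gaps (NA values); only lagged pairs unaffected by gaps contribute.
semivariogram_beta <- function(x, max_lag = floor(length(x) / 4)) {
  taus  <- 1:max_lag
  gamma <- sapply(taus, function(tau) {
    d <- x[(tau + 1):length(x)] - x[1:(length(x) - tau)]
    mean(d^2, na.rm = TRUE) / 2                   # pairs containing NA are dropped
  })
  fit <- lm(log(gamma) ~ log(taus))               # gamma(tau) ~ tau^(2 * Ha)
  Ha  <- unname(coef(fit)[2]) / 2
  2 * Ha + 1                                      # assumed conversion to beta
}

# Example: Brownian motion (beta = 2) built from cumulated white noise
set.seed(5)
semivariogram_beta(cumsum(rnorm(4096)))
```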
Semivariogram analysis brief conclusions: Semivariogram analysis is appropriate for 1.0 < β < 3.0, introduces little bias, but the resulting estimates are rather uncertain. It is appropriate for time series with asymmetric one-point probability distributions, but should not be applied if that distribution is heavy tailed.
Detrended Fluctuation Analysis Results (β DFA)
Range of theoretical applicability: Detrended fluctuation analysis (here performed with the quadratic trend removed, i.e. DFA2) can be applied to all persistence strengths considered in our synthetic fractional noises and motions (Sect. 4.2).
Dependence on β model: For fractional Gaussian, Levy, and log-normal noises and motions, detrended fluctuation analysis is only slightly biased (Fig. 23; Tables 4, 5). It shows a weak overestimation for strongly anti-persistent noises (−1.0 < β model < −0.7), in particular for the very short time series (N = 512, N = 1,024). For fractional log-normal noises and motions created by Box–Cox transforms (FLNNa), β DFA overestimates the strength of persistence for anti-persistent noises (β model < 0.0) and slightly underestimates it for fractional noises and motions with 0.5 < β model < 1.5 (Fig. 23c). For fractional log-normal noises and motions created by the Schreiber–Schmitz algorithm (FLNNb, Fig. 23d), our simulations show large values of the bias for β model ≥ 2.0. This bias is a consequence of the construction of the FLNNb rather than a limitation of detrended fluctuation analysis.
The random error (which is proportional to the 95 % confidence interval size) of detrended fluctuation analysis (Fig. 23) depends on the correlations of the investigated time series: for fractional noises and motions of all considered one-point probability distributions, the sizes of the confidence intervals increase with the persistence strength. For thin-tailed fractional noises and motions (i.e. Gaussian and log-normal), the confidence intervals for fractional Brownian motions (β model = 2.0) are twice as big as for white noises (β model = 0.0) (Fig. 23; Tables 4, 5). So, the stronger the strength of persistence in a time series, the more uncertain will be the result of detrended fluctuation analysis.
Dependence on the one-point probability distribution: For fractional log-normal noises (constructed by Box–Cox transform), the negative bias and the random error (proportional to the confidence interval size) increase gradually with increasing coefficient of variation (Fig. 26b, FLNNa). If the fractional log-normal noises are created by the Schreiber–Schmitz algorithm (Fig. 26b, FLNNb) and have positive persistence and a moderate asymmetry (0.0 < c v ≤ 1.0), β DFA is unbiased. However, for fractional noises and motions with strongly asymmetric one-point probability distributions (1.0 < c v < 2.0), and for data sets with a small number of values, detrended fluctuation analysis underestimates β model (Fig. 26b). The corresponding 95 % confidence intervals grow with increasing asymmetry and are larger than those of β DFA for fractional log-normal noises constructed by the Box–Cox transform (Fig. 26b, Table 4). Detrended fluctuation analysis is unbiased for fractional Levy noises with positive persistence strength and different tail exponents, a (Fig. 27c). The corresponding confidence intervals grow with decreasing tail exponent, a.
Discussion: It is important to note that the random error of β DFA which arises from considering different realizations of fractional noises and motions is different from (and, in the case of positive persistence, β model > 0.0, much larger than) the regression error of β DFA obtained by linear regression of the log(fluctuation function) versus log(segment length). The regression error is very small because the deviations of the fluctuation function of a particular realization from the average fluctuation function (taken over many realizations) are statistically dependent across segment lengths. As a consequence, the regression error should not be used to describe the uncertainty of the measured strength of persistence.
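The point can be checked with the following self-contained R sketch of DFA2 applied to white noise (β model = 0.0); it is an illustration under the assumed relation β = 2α − 1 between the DFA exponent α and β, not the exact implementation used in this study. The regression standard error of the log–log fit for a single realization can then be compared with the spread of β DFA across realizations.

```r
# Sketch of DFA2 (quadratic detrending): returns the beta estimate and the
# (misleadingly small) regression error of the log-log fit.
dfa2 <- function(x, scales = 2^(4:10)) {
  y <- cumsum(x - mean(x))                              # profile of the series
  F <- sapply(scales, function(s) {
    n_seg <- floor(length(y) / s)
    ms <- sapply(seq_len(n_seg), function(k) {
      seg <- y[((k - 1) * s + 1):(k * s)]
      t   <- seq_len(s)
      mean(residuals(lm(seg ~ t + I(t^2)))^2)           # quadratic detrending
    })
    sqrt(mean(ms))                                      # fluctuation function F(s)
  })
  fit   <- lm(log(F) ~ log(scales))
  alpha <- unname(coef(fit)[2])
  list(beta          = 2 * alpha - 1,                   # assumed relation beta = 2*alpha - 1
       regression_se = 2 * summary(fit)$coefficients[2, 2])
}

set.seed(11)
runs <- replicate(20, dfa2(rnorm(4096)), simplify = FALSE)
sd(sapply(runs, `[[`, "beta"))                          # spread across realizations
mean(sapply(runs, `[[`, "regression_se"))               # regression error: not a reliable
                                                        # measure of that spread
```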
In the case of fractional Levy noises with very heavy tails (a ≪ 2) (Fig. 27c), we do not recommend the use of detrended fluctuation analysis, as the error bars become very large with decreasing a (Fig. 27c). In this case, the modified version of detrended fluctuation analysis suggested by Kiyani et al. (2006), which has not been 'benchmarked' in our paper, might be an option.
The performance of detrended fluctuation analysis (DFA) has been studied extensively (Taqqu et al. 1995; Cannon et al. 1997; Pilgram and Kaplan 1998; Taqqu and Teverovsky 1998; Heneghan and McDarby 2000; Weron 2001; Audit et al. 2002; Xu et al. 2005; Delignieres et al. 2006; Mielniczuk and Wojdyłło 2007; Stroe-Kunold et al. 2009) for different types of fractional noises and motions and asymptotic long-range persistent time series (Tables 6, 7). In some of these studies (Taqqu et al. 1995; Pilgram and Kaplan 1998; Xu et al. 2005), it was demonstrated to be the best-performing technique. In other studies, DFA has been found to have low systematic error (bias) and low random error (confidence intervals) but was slightly outperformed by maximum likelihood techniques (Taqqu and Teverovsky 1998; Audit et al. 2002; Delignieres et al. 2006; Stroe-Kunold et al. 2009).
Detrended fluctuation analysis brief conclusions: Detrended fluctuation analysis is almost unbiased for fractional noises and motions, and the random errors (proportional to the confidence interval sizes) are small for fractional noises. It is inappropriate for time series whose one-point probability distributions are characterized by very heavy tails.
Power Spectral Analyses Results β PS(best-fit) and β PS(Whittle)
Range of theoretical applicability: Power spectral-based techniques β PS(best-fit) and β PS(Whittle) can be applied to all persistence strengths considered in our fractional noises and motions (Sect. 4.2).
Dependence on β model: For symmetrically distributed fractional noises and motions (i.e. Gaussian- and Levy-distributed), the power spectral-based techniques used for evaluating the strength of long-range persistence perform very well (Figs. 24, 25; Tables 4, 5). They are (1) unbiased (\( \bar{\beta }_{\text{PS}} = \beta_{\text{model}} \)), and (2) the size of the confidence intervals of β PS depends on the length of the fractional noise or motion but not on the strength of long-range persistence, β model. For fractional Levy noises, power spectral techniques are very exact, as the related confidence intervals are very tight. For fractional Levy motions with β model ≥ 3.0, β PS becomes slightly biased; the strength of persistence is overestimated, in particular for the shorter time series. Looking specifically at fractional Levy noises with tails of different heaviness (Fig. 27d), we find (1) an unbiased performance of β PS and (2) that heavier tails cause smaller systematic error.
Dependence on the one-point probability distribution: For the fractional noises and motions with asymmetric distributions, namely the two types of fractional log-normal noises, the performance depends on how these noises and motions are created (Figs. 24c,d, 25c,d, 26c, 27d; Tables 4, 5). If they are constructed by applying a Box–Cox transform to a fractional Gaussian noise (Figs. 24c, 25c; Tables 4, 5), we find that for the anti-persistent noises considered here, −1.0 < β model < 0.0, the strength of long-range persistence, β PS, is overestimated, while for 0.0 < β model < 1.0 it is underestimated. Because the systematic (bias) and random errors are very small compared to β model, the underestimation is somewhat hard to see in the figures themselves, but it becomes much more apparent in the supplementary material. This effect of under- and overestimation of β model is stronger if fractional log-normal noises with a more asymmetric one-point probability distribution (larger coefficient of variation, c v) are considered. One can also see (Fig. 26c) that, for fractional log-normal noises and motions, the confidence interval size gradually grows with increasing asymmetry (increasing c v).
If the fractional log-normal noises are constructed by the Schreiber–Schmitz algorithm (Figs. 24d, 25d), then power spectral techniques perform fairly convincingly in the range of persistence −1.0 < β model < 1.8. For persistence strength β model > 2.0 (more persistent than Brownian motion), spectral techniques result in values of β PS ≈ 2.0, reflecting a failure of the Schreiber–Schmitz algorithm for this particular setting of the parameters. The confidence intervals are equally sized for the entire considered range of persistence strength, but they are approximately 10 % larger than the confidence intervals of fractional Gaussian noises (Figs. 24a, 25a). For a fixed β model, the error bar sizes rise with growing asymmetry (larger coefficients of variations, c v) (Fig. 26c). For highly asymmetric noises (c v > 1.0), the strength of long-range persistence is underestimated.
For the fractional Levy noises, we find that the performance does not depend on the heavy-tail parameter. Figure 27d presents the performance test result for a persistence strength of β model = 0.8; the power spectral technique is unbiased, and the random error (proportional to the confidence intervals) is about the same across all considered values of the exponent a.
Discussion: If the performance of the maximum likelihood estimator, β PS(Whittle), is compared to the performance of the log-periodogram regression, β PS(best-fit), we find that both techniques perform very similarly, except that β PS(Whittle) represents a slightly more exact estimator (Tables 4, 5). The real advantage, however, is that the Whittle estimator also gives the random error, \(\sigma \)(β PS(Whittle)), for any single time series considered.
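For illustration, a stripped-down Whittle-type estimator for a pure power-law spectrum S(f) = C f^−β can be written in a few lines of R; this is a sketch under that simplifying model assumption, not the exact estimator used in this paper, and the standard error that a full implementation would provide (via the curvature of the likelihood) is omitted here.

```r
# Sketch: Whittle-type estimate of beta assuming S(f) = C * f^(-beta); the
# scale C is profiled out analytically before minimizing over beta.
whittle_beta <- function(x) {
  n <- length(x)
  m <- floor(n / 2)
  f <- (1:m) / n                                     # Fourier frequencies
  I <- (Mod(fft(x - mean(x)))^2 / n)[2:(m + 1)]      # periodogram ordinates
  negloglik <- function(beta) {
    C <- mean(I * f^beta)                            # profile estimate of the scale
    S <- C * f^(-beta)
    sum(log(S) + I / S)                              # Whittle objective (up to constants)
  }
  optimize(negloglik, interval = c(-2, 5))$minimum
}

set.seed(1)
whittle_beta(rnorm(4096))                            # white noise: result close to 0
```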
In Fig. 28a we give the random error (standard deviation of the Whittle estimator, \(\sigma \)(β PS(Whittle)), also called the standard error of the estimator, see Sect. 7.1) as a function of the long-range persistence of 100 realizations (each) of FGN processes created to have −1.0 ≤ β model ≤ 4.0 and four time series lengths N = 256, 1,024, 4,096, and 16,384. In Fig. 28b we give \(\sigma \)(β PS(Whittle)) of 100 realizations (each) of four probability distributions (FGN, FLNN c v = 0.5, FLNN c v = 1.0, FLevyN a = 1.5) with β model = 0.5, as a function of time series length N = 64 to 65,536. For both panels, and for each set of process parameters in Fig. 28, we also give, for each set of 100 realizations, the Cramér–Rao bound (CRB) (Sect. 6.4, Eq. 28), the theoretical lower bound on the random error of a maximum likelihood estimator. Both y-axes in Fig. 28 are logarithmic, as is the x-axis of Fig. 28b.
Standard error of the Whittle estimator \(\sigma \)(β PS(Whittle)) (dashed lines) and Cramér Rao bounds (CRB) (solid lines) are given as a function of the following: a long-range persistence strength −1.0 ≤ β model ≤ 4.0 of fractional Gaussian noises (FGN) and time series length N = 256, 1,024, 4,096, and 16,384; b Time series length N = 26, 27, 28, …, 216 (i.e. from N = 64 to 65,536) and fractional noise realizations with β model = 0.5 and four types of probability distribution, Gaussian (FGN, diamonds), log-normal (FLNN: circles, c v = 0.5; diamonds, c v = 1.0, created using Box–Cox transform), and Levy (FLevyN, a = 1.5). For both (a) and (b), the standard error \(\sigma \)(β PS(Whittle)) and CRB are on a logarithmic axis. Each individual symbol represents 100 realizations for a given length of time series N, one-point probability distribution, and modelled long-range persistence strength β model. The standard error of the Whittle estimator results (\(\sigma \)(β PS(Whittle)) and the average CRB are taken over all 100 realizations, except for the FLevyN, where for CRB the two smallest and two largest values (of each set of 100 realizations) are taken out before averaging
In Fig. 28a we observe that the random error of the Whittle estimator, \(\sigma \)(β PS(Whittle)), slightly increases as a function of persistence strength, β model, for −1.0 < β model < 2.8. In contrast, the CRB slightly increases as a function of β model over the range −1.0 < β model < 0.0 and then decreases by an order of magnitude over the range 0.0 < β model < 2.0, after which it remains constant. The general shape of the four curves for the CRB and the four curves for \(\sigma \)(β PS(Whittle)) does not depend on the length of the time series, N. The CRB is systematically smaller than the random error, \(\sigma \)(β PS(Whittle)). The ratio CRB/\(\sigma \)(β PS(Whittle)) changes significantly for different ranges of β model. Therefore, knowing only the CRB value will not give knowledge about the magnitude of the random error. Thus, we do not recommend using the CRB as an estimate of the random error.
All eight curves in Fig. 28b show a power-law dependence on the time series length N (they scale with N^−0.5). The Cramér–Rao bound is a lower bound for the random error and depends very little on the one-point probability distribution of the fractional noise or motion. We see here that the Cramér–Rao bounds are systematically smaller than the standard errors, in other words the standard deviations of β PS(Whittle) calculated from many realizations, \(\sigma \)(β PS(Whittle)). The mean standard error is smallest for the fractional Levy noises and largest for the fractional log-normal noises, with the largest \(\sigma \)(β PS(Whittle)) for the higher coefficient of variation. The ratio CRB/\(\sigma \)(β PS(Whittle)) changes with the one-point probability distribution but not with the time series length N.
If the performance of these power spectral techniques is considered for time series with N = 4,096 elements, we find (Tables 4, 5):
Power spectral techniques are free of bias for fractional noises and motions with symmetric distributions, but they exhibit a significant bias for time series with strongly asymmetric probability distributions.
The random error (proportional to the confidence interval sizes) is rather small: in the case of symmetrically distributed time series, 95 % of the β PS values occupy an interval of length 0.2 or smaller.
For fractional noises and motions with an asymmetric probability distribution, power spectral techniques are less certain. The more asymmetric the time series, the more uncertain the estimated strength of long-range persistence.

Spectral techniques that estimate the strength of long-range persistence are common in statistical time series analysis, particularly in the econometrics and physics communities, and their performance has been intensively investigated (Schepers et al. 1992; Gallant et al. 1994; Taqqu et al. 1995; Mehrabi et al. 1997; Wen and Sinding-Larsen 1997; Pilgram and Kaplan 1998; Taqqu and Teverovsky 1998; Heneghan and McDarby 2000; Velasco 2000; Weron 2001; Eke et al. 2002; Delignieres et al. 2006; Stadnytska and Werner 2006; Boutahar et al. 2007; Mielniczuk and Wojdyłło 2007; Boutahar 2009; Faÿ et al. 2009; Stroe-Kunold et al. 2009; see also Tables 6 and 7). The most common approach in the literature is to fit models using MLE to time series that are characterized by short- and long-range dependence. In most cases, the considered time series have a Gaussian one-point probability distribution.
Power spectral analysis brief conclusions: Power spectral techniques have small biases and small random errors (tight confidence intervals).
Discussion of Overall Performance Test Results
Overall Interpretation of Performance Test Results
The performance test results presented in Sect. 7 for measures of long-range persistence have shown that some techniques are more suited than others in terms of systematic and random error. In Figs. 29 and 30 we give, respectively, a visual overview of the systematic error (bias = \( \bar{\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}} - \beta_{\text{model}} \)) and random error (standard deviation of \({\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}}\), σ x (\({\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}}\))) for the five techniques applied to fractional noises and motions constructed with −1.0 ≤ β model ≤ 4.0 and three probability distributions: Gaussian (FGN), log-normal (FLNNa) with 0.2 ≤ c v ≤ 2.0 using Box–Cox, and Levy (FLevyN) with 1.0 ≤ a ≤ 1.9. For each type of fractional noise and motion, 100 realizations were created, each with 4,096 elements. Note that a FGN is the same as FLNNa with c v = 0.0 and FLevyN with a = 2.0. In Fig. 31, for the same 2,730 processes considered in Figs. 29 and 30, we give a visual overview of the root-mean-squared error, RMSE (Eq. 30), which is a measure of the overall performance of a technique.
Visual overview of the systematic error (bias = \( \bar{\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}} - \beta_{\text{model}} \)) of five techniques for evaluating long-range persistence: a Hurst rescaled range (R/S) analysis (β Hu), b semivariogram analysis (β Ha), c detrended fluctuation analysis (β DFA), d power spectral analysis best-fit (β PS(best-fit)), e power spectral analysis Whittle (β PS(Whittle)). For each panel are shown the biases resulting from 100 realizations each of processes created to have N = 4,096 elements and 546 different sets of parameters: [panel rows] strengths of long-range persistence −1.0 ≤ β model ≤ 4.0; [panel columns] three probability distributions: (1) Levy (FLevyN) with 1.0 ≤ a ≤ 1.9, (2) Gaussian (FGN), (3) log-normal (FLNNa) with 0.2 ≤ c v ≤ 2.0 using Box–Cox. Note that a FGN is the same as FLNNa with c v = 0.0 and FLevyN with a = 2.0. The colour coding within each panel (see legend) ranges from large negative biases (red), 'small' biases (green), to large positive biases (purple)
Visual overview of the random error (\( \sigma_{x} (\beta_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}}) \) (abbreviated Std Dev in the figure) of five techniques for evaluating long-range persistence: a Hurst rescaled range (R/S) analysis (β Hu), b semivariogram analysis (β Ha), c detrended fluctuation analysis (β DFA), d power spectral analysis best-fit (β PS(best-fit)), e power spectral analysis Whittle (β PS(Whittle)). For each panel is shown the random error (standard deviations, abbreviated in the panel as std dev) resulting from 100 realizations each of processes created to have N = 4,096 elements and 546 different sets of parameters: [panel rows] strengths of long-range persistence −1.0 ≤ β model ≤ 4.0; [panel columns] three probability distributions: (1) Levy (FLevyN) with 1.0 ≤ a ≤ 1.9, (2) Gaussian (FGN), (3) log-normal (FLNNa) with 0.2 ≤ c v ≤ 2.0 using Box–Cox. The random error for each of the 546 process sets within each panel is represented by the size of the bar for that process (see legend)
Visual overview of the root-mean-squared error (RMSE, Eq. 30) of five techniques for evaluating long-range persistence: a Hurst rescaled range (R/S) analysis (β Hu), b semivariogram analysis (β Ha), c detrended fluctuation analysis (β DFA), d power spectral analysis best-fit (β PS(best-fit)), e power spectral analysis Whittle (β PS(Whittle)). For each panel is shown the RMSE (i.e. ((systematic error)2 + (random error)2)0.5) resulting from 100 realizations each of processes created to have N = 4,096 elements and 546 different sets of parameters: [panel rows] strengths of long-range persistence −1.0 ≤ β model ≤ 4.0; [panel columns] three probability distributions: (1) Levy (FLevyN) with 1.0 ≤ a ≤ 1.9, (2) Gaussian (FGN), (3) log-normal (FLNNa) with 0.2 ≤ c v ≤ 2.0 using Box–Cox. Note that a FGN is the same as FLNNa (c v = 0.0) and FLevyN (a = 2.0). The RMSE for each of the 546 process sets within each panel is represented by the size of the bar for that process (see legend) and colour shading behind that bar (green: 0.0 ≤ RMSE ≤ 0.1; yellow: 0.1 < RMSE ≤ 0.5; red: RMSE > 0.5)
A comparison of the systematic error (bias) of the five techniques (Fig. 29) shows that DFA (Fig. 29c) and spectral techniques (Fig. 29d,e) have small biases (green cells in the panels) over most of the range of β model considered, that is, for most fractional noises and motions. Large biases for DFA and spectral techniques (red or purple cells in Fig. 29c,d,e panels) indicate over- or underestimation of the persistence strengths and occur only for anti-persistent fractional log-normal noises (FLNNa, β model < −0.2) and for a minority of highly persistent fractional Levy motions (FLevyN, 1.0 < a < 1.2). In contrast, Hurst rescaled range analysis (Fig. 29a) leads to results with small biases only for fractional noises with 0.0 < β model < 0.8, and semivariogram analysis (Fig. 29b) has small biases only if the persistence strength is in the range 1.2 < β model < 2.8 and the one-point probability distribution does not have too heavy a tail (i.e. FLevyN with a > 1.2). Overall, when examining the five panels in Fig. 29, one can see (green cells) that DFA and the spectral analysis techniques are generally applicable for all β model, whereas rescaled range analysis (with limitations) is appropriate for −1.0 < β model < 1.0, and semivariogram analysis (again, with limitations) is appropriate for 1.0 < β model < 3.0.
If the random errors (σ x (\({\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}}\))) of the five techniques are compared (Fig. 30), the smallest overall random errors (horizontal bars that are very thin or zero) are found for rescaled range analysis (Fig. 30a), followed by the spectral techniques (Fig. 30d,e), with the Whittle estimator having slightly smaller overall random errors. DFA (Fig. 30c) has overall the largest random error when considering all strengths of persistence (β model) and the variety of probability distributions, and its random error increases gradually as β model increases. In contrast, semivariogram analysis (Fig. 30b) shows the largest variation of random errors of all the techniques, with particularly large values for 1.0 < β model < 3.0.
The overall performance of the techniques is given by the root-mean-squared error, RMSE = ((systematic error [Fig. 29])2 + (random error [Fig. 30])2)0.5 (Eq. 30) which is displayed graphically in Fig. 31. In this figure, the length of the horizontal bar in each panel cell represents RMSE on a scale of 0.0 to 3.0, where (as above) each of the 546 cells in the panel is a combination of process parameters (−1.0 < β model < 4.0; 21 different one-point probability distribution parameter combinations) for which 100 realizations were produced. To highlight different magnitudes of RMSE, each cell has been coloured, such that green represents 'low' values of RMSE (0.0 to 0.1), yellow 'medium' values of RMSE (0.1 to 0.5), and red 'high' values of RMSE (0.5 to 3.0).
Figure 31 illustrates that the best-fit and Whittle spectral techniques (Fig. 31d,e) generally perform the best (compared to the other three techniques) across a large range of β model and one-point probability types (FLevyN, FGN, and FLNNa), as evidenced by the large 'green' regions (i.e. 0.0 ≤ RMSE ≤ 0.1). However, one can also observe for these spectral techniques (Fig. 31d,e, yellow [0.1 < RMSE ≤ 0.5] and red [RMSE > 0.5] cells) that care should be taken for very heavy-tailed fractional noises with large persistence values (FLevyN, 1.0 ≤ a ≤ 1.3, and β model > 2.0), and for fractional log-normal noises (FLNNa) that are anti-persistent (β model < 0.0) or weakly persistent (0.0 < β model < 1.0) with c v > 0.8. DFA (Fig. 31c), although it is in general applicable over all β model, does not perform as well as the spectral analysis techniques (Fig. 31d,e), as evidenced by a large number of yellow cells (0.1 < RMSE ≤ 0.5) and a few red cells (RMSE > 0.5), particularly for FLevyN across most β model. Semivariogram analysis (Fig. 31b) has large RMSE (red cells) for β model ≤ 0.4 and β model ≥ 3.6 (across FLevyN, FGN, and FLNNa), whereas rescaled range analysis (Fig. 31a) has large RMSE (red cells) for β model ≤ −0.6 and β model ≥ 1.6. The other cells for both semivariogram (Fig. 31b) and rescaled range analysis (Fig. 31a) mostly exhibit medium RMSE (yellow cells), except for narrow bands of 0.2 < β model < 0.6 (rescaled range analysis) and 1.2 < β model < 1.6 (semivariogram analysis) where the cells exhibit low RMSE (green cells).
We believe, based on the results shown in Figs. 29, 30, 31, that power spectral analysis techniques (best-fit and Whittle) are acceptable for most practical applications as they are almost unbiased and give tight confidence intervals. Furthermore, based on these figures, detrended fluctuation analysis is appropriate for fractional noises and motions with positive persistence and with non-heavy-tailed and near-symmetric one-point probability distributions; it is not appropriate for asymmetric or heavy-tailed distributions. Semivariogram analysis was unbiased for 1.2 < β model < 2.8 and might be used for double-checking results, if needed, for an aggregated series, but the large random errors for parts of the range over which results are unbiased need to be considered. We do not recommend the use of Hurst rescaled range analysis as it is only appropriate either for very long sequences (with more than 10^5 data points) (Bassingthwaighte and Raymond 1994) or for fractional noises with a strength of long-range persistence close to β model ≈ 0.5.
If we focus on the performance of β PS(best-fit) and β DFA for fractional noises and motions with N = 4,096 data points (Figs. 29, 30; Tables 4, 5), we find (1) biases of comparable size and (2) confidence interval sizes which are β model independent for β PS(best-fit) and β model dependent for β DFA. For a pink fractional noise (β model = 1.0), we calculate the absolute magnitude of the confidence intervals as 2 × 1.96 × (σ x (β [DFA, PS])). We find the following confidence intervals for [(β PS(best-fit)), (β DFA)]:
[0.12, 0.24] (Gaussian distribution)
[0.16, 0.27] (log-normal distribution with moderate asymmetry, c v = 0.6, constructed by Box–Cox transform)
[0.10, 0.34] (Levy distribution with a = 1.5)
The size of the confidence intervals for β DFA is a factor of 1.7 to 3.4 times the confidence intervals for β PS(best-fit). Therefore, we recommend the use of detrended fluctuation analysis only for fractional noises and motions with a 'well-behaved' one-point probability distribution, in other words for distributions which are almost symmetric and not heavy-tailed.
For anti-persistent noises (β < 0.0), we find a systematic overestimation of the modelled strength of long-range persistence. Rangarajan and Ding (2000) showed that a Box–Cox transform of an anti-persistent noise with a symmetric one-point probability distribution is not just changing the distribution (to an asymmetrical one); the Box–Cox transform effectively superimposes a white noise on the anti-persistent noise, which causes a weakening of the anti-persistence (i.e. β becomes larger). This implies that, for applications, if anti-persistence or weak persistence is identified for an asymmetrically distributed time series, values of long-range persistence that are more negative might be needed for appropriately modelling the original time series. In this situation, we recommend applying a complementary Box–Cox transform to force the original time series to be symmetrically distributed. Then, one should consider the strength of long-range persistence for both the original time series and the transformed time series, discussing both in the results. If a given time series (or realization of a process) has a symmetric one-point probability distribution, one can always aggregate the series and analyse the result (see Sects. 3.5 and 3.6).
With regard to log-normal distributed noises and motions, the results of our performance tests are sensitive to the construction technique used (Box–Cox vs. Schreiber–Schmitz). In this sense, our 'benchmarks' seem to probe the construction of the noises or motions rather than the techniques used to estimate the strength of long-range dependence. Nevertheless, both ways of constructing fractional log-normal noises and motions are commonly used. If a log-normal distributed natural process like river run-off is measured, either the original data (in linear coordinates) can be examined, or the logarithm of the data can be taken. Our simulations show that the strength of long-range dependence can alter when going from the original to log-transformed values and vice versa. The Schreiber–Schmitz algorithm creates log-normal noises and motions that have a given power-law dependence of the power spectral density on frequency, whereas the Box–Cox transform creates log-normal noises and motions based on realizations of fractional Gaussian noises and motions with a given β model. The Box–Cox transform will slightly change the power-law dependence (for the FGN) of the power spectral densities on frequency, leading to values of β PS that are systematically (slightly) different from β model.
The Use of Confidence Interval Ranges in Determining Long-Range Persistence
From an applied point of view, it is important to discuss the size of the uncertainties (both systematic and random errors) of the estimated strength of long-range persistence. If a Gaussian-distributed time series with N data points is given that is expected to be self-affine, then the power spectral techniques have a negligible systematic error (bias) and a random error (σ x (β PS)) of approximately 2N^−0.5. If we take as an actual example power spectral analysis (best-fit) applied to 100 realizations of a fractional Gaussian noise with β model = 0.2 and three lengths N = 32,768, 4,096, and 256, the average result (supplementary material) of the applied technique is, respectively, \( \bar{\beta }_{{\rm PS}(\text{best-fit})} = 0.201,\,\,0.192,\;0.204 \) giving biases = 0.001, 0.008, and 0.004. The random errors for β PS(best-fit) at N = 32,768, 4,096, and 256 are, respectively, σ x (β PS(best-fit)) = 0.011, 0.030, 0.139, compared to the theoretical random error of 2N^−0.5 = 0.011, 0.031, 0.125. The actual random error and the theoretical error are closer as N gets larger, with a negligible percentage difference between the two values for N = 32,768, a 3 % difference for N = 4,096, and an 11 % difference for N = 256. For power spectral analysis (Whittle), this same behaviour of the random error (2N^−0.5) can be seen in Fig. 28b, where there is a power-law dependence of (σ x (β PS)) on time series length N (dashed lines, blue triangle).
Confidence intervals (Sect. 7.2) are constructed as \( \bar{\beta }_{\text{PS}} \pm 1.96\;\sigma_{x} \left( {\beta_{\text{PS}} } \right) \). Therefore, if we take the example given above for 100 realizations of a FGN constructed to have β model = 0.2 and N = 32,768, the 95 % confidence intervals are \( \bar{\beta }_{{\text{PS}}({\text{best-fit}})} \pm 1.96\;\sigma_{x}(\beta_{{\text{PS}}({\text{best-fit}})}) = 0.201 \pm (1.96 \times 0.011),\) giving (within the 95 % confidence intervals) 0.179 < β PS(best-fit) < 0.223. If we do the same for the two other lengths, then for N = 4,096, 0.132 < β PS(best-fit) < 0.252, and for N = 256, −0.074 < β PS(best-fit) < 0.482. The confidence interval sizes grow rapidly as the number of elements N decreases, such that, for N = 256, we are unable to confirm (within the 95 % confidence interval) that long-range persistence is in fact present—the confidence interval contains the value β PS = 0.0. Values of β PS that are closer to or at zero are likely to occur for short-term persistent and white (uncorrelated) noises. Thus, if we want to use this analysis technique for showing that a time series with N = 256 elements is long-range persistent (and not β = 0.0), the confidence interval must not contain zero, requiring either β PS > 0.25 or β PS < −0.25, where we have used 1.96 × 2N^−0.5 to derive these limits. In the case of non-symmetric one-point probability distributions, the larger systematic errors (biases) shift the confidence intervals even more for β PS, leading to other (sometimes larger) thresholds for identifying long-range persistence.
Similar considerations can be made for the other three techniques (\({\beta }_{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA}}]}\)). Since these techniques are less reliable, the resultant thresholds will be larger and the two thresholds will not be symmetric with respect to zero due to biases. In such cases long-range persistence can only be identified if β model has a very high or very low value. In summary, it might become difficult to identify long-range persistence for non-Gaussian or rather short or non-perfect fractional noises or motions.
Another important aspect of our analysis is stationarity, in other words to decide whether a given time series can be appropriately modelled as a fractional noise (β < 1.0) or a fractional motion (β > 1.0). The value of β = 1.0 is the strength dividing (weakly) stationary noises from non-stationary motions. For this decision, essentially the same technique as described above can be applied where we inferred whether a time series is long-range persistent (β > 0.0) or anti-persistent (β < 0.0). However, the analysis is now restricted to confidence intervals for β DFA, β PS(best-fit), and β PS(Whittle). Hurst rescaled range (R/S) and semivariogram analysis cannot be applied because the critical value of β = 1.0 is at the edge of applicability for both techniques. For investigating whether a time series is a fractional noise (stationary) or motion (non-stationary), one can check all three confidence intervals as to whether they contain β = 1.0 within their lower or upper bounds. If this is the case, the only inference one can make is that the time series is either a noise or a motion, but not specifically one or the other. If all three confidence intervals have an upper bound that is less than β = 1.0, then one can infer that the time series is a fractional noise (and not a motion).
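As a minimal numerical sketch of the two decision rules just described (long-range persistence present if the 95 % confidence interval excludes β = 0.0; fractional noise rather than motion if the interval lies entirely below β = 1.0), the following Python fragment reproduces the confidence-interval arithmetic for the three power spectral (best-fit) examples quoted above. The helper functions and their names are ours, and the theoretical random error 2N^−0.5 applies to Gaussian-distributed series; small differences from the values quoted in the text arise from rounding of the quoted means and standard deviations.

```python
def ci_95(beta_mean: float, sigma_x: float) -> tuple:
    """95 % confidence interval: beta_mean +/- 1.96 * sigma_x."""
    half = 1.96 * sigma_x
    return beta_mean - half, beta_mean + half

def theoretical_sigma(n: int) -> float:
    """Approximate random error of beta_PS for a Gaussian series: 2 * N**-0.5."""
    return 2.0 * n ** -0.5

# Mean and standard deviation of beta_PS(best-fit) for 100 FGN realizations
# (beta_model = 0.2), as quoted in the text above.
for n, beta_mean, sigma_x in [(32768, 0.201, 0.011),
                              (4096, 0.192, 0.030),
                              (256, 0.204, 0.139)]:
    lo, hi = ci_95(beta_mean, sigma_x)
    persistent = lo > 0.0 or hi < 0.0          # CI excludes beta = 0?
    is_noise = hi < 1.0                        # CI entirely below beta = 1?
    print(f"N={n:6d}: CI=({lo:+.3f}, {hi:+.3f}), "
          f"theoretical sigma={theoretical_sigma(n):.3f}, "
          f"LRP detected: {persistent}, noise (not motion): {is_noise}")
```

In practice, the noise-versus-motion decision would apply the same check jointly to the β DFA, β PS(best-fit), and β PS(Whittle) intervals, as described above.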
Benchmark-Based Improvements to the Measured Persistence Strength of a Given Time Series
In the previous sections, we have studied how the different techniques that measure long-range persistence perform for benchmark time series. These time series are realizations of processes modelled to have a given strength of persistence (β model), a prescribed one-point probability distribution and a fixed number of values N. Our studies have shown that the measured strength of long-range persistence of a given time series realization can deviate from the persistence strength of the processes underlying the benchmark fractional noises and motions due to systematic and random errors of the techniques. Therefore, using these benchmark self-affine time series, we can have a good idea—based on their β model, one-point probability distribution and N—about the resultant distribution of \({\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}}\) for each different technique, including any systematic errors (biases) and random errors. To aid a more intuitive discussion in the rest of this section, we will use the subscript word 'measured' for the estimators of long-range persistence that are calculated using different techniques, β measured = \({\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}}\), where, as before, Hu, Ha, DFA, and PS represent the technique applied.
In practice, we are often confronted with a single time series and want to state whether or not this time series is long-range persistent and, if so, how strong this persistence is and how accurately this strength has been measured. As we have seen already, different techniques can be applied for analysing this single time series, with each technique having its own set of systematic and random errors. Thus, the inverse problem of that discussed in the preceding two sections must be solved: the strength of long-range persistence of what would be the best-modelled fractional noise or motion, β model, is sought, based on the time series length N, its one-point probability distribution, and the β measured persistence strength results of the technique applied. From this, assuming that the time series is self-affine, we would like to infer the 'true' strength of persistence β model (and corresponding confidence intervals). To explore this further, we will use in Sect. 10 the data sets presented in Fig. 1 as case examples. If they are analysed to derive parameters for models, then the 95 % confidence intervals of the persistence strength β model have to be obtained from the computed β measured and from other parameters of the time series such as the one-point probability density and the time series length.
As discussed in Sect. 7.1, the variable β model is a measure of the process that we have designed to have a given strength of long-range persistence (and one-point probability distribution); the time series (our benchmarks) are realizations of that process. These benchmark time series have a distribution of β measured, but with systematic and random errors within that ensemble of time series, due to (1) finite-size effects of the time series length N and (2) inherent biases in the construction process itself (e.g., for strongly asymmetric one-point probability distributions). These biases in the construction are difficult to document, as most research to date addresses biases in the techniques to estimate long-range persistence, not in the construction. For symmetric one-point probability distributions (Gaussian, Levy), each realization of the process, if N were very large (i.e. approaching infinity), would have a strength of long-range persistence equal to β model, in other words equal to the value for which the process was designed (e.g., Samorodnitsky and Taqqu 1994; Chechkin and Gonchar 2000; Enriquez 2004).
One can never know the 'true' strength of long-range persistence β of a realization of a process. Therefore, an estimate of β is introduced based on a given technique, which itself has a set of systematic and random errors. The result of each technique performed on a synthetic or a real time series is β measured, which therefore includes any systematic errors from both the realizations and the technique itself. Given a time series with a given length N and one-point probability distribution, we can perform a given technique which gives β measured. If we believe that long-range persistence is present, we can improve on our estimate of β measured by using (1) the ensemble of benchmark time series performance results from Sect. 7 of this paper and (2) our knowledge of the number of values N and one-point probability of the given time series. This benchmark-based improvement uses the results of our performance techniques, which are all based on an ensemble of time series that are realizations of a process designed to have a given β model, and which we now explore. The rest of this section is organized as follows. We first provide an analytical framework for our benchmark-based improvement of an estimator (Sect. 9.2), followed by a derivation of the conditional probability distribution for β model given β measured (Sect. 9.3). This is followed by some of the practical issues to consider when calculating benchmark-based improved estimators (Sect. 9.4) and a description of supplementary material for the user to do their own benchmark-based improved estimations (Sect. 9.5). We conclude by giving benchmark-based improved estimators for some example time series (Sect. 9.6).
Benchmark-Based Improvement of Estimators
In order to solve the inverse problem described in Sect. 9.1, we apply a technique from Bayesian statistics (see Gelman et al. 1995). This technique will incorporate the performance, that is, the systematic and random error of the particular technique which is discussed in Sect. 7 (see Figs. 21, 22, 23, 24, 25).
For this purpose, the joint probability distribution \( P\left({\boldsymbol{\beta}}_{\mathbf{model}} ,{\boldsymbol{\beta}} _{{\mathbf{measured}}} \right) \) for fractional noises and motions of length N and with a particular one-point probability distribution is considered. This joint probability distribution now depends on both \( {\boldsymbol{\beta}}_{{\mathbf{model}}} \) and \( {\boldsymbol{\beta}}_{{\mathbf{measured}}} . \) Because we will consider in this section probability distributions as functions of two variables and/or fixed values, we will introduce bold (e.g., \( {\boldsymbol{\beta}}_{{\mathbf{model}}} \)) to indicate the set of values versus non-bold (e.g., β measured) to indicate a single value of the variable. In Fig. 32, we give a cartoon example illustrating the different combinations: \( P\left( {\boldsymbol{\beta}}_{{\mathbf{model}}}, {\boldsymbol{\beta}}_{{\mathbf{measured}}} \right) \), \( P\left( {\boldsymbol{\beta}}_{\mathbf{model}} ,\beta_{\text{measured}} \right) \), \( P\left( {\beta_{\text{model}}, {\boldsymbol{\beta}}_{{\mathbf{measured}}} } \right) \), and \( P\left( {\beta_{\text{model}},\beta_{\text{measured}} } \right) \). The probability of just one measurement β measured of one given realization of a process created with β model is given by \( P\left( {\beta_{\text{model}} ,\beta_{\text{measured}} } \right) \), the single dot in Fig. 32. In Sect. 7 we considered one β model for a given process, and the probability distribution of the resultant ensemble of \({\boldsymbol{\beta}}_{{\mathbf{measured}}} \) from a series of realizations of the process; the range of \( P\left( {\beta_{\text{model}}, {\boldsymbol{\beta}}_{{\mathbf{measured}}} } \right) \) is the blue vertical line in Fig. 32. By contrast, the benchmark-based improvements to the persistence strengths that we will explore in this Sect. 9 are one measurement β measured with a corresponding probability of the ensemble of \( {\boldsymbol{\beta}}_{{\mathbf{model}}} \) associated with it, \( P\left( {\boldsymbol{\beta}}_{\mathbf{model}}, {\beta}_{{\text{measured}}} \right) \), the red horizontal line in Fig. 32. The yellow area in Fig. 32 represents the ensemble of multiple measurements \( {\boldsymbol{\beta}}_{{\mathbf{measured}}} \) of multiple processes each created with β model, and the probability of the ensemble of \( {\boldsymbol{\beta}}_{{\mathbf{model}}} \) associated with each β measured, that is, \( P\left( {{\boldsymbol{\beta}}_{{\mathbf{model}}}, {\boldsymbol{\beta}}_{{\mathbf{measured}}} } \right) \).
Cartoon illustration of the joint probability distributions, using the measured persistence strength (β measured) as a function of modelled persistence strength (β model). The bold notation (e.g., \( {\boldsymbol{\beta}}_{{\mathbf{model}}} \)) indicates the set of values versus non-bold (e.g., β measured) to indicate a single value of the variable. Shown are joint probability distributions \( P\left( {{\boldsymbol{\beta}}_{{\mathbf{model}}}, {\boldsymbol{\beta}}_{{\mathbf{measured}}} } \right) \) (yellow region), \( P\left( {{\boldsymbol{\beta}}_{{\mathbf{model}}} ,\beta_{\text{measured}} } \right) \) (red horizontal line), \( P\left( {\beta_{\text{model}}, {\boldsymbol{\beta}}_{{\mathbf{measured}}} } \right) \) (blue vertical line), and \( \left( {\beta_{\text{model}} ,\beta_{\text{measured}} } \right) \) (black dot)
Applying Bayes rule (Bayes and Price 1763) to our two-dimensional probability distribution \( P\left( {\boldsymbol{\beta}}_{{\mathbf{model}}}, {\boldsymbol{\beta}}_{{\mathbf{measured}}} \right) \) leads to:
$$ P\left( {\beta_{\text{model}} ,{\boldsymbol{\beta}}_{{{\mathbf{measured}}}} } \right) = P\left( {{\boldsymbol{\beta}}_{{{\mathbf{measured}}}} |\beta_{\text{model}} } \right)P\left( {\beta_{\text{model}} } \right) , $$
$$ P\left( {{\boldsymbol{\beta}}_{{{\mathbf{model}}}} ,\beta_{\text{measured}} } \right) = P\left( {{\boldsymbol{\beta}}_{{{\mathbf{model}}}} |\beta_{\text{measured}} } \right)P\left( {\beta_{\text{measured}} } \right) , $$
where \( P\left( {{\boldsymbol{\beta}}_{{{\mathbf{measured}}}} |\beta_{\text{model}} } \right) \) and \( P\left( {{\boldsymbol{\beta}}_{{{\mathbf{model}}}} |\beta_{\text{measured}} } \right) \) are conditional probability distributions, where the vertical bar '|' means 'given'. In other words, \( P\left( {{\boldsymbol{\beta}}_{{{\mathbf{measured}}}} |\beta_{\text{model}} } \right) \) (i.e. \( {\boldsymbol{\beta}}_{{{\mathbf{measured}}}} \) given β model) would mean the distribution of measured values \( {\boldsymbol{\beta}}_{{{\mathbf{measured}}}} \) using a specific technique [Hu, Ha, PS, DFA], performed on multiple realizations of a process that was created to have a given strength of long-range persistence β model. The left-hand side of Eq. (31a), \( P\left( {\beta_{\text{model}} ,{\boldsymbol{\beta}}_{{{\mathbf{measured}}}} } \right), \) is the joint probability distribution. This is equal to the right-hand side (Eq. 31a) where the conditional probability distribution \( P\left( {{\boldsymbol{\beta}}_{{{\mathbf{measured}}}} |\beta_{\text{model}} } \right) \) is multiplied by P(β model), where P(β model) acts as a normalization such that \( P\left( {{\boldsymbol{\beta}}_{{{\mathbf{measured}}}} |\beta_{\text{model}} } \right) \) sums up (over \( {\boldsymbol{\beta}}_{{\mathbf{measured}}} \)) to 1.0.
To illustrate Eq. (31a), we consider the joint probability distribution \( P\left( {\beta_{\text{model}} ,{\boldsymbol{\beta}}_{{{\mathbf{measured}}}} } \right). \) In Fig. 33 we take fractional log-normal noise benchmarks with coefficient of variation c v = 0.5 and N = 1,024 data points and apply DFA. These were the same benchmarks used to produce the performance test results shown in Fig. 23c, with 100 realizations produced at each \( {\boldsymbol{\beta}}_{{\mathbf{model}}} \) = −1.0, −0.8, −0.6, …, 4.0. In Fig. 33a we give a histogram of the distribution of the estimated strength of long-range dependence \( {\boldsymbol{\beta}}_{{\mathbf{measured}}} = {\boldsymbol{\beta}}_{{\mathbf{DFA}}} \) for one given value of β model = 0.8, along with the best-fit Gaussian distribution to the probabilities \( P\left( {{\boldsymbol{\beta}}_{{\mathbf{DFA}}} |\beta_{\text{model}} = 0.8} \right) \). In Fig. 33b we show the results of performance tests for multiple realizations of processes created to have an ensemble \( {\boldsymbol{\beta}}_{{\mathbf{model}}}. \) This is shown both as given in Fig. 23c (repeated as Fig. 33b) and as a subsection of the results interpolated and contoured (Fig. 33d). Thus, the joint probability density \( P\left( {{\boldsymbol{\beta}}_{{\mathbf{model}}}, {\boldsymbol{\beta}}_{{\mathbf{DFA}}} } \right) \) (the contour lines) is constructed by placing side-by-side thin 'slices' of Gaussian distributions which correspond to the distribution of \( {\boldsymbol{\beta}}_{{\mathbf{measured}}} \) given various values of β model. For achieving uniformly distributed values of β model, the virtual slices have to have equal thickness and equal weight. The grey region with the contours in Fig. 33d represents the two-dimensional (joint) probability distribution \( P\left( {{\boldsymbol{\beta}}_{\mathbf{model}} ,{\boldsymbol{\beta}}_{{\mathbf{DFA}}} }\right) \), whereas the vertical red line in Fig. 33d represents the one-dimensional (joint) probability distribution \( P\left( {\beta_{\text{model}} ,{\boldsymbol{\beta}}_{{{\mathbf{measured}}}} } \right) \), which is equal to (see Eq. 31a) the conditional probability distribution \( P\left( {{\boldsymbol{\beta} }_{{\mathbf{DFA}}} |\beta_{\text{model}} = 0.8} \right) \), multiplied by P(β model).
Schematic illustration of the construction of the joint probability density \( P\left( {{\boldsymbol{\beta} }_{{\mathbf{model}}}, {\boldsymbol{\beta}}_{{\mathbf{measured}}} } \right) \) for realizations of a process created to have strengths of long-range persistence \( 0.0 \le {\boldsymbol{\beta} }_{{\mathbf{model}}} \le 1.5, \) log-normal one-point probability distribution (c v = 0.5, Box–Cox transform), time series length N = 1,024, and using DFA to evaluate the strength of long-range persistence. a A histogram of the distribution of the estimated strength of long-range dependence \( {\boldsymbol{\beta}}_{{\mathbf{measured}}} = {\boldsymbol{\beta} }_{{\mathbf{DFA}}} \) for one given value of β model = 0.8 is given, along with the best-fit Gaussian distribution to the probabilities \( P\left( {{\boldsymbol{\beta}}_{{\mathbf{DFA}}} |\beta_{\text{model}} = 0.8} \right). \) b Performance of detrended fluctuation analysis (β DFA) using realizations of processes created to have different strengths of persistence \( - 1.0 \le {\boldsymbol{\beta}}_{{\mathbf{model}}} \le 4.0 \) and log-normal one-point probability distributions, c v = 0.5. The mean values (diamonds) and 95 % confidence intervals (error bars) of β DFA are presented as a function of the long-range persistence strength β model. This is a reproduction of Fig. 23c. c Enlarged version of (a). d The inset for (b) is enlarged here. Using the best-fitting Gaussian distributions for \( {\boldsymbol{\beta}}_{{\mathbf{model}}} = 0.0,0.2,0.4, \ldots ,1.6 \), and N = 1,024, these Gaussian distributions are interpolated using a spline fit, to create a contour map (diagonal grey region in d) of the joint probability distribution \( P\left({\boldsymbol{\beta}}_{{\mathbf{model}}}, {\boldsymbol{\beta}}_{{\mathbf{DFA}}} \right). \) Shown also are the interpolations of the \( \bar{\boldsymbol{\beta} }_{{\mathbf{DFA}}} \) (diagonal thick purple dashed line), their 95 % confidence interval borders (diagonal purple dotted lines), which are constructed as \( \bar{\beta}_{\text{DFA}}\, {\pm} 1.96\,\sigma_{x} ({\beta}_{\text{DFA}} ), \) and the function β DFA = β model (diagonal solid yellow line). Illustrated in (d) is an example of one value β model = 0.8 (vertical red line). This translates to the Gaussian distribution in (c) (an enlarged version of a), where the Gaussian distribution is a vertical cut of the two-dimensional joint probability distribution \( P\left( {\boldsymbol{\beta}}_{{\mathbf{model}}}, {\boldsymbol{\beta}}_{{\mathbf{DFA}}} \right) \) at β model = 0.8. Also given in (c) is the interval corresponding to \( \bar{\beta}_{\text{DFA}}\, {\pm} 1.96\,\sigma_{x} (\beta_{\text{DFA}} ) \) (vertical dark red line with arrows) that correspond to the β model = 0.8
In Fig. 33 we have shown an example of the joint probability distribution \( P\left( {\beta_{\text{model}} ,{\boldsymbol{\beta}}_{{{\mathbf{measured}}}} } \right) \). We now consider (Eq. 31b) the joint probability distribution \( P\left( {{\boldsymbol{\beta}}_{{{\mathbf{model}}}} ,\beta_{\text{measured}} } \right) = P\left( {{\boldsymbol{\beta}}_{{{\mathbf{model}}}} |\beta_{\text{measured}} } \right)P\left( {\beta_{\text{measured}} } \right); \) in other words, given a value for β measured, what is the corresponding result for an ensemble of \( {\boldsymbol{\beta}}_{{\mathbf{model}}} . \) In Fig. 34, we give a schematic illustration of the construction of the conditional probability distribution \( P\left( {{\boldsymbol{\beta}}_{{{\mathbf{model}}}} |\beta_{\text{measured}} } \right) \) for the same example as in Fig. 33, which was based on a log-normal distribution (c v = 0.5, N = 1,024) and using DFA to evaluate the strength of long-range persistence. Figure 34a gives the two-dimensional probability distribution \( P\left( {{\boldsymbol{\beta} }_{{\mathbf{model}}} ,{\boldsymbol{\beta} }_{{\mathbf{DFA}}} } \right) \) as constructed in Fig. 33d. This is now cut horizontally at three values of \( \beta_{\text{DFA}} = 0.30,\;0.86,\;1.65 \); these horizontal lines are now representing the ranges of the joint probability distributions \( P\left( {{\boldsymbol{\beta}}_{{{\mathbf{model}}}}, \beta_{\text{measured}} } \right). \) In Fig. 34b, the three conditional probability distributions \( P\left( {{\boldsymbol{\beta}}_{{{\mathbf{model}}}} |\beta_{\text{DFA}} = 0.30,\;0.86,\;1.65} \right) \) are obtained by normalizing \( P\left( {{\boldsymbol{\beta}}_{{{\mathbf{model}}}} ,\beta_{\text{measured}} } \right) \) such that the integral of \( P\left( {{\boldsymbol{\beta}}_{{{\mathbf{model}}}} ,\beta_{\text{measured}} } \right) \) is equal to 1.0.
Schematic illustration of the construction of the conditional probability distribution \( P\left( {{\boldsymbol{\beta}}_{{{\mathbf{model}}}} |\beta_{\text{measured}} } \right) \), in other words the distribution of \( {{\boldsymbol{\beta}}}_{{{\mathbf{model}}}} \) given a single value of β measured, for the same example as in Fig. 33 (log-normal distribution, c v = 0.5, time series length N = 1,024, and using DFA to evaluate the strength of long-range persistence). This illustrates the adjustment to β measured based on the benchmark performance results introduced in Sect. 7. a The two-dimensional probability distribution \( P\left( {\boldsymbol{\beta}}_{{\mathbf{model}}} , {\boldsymbol{\beta}}_{{\mathbf{DFA}}} \right) \) as constructed in Fig. 33d is cut horizontally at three values of \( \beta_{\text{DFA}} = 0.30,\;0.86,\;1.65. \) The x-axis here is from 0.0 ≤ β model ≤ 2.2; whereas Fig. 33d is 0.0 ≤ β model ≤ 1.5. b The conditional probability distributions \( P\left( {{\boldsymbol{\beta}}_{{{\mathbf{model}}}} |\beta_{\text{DFA}} } \right) \) are then derived with Eq. (36), which incorporates the performance (the systematic and random errors) of the technique used. The vertical lines indicate the benchmark-based improved estimator \( \beta_{\text{DFA}}^{*} \) (Eq. 37), which is the mean value of the adjusted probability distribution. These are slightly greater than the mode as the distributions are skewed
In the framework of Bayesian statistics, the distribution of persistence strengths \( {\boldsymbol{\beta} }_{{\mathbf{model}}} \) given the measured persistence strength β measured is called the posterior. In this paper, we will use this 'posterior' to derive a benchmark-based improvement of the estimator and indicate the improved estimator by a superscript *. The mean value for our improved estimator for the strength of long-range persistence is given by:
$$ \beta_{\text{measured}}^{*} = \int\limits_{{\beta_{\hbox{min} } }}^{{\beta_{\hbox{max} } }} {\boldsymbol{\beta} }_{\mathbf{model}} \,P\left( {\boldsymbol{\beta} }_{\mathbf{model}} |\beta_{\text{measured}} \right) \;{\text{d}}{\boldsymbol{\beta} }_{\mathbf{model}} , $$
where \( \beta_{\text{measured}}^{*} \) is the benchmark-based improved estimate of β measured based on our benchmark time series results.
In practice, performing the procedure as schematically illustrated in Fig. 34 (i.e. with a two-dimensional histogram) is doable, but requires a sufficiently small bin size for β model and many realizations, such that an interpolation can be made in both directions. Therefore, we would like to derive an equation for \( P\left( {{\boldsymbol{\beta}}_{{{\mathbf{model}}}} |\beta_{\text{measured}} } \right), \) and, from this, derive \( \beta_{\text{measured}}^{*} , \) a benchmark-based improvement to a given β measured. We do this in the next section.
Deriving the Conditional Probability Distribution for \( {\boldsymbol{\beta}}_{{\mathbf{model}}} \) Given β measured
How can the distribution of persistence strength \( P\left( {{\boldsymbol{\beta}}_{{\mathbf{model}}} |\beta_{\text{measured}} } \right) \) be obtained? Two special properties of our estimators allow a manageable mathematical expression:
For fixed β model, the distribution \( P\left( {{\boldsymbol{\beta}}_{{\mathbf{measured}}} |\beta_{\text{model}} } \right) \) can be approximated by a Gaussian distribution.
The mean value of \( P\left( {{\boldsymbol{\beta}}_{{\mathbf{measured}}} |\beta_{\text{model}} } \right) \) is monotonically growing as a function of β model.
These two properties approximately hold for each of the four techniques applied in this paper, and we will now use them. Our results presented in Sects. 7 and 8 provide evidence that the conditional probability \( P\left( {{\boldsymbol{\beta}}_{{\mathbf{measured}}} |\beta_{\text{model}} } \right) \) follows a Gaussian distribution (see Figs. 18, 19, 20):
$$ P\left( {{\boldsymbol{\beta}}_{{{\mathbf{measured}}}} |\beta_{\text{model}} } \right)\sim {\text{Gaussian}}\left( {\bar{\beta }_{\text{measured | model}} ,\sigma_{{\beta_{\text{measured | model}} }}^{2} } \right) , $$
with \( \bar{\beta }_{\text{measured | model}} \) the mean value of \( {\boldsymbol{\beta}}_{{{\mathbf{measured}}}} \) for a given β model, and \( \sigma_{{\beta_{\text{measured | model}} }}^{2} \) the variance of \( {\boldsymbol{\beta}}_{{{\mathbf{measured}}}} \) for a given β model. Furthermore, we have found (Figs. 21, 22, 23, 24, 25) that \( \bar{\beta }_{\text{measured | model}} \) is monotonically (sometimes nonlinearly) increasing as a function of β model , except for the log-normal noises constructed by the Schreiber–Schmitz algorithm in the non-stationary regime (β model > 1.0) where \( \bar{\beta }_{\text{measured | model}} \) decreases with β model .
With Eq. (31a) we can derive the joint probability \( P\left( {\beta_{\text{model}} ,{\boldsymbol{\beta}}_{{{\mathbf{measured}}}} } \right). \) An assumption is that \( {\boldsymbol{\beta}}_{{\mathbf{model}}} \) is uniformly distributed over the interval β min ≤ \( {\boldsymbol{\beta}}_{{\mathbf{model}}} \) ≤ β max, where β min and β max are the minimum and maximum values, respectively. We have chosen β model = −1.0, −0.8, −0.6, …, 4.0, and an equal number of realizations for each β model. The one-dimensional probability distribution of \( {\boldsymbol{\beta}}_{{\mathbf{model}}} \) is P(β model ) = 1/(β max − β min) = c 1. Substituting P(β model ) into Eq. (31a) allows us to write the joint probability distribution as:
$$ P\left( {\beta_{\text{model}} ,{\boldsymbol{\beta}}_{{{\mathbf{measured}}}} } \right) = c_{1} \,P\left( {{\boldsymbol{\beta}}_{{{\mathbf{measured}}}} |\beta_{\text{model}} } \right). $$
Using the assumption that β model is uniformly distributed and that Δβ model is small enough to give results that are smooth enough to be interpolated, along with Eqs. (33) and (34), then the joint probability distribution \( P\left( {\beta_{\text{model}} ,{\boldsymbol{\beta} }_{{\mathbf{measured}}} } \right) \) is given by:
$$ \;P\left( {\beta_{\text{model}} ,{\boldsymbol{\beta}}_{{{\mathbf{measured}}}} } \right) = \frac{{c_{1} }}{{\sqrt {2\pi } \;\sigma_{{\beta_{\text{measured | model}} }}^{{}} }}\exp \left( { - \frac{{\left( {{\boldsymbol{\beta}}_{{{\mathbf{measured}}}} - \bar{\beta }_{\text{measured | model}} } \right)^{2} }}{{2\;\sigma_{{\beta_{\text{measured | model}} }}^{2} }}} \right). $$
This particular form of \( P\left({\beta}_{\text{model}},\,{\boldsymbol{\beta}}_{{\mathbf{measured}}}\right) \) can be considered for multiple values of β model, and the required calibrated probability distribution \( P\left( {\boldsymbol{\beta}}_{\mathbf{model}} |\beta_{\text{measured}} \right) \) can be derived by rearranging Eq. (31b):
$$ \begin{aligned} P\left( {{\boldsymbol{\beta} }_{{\mathbf{model}}} |\beta_{\text{measured}} } \right) & = \frac{{P\left( {{\boldsymbol{\beta} }_{{\mathbf{model}}} , \beta_{\text{measured}} } \right)}}{{P\left( {\beta_{\text{measured}} } \right)}} \\ & = c_{2} \;\exp \left( { - \frac{{\left( {\beta_{\text{measured}} - \bar{\boldsymbol{\beta }}_{{\mathbf{measured | model}}} } \right)^{2} }}{{2\;{\boldsymbol{\sigma}}_{{{\boldsymbol{\beta}}_{{\mathbf{measured | model}}} }}^{2} }}} \right). \\ \end{aligned} $$
The constant c 2 is based on integrating the final result of Eq. (36) such that \( \int_{{\beta_{\hbox{min}}}}^{{\beta_{\hbox{max} } }} P\left( {\boldsymbol{\beta} }_{\mathbf{model}} |\beta_{\text{measured}} \right) {\text{d}} {\boldsymbol{\beta} }_{\mathbf{model}} = 1. \) Combining Eq. (36) with Eq. (32) gives:
$$ \beta_{\text{measured}}^{*} = \,c_{2} \;\int\limits_{{\beta_{\hbox{min} } }}^{{\beta_{\hbox{max} } }} {{\boldsymbol{\beta}}_{\mathbf{model}} \exp \left( { - \frac{{\left( {\beta_{\text{measured}} - \bar{ \boldsymbol{\beta}}_{\mathbf{measured | model}} } \right)^{2} }}{{2\;{\boldsymbol{\sigma}}_{{{\boldsymbol{\beta}}_{\mathbf{measured | model}} }}^{\bf 2} }}} \right)} \;{\text{d}} {\boldsymbol{\beta}}_{\mathbf{model}}. $$
We now have a general equation for our improved estimator, \( \beta_{\text{measured}}^{*} \), which has been based on the conditional probability \( P\left( {{\boldsymbol{\beta}}_{{{\mathbf{model}}}} |\beta_{\text{measured}} } \right), \) in other words, an improvement based on our benchmark-based results from Sects. 7 and 8. Three examples for \( \beta_{\text{measured}}^{*} \) are given in Fig. 34 which schematically illustrates the construction of \( P\left( {{\boldsymbol{\beta}}_{{{\mathbf{model}}}} |\beta_{\text{measured}} } \right). \)
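A compact numerical sketch of Eqs. (36) and (37) is given below (our own Python illustration; the paper's supplementary material provides an Excel spreadsheet for the same purpose, Sect. 9.5). The function assumes a uniform grid of β model values together with benchmark tables of \( \bar{\beta }_{\text{measured | model}} \) and \( \sigma_{\beta_{\text{measured | model}}} \) (for example obtained as in step (D) of Sect. 9.4); it evaluates the conditional distribution at the single user-measured value, normalizes it numerically (the constant c 2), and returns the benchmark-based improved estimator together with its 2.5 and 97.5 percentiles. The closing demonstration uses an idealized unbiased benchmark with constant σ = 0.1, which is an assumption for illustration only.

```python
import numpy as np

def benchmark_improved_estimator(beta_measured, beta_model_grid,
                                 mean_measured_given_model,
                                 sigma_measured_given_model):
    """Evaluate Eq. (36) on a uniform grid of beta_model values and return
    the benchmark-based improved estimator beta* (Eq. 37) together with the
    2.5 and 97.5 percentiles of the conditional distribution."""
    # Gaussian kernel of Eq. (36), evaluated at the single measured value:
    post = np.exp(-((beta_measured - mean_measured_given_model) ** 2)
                  / (2.0 * sigma_measured_given_model ** 2))
    dbeta = beta_model_grid[1] - beta_model_grid[0]   # uniform grid assumed
    post /= post.sum() * dbeta                        # numerical normalization (c2)
    beta_star = np.sum(beta_model_grid * post) * dbeta            # Eq. (37)
    cdf = np.cumsum(post) * dbeta
    lo, hi = np.interp([0.025, 0.975], cdf, beta_model_grid)
    return beta_star, (lo, hi)

# Illustration only: an idealized, unbiased benchmark with constant sigma = 0.1,
# for which beta* simply reproduces beta_measured (here 0.75).
grid = np.arange(-1.0, 4.0 + 1e-9, 0.01)
beta_star, (lo, hi) = benchmark_improved_estimator(
    0.75, grid, grid, np.full_like(grid, 0.1))
print(f"beta* = {beta_star:.2f}, 95 % CI = ({lo:.2f}, {hi:.2f})")
```

For biased techniques or β model-dependent σ, the same function returns an asymmetric conditional distribution, and hence asymmetric confidence intervals, as discussed in Sect. 9.4.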
Practical Issues When Calculating the Benchmark-Based Improved Estimator \( \beta_{\text{measured}}^{*} \)
For practical applications we are interested in deriving the benchmark-based improved estimator \( \beta_{\text{measured}}^{*} \) and associated 95 % confidence intervals. The approach presented above allows us to do this with moderate computational costs in the following way:
(A) For the time series of interest, determine its one-point probability distribution and note its time series length, N.
(B) Measure the strength of long-range dependence of the time series β measured using a specific technique [Hu, Ha, DFA, PS].
(C) Construct benchmark fractional noises and motions which are realizations of processes with different strengths of long-range persistence, β model, but with length N and one-point probability distributions equal to those of the analysed time series. We have provided (supplementary material) files with fractional noises and motions drawn from 126 sets of parameters and an R program to create these and other synthetic noises and motions (see Sect. 4.3 for further description).
(D) Use the fractional noises and motions constructed in (C) and the technique used in (B) to determine numerically \( \bar{\boldsymbol{\beta} }_{\mathbf{measured | model}} \) and \( \boldsymbol{\sigma}_{{{\boldsymbol{\beta}}_{\mathbf{measured | model}} }}^{2} \), for a range of β model from β min to β max, such that the step size for successive β model results in \( \bar{\boldsymbol{\beta} }_{\mathbf{measured | model}} \) and \( \boldsymbol{\sigma}_{{{\boldsymbol{\beta}}_{\mathbf{measured | model}} }}^{2} \) which are sufficiently smooth. Interpolation within the step size chosen (e.g., linear, spline) might be necessary. We have given these performance measures (supplementary material) for fractional noises and motions with about 6,500 different sets of parameters (see Sect. 7.3 for further description).
(E) Apply Eq. (36) to determine the 'posterior' of the long-range persistence strength, \( P\left( {{\boldsymbol{\beta}}_{{{\mathbf{model}}}} |\beta_{\text{measured}} } \right) \).
(F) Determine the benchmark-based improved estimator for the time series, \( \beta_{\text{measured}}^{*} \), and its 95 % confidence intervals from the mean and 95 % confidence intervals of the distribution obtained in (E). (A minimal code sketch of steps (C) and (D) follows this list.)
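The sketch below illustrates steps (C) and (D) for the simplest case of Gaussian-distributed benchmarks. It uses our own simplified stand-ins (fractional noises and motions synthesized by Fourier filtering and a plain periodogram log–log regression as the measuring technique), rather than the supplementary-material R code and the full set of techniques; the resulting interpolated tables are exactly the inputs required by Eq. (36) and by the estimator sketch given at the end of Sect. 9.3.

```python
import numpy as np

def fgn_spectral(n, beta, rng):
    """Synthesize an approximate fractional Gaussian noise/motion with S(f) ~ f**(-beta)."""
    f = np.fft.rfftfreq(n)
    amp = np.zeros_like(f)
    amp[1:] = f[1:] ** (-beta / 2.0)
    coeffs = amp * (rng.standard_normal(f.size) + 1j * rng.standard_normal(f.size))
    x = np.fft.irfft(coeffs, n=n)
    return (x - x.mean()) / x.std()

def beta_periodogram(x):
    """Stand-in for step (B): slope of log10 S(f) versus log10 f."""
    spec = np.abs(np.fft.rfft(x - np.mean(x))) ** 2
    f = np.fft.rfftfreq(len(x))
    return -np.polyfit(np.log10(f[1:]), np.log10(spec[1:]), 1)[0]

rng = np.random.default_rng(0)
n, n_realizations = 1024, 100
beta_model_grid = np.arange(-1.0, 4.0 + 1e-9, 0.2)      # step (C): -1.0, -0.8, ..., 4.0

mean_table, sigma_table = [], []
for beta_model in beta_model_grid:                      # step (D): Monte Carlo tables
    measured = [beta_periodogram(fgn_spectral(n, beta_model, rng))
                for _ in range(n_realizations)]
    mean_table.append(np.mean(measured))
    sigma_table.append(np.std(measured, ddof=1))
mean_table, sigma_table = np.array(mean_table), np.array(sigma_table)

# Linear interpolation to a finer grid (here 0.01), as done in the spreadsheet:
fine_grid = np.arange(-1.0, 4.0 + 1e-9, 0.01)
mean_fine = np.interp(fine_grid, beta_model_grid, mean_table)
sigma_fine = np.interp(fine_grid, beta_model_grid, sigma_table)
```

Feeding fine_grid, mean_fine, and sigma_fine, together with a user-measured β, into the benchmark_improved_estimator function sketched at the end of Sect. 9.3 completes steps (E) and (F).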
In the case of unbiased techniques, we find \( {\boldsymbol{\beta }}_{\mathbf{measured | model}}={\boldsymbol{\beta}}_{\mathbf{model}}.\) If, in addition, the variance \( \boldsymbol {\sigma_{{\beta}_{\mathbf{measured | model}} }}^{2} \) does not depend on β model , then \( \boldsymbol{\sigma_{{\beta}_{\mathbf{measured | model}} }}^{2} \) = σ 2 where σ 2 is now a constant. An example of an unbiased technique where the variance does not depend on β model is power spectral analysis applied to time series with symmetric one-point probability distributions. For this case, the distribution defined in Eq. (36) simplifies to a Gaussian distribution with a mean value of β model and a variance of σ 2, giving \( P\left({\boldsymbol{\beta}}_{{\mathbf{model}}}|\beta_{\text{measured}}\right)\sim{\text{Gaussian}}\left({\boldsymbol{\beta}}_{\mathbf{model}} ,{\sigma}^{\text{2}}\right).\) This implies, for this case, that (Eq. 37) the benchmark-based improved estimator \( {\boldsymbol{\beta}}_{\mathbf{measured}}^{*} = {\boldsymbol{\beta}}_{\mathbf {model}} . \) However, in contrast, in power spectral analysis applied to time series with asymmetric one-point probability distributions and for the three other techniques considered in this paper for both symmetric and asymmetric one-point probability distributions, either the techniques are biased or the variance \( \boldsymbol{\sigma}_{{\beta}_{\mathbf{measured | model}}}^{2} \) changes as a function of β model . In these cases the corresponding distributions \( P\left( {{\boldsymbol{\beta}}_{{{\mathbf{model}}}} |\beta_{\text{measured}} } \right) \), as defined in Eq. (36), are asymmetric, and also any confidence intervals (2.5 and 97.5 % of the probability distribution) are asymmetric with respect to the mean of the probability distribution, \( \beta_{\text{measured}}^{*} \).
Benchmark-Based Improved Estimators: Supplementary Material Description
We have provided (supplementary material) an Excel spreadsheet which allows a user to determine conditional probability distributions based on a user-measured β measured for a time series, and the benchmark performance results discussed in this paper. In Fig. 35 we show three example screenshots from the supplementary material Excel spreadsheet.
The first sheet 'PerfTestResults' (Fig. 35a) allows the user to see summary statistics of the results of selected performance tests (Hurst rescaled range analysis, semivariogram analysis, detrended fluctuation analysis, power spectral analysis best-fit, and power spectral analysis Whittle) as applied to benchmark synthetic time series with modelled strengths of long-range persistence (−1.0 < β model < 4.0), given one-point probability distributions (Gaussian, log-normal c v = 0.2 to 2.0, Levy a = 1.0 to 1.9), and time series lengths (N = 64, 128, 256, …, 131,072). For log-normal noises and motions, we give only the results of those constructed with the Box–Cox transform (FLNNa). An example is shown in Fig. 35a of a statistical summary of results for 100 realizations of a fractional log-normal noise process constructed with Box–Cox (FLNNa), c v = 0.8, N = 512, with power spectral analysis (best-fit) applied. Although the results are not discussed in the text of this paper, we also give the results for discrete wavelet analysis in the supplementary material (see Appendix 6 for details of how it was applied).
Example of three screen captures from Supplementary Material Excel Spreadsheet for a user to determine conditional probability distributions based on a user-measured β measured for a time series, and the benchmark performance results discussed in this paper. a Spreadsheet 'PerfTestResults' allows the user to select summary statistics of the results of five different techniques applied to over 6,500 combinations of parameters, as described in this paper. b Spreadsheet 'InterpolSheet' allows an input of a user-measured β measured for their specific time series, and based on the closest match of their time series to benchmark results given in 'PerfTestResults', the mean and standard deviation of the benchmark results for −1.0 < β model < 4.0. The spreadsheet linearly interpolates the performance test results and then calculates β *measured , the benchmark-based improvement to the user-measured value, along with the 97.5 and 2.5 percentiles (i.e. the 95 % confidence intervals). c The sheet 'CalibratedProbChart' shows the calibrated probability distribution of β model conditioned on the user-measured value for beta (measure of the strength of long-range persistence) and benchmark time series
The second sheet 'InterpolSheet' (Fig. 35b) allows the user to input in the yellow box the user-measured β measured for their specific time series, and then, based on the closest match of their time series to the sheet 'PerfTestResults' parameters of one-point probability distribution type, number of values N, and technique used, to input the mean and standard deviation of the benchmark results for −1.0 < β model < 4.0. In this example, it is assumed the user has a time series with the parameters given for Fig. 35a (FLNNa, c v = 0.8, N = 512), has applied power spectral analysis (best-fit), and has user-measured value of β measured = 0.75. The spreadsheet automatically interpolates the performance test results, which have step size Δβ model = 0.2, to Δβ model = 0.01, using linear interpolation, and then calculates β *measured , the benchmark-based improvement to the user-measured value, along with the 97.5 and 2.5 percentiles (i.e. the 95 % confidence intervals).
The third sheet 'CalibratedProbChart' (Fig. 35c) shows the calibrated probability distribution of β model conditioned on the user-measured value for beta (measure of the strength of long-range persistence) and benchmark time series, \( P\left( {{\boldsymbol{\beta} }_{{\mathbf{model}}} |\beta_{\text{measured}} = 0.75} \right), \) showing graphically the mean of the distribution (this gives the value for β *measured ) and the 97.5 and 2.5 percentiles of that distribution.
Benchmark-Based Improved Estimators for Example Time Series
Now we come back to the example of fractional log-normal noises discussed in Sect. 5 and presented and pre-analysed in Fig. 14 and the properties of the corresponding \( \beta_{\text{measured}} = \beta_{\left[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}({\text{best-fit}}),\,{\text{PS}}({\text {Whittle}}) \right]} \) presented in Figs. 21, 22, 23, 24, 25 and Tables 4, 5. Take, for example, a time series with N = 1,024 data points whose one-point probability distribution is a log-normal with a coefficient of variation of c v = 0.5 and created to have β model = 1.0. The four functions—rescaled range, detrended fluctuation function, semivariogram, and power spectral density—result in a power-law dependence on the segment length, lag, or the frequency. In other words, the analyses expose long-range persistence. The corresponding power-law exponents are related to the strength of long-range persistence as mentioned in Sects. 5 and 6 and given in Table 3. The measured strength of long-range persistence has been determined as β Hu = 0.78, β Ha = 1.34, β DFA = 0.99, β PS(best-fit) = 0.99, and β PS(Whittle) = 0.98. We now apply the scheme in Sect. 9.4 to obtain the five calibrated distributions, \( P\left( { {\boldsymbol{\beta}}_{{\mathbf{model}}}}|{\beta_{\text{measured}}} \right) \), conditioned on the five β measured values for each technique (see Fig. 34 for an illustration).
For example, β Hu = 0.78 is put into Eq. (36) giving:
$$ P\left( {{\boldsymbol{\beta}}_{{\mathbf{model}}}} |\beta_{\text{Hu}} = 0.78 \right) = c_{2} \;\exp \left( { - \frac{{\left( {0.78 - \bar{\boldsymbol{\beta}}_{{\mathbf{Hu | model}}} } \right)^{2} }}{{2\;{\boldsymbol{\sigma}}_{{{\boldsymbol{\beta}}_{{\mathbf{Hu | model}}} }}^{2} }}} \right). $$
The quantities \( \bar{\boldsymbol{\beta}}_{{\mathbf{Hu | model}}} \) and \( {\boldsymbol{\sigma} }_{{{\boldsymbol{\beta} }_{{\mathbf{Hu | model}}} }} \) in Eq. (38) are, respectively, the mean and standard deviation (i.e. the standard error) of β Hu as functions of \( {\boldsymbol{\beta} }_{{\mathbf{model}}} \), for log-normal time series with c v = 0.5 and N = 1,024. Each value of \( {\boldsymbol{\beta} }_{{\mathbf{model}}} \) has its own associated mean (\( \bar{\beta }_{{\text{Hu | model}}} \)) and standard deviation (\({\sigma }_{{{\beta }_{{\text {Hu | model}}} }} \)). For Hurst rescaled range (R/S) analysis, we can read this set of values directly off of Fig. 21c, where the means are the green diamonds plotted and the error bars represent ±1.96 standard deviations. However, as it is difficult to read precise numbers off of the figures, a more accurate way is to go to the supplementary material Excel spreadsheet, choose the appropriate parameters of the process, and read off (with appropriate interpolation if necessary) \( \bar{\boldsymbol{\beta}}_{\mathbf{Hu | model}} \) and \( \boldsymbol{ \sigma_{{\beta}_{\mathbf{Hu | model}} }}, \) and then either to apply Eq. (38) directly or to use the supplementary material Excel spreadsheet (Sect. 9.5) to calculate the appropriate values and the resultant conditional distributions \( P\left( {\boldsymbol{\beta} }_{{\mathbf{model}}} |\beta_{\text{measured}} \right) \).
In Fig. 36 we give the conditional distributions \( P\left( { {\boldsymbol{\beta}}_{{\mathbf{model}}}} |\beta_{\text{measured}} \right) \), for each of the five performance techniques, based on benchmark results and measured values for the techniques β Hu = 0.78, β Ha = 1.34, β DFA = 0.99, β PS(best-fit) = 0.99, and β PS(Whittle) = 0.98. The conditional distributions for β DFA, β PS(best-fit), and β PS(Whittle) have their modes (maximum probability for each distribution) at the measured values of β, whereas the modes of the calibrated distributions of β Hu and β Ha are shifted because the underlying β model = 1.0 is at the edge of the range of applicability of these two techniques. The calibrated strength of long-range persistence (i.e. the benchmark-based improved estimators) leads for all techniques to values close to one: \( \beta_{\text{Hu}}^{*} = 1.02,\beta_{\text{Ha}}^{*} = 1.30,\beta_{\text{DFA}}^{*} = 1.05,\beta_{{{\text{PS}}\left( {{\text{best-fit}}} \right)}}^{*} = 1.02,\;{\text{and}}\;\beta_{{{\text{PS}}\left( {\text{Whittle}} \right)}}^{*} = 1.02. \) The 95 % confidence intervals (ranging from the 2.5 to the 97.5 percentile), however, differ remarkably: 0.74 < \( \beta_{\text{Hu}}^{*} \) < 1.32, 1.05 < \( \beta_{\text{Ha}}^{*} \) < 1.62, 0.83 < \( \beta_{\text{DFA}}^{*} \) < 1.28, 0.88 < \( \beta_{{{\text{PS}}\left( {{\text{best-fit}}} \right)}}^{*} \) < 1.14 and 0.90 < \( \beta_{{{\text{PS}}\left( {\text{Whittle}} \right)}}^{*} \) < 1.11. The improved estimator \( \beta_{\text{measured}}^{*} \) through use of the power spectral method is the most certain, followed by detrended fluctuation analysis. The confidence intervals resulting from rescaled range analysis and semivariogram analysis are very wide. The confidence interval sizes of \( \beta_{\text{Hu}}^{*} ,\beta_{\text{Ha}}^{*} ,\;{\text{and}}\;\beta_{\text{DFA}}^{*} , \) are larger than the confidence intervals of β Hu, β Ha, and β DFA derived from the random errors, \(\sigma \) x (\({\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA}}]}}\)). Nevertheless, all techniques are appropriate to confirm the presence of long-range persistence, as no corresponding 95 % confidence interval contains β model = 0.0.
Conditional distributions \( P( {\boldsymbol{\beta} }_{{\mathbf{model}}} |\beta_{{\left[{{\text{Hu}},\,{\text{Ha}},\,{\text{DFA}},\,{\text{PS}}\left({{{\text{best-fit}}}} \right),\,{\text{PS}}\left( {\text{Whittle}}\right)} \right]}} ) \) of the strength of long-range persistence of a log-normal noise (c v = 0.5, N = 1,024, β model = 1.0) for values of β measured obtained by using: (1) Hurst rescaled range analysis (wine, solid line), (2) semivariogram analysis (green, long-dashed line), (3) detrended fluctuation analysis (red, dotted line), (4) power spectral analysis (log-linear regression) (blue, dash–dot–dot line), (5) power spectral analysis (Whittle estimator) (black, dashed line). Examples of how these curves are constructed are given in Figs. 34 and 35
We will now apply our benchmark-based improved estimators in the context of three geophysical examples.
Applications: Strength of Long-Range Persistence of Three Geophysical Records
We now return to the three data series presented in Fig. 1 and apply the techniques explored in this paper to them to investigate the long-range persistence properties of the underlying processes.
The first data set, a palaeotemperature series based on GISP2 bi-decadal oxygen isotopes data for the last 10,000 years, contains N = 500 data points which are normally distributed (see Fig. 1a). We apply the four functions (rescaled range, semivariogram, detrended fluctuation, and power spectral density) to this time series (see Fig. 37), and all are found to have strong power-law dependence of the function on the segment lengths, lags, and frequencies. The resultant persistence strengths are summarized in Table 8. The four techniques (with two ways of fitting the power spectral densities, best-fit and Whittle) lead to self-affine long-range persistence strengths of β Hu = 0.42, β Ha = 1.11, β DFA = 0.43, β PS(best-fit) = 0.46, and β PS(Whittle) = 0.54. The results of the benchmark-based improved estimates of β model (Table 8) are \( \beta_{\text{Hu}}^{*} = 0.37,\,\beta_{\text{Ha}}^{*} =0.66,\;\beta_{\text{DFA}}^{*} = 0.47,\;\beta_{\text{PS(best-fit)}}^{*} = 0.46\;{\text{and}}\;\beta_{\text{PS(Whittle)}}^{*} = 0.53.\) In all cases except for semivariogram analysis, the improved estimator results are within 0.05 of the originally measured result. It is reasonable that semivariograms are so far off, as semivariogram analysis is not appropriate over the range −1.0 < β < 1.0; we thus exclude it from further consideration.
Long-range dependence analysis of the 10,000 year (500 values at 20 year intervals) GISP2 bi-decadal oxygen isotope proxy for palaeotemperatures presented in Fig. 1a. The panels represent the following: a Hurst rescaled range (R/S) analysis, b semivariogram analysis, c detrended fluctuation analysis (DFAk with polynomials of order k applied to the profile), d power spectral analysis. All graphs are shown on logarithmic axes. Best-fit power laws are presented by straight solid lines which have been slightly shifted on the y-axis. The corresponding power-law exponents are given in the legend of the corresponding panel and in Table 8
Table 8 Results of five long-range persistence techniquesa applied to the three environmental data series presented in Fig. 1 shown are computed persistence strengths achieved by the five techniques and the corresponding benchmark-based improvement estimates with 95 % confidence intervals
The benchmark-based improved values of the three remaining techniques (not considering confidence intervals) lie in the interval \( 0.37 < \beta_{[{\text{Hu}},\,{\text{DFA}},\,{\text{PS}}({\text{best-fit}}),\, {\text{PS}}({\text{Whittle}})]}^{*} < 0.53. \) The corresponding 95 % confidence intervals for each technique overlap, but they are different in total size, ranging from 0.30 for the Whittle estimator (95 % confidence intervals: \( 0.38 < \beta_{\text{PS(Whittle)}}^{*} < 0.68 \)) to 0.57 for rescaled range analysis (\( 0.08 < \beta_{\text{Hu}}^{*} < 0.65 \)). Since none of these confidence intervals contains β = 0.0, long-range persistence is qualitatively confirmed. Another important aspect of our analysis is stationarity, that is, whether our time series can be modelled as a fractional noise (β < 1.0) or a fractional motion (β > 1.0). As explained in Sect. 8.2, we have to determine or diagnose whether the values in the confidence intervals just discussed are all smaller or all larger than β = 1.0. We find that these confidence intervals are covered by the interval [0.0, 1.0]. Therefore, we can conclude that the palaeotemperature series can be appropriately modelled by a fractional noise (i.e. β < 1.0).
For quantifying the strength of self-affine long-range persistence, one interpretation would be to take the most certain estimator (based on the narrowest 95 % confidence interval range) \( \beta_{{{\text{PS}}\left( {\text{Whittle}} \right)}}^{*} \) which says that with a probability of 95 %, the persistence strength β ranges between 0.38 and 0.68. Another interpretation would be that based on the results in this paper, the DFA, PS(best-fit), and PS(Whittle) techniques were much more robust (small systematic and random errors) for normally distributed noises and motions compared to (R/S), and thus to state that this palaeotemperature series exhibits long-range persistence with a self-affine long-range persistence strength \( \beta_{\left[ {\text{DFA,PS}}({\text{best-fit}}),{\text{PS}}({\text{Whittle}}) \right]}^{*} \) between 0.46 and 0.53, with combined 95 % confidence intervals for \( \beta_{\left[ {\text{DFA,PS}}({\text{best-fit}}),{\text{PS}}({\text{Whittle}}) \right]}^{*} \) between 0.23 and 0.73. In other words, there is weak long-range positive self-affine persistence.
The second data set is the daily discharge of Elkhorn River (Waterloo, Nebraska, USA) for 1929–2001 (see Fig. 1b). This measurement series has N = 26,662 data points and is log-normal distributed with a high coefficient of variation (c v = 1.68). Rescaled range, semivariogram, and detrended fluctuation analyses reveal two ranges with power-law scaling which are separated at l = 1.0 year (see Fig. 38). Dolgonosov et al. (2008) also observed two scaling ranges of the power spectral density and modelled them by integrating run-off and storage dynamics. In our own results, for the low-frequency scaling range (l > 1.0 year; f < 1.0 year^−1), the different performance techniques come up with rather diverse results for the persistence strength: β Hu = 0.66, β Ha = 1.03, β DFA = 0.40, β PS(best-fit) = 0.60, and β PS(Whittle) = 0.71 (see Table 8). As in the first data set above, we will exclude semivariogram analysis from further consideration as it is not appropriate over the range −1.0 < β < 1.0.
Long-range dependence analysis of the 1929–2001 daily discharge data set (Elkhorn river at Waterloo, Nebraska, USA) presented in Fig. 1b. The panels represent the following: a Hurst rescaled range (R/S) analysis, b semivariogram analysis, c detrended fluctuation analysis, d power spectral analysis. All graphs are shown on logarithmic axes. Best-fit power laws are presented by straight solid lines which have been slightly shifted on the y-axis. The corresponding power-law exponents are given in the legend of the corresponding panel and in Table 8
Long-range dependence analysis of the 24 h period (01 February 1978, sampled per minute) geomagnetic auroral electrojet (AE) index data presented in Fig. 1c1. The panels represent the following: a Hurst rescaled range (R/S) analysis, b semivariogram analysis, c detrended fluctuation analysis, d power spectral analysis. All graphs are shown on logarithmic axes. Best-fit power laws are presented by straight solid lines which have been slightly shifted on the y-axis. The corresponding power-law exponents are given in the legend of the corresponding panel and in Table 8
The persistence strengths for the low frequency domain (Table 8) obtained by the benchmark-based improvement techniques (\( \beta_{{\left[ {\text{Hu,\,DFA,\,PS}} \right]}}^{*} \)) range between 0.65 and 0.81. The corresponding 95 % confidence intervals are very wide, ranging from the widest, 0.26 < \( \beta_{\text{PS(best-fit)}}^{*} \)< 1.10, to the 'narrowest', \( 0.46 < \beta_{\text{Hu}}^{*} < 1.07; \) however, all of them do include a 'common' range for the persistence strength interval \( 0.46 < \beta_{{\left[ {\text{Hu,\,DFA,\,PS}} \right]}}^{*} < 0.84. \) These very uncertain results are caused by both the very asymmetric one-point probability density and the consideration of very long segments (l > 1.0 year) or, respectively, very low frequencies. Based on the performance results for realizations of log-normally distributed fractional noises (Sect. 7), we believe that the best estimators are PS(best-fit) and PS(Whittle). If we use the limits of both of these, then we can conclude that this discharge series exposes self-affine long-range persistence with strength \( \beta_{\left[{\text{PS}}({\text{best-fit}}),\,{\text{PS}}({\text{Whittle}}) \right]}^{*} \) between 0.69 and 0.81, and 95 % confidence intervals for the two combined between 0.26 and 1.16. In other words, there is long-range positive persistence with a weak to medium strength. As the 95 % confidence intervals contain the value \( \beta_{\left[ {\text{PS}}({\text{best-fit}}),\,{\text{PS}}({\text{Whittle}}) \right]}^{*} \) = 1.0, we cannot decide whether our time series is a fractional noise (β < 1.0) or fractional motion (β > 1.0).
For both the palaeotemperature and discharge time series, we have modelled them as showing positive long-range persistence. For these data types, both short-range and long-range persistent models have been applied by different authors. For example, for both data types, Granger (1980) and Mudelsee (2007) model the underlying processes as the aggregation of short-memory processes with different strength of short memory.
The third data set, the geomagnetic auroral electrojet (AE) index data, sampled per minute for 01 February 1978 (Fig. 1c), contains N = 1,440 values. The differenced AE index (\( \Delta x_{\text{AE}} (t) = x_{\text{AE}} (t) - x_{\text{AE}} (t - 1) \)) is approximately Levy distributed (double-sided power law) with an exponent of a = 1.40 (Fig. 1d). The four functions that characterize the strength of long-range dependence show a power-law scaling, and the corresponding estimated strengths of long-range dependence for the AE index are as follows (Table 8; Fig. 39): β Hu = 1.02, β Ha = 2.18, β DFA = 2.01, β PS(best-fit) = 1.92, and β PS(Whittle) = 1.92, and for the differenced AE index are as follows (Table 8): β Hu = 0.12, β Ha = 1.01, β DFA = 0.13, β PS(best-fit) = 0.11, and β PS(Whittle) = 0.05.
Based on the performance results of Sect. 7 for realizations of Levy-distributed fractional noises, we believe that the best estimators are PS(best-fit) and PS(Whittle). If we use the limits of both of these, then we conclude (Table 8) that the AE index is characterized by \( \beta_{\left[ {\text{PS}}({\text{best-fit}}), {\text{PS}}({\text{Whittle}}) \right]}^{*} = 1.92 \), and 95 % confidence intervals for the two combined between 1.82 and 2.00. In other words, there is a strong long-range positive persistence, close to a Levy-Brownian motion. Watkins et al. (2005) have analysed longer series (recordings of an entire year) of the AE index and described it as a fractional Levy motion with a persistence strength of β = 1.90 (standard error of 0.02) with a Levy distribution (a = 1.92). With respect to the strength of long-range persistence, our results for the AE index are very similar to those of Watkins et al. (2005), and our 95 % confidence intervals for \( \beta_{\text{Ha}} \), \( \beta_{\text{DFA}} \), and \( \beta_{\text{PS}} \) do not conflict with a value of β = 1.90.
In order to apply the benchmark-based improvement technique to the differenced AE index, performance tests were run for Levy-distributed (a = 1.40) fractional noises with N = 1,440 data points. The results for \( \beta_{\left[{\text{Hu}},\,{\text{Ha}},\,{\text{DFA}},\,{\text{PS}}({\text{best-fit}}),\,{\text{PS}}({\text{Whittle}}) \right]}^{*} \) are given in Table 8. If we use the limits for both PS(best-fit) and PS(Whittle), then we conclude that the differenced AE index is characterized by \( \beta_{\left[ {\text{PS}}({\text{best-fit}}),\,{\text{PS}}({\text{Whittle}}) \right]}^{*} \) between 0.06 and 0.12, and 95 % confidence intervals for the two combined between −0.03 and 0.20. In other words, there is long-range positive persistence with weak strength. This persistence strength is very close to β = 0, and so our differenced AE index can be considered close to a white Levy noise. We concluded above that the AE index is characterized by \( \beta_{\text{PS}}^{*} = 1.92 \) [95 % confidence: 1.82 to 2.00] and here that the differenced AE index is characterized by \(\beta_{\text{PS}}^{*} = 0.06\, {\text{to}}\, 0.12 \) [95 % confidence: −0.03 to 0.20]. This is not unreasonable as (Sect. 3.6) the long-range persistence strength of a symmetrically distributed fractional noise or motion will be shifted by +2 for aggregation and −2 for the first difference (this case). The difference in the two adjusted measured strengths of long-range persistence for the original and differenced AE index is slightly smaller than two. We believe that this is caused by nonlinear correlations in the data.
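The ±2 shift invoked here can be checked on a toy example. The sketch below uses Gaussian white noise and its running sum (an ordinary Brownian motion) rather than the Levy-distributed AE index, together with the same low-frequency periodogram fit as in the earlier sketch: aggregation raises the measured β by roughly 2 and the first difference lowers it by roughly 2.

```python
import numpy as np

def spectral_beta(x, f_max=0.1):
    """Same low-frequency log-log periodogram fit as in the earlier sketch."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    f = np.fft.rfftfreq(x.size)
    power = np.abs(np.fft.rfft(x)) ** 2
    keep = (f > 0) & (f <= f_max)
    return -np.polyfit(np.log10(f[keep]), np.log10(power[keep]), 1)[0]

rng = np.random.default_rng(1)
noise = rng.normal(size=16384)          # white Gaussian noise: beta ~ 0
motion = np.cumsum(noise)               # aggregation (running sum): beta ~ 0 + 2

print(spectral_beta(noise))             # roughly 0
print(spectral_beta(motion))            # roughly 2
print(spectral_beta(np.diff(motion)))   # first difference undoes it: roughly 0
```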
We observe that when considering DFA applied to the differenced AE index series, the size of the resultant 95 % confidence intervals (\( - 0.16 < \beta_{\text{DFA}}^{*} < 0.39 \)) is two to three times bigger than that of the spectral techniques \((0.01 < \beta_{\text{PS(best-fit)}}^{*} < 0.20,\; -0.03 < \beta_{\text{PS(Whittle)}}^{*} < 0.12) \). This confirms the results we presented in Sect. 7 for the analysis of synthetic noises: in the case of fractional Levy noises, DFA has larger random errors (proportional to the confidence interval sizes) than power spectral techniques.
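For comparison, a minimal DFA sketch (again a standalone numpy illustration, first-order detrending only, with illustrative scale choices): the fluctuation function F(s) of the cumulated series scales as s^α, and β = 2α − 1.

```python
import numpy as np

def dfa_beta(x, order=1):
    """DFA(1)-style sketch: F(s) is the root-mean-square residual of the
    cumulated series about a local polynomial trend in windows of length s;
    F(s) ~ s**alpha, and beta = 2*alpha - 1."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - np.mean(x))
    n = profile.size
    scales = np.unique(np.logspace(np.log10(8), np.log10(n // 4), 20).astype(int))
    fluct = []
    for s in scales:
        n_seg = n // s
        segments = profile[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        sq_res = []
        for seg in segments:
            trend = np.polyval(np.polyfit(t, seg, order), t)   # local trend
            sq_res.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(sq_res)))
    alpha = np.polyfit(np.log10(scales), np.log10(fluct), 1)[0]
    return 2.0 * alpha - 1.0

# Sanity check: white noise has alpha ~ 0.5, i.e. beta ~ 0.
rng = np.random.default_rng(2)
print(dfa_beta(rng.normal(size=8192)))
```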
The three geophysical time series considered here have all been equally spaced in time. However, unequally spaced time series in the geophysics community are common (unequally spaced either through missing data or through events that do not occur equally in time). For an example of a long-range persistence analysis of an unequally spaced time series (the Nile River) see Ghil et al. (2011).
We have considered three very different geophysical time series with different one-point probability distributions: a proxy for palaeotemperature (Gaussian), discharge (log-normal), and AE index (Levy). For each, we have shown that the estimated strength of long-range persistence can often be more uncertain than one might usually assume. In each case, we have examined these time series with conventional methods that are commonly used in the literature (Hurst rescaled range analysis, semivariogram analysis, detrended fluctuation analysis, and power spectral analysis), and we have complemented these results with benchmark-based improvement estimators, putting the results from each technique into perspective.
Summary and Discussion
In this paper we have compared four common analysis techniques for quantifying long-range persistence: (1) rescaled range (R/S) analysis, (2) semivariogram analysis, (3) detrended fluctuation analysis, and (4) power-spectral analysis (best-fit and Whittle). Although not evaluated in this paper, we have also included in the supplementary material results of a fifth technique, discrete wavelet analysis. To evaluate the first four methods, we have constructed ensembles of realizations of self-affine noises and motions with different (1) time series lengths, N = 64, 128, 256, …, 131,072; (2) persistence strengths, β = −1.0, −0.8, −0.6, …, 4.0; and (3) one-point probability distributions (Gaussian; log-normal with c_v = 0.0, 0.1, 0.2, …, 2.0, and two types of construction; Levy with a = 1.0, 1.1, 1.2, …, 2.0). A total of about 17,000 different combinations of process parameters were produced, and for each process type 100 realizations were created. We have evaluated the four techniques by statistically comparing their performance. We have found the following:
Hurst rescaled range analysis is not recommended;
Semivariogram analysis is unbiased for 1.2 ≤ β ≤ 2.8, but has large random error (standard deviation or confidence intervals).
Detrended fluctuation analysis is well suited for time series with thin-tailed probability distributions and persistence strength of β > 0.0.
Spectral techniques overall perform the best of the techniques examined here: they have very small systematic errors (i.e. are unbiased), with small random error (i.e. tight confidence intervals and small standard deviations) for positive persistent noises with a symmetric one-point distribution, and they are slightly biased for noises or motions with an asymmetric one-point probability distribution and for anti-persistent noises.
In order to quantify the most likely strength of persistence for a given time series length and one-point probability distribution, a calibration scheme based on benchmark-based improvement statistics has been proposed. The most useful result of our benchmark-based improvement is realistic confidence intervals for the strength of persistence with respect to the specific properties of the considered time series. These confidence intervals can be used to demonstrate long-range persistence in a time series: if the 95 % confidence interval for the persistence strength β does not contain the value β = 0.0, then the considered series can be interpreted (in a statistical sense) to be long-range persistent.
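The calibration idea can be sketched as follows. This is one plausible, heavily simplified reading for the Gaussian case only (the function names, grid, and ensemble size are illustrative, not those of Sect. 7): simulate benchmark series of the same length over a grid of true β values, apply the same estimator to each, and read off which true values are compatible with the measured one.

```python
import numpy as np

rng = np.random.default_rng(3)

def spectral_beta(x, f_max=0.1):
    """Same low-frequency log-log periodogram fit as in the earlier sketches."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    f = np.fft.rfftfreq(x.size)
    power = np.abs(np.fft.rfft(x)) ** 2
    keep = (f > 0) & (f <= f_max)
    return -np.polyfit(np.log10(f[keep]), np.log10(power[keep]), 1)[0]

def gaussian_fractional_noise(n, beta):
    """Gaussian benchmark series with power spectrum ~ f**(-beta) (Fourier filtering)."""
    f = np.fft.rfftfreq(n)
    amp = np.concatenate(([0.0], f[1:] ** (-beta / 2.0)))
    return np.fft.irfft(amp * np.exp(2j * np.pi * rng.uniform(size=f.size)), n)

def benchmark_improve(beta_measured, n, betas=np.arange(-1.0, 4.01, 0.2), m=100):
    """For each candidate true beta, simulate m benchmark series of length n and
    measure beta on each; return the candidate whose median measured value is
    closest to the observation (the 'improved' estimate) together with the range
    of candidates whose simulated 2.5-97.5 percentile band covers the observation."""
    medians, lower, upper = [], [], []
    for b in betas:
        est = np.array([spectral_beta(gaussian_fractional_noise(n, b)) for _ in range(m)])
        medians.append(np.median(est))
        lower.append(np.percentile(est, 2.5))
        upper.append(np.percentile(est, 97.5))
    best = betas[np.argmin(np.abs(np.array(medians) - beta_measured))]
    compatible = [b for b, lo, hi in zip(betas, lower, upper) if lo <= beta_measured <= hi]
    interval = (min(compatible), max(compatible)) if compatible else (None, None)
    return best, interval

# Example with an arbitrary measured value and series length.
print(benchmark_improve(0.75, n=8192))
```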
Another outcome of our investigation is that typical confidence intervals for the strength of long-range persistence are asymmetric with respect to the benchmark-based improved estimator, \( \beta_{\text{measured}}^{*} \). The only exception (i.e. symmetric confidence intervals) corresponds to spectral analysis of time series with symmetric one-point probability distributions.
In this context, we emphasize that for time domain techniques the standard deviation of the persistence strength cannot be calculated as the regression error of the linear regression (e.g., for log(DFA) vs. log(segment length), log(R/S) vs. log(segment length), and log(semivariogram) vs. log(lag)). This would be possible only if the fluctuations around the average of the measured functions, \( \overline{{\log(\text{DFA})}}\), \( \overline{{\log({R}/{S})}}\), and \( \overline{{\log(\text {semivariograms})}}\), were independent of the abscissa (log(length) or log(lag)). However, as we characterize highly persistent time series, these fluctuations are also persistent and the assumption of independence cannot be held to be true.
One aspect of our study found limitations in the Schreiber–Schmitz algorithm. It turned out that the Schreiber–Schmitz algorithm can construct fractional noises and motions with symmetric one-point probability distributions and persistence strengths in the range −1.0 ≤ β ≤ 1.0. However, highly asymmetric probability distributions combined with large strengths of persistence (β > 1.0) can lead to resultant time series with a persistence strength that is systematically smaller than the one that is modelled.
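For context, the construction can be sketched as an iterative amplitude adjustment in the spirit of Schreiber and Schmitz (1996); the version below is a compact standalone illustration that imposes a pure power-law target spectrum rather than reproducing the exact published algorithm.

```python
import numpy as np

def iterative_amplitude_adjusted_noise(target_values, beta, n_iter=100, seed=0):
    """Sketch of an iterative amplitude-adjustment construction: alternately impose
    a power-law amplitude spectrum |X(f)| ~ f**(-beta/2) and the target one-point
    distribution (the sorted target_values) until the two are approximately
    compatible."""
    rng = np.random.default_rng(seed)
    target_sorted = np.sort(np.asarray(target_values, dtype=float))
    n = target_sorted.size
    f = np.fft.rfftfreq(n)
    target_amp = np.concatenate(([0.0], f[1:] ** (-beta / 2.0)))
    x = rng.permutation(target_sorted)                 # start from a random shuffle
    for _ in range(n_iter):
        phases = np.angle(np.fft.rfft(x))
        x = np.fft.irfft(target_amp * np.exp(1j * phases), n)  # impose the spectrum
        ranks = np.argsort(np.argsort(x))
        x = target_sorted[ranks]                       # impose the one-point distribution
    return x

# Example: a log-normally distributed fractional noise with beta = 0.8.
rng = np.random.default_rng(4)
x = iterative_amplitude_adjusted_noise(rng.lognormal(0.0, 1.0, size=4096), beta=0.8)
print(x.mean(), x.std())
```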
In the literature, the performance of detrended fluctuation analysis and spectral analysis has been benchmarked using synthetic time series with known properties (e.g., Taqqu et al. 1995; Pilgram and Kaplan 1998; Malamud and Turcotte 1999a; Eke et al. 2002; Penzel et al. 2003; Maraun et al. 2004). Our current investigations for quantifying long-range persistence of self-affine time series have shown that the systematic errors of both techniques (DFA and spectral analysis) are comparable, while the random errors of spectral analysis are lower, so that the total root-mean-square error (RMSE, which takes into account both the systematic and random errors) is also lower for spectral analysis over a broad range of persistence strengths and probability distribution types. However, as the analysed time series might have nonlinear correlations, both DFA and spectral analysis should be applied, since the nonlinear nature of the correlations (even if the time series is also self-affine) can strongly influence the two techniques and give very different results for each (see Rangarajan and Ding 2000). Detrended fluctuation analysis is also subject to practical issues, such as the choice of the trend function to use.
We recommend investigation of self-affine long-range persistence of a time series by applying power spectral and detrended fluctuation analysis. In the case of time series with heavy-tailed or strongly asymmetric one-point probability distributions, benchmark-based improvement statistics for the strength of long-range persistence, which is based on a large range of model time series simulations, is required. If the considered time series are not robustly self-affine, but also have short-range correlations or have periodic signals superimposed, then the proposed framework must be appropriately modified. To aid the reader, extensive supplementary material is provided, which includes (1) fractional noises with different strengths of persistence and one-point probability distributions, along with R programs for producing them, (2) the results of applying different long-range persistence techniques to realizations from over 6,500 different sets of process parameters, (3) an Excel spreadsheet to do benchmark-based improvements on the measured persistence strength for a given time series, and (4) a PDF file of all figures from this paper in high-resolution.
Many time series in the Earth Sciences exhibit long-range persistence. For modelling purposes it is important to quantify the strength of persistence. In this paper, we have shown that techniques that quantify persistence can have systematic errors (biases) and random errors. Both types of errors depend on the measuring technique and on parameters of the considered time series such as the one-point probability distribution, the length of the time series, and the strength of self-affine long-range persistence. We have proposed the application of benchmark-based improvement statistics in order to calibrate the measures for quantifying persistence with respect to the specific properties (length, probability distribution, and persistence strength) of the considered time series. Thus, the uncertainties (systematic and random errors) of the persistence measurements obtained might be better contextualized. We give three examples of 'typical' geophysics data series—temperature, discharge, and AE index—and show that the estimated strength of long-range persistence is much more uncertain than might be usually assumed.
Adas A (1997) Traffic models in broadband networks. IEEE Commun Mag 35:82–89. doi:10.1109/35.601746
Altmann EG, Kantz H (2005) Recurrence time analysis, long-term correlation, and extreme events. Phys Rev E 71:056106
Andrews DWK, Sun Y (2004) Adaptive local polynomial Whittle estimation of long-range dependence. Econometrica 72:569–614
Andrienko N, Andrienko G (2005) Exploratory analysis of spatial and temporal data. A systematic approach. Springer, New York
Anh V, Yu Z-G, Wanliss JA (2007) Analysis of global geomagnetic variability. Nonlinear Process Geophys 14:701–708
Anis A, Lloyd EH (1976) The expected value of the adjusted rescaled Hurst range of independent normal summands. Biometrica 63:111–116
ATIS (2000) American National Standard T1.523-2001, Telecom Glossary 2000, ATIS Committee T1A1 performance and signal processing, Available online at: http://www.atis.org/glossary/. Accessed 10 July 2012
Audit B, Bacry E, Muzy J-F, Arneodo A (2002) Wavelet–based estimators of scaling behaviour. IEEE Trans Inf Theory 48:2938–2954
Bahar S, Kantelhardt JW, Neiman A, Rego HHA, Russell DF, Wilkens L, Bunde A, Moss F (2001) Long-range temporal anti-correlations in paddlefish electroreceptors. Europhys Lett 56:454–460
Bak P, Sneppen K (1993) Punctuated equilibrium and criticality in a simple model of evolution. Phys Rev Lett 71:4083–4086
Bak P, Tang C, Wiesenfeld K (1987) Self-organized criticality: an explanation of 1/f noise. Phys Rev Lett 59:381–384
Bard Y (1973) Nonlinear parameter estimation. Academic Press, San Diego
Bassingthwaighte JB, Raymond GM (1994) Evaluating rescaled range analysis for time series. Ann Biomed Eng 22:432–444
Bassingthwaighte JB, Raymond GM (1995) Evaluation of the dispersional analysis method for fractal time series. Ann Biomed Eng 23:491–505
Bates DM, Watts DG (1988) Nonlinear regression analysis and its applications. Wiley, Hoboken
Bayes T, Price R (1763) An essay towards solving a problem in the doctrine of chance. Philos Trans R Soc Lond 53:370–418
Bédard C, Kroeger H, Destexhe A (2006) Does the 1/f frequency scaling of brain signals reflect self-organized critical states? Phys Rev Lett 97:118102
Beran J (1994) Statistics for long-memory processes. Chapman & Hall/CRC, New York
Berry MV, Lewis ZV (1980) On the Weierstrass-Mandelbrot fractal function. Proceedings of the Royal Society A 370:459–484
Bershadskii A, Sreenivasan KR (2003) Multiscale self-organized criticality and powerful X-ray flares. Eur Phys J B 35:513–515
Blender R, Fraedrich K (2003) Long time memory in global warming simulations. Geophys Res Lett 30:1769–1772
Blender R, Freadrich K, Sienz F (2008) Extreme event return times in long-term memory processes near 1/f. Nonlinear Process Geophys 15:557–565
Boutahar M (2009) Comparison of non-parametric and semi-parametric tests in detecting long-memory. J Appl Stat 36:945–972
Boutahar M, Marimoutou V, Nouira L (2007) Estimation methods of the long memory parameter: Monte Carlo analysis and application. J Appl Stat 34:261–301
Box GEP, Cox DR (1964) An analysis of transformations. J R Stat Soc Series B Stat Methodol 26:211–252
Box GEP, Pierce DA (1970) Distribution of residual autocorrelations in autoregressive integrated moving average time series models. J Am Stat Assoc 65:1509–1526
Box GEP, Jenkins GM, Reinsel GC (1994) Time series analysis: forecasting and control, 3rd edn. Prentice Hall, Englewood Cliffs
Bras RL, Rodriguez-Iturbe I (1993) Random functions and hydrology. Dover, New York
Brockwell AE (2005) Likelihood-based analysis of a class of generalized long-memory time series models. J Time Ser Anal 28:386–407
Brown R (1828) A brief account of microscopical observations made in the months of June, July and August, 1827, on the particles contained in the pollen of plants; and on the general existence of active molecules in organic and inorganic bodies. Phil Mag 4:161–173
Brown SR (1987) A note on the description of surface roughness using fractal dimension. Geophys Res Lett 14:1095–1098
Bunde A, Lennartz S (2012) Long-term correlations in earth sciences. Acta Geophys 60:562–588
Bunde A, Eichner JF, Kantelardt JW, Havlin S (2005) Long-term memory: a natural mechanism for the clustering of extreme events and anomalous residual times in climate records. Phys Rev Lett 94:048701
Burrough PA (1981) Fractal dimensions of landscape and other environmental data. Nature 294:240–242
Burrough PA (1983) Multiscale sources of spatial variation in soil. I. The application of fractal concepts to nested levels of soil variation. J Soil Sci 34:577–597
Cabrera JL, Milton JM (2002) On–off intermittency in a human balancing task. Phys Rev Lett 89:158702
Caccia DC, Percival DB, Cannon MJ, Raymond GM, Bassingthwaighte JB (1997) Analyzing exact fractal time series: evaluating dispersional analysis and rescaled range methods. Phys A 246:609–632
Cannon MJ, Percival DB, Caccia DC, Raymond GM, Bassingthwaighte JB (1997) Evaluating scaled windowed variance methods for estimating the Hurst Coefficient of time series. Phys A 241:606–626
Carreras BA, van Milligen BP, Pedrosa MA, Balbin R, Hidalgo C, Newman DE, Sanchez E, Frances M, Garcia-Cortes I, Bleuel J, Endler M, Ricardi C, Davies S, Matthews GF, Martines E, Antoni V, Latten A, Klinger T (1998) Self-similarity of the plasma edge fluctuations. Phys Plasmas 5:3632–3643
Chandrasekhar S (1943) Stochastic problems in physics and astronomy. Rev Mod Phys 15:1–89
Chapman CR (2004) The hazard of near–Earth asteroid impacts on earth. Earth Planet Sci Lett 222:1–15
Chapman CR, Morrison D (1994) Impacts on the Earth by asteroids and comets: assessing the hazard. Nature 367:33–40
Chapman SC, Hnat B, Rowlands G, Watkins NW (2005) Scaling collapse and structure functions: identifying self-affinity in finite length time series. Nonlinear Process Geophys 12:767–774
Chatfield C (1996) The analysis of time series, 5th edn. Chapman & Hall, London
Chechkin AV, Gonchar VYu (2000) A model for persistent Levy motion. Phys A 277:312–326. doi:10.1016/S0378-4371(99)00392-1
Chen Y, Ding M, Kelso JA (1997) Long memory processes (1/f α type) in human coordination. Phys Rev Lett 79:4501–4504
Chen Z, Ivanov PCh, Hu K, Stanley HE (2002) Effect of nonstationarities on detrended fluctuation analysis. Phys Rev E 65:041107 (15 pp)
Chen Z, Hu K, Carpena P, Bernaola-Galvan P, Stanley HE, Ivanov PCh (2005) Effect of nonlinear filters on detrended fluctuation analysis. Phys Rev E 71:011104
Chhabra A, Jensen RV (1989) Direct determination of the f(α) singularity spectrum. Phys Rev Lett 62:1327–1330
Collette C, Ausloos M (2004) Scaling analysis and evolution equation of the North Atlantic oscillation index fluctuations. Int J Mod Phys C 15:1353–1366
Cooley JW, Tukey JW (1965) An algorithm for the machine calculation of complex Fourier series. Math Comput 19:297–301. doi:10.2307/2003354
Cox BL, Wang JSY (1993) Fractal surfaces: measurements and applications in the earth sciences. Fractals 1:87–115
Cramér H (1946) Mathematical methods of statistics. Princeton University Press, Princeton
Daerden F, Vanderzande C (1996) 1/f noise in the Bak-Sneppen model. Phys Rev E 53:4723–4728
Daubechies I (1988) Orthonormal bases of compactly supported wavelets. Commun Pure Appl Math 4:909–996
Davies RB, Harte DS (1987) Tests for Hurst effect. Biometrika 74:95–101
Davis TN, Sugiura M (1966) Auroral electrojet activity index AE and its universal time variations. J Geophys Res 71:785–801
De Santis A (1997) A direct divider method for self-affine fractal properties and surfaces. Geophys Res Lett 24:2099–2102
Delignieres D, Torre K (2009) Fractal dynamics of human gait: a reassessment of Hausdorff et al. (1996) data. J Appl Physiol 106:1272–1279
Delignieres D, Ramdani S, Lemoine L, Torre K, Fortes M, Ninot G (2006) Fractal analysis for 'short' time series: a reassessment of classical methods. J Math Psychol 50:525–544
Dolgonosov BM, Korchagin KA, Kirpichnikova NV (2008) Modeling of annual oscillations and 1/f-noise of daily river discharges. J Hydrol 357:174–187
Doroslovacki ML (1998) On the least asymmetric wavelets. IEEE Trans Signal Process 46:1125–1130
Dutta P, Horn PM (1981) Low frequency fluctuations in solids: 1/f noise. Rev Mod Phys 53:497–516
Efron B, Tibshirani R (1993) An introduction to the bootstrap. Chapman and Hall, London
Eghball B, Varvel GE (1997) Fractal analysis of temporal yield variability of crop sequences: Implications for site-specific management. Agron J 89:851–855
Eichner JF, Koscielny-Bunde E, Bunde A, Havlin S, Schellnhuber H-J (2003) Power-law persistence and trends in the atmosphere: a detailed study of long temperature records. Phys Rev E 68, 046133 (5 pp)
Einstein A (1905) Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen. Ann Phys 17:549–560
Eke A, Herman P, Kocsis L, Kozak LR (2002) Fractal characterization of complexity in temporal physiological signals. Physiol Meas 23:R1–R38
Eliazar I, Klafter J (2009) A unified and universal explanation for Lévy laws and 1/f noises. Proc Natl Acad Sci USA 106:12251–12254
Embrechts P, Maejima M (2002) Selfsimilar processes. Princeton University Press, Princeton
Enriquez N (2004) A simple construction of the fractional Brownian motion. Stoch Process Appl 109:203–223. doi:10.1016/j.spa.2003.10.008
Faÿ G, Moulines E, Roueff F, Taqqu MS (2009) Estimators of long-memory: Fourier versus wavelets. J Econom 151:159–177
Fisher RA (1912) An absolute criterion for fitting frequency curves. Messenger Math 41:155–160
Flandrin P (1992) Wavelet analysis and synthesis of fractional Brownian motion. IEEE Trans Inf Theory 38:910–917
Fox R, Taqqu MS (1986) Large-sample properties of parameter estimates for strongly dependent stationary Gaussian time series. Ann Stat 14:517–532
Fraedrich K, Blender R (2003) Scaling of atmosphere and ocean temperature correlations in observations and climate models. Phys Rev Lett 90:108501
Franzke CLE, Graves T, Watkins NW, Gramacy RB, Hughes C (2012) Robustness of estimators of long-range dependence and self-similarity under non-Gaussianity. Philos Trans R Soc A Math Phys Eng Sci 370:1250–1267
Frigg R (2003) Self-organized criticality, what it is, and what it isn't. Stud Hist Philos Sci 34:613–632
Gallant JC, Moore ID, Hutchinson MF, Gessler P (1994) Estimating fractal dimension of profiles: a comparison of methods. Math Geol 26:455–481
Gao JB, Hu J, Tung W–W, Cao YH, Sarshar N, Roychowdhury VP (2006) Assessment of long range correlation in time series: How to avoid pitfalls. Phys Rev E 73:016117
Geisel T, Nierwetberg J, Zacherl A (1985) Accelerated diffusion in Josephson junctions and related chaotic systems. Phys Rev Lett 54:616–619
Geisel T, Zacherl A, Radons G (1987) Generic 1/f noise in chaotic Hamiltonian dynamics. Phys Rev Lett 59:2503–2506
Gelman A, Carlin JB, Stern HS, Rubin DB (1995) Bayesian data analysis. Chapman and Hall/CRC, New York
Geweke J, Porter-Hudak S (1983) The estimation and application of long-memory time series models. J Time Ser Anal 4:221–238
Ghil M, Yiou P, Hallegatte S, Malamud BD, Naveau P, Soloviev A, Friederichs P, Keilis-Borok V, Kondrashov D, Kossobokov V, Mestre O, Nicolis C, Rust HW, Shebalin P, Vrac M, Witt A, Zaliapin I (2011) Extreme events: dynamics, statistics and prediction. Nonlinear Process Geophys 18:295–350. doi:10.5194/npg-18-295-2011
Goldberger AL, Amaral LAN, Hausdorff JM, Ivanov PCh, Peng C-K, Stanley HE (2002) Fractal dynamics in physiology: alterations with disease and aging. Proc Natl Acad Sci USA 99:2466–2472
Golub GH, Pereyra V (1973) The differentiation of pseudo inverses and nonlinear least-squares problems whose variables separate. SIAM J Numer Anal 10:413–432
Govindan RB, Kantz H (2004) Long-term correlations and multifractality in surface wind speed. Europhys Lett 68:184–190
Govindan RB, Vyushin D, Bunde A, Brenner St, Havlin S, Schellnhuber H-J (2002) Global climate models violate scaling of the observed atmospheric variability. Phys Rev Lett 89:028501
Granger CWJ (1980) Long memory relationships and the aggregation of dynamic models. J Econom 14:227–238
Granger CWJ, Joyeux RJ (1980) An introduction to long-range time series models and fractional differencing. J Time Ser Anal 1:15–30
Grassberger P, Procaccia I (1983) Measuring the strangeness of strange attractors. Physica D 9:189–208
Grossmann A, Morlet J (1984) Decomposition of Hardy functions into square integrable wavelets of constant shape. SIAM J Math Anal 15:723–736
Guerrero A, Smith LA (2005) A maximum likelihood estimator for long-range persistence. Phys Lett A 355:619–632
Gutenberg B, Richter CF (1954) Seismicity of the earth and associated phenomenon, 2nd edn. Princeton University Press, Princeton
Guzzetti F, Malamud BD, Turcotte DL, Reichenbach P (2002) Power-law correlations of landslide areas in central Italy. Earth Planet Sci Lett 195:169–183
Halsey TC, Jensen MH, Kadanoff LP, Procaccia I, Shraiman BI (1986) Fractal measures and their singularities: the characterization of strange sets. Phys Rev A 33:1141–1151
Hansen A, Engoy Th, Maloy KJ (1994) Measuring Hurst exponents with the first return method. Fractals 2:527–533
Hasselmann K (1976) Stochastic climate models I: theory. Tellus 28:473–485
Hausdorff JM, Purdon PL, Peng CK, Ladin Z, Wei JY, Goldberger AL (1996) Fractal dynamics of human gait: stability of long-range correlations in stride interval fluctuation. J Appl Physiol 80:1448–1457
Heneghan C, McDarby G (2000) Establishing the relationship between detrended fluctuation analysis and power spectral analysis. Phys Rev E 62:6103–6110
Hennig H, Fleischmann R, Fredebohm A, Hagmayer Y, Nagler J, Witt A, Theis FJ, Geisel T (2011) The nature and perception of fluctuations in human musical rhythms. PLoS One 6:e26457
Hentschel HGE, Procaccia I (1983) The infinite number of generalized dimensions of fractals and strange attractors. Physica D 8:435–444
Hergarten S (2002) Self-organized criticality in earth systems. Springer, New York
Higuchi T (1988) Approach to an irregular time series on the basis of fractal theory. Physica D 31:277–293
Hosking JRM (1981) Fractional differencing. Biometrika 68:165–176
Hu K, Ivanov PCh, Chen Z, Carpena P, Stanley HE (2001) Effects on trends on detrended fluctuation analysis. Phys Rev E 64:011114
Huang SL, Oelfke SM, Speck RC (1992) Applicability of fractal characterization and modelling to rock joint profiles. Int J Rock Mech Min Sci Geomech Abstr 29:89–98
Hubbard BB (1996) The world according to wavelets: the story of a mathematical technique in the making. A. K. Peters, Wellesley
Hurst HE (1951) Long-term storage capacity of reservoirs. Trans Am Soc Civil Eng 116:770–799
Ives AR, Abbott KC, Ziebarth NL (2010) Analysis of ecological time series with ARMA(p, q) models. Ecology 91:858–871. doi:10.1890/09-0442.1
Jennings H, Ivanov P, Martins A, Dasilva A, Viswanathan G (2004) Variance fluctuations in nonstationary time series: a comparative study of music genres. Phys A 336:585–594
Johnson JB (1925) The Schottky effect in low frequency circuits. Phys Rev 26:71–85
Kantelhardt JW, Koscielny-Bunde E, Rego HHA, Havlin S, Bunde A (2001) Detecting long-range correlations with detrended fluctuation analysis. Phys A 295:441–454
Kantelhardt JW, Rybski D, Zschiegner SA, Braun P, Koscielny-Bunde E, Livina V, Havlin S, Bunde A (2003) Multifractality of river runoff and precipitation: comparison of fluctuation analysis and wavelet methods. Phys A 330:240–245
Kaplan JL, Yorke JA (1979) Chaotic behavior of multidimensional difference equations. In: Peitgen H-O, Walter H-O (eds) Functional differential equations and approximations of fixed points. Lecture Notes in Mathematics 730:204–227, Springer
Keshner MS (1982) 1/f noise. Proc IEEE 70:212–218
Khaliq MN, Ouarda TBMJ, Gachon P (2009) Identification of temporal trends in annual and seasonal low flows occurring in Canadian rivers: the effect of short- and long-term persistence. J Hydrol 369:183–197. doi:10.1016/j.jhydrol.2009.02.045
Kiss P, Müller R, Janosi IM (2007) Long-range correlations of extrapolar total ozone are determined by the global atmospheric circulation. Nonlinear Process Geophys 14:435–442
Kiyani K, Chapman SC, Hnat B (2006) Extracting the scaling exponents of a self-affine, non-Gaussian process from a finite length time series. Phys Rev E 74:051122
Klafter J, Sokolov IM (2005) Anomalous diffusion spreads its wings. Phys World 18:29–32
Klinkenberg B (1994) A review of methods used to determine the fractal dimension of linear features. Math Geol 26:23–46
Kobayashi M, Musha T (1982) 1/f fluctuation of heartbeat period. IEEE Biomed Eng 29:456–457
Kogan S (2008) Electronic noise and fluctuations in solids. Cambridge University Press, Cambridge
Kolmogorov AN, Gnedenko BW (1954) Limit distributions for sums of random variables. Addison-Wesley, Cambridge
Koscielny-Bunde E, Kantelhardt JW, Braun P, Bunde A, Havlin S (2006) Long-term persistence and multifractality of river runoff records: detrended fluctuation studies. J Hydrol 322:120–137
Koutsoyiannis D (2002) The Hurst phenomenon and fractional Gaussian noise made easy. Hydrol Sci J 47:573–595
Kurths J, Herzel H (1987) An attractor in a solar time series. Physica D 25:165–172
Kurths J, Schwarz U, Witt A (1995) Non-linear data analysis and statistical techniques in solar radio astronomy. Lecture Notes Phys 444:159–171, Springer, Berlin. doi:10.1007/3-540-59109-5_48
Kwapień J, Drożdż S (2012) Physical approach to complex systems. Phys Rep 515:115–226
Kyoto University (2012) World Data Center for Geomagnetism, Kyoto. Geomagnetic auroral electrojet index (AE) data available for 1978 and downloaded from: http://swdcwww.kugi.kyoto-u.ac.jp/aeasy/index.html. Accessed 1 May 2012
Leland WE, Taqqu MS, Willinger W, Wilson DV (1994) On the self-similar nature of ethernet traffic (Extended Version). IEEE ACM Trans Netw 2:1–15
Levitin DJ, Chordia P, Menon V (2012) Musical rhythm spectra from Bach to Joplin obey a 1/f power law. Proc Natl Acad USA 109:3716–3720
Linkenkaer-Hansen L, Nikouline V, Palva JM, Ilmoniemi RJ (2001) Long-range temporal correlations and scaling behavior in human brain oscillations. J Neurosci 21:1370–1377
Lo AW (1991) Long-term memory in stock market prices. Econometrica 59:1273–1313
Malamud BD (2004) Tails of natural hazards. Phys World 17:31–35
Malamud BD, Turcotte DL (1999a) Self-affine time series: I. Generation and analyses. Adv Geophys 40:1–90
Malamud BD, Turcotte DL (1999b) Self-affine time series I: measures of weak and strong persistence. J Stat Plan Inference 80:173–196
Malamud BD, Turcotte DL (2006) The applicability of power-law frequency statistics to floods. J Hydrol 322:168–180
Malamud BD, Turcotte DL, Barton CC (1996) The 1993 Mississippi river flood: a one-hundred or a one-thousand year event? Environ Eng Geol II(4):479–486
Malamud BD, Morein G, Turcotte DL (1998) Forest fires: an example of self-organized critical behavior. Science 281:1840–1842
Malamud BD, Turcotte DL, Guzzetti F, Reichenbach P (2004) Landslide inventories and their statistical properties. Earth Surf Proc Land 29:687–711
Malamud BD, Millington JDA, Perry GLW (2005) Characterizing wildfire regimes in the United States. Proc Natl Acad Sci USA 102:4694–4699
Malinverno A (1990) A simple method to estimate the fractal dimension of a self-affine series. Geophys Res Lett 17:1953–1956. doi:10.1029/GL017i011p01953
Mandelbrot BB (1967) How long is the coast of Britain? Statistical self-similarity and the fractional dimension. Science 156:636–638
Mandelbrot BB (1977) Fractals: form, chance, and dimension. Freeman, San Francisco
Mandelbrot BB (1985) Self-affine fractals and fractal dimension. Phys Scripta 32:257–260
Mandelbrot BB (1999) Multifractals and 1/f noise: wild self-affinity in physics. Springer, New York
Mandelbrot BB, van Ness JW (1968) Fractional Brownian motions, fractional noises and applications. SIAM Rev 10:422–437
Mandelbrot BB, Wallis JR (1968) Noah, Joseph and operational hydrology. Water Resour Res 4:909–918
Mandelbrot BB, Wallis JR (1969a) Computer experiments with fractional Gaussian noises. Parts I, II, and III. Water Resour Res 5:228–267
Mandelbrot BB, Wallis JR (1969b) Some long–run properties of geophysical records. Water Resour Res 5:321–340
Mandelbrot BB, Wallis JR (1969c) Robustness of the rescaled range R/S in the measurement of noncyclic long run statistical dependence. Water Resour Res 5:967–988
Manneville P (1980) Intermittency, self-similarity and 1/f spectrum in dissipative dynamical systems. J de Physique 41:1235–1243
Mantegna R, Stanley HE (2000) An introduction to econophysics. Cambridge University Press, Cambridge
Maraun D, Rust HW, Timmer J (2004) Tempting long-memory: on the interpretation of DFA results. Nonlinear Process Geophys 11:495–503
Mark DM, Aronson PB (1984) Scale dependent fractal dimensions of topographical surfaces: an empirical investigation with applications in geomorphology and computer mapping. Math Geol 16:671–683
Marković D, Koch M (2005) Wavelet and scaling analysis of monthly precipitation extremes in Germany in the 20th century: interannual to interdecadal oscillations and the North Atlantic oscillation influence. Water Resour Res 41:W09420, 12 p, doi:10.1029/2004WR003843
Matheron G (1963) Principles of geostatistics. Econ Geol 58:1246–1266
Mehrabi AR, Rassamdana H, Sahimi M (1997) Characterization of long-range correlations in complex distributions and profiles. Phys Rev E 56:712–722
Meirelles MC, Dias VHA, Oliva D, Papa ARR (2010) A simple 2D SOC model for one of the main sources of geomagnetic disturbances: Flares. Phys Lett A 374:1024–1027
Metzler R, Klafter J (2000) The random walk's guide to anomalous diffusion: a fractional dynamics approach. Phys Rep 339:1–77
Mielniczuk J, Wojdyłło P (2007) Estimation of the Hurst exponent revisited. Comput Stat Data Anal 51:4510–4525. doi:10.1016/j.csda.2006.07.033
Montanari A, Rosso R, Taqqu MS (1996) Some long-run properties of rainfall records in Italy. J Geophys Res D21:431–438
Montanari A, Taqqu MS, Teverovsky V (1999) Estimating long-range dependence in the presence of periodicity: an empirical study. Math Comput Model 29:217–228
Mudelsee M (2007) Long memory of rivers from spatial aggregation. Water Resour Res 43:W01202
Mudelsee M (2010) Climate time series analysis: classical statistical and bootstrap methods. Springer, San Francisco
Nagler J, Claussen JC (2005) 1/fα spectra in elementary cellular automata and fractal signals. Phys Rev E 71:067103
Neuman SP (1995) On advective transport in fractal permeability and velocity fields. Water Resour Res 31:1455–1460
Newman MC (1993) Regression analysis of log-transformed data: Statistical bias and its correction. Environ Toxicol Chem 12:1129–1133
Osborne AR, Provenzale A (1989) Finite correlation dimension for stochastic systems with power-law spectra. Physica D 35:357–381
Palma W, Zevallos M (2011) Fitting non-Gaussian persistent data. Appl Stoch Models Bus Industry 27:23–36
Papa ARR, do Espirito Santo MA, Barbosa CS, Oliva D (2012) A generalized Bak–Sneppen model for Earth's magnetic field reversals. arXiv:1106.4942v1 [physics.geo-ph]
Patzelt F, Riegel M, Ernst U, Pawelzik K (2007) Self-organized critical noise amplification in human closed loop control. Frontiers in Computational Neuroscience 1, doi:10.3389/neuro.10.004.2007
Pelletier JD, Turcotte DL (1997) Long-range persistence in climatological and hydrological time series: analysis, modeling and application to drought hazard assessment. J Hydrol 203:198–208
Pelletier JD, Turcotte DL (1999) Self-affine time series II. applications and models. Adv Geophys 40:91–166
Peng C-K, Buldyrev SV, Goldberger AL, Havlin S, Sciortino F, Simons M, Stanley HE (1992) Long-range correlations in nucleotide sequences. Nature 356:168–170
Peng C-K, Buldyrev SV, Havlin S, Simons M, Stanley HE, Goldberger AL (1993a) Long-range anticorrelations and non-Gaussian behavior of the heartbeat. Phys Rev Lett 70:1343–1346
Peng C-K, Buldyrev SV, Goldberger AL, Havlin S, Simons M, Stanley HE (1993b) Finite size effects on long-range correlations: implications for analyzing DNA sequences. Phys Rev E 47:3730–3733
Peng C-K, Buldyrev SV, Havlin S, Simons M, Stanley HE, Goldberger AL (1994) On the mosaic organization of DNA nucleotides. Phys Rev E 49:1685–1689
Penzel T, Kantelhardt JW, Becker HF, Peter JH, Bunde A (2003) Detrended fluctuation analysis and spectral analysis of heart rate variability for sleep stage and apnea identification. Comput Cardiol 30:307–310
Percival DB, Walden AT (1993) Spectral analysis for physical applications: Multitaper and conventional Univariate Techniques. Cambridge University Press, Cambridge
Percival DB, Walden AT (2000) Wavelet methods for time series analysis. Cambridge University Press, Cambridge
Pilgram B, Kaplan DT (1998) A comparison of estimators for 1/f noise. Physica D 114:108–122
Pinto CMA, Mendes Lopes A, Tenreiro Machado JA (2012) A review of power laws in real life phenomena. Commun Nonlinear Sci Numer Simul 17:3558–3578
Porter-Hudak S (1990) An application of the seasonal fractionally differenced model to the monetary aggregates. J Am Stat Assoc 85:338–344
Press WH, Teukolskay SA, Vetterling WT, Flannery BP (1994) Numerical recipes in C: the art of scientific computing, 2nd edn. Cambridge University Press, Cambridge
Priestley MB (1981) Spectral analysis and time series. Academic Press, London
Procaccia I, Schuster H (1983) Functional renormalization–group theory of universal 1/f noise in dynamical systems. Phys Rev A 28:1210–1212
Pyle DM (2000) Sizes of volcanic eruptions. In: Sigurdsson H, Houghton B, Rymer H, Stix J, McNutt S (eds) Encyclopedia of Volcanoes. Academic Press, London, pp 263–269
Rangarajan G, Ding MZ (2000) Integrated approach to the assessment of long-range correlation in time series data. Phys Rev E 61:4991–5001
Rao CR (1945) Information and accuracy attainable in the estimation of statistical parameters. Bull Calcutta Math Soc 37:81–91
Robinson PM (1994) Semiparametric analysis of long-memory time series. Ann Stat 22:515–539
Robinson PM (1995) Log-periodogram regression of time series with long-range dependence. Ann Stat 23:1048–1072
Rossi M, Witt A, Guzzetti F, Malamud BD, Peruccacci S (2010) Analysis of historical landslides in the Emilia–Romagna region, Northern Italy. Earth Surf Proc Land 35:1123–1137
Rust HW, Mestre O, Venema VKC (2008) Fewer jumps, less memory: homogenized temperature records and long memory. J Geophys Res 113:D19110. doi:10.1029/2008JD009919
Salas JD (1993) Analysis and modelling of hydrology time series. In: Maidment DR (ed) Handbook of hydrology. McGraw-Hill, New York, pp 19.1–19.72
Salomão LR, Campanha JR, Gupta HM (2009) Rescaled range analysis of pluviometric records in São Paulo State, Brazil. Theoret Appl Climatol 95:83–89. doi:10.1007/s00704-007-0367-4
Samorodnitsky G, Taqqu MS (1994) Stable non-gaussian processes: stochastic models with infinite variance. Chapman and Hall, London
Schepers HE, van Beek JHGM, Bassingthwaighte JB (1992) Four methods to estimate the fractal dimension from self-affine signals. IEEE Eng Med Biol 11:57–64
Schmittbuhl J, Vilotte JP, Roux S (1995) Reliability of self-affine measurements. Phys Rev E 51:131–147
Schottky W (1918) Über spontane Stromschwankungen in verschiedenen Elektrizitätsleitern. Ann Phys 362:541–567
Schreiber T, Schmitz A (1996) Improved surrogate data for nonlinearity tests. Phys Rev Lett 77:635–638
Schulz M, Mudelsee M, Wolf-Welling TCW (1994) Fractal analyses of Pleistocene marine oxygen isotope records. In: Kruhl JH (ed) Fractals and dynamic systems in geosciences. Springer, Berlin, pp 307–317
Schuster HG, Just W (2005) Deterministic Chaos. Wiley, Weinheim
Shannon CE, Weaver W (1949) The mathematical theory of communication. University of Illinois Press, Urbana
Shapiro SS, Wilk MB (1965) An analysis of variance test for normality (complete samples). Biometrika 52:591–611
Smith WW, Smith JM (1995) Handbook of real-time fast Fourier transforms. IEEE Press, Piscataway
Solomon TH, Weeks ER, Swinney HL (1993) Observations of anomalous diffusion und Levy flights in a two-dimensional rotating flow. Phys Rev Lett 24:3975–3978
Stadnytska T, Werner J (2006) Sample size and accuracy of estimation of the fractional differencing parameter. Methodology: Eur J Res Methods Behav Soc Sci 2:135–141
Stanislavsky AA, Burnecki K, Magdziarz M, Weron A, Weron K (2009) FARIMA modelling of solar flare activity from empirical time series of soft X-ray solar emission. Astrophys J 693:1877–1882
Stephen DG, Mirman D, Magnuson JS, Dixon JA (2009) Lévy-like diffusion in eye movements during spoken-language comprehension. Phys Rev E 79:056114
Stroe-Kunold E, Stadnytska T, Werner J, Braun S (2009) Estimating long-range dependence in time series: an evaluation of estimators implemented in R. Behav Res Methods 41:909–923
Stuiver M, Grootes PM, Braziunas TF (1995) The GISP2 18O climate record of the past 16,500 years and the role of the sun, ocean and volcanoes. Quatern Res 44:341–354
USGS (United States Geological Survey) (2012) Discharge data for the Elkhorn River, Station 06800500, 1 Jan 1929 to 30 Dec 2001, available online at: http://waterdata.usgs.gov/. Accessed 1 June 2012
Swan ARH, Sandilands M (1995) Introduction to geological data analysis. Blackwell Science, Oxford
Takens F (1981) Detecting strange attractors in turbulence. In: Rand DA, YoungL-S (eds) Dynamical systems and turbulence. Lecture Notes in Mathematics 898, Springer, Berlin pp 366–381
Taqqu MS (1975) Weak convergence to fractional Brownian motion and to the Rosenblatt process. Probab Theory Relat Fields 31:287–302
Taqqu MS (2003) Fractional Brownian motion and long-range dependence. In: Doukhan P, Oppenheim G, Taqqu MS (eds) Theory and applications of long-range dependence. Birkhäuser, Boston, pp 5–38
Taqqu MS, Samorodnitsky G (1992) Linear models with long-range dependence and finite or infinite variance. In: New directions in time series analysis, Part II, IMA Volumes in Mathematics and its Applications 46, Springer, pp 325–340
Taqqu MS, Teverovsky V (1998) On estimating long-range dependence in finite and infinite variance series. In: Adler RJ, Feldman RE, Taqqu MS (eds) A practical guide to heavy tails: statistical techniques and applications. Birkhäuser, pp 177–217
Taqqu MS, Teverovsky V, Willinger W (1995) Estimators for long-range dependence: an empirical study. Fractals 3:785–788
Teich MC, Heneghan C, Lowen SB, Ozaki T, Kaplan E (1997) Fractal character of the neural spike train in the visual system of the cat. J Opt Soc Am A: 14:529–546
Theiler J (1991) Some comments on the correlation dimension of 1/f α noise. Phys Lett A 155:480–493
Theiler J, Eubank S, Longtin A, Galdrikian B, Farmer JD (1992) Testing for nonlinearity in time series: the method of surrogate data. Physica D 58:77–94
Thomas RW, Hugget RJ (1980) Modelling in geography: a mathematical approach. Barnes and Noble Books, New Jersey
Timmer J, König M (1995) On generating power law noise. Astron Astrophys 300:707–710
Tukey JW (1977) Exploratory data analysis. Pearson Education
Turcotte DL (1999) Self-organized criticality. Rep Prog Phys 62:1377–1429
Uppaluri S, Nagler J, Stellamanns E, Heddergott N, Herminghaus S, Engstler M, Pfohl T (2011) Impact of microscopic motility on the overall swimming behaviour of parasites. PLoS Comput Biol 7:e1002058
van der Ziel A (1950) On the noise spectra of semi-conductor noise and of flicker effect. Physica 16:359–372
Velasco C (2000) Non-Gaussian log-periodogram regression. Econom Theory 16:44–79
Venema V, Bachner S, Rust H, Simmer C (2006) Statistical characteristics of surrogate data based on geophysical measurements. Nonlinear Process Geophys 13:449–466
Voss RF (1985) Random fractal forgeries. In Earnshaw RA (ed) Fundamental algorithms for computer graphics. NATO ASI Series, Springer F17 pp 805–835
Voss RF, Clarke J (1975) '1/f noise' in music and speech. Nature 258:317–318
Wang MC, Uhlenbeck GE (1945) On the theory of the Brownian motion. Rev Mod Phys 17:323–342
Watkins NW, Credgington D, Hnat B, Chapman SC, Freeman MP, Greenhough J (2005) Towards synthesis of solar wind and geomagnetic scaling exponents: a fractional Levy motion model. Space Sci Rev 121:271–284
Weeks ER, Crocker JC, Levitt AC, Schofield A, Weitz DA (2000) Three-dimensional direct imaging of structural relaxation near the colloidal glass transition. Science 28:627–631
Wen RJ, Sinding-Larsen R (1997) Uncertainty in fractal dimension estimated from power spectra and variograms. Math Geol 29:727–753
Weron R (2001) Estimating long-range dependence: finite sample properties and confidence intervals. Phys A 312:285–299
Whitcher B (2004) Wavelet–based estimation for seasonal long-memory processes. Technometrics 46:225–238
Whittle P (1952) The simultaneous estimation of a time series harmonic components and covariance structure. Trabajos Estadística 3:43–57
Willinger W, Taqqu MS, Sherman R, Wilson DV (1997) Self-similarity through high-variability: statistical analysis of ethernet LAN traffic at the source level. IEEE ACM Trans Netw 5:71–86
Witt A, Kurths J, Pikovsky AS (1998) Testing stationarity in time series. Phys Rev E 58:1800–1810
Witt A, Malamud BD, Rossi M, Guzzetti F, Peruccacci S (2010) Temporal correlation and clustering of landslides. Earth Surf Proc Land 35:1138–1156
Wolf A, Swift JB, Swinney HL, Vastano JA (1985) Determining Lyapunov exponents from a time series. Physica D 16:285–317
Wornell GW (1990) A Karhunen–Loève-like expansion for 1/f processes via wavelets. IEEE Trans Inf Theory 36:859–861
Wornell GW (1993) Wavelet-based representations for the 1/f family of fractal processes. Proc IEEE 81:1428–1450
Wornell GW (1996) Signal processing with fractals: a wavelet-based approach. Prentice-Hall
Wornell GW, Oppenheim AV (1992) Estimation of fractal signals from noisy measurements using wavelets. IEEE Trans Signal Process 40:611–623
Xiao X, White EP, Hooten MB, Durham SL (2011) On the use of log-transfo
What exactly does it mean to embed classical data into a quantum state?
As the title states.
I am a Machine Learning Engineer with a background in physics & engineering (post-secondary degrees). I am reading the Tensorflow Quantum paper. They say the following within the paper:
One key observation that has led to the application of quantum computers to machine learning is their ability to perform fast linear algebra on a state space that grows exponentially with the number of qubits. These quantum accelerated linear-algebra based techniques for machine learning can be considered the first generation of quantum machine learning (QML) algorithms tackling a wide range of applications in both supervised and unsupervised learning, including principal component analysis, support vector machines, kmeans clustering, and recommendation systems. These algorithms often admit exponentially faster solutions compared to their classical counterparts on certain types of quantum data. This has led to a significant surge of interest in the subject. However, to apply these algorithms to classical data, the data must first be embedded into quantum states, a process whose scalability is under debate.
What is meant by this sentence However, to apply these algorithms to classical data, the data must first be embedded into quantum states?
Are there resources that explain this procedure? Any documentation or links to additional readings would be greatly appreciated as well.
Note: I did look at the previous question "How do I embed classical data into qubits?" for reference. It helped. But if anyone can provide more clarity from a more foundational, first-principles view (ELI5 almost), I would be appreciative.
Darien Schettler
The TL;DR is that if you want to do quantum computation, you need to operate on quantum states. If you want to use a quantum computer to process classical data, you thus need to have your classical data somehow encoded into a quantum state. How exactly you do this depends, but in general it's as simple as pretending that, say, an input 00 corresponds to this quantum state, 01 to this other one, etc., and then performing your operations on the quantum states.
– glS ♦
First it is instructive to ask oneself: "how does classical data get into my computer?" In a classical computer, your data is always stored in bits. Because calculations in base 2 are not very straightforward for most people there are abstractions like int types for integers and float types for rational numbers with the associated math operations readily abstracted for the user -- which means that you can easily add, multiply, divide and so on.
Now, on a quantum computer you run into a fundamental problem: Qubits are really expensive. When I say really expensive, this does not only mean that building a quantum computer costs a fortune, but also that in current applications you only have a handful of them (Google's quantum advantage experiment used a device with 53 qubits) -- which means that you have to economize your use of them. In machine learning applications you usually use single precision floating point numbers, which use 32 bits. This means a single "quantum float" would also need 32 qubits, which means that state of the art quantum computers can't even be used to add two floating point numbers together due to the lack of qubits.
But you can still do useful stuff with qubits, and this is because they have additional degrees of freedom! One particular thing is that you can encode an angle (which is a real parameter) bijectively into a single qubit by putting it into the relative phase $$ | \theta \rangle = \frac{1}{\sqrt{2}}(|0\rangle + \mathrm{e}^{i\theta} |1\rangle) $$
And this is the heart of embedding data into quantum states. You simply can't do the same thing you would be doing on a classical computer due to a lack of sufficient qubit numbers and therefore you have to get creative and use the degrees of freedom of qubits to get your data into the quantum computer. To learn more about very basic embeddings, you should have a look at this paper. One particular example I want to highlight is the so-called "amplitude embedding" where you map the entries of a vector $\boldsymbol{x}$ into the different amplitudes of a quantum state $$ | \boldsymbol{x} \rangle \propto \sum_i x_i | i \rangle $$ There is no equals sign because the state needs to be normalized, but for the understanding this is not important. The special thing about this particular embedding is that it embeds a vector with $d$ elements into $\log_2 d$ qubits which is a nice feature in our world where qubits are expensive!
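As a concrete illustration of the two embeddings described above, here is a minimal numpy sketch that works directly with state vectors; it does not use any particular quantum-computing library, and the function names are purely illustrative.

```python
import numpy as np

def phase_embed(theta):
    """Single-qubit state (|0> + e^{i*theta}|1>)/sqrt(2) encoding one real angle."""
    return np.array([1.0, np.exp(1j * theta)]) / np.sqrt(2)

def amplitude_embed(x):
    """Map a length-d real vector onto the amplitudes of ceil(log2 d) qubits:
    pad with zeros to the next power of two and normalise to unit norm."""
    x = np.asarray(x, dtype=float)
    n_qubits = max(1, int(np.ceil(np.log2(len(x)))))
    padded = np.zeros(2 ** n_qubits)
    padded[: len(x)] = x
    return padded / np.linalg.norm(padded), n_qubits

print(phase_embed(np.pi / 2))                 # ~ [0.707, 0.707j]
state, n_qubits = amplitude_embed([0.5, 1.0, 2.0])
print(n_qubits, state)                        # 2 qubits suffice for a 3-component vector
```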
Johannes Jakob Meyer
For 32 bits you only need 5 qubits, not 32 qubits: $2^n = N$, where $n$ stands for the number of qubits and $N$ stands for the number of bits.
sassan moradi
For 32 bits you only need 5 qubits, not 32 qubits; $2^n = N$, where $n$ stands for the number of qubits and $N$ for the number of bits. Did you mean that for 32 states you need 5 qubits?
– Martin Vesely
panMARE
Theoretical Background
The simulation tool panMARE is a three-dimensional first-order panel method which is based on the traditional potential theory for irrotational fluids. The basic theoretical concept of potential theory used in panMARE is the following:
The fluid is assumed to be irrotational, inviscid and incompressible. With these assumptions the continuity equation reduces to Laplace's equation for the fluid potential: $$\triangle \Phi^{*} = \nabla^{2} \Phi^{*} = 0, \quad \forall \, (x,y,z)\in V$$ where \(V\) is the fluid domain and \(\Phi^{*} = \Phi + \Phi_{\infty}\) is the fluid potential. \(\Phi\) is the disturbed potential and \(\Phi_{\infty}\) is the free stream potential. The momentum equation results in the Bernoulli equation for the pressure:
$$ p + \rho g z + \frac{1}{2} \rho \vert V \vert ^{2} + \rho \frac{\partial \Phi}{\partial t} = const$$ where \(\rho \) is the fluid density, \( V \) is the total fluid velocity and \( g \) is the gravitational acceleration.
The solution of Laplace's equation is written as a linear combination of sources and dipoles distributed over the boundary. In order to calculate the unknown source and dipole strengths, a boundary element method is used. At an outer point \(\vec{x} \in \partial V\) of the boundary the solution is given by: $$ \Phi(x,y,z) = \frac{1}{4 \pi} \int\limits_{\partial V} \mu \frac{\partial}{\partial n} \left(\frac{1}{r}\right) dS - \frac{1}{4 \pi} \int\limits_{\partial V} \sigma \left(\frac{1}{r}\right) dS $$ where \(\mu:=\Phi\), \( \sigma := \frac{ \partial \Phi}{\partial n}\) are the dipole and the source strength, respectively.
On the surface the potential is described by the Neumann boundary condition which states that the velocity components normal to the body's surface must vanish: $$\nabla \Phi^{*} \cdot \vec{n} = 0, \, \text{on the boundary} \, \partial V \quad \rightarrow \quad \sigma = \frac{ \partial \Phi}{\partial n}$$
On the wake of a lifting body the dipole strength is determined by the Kutta condition, which requires that the pressure difference at the trailing edge of the lifting body vanishes: $$ \triangle p_{TE} (\mu_{TE}) =0.$$ The above equation is nonlinear in \( \mu_{TE} \). A linear form of the Kutta condition is: $$ \mu_{TE} = \mu_{upper} - \mu_{lower}. $$
With the above assumptions a set of boundary condition equations is set up and solved in order to find the source and dipole strengths on the body surface and where necessary the dipole strength on the wake. From the calculated strengths the locally induced velocity components along the body's surface can be computed and the pressure can be determined by the Bernoulli equation. The induced velocities are calculated by:
$$v_{\xi}(x): =-\frac{\partial \mu}{\partial \xi} \qquad v_{\eta}(x): =-\frac{\partial \mu}{\partial \eta}$$ where \(\xi\) and \(\eta\) are the local tangential coordinates.
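As a minimal illustration of these last two steps (an illustrative sketch, not panMARE source code): once the dipole strengths \(\mu\) are known along one surface strip, the tangential perturbation velocity follows from a finite-difference derivative of \(\mu\), and the pressure from the steady Bernoulli equation (gravity and the unsteady term are omitted; the water density and reference pressure are assumed values).

```python
import numpy as np

def strip_velocity_and_pressure(mu, xi, v_inf, rho=1025.0, p_ref=101325.0):
    """Tangential induced velocity v_xi = -d(mu)/d(xi) along one surface strip
    (central differences) and the pressure from the steady Bernoulli equation."""
    v_xi = -np.gradient(mu, xi)                  # induced tangential velocity
    v_total = v_inf + v_xi                       # total tangential velocity (1D strip sketch)
    p = p_ref + 0.5 * rho * (v_inf**2 - v_total**2)
    cp = 1.0 - (v_total / v_inf) ** 2            # pressure coefficient
    return v_total, p, cp

# Hypothetical strip: 20 chordwise points with a smooth dipole distribution.
xi = np.linspace(0.0, 1.0, 20)
mu = 0.05 * np.sin(np.pi * xi)
print(strip_velocity_and_pressure(mu, xi, v_inf=5.0)[2])
```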
December 2019, 39(12): 6945-6959. doi: 10.3934/dcds.2019238
On global solutions to semilinear elliptic equations related to the one-phase free boundary problem
Xavier Fernández-Real 1 and Xavier Ros-Oton 2
Department of Mathematics, ETH Zürich, Rämistrasse 101, 8092 Zürich, Switzerland
Institut für Mathematik, Universität Zürich, Winterthurerstrasse, 8057 Zürich, Switzerland
Dedicated with affection to Luis Caffarelli, whose works have influenced a whole new generation of mathematicians.
Received: September 2018; Revised: February 2019; Published: June 2019
Fund Project: This work has received funding from the European Research Council (ERC) under the Grant Agreements No 721675 and No 801867. In addition, the second author was supported by the Swiss National Science Foundation and by MINECO grant MTM2017-84214-C2-1-P.
Motivated by its relation to models of flame propagation, we study globally Lipschitz solutions of $ \Delta u = f(u) $ in $ \mathbb{R}^n $, where $ f $ is smooth and non-negative, with support in the interval $ [0,1] $. In such a setting, any "blow-down" of the solution $ u $ will converge to a global solution to the classical one-phase free boundary problem of Alt–Caffarelli.
In analogy to a famous theorem of Savin for the Allen–Cahn equation, we study here the 1D symmetry of solutions $ u $ that are energy minimizers. Our main result establishes that, in dimensions $ n<6 $, if $ u $ is axially symmetric and stable then it is 1D.
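For orientation, the standard objects behind these statements are, up to normalising constants (recalled here in generic form rather than quoted from the paper), the semilinear energy whose minimizers are meant and the Alt–Caffarelli one-phase functional governing the blow-down limit $ u_R(x) = u(Rx)/R $ as $ R \to \infty $:
$$ E(u,\Omega) = \int_\Omega \Big( \tfrac{1}{2}|\nabla u|^2 + F(u) \Big)\,dx, \qquad F' = f, $$
$$ J(v,\Omega) = \int_\Omega |\nabla v|^2\,dx + \Lambda\,\big|\{v>0\}\cap\Omega\big|, \qquad \Lambda > 0. $$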
Keywords: Elliptic PDE, 1D symmetry, De Giorgi conjecture, one-phase free boundary problem, flame propagation.
Mathematics Subject Classification: Primary: 35R35, 35J91; Secondary: 35B07.
Citation: Xavier Fernández-Real, Xavier Ros-Oton. On global solutions to semilinear elliptic equations related to the one-phase free boundary problem. Discrete & Continuous Dynamical Systems, 2019, 39 (12) : 6945-6959. doi: 10.3934/dcds.2019238
H. W. Alt and L. Caffarelli, Existence and regularity for a minimum problem with free boundary, J. Reine Angew. Math., 325 (1981), 105-144. Google Scholar
L. Ambrosio and X. Cabré, Entire solutions of semilinear elliptic equations in $ \mathbb{R}^3$ and a conjecture of De Giorgi, J. Amer. Math. Soc., 13 (2000), 725-739. doi: 10.1090/S0894-0347-00-00345-3. Google Scholar
J. D. Buckmaster and G. S. Ludford, Theory of Laminar Flames, Cambridge Univ. Press, Cambridge, 1982. Google Scholar
X. Cabré, Regularity of minimizers of semilinear elliptic problems up to dimension four, Comm. Pure Applied Mathematics, 63 (2010), 1362-1380. doi: 10.1002/cpa.20327. Google Scholar
X. Cabré and A. Capella, On the stability of radial solutions of semilinear elliptic equations in all of $ \mathbb{R}^n$, C. R. Acad. Sci. Paris, Ser. I, 338 (2004), 769-774. doi: 10.1016/j.crma.2004.03.013. Google Scholar
X. Cabré and X. Ros-Oton, Regularity of stable solutions up to dimension 7 in domains of double revolution, Comm. Partial Differential Equations, 38 (2013), 135-154. doi: 10.1080/03605302.2012.697505. Google Scholar
X. Cabré and J. Terra, Saddle-shaped solutions of bistable diffusion equations in all of $ \mathbb{R}^{2m}$, J. Eur. Math. Soc., 11 (2009), 819-843. doi: 10.4171/JEMS/168. Google Scholar
L. Caffarelli, D. Jerison and C. Kenig, Global energy minimizers for free boundary problems and full regularity in three dimension, Contemp. Math., 350 (2004), 83-97. doi: 10.1090/conm/350/06339. Google Scholar
L. Caffarelli and S. Salsa, A Geometric Approach To Free Boundary Problems, AMS, 2005. doi: 10.1090/gsm/068. Google Scholar
L. Caffarelli and J. L. Vázquez, A free-boundary problem for the heat equation arising in flame propagation, Trans. Amer. Math. Soc., 347 (1995), 411-441. doi: 10.1090/S0002-9947-1995-1260199-7. Google Scholar
E. De Giorgi, Proceedings of the International Meeting on Recent Methods in Nonlinear Analysis (Rome, 1978), (Pitagora, Bologna, Italy), 131–188. Google Scholar
D. De Silva and D. Jerison, A singular energy minimizing free boundary, J. Reine Angew. Math., 635 (2009), 1-22. doi: 10.1515/CRELLE.2009.074. Google Scholar
L. Dupaigne and A. Farina, Stable solutions of $ -\Delta u = f(u) $ in $ \mathbb{R}^N $, J. Eur. Math. Soc., 12 (2010), 855-882. doi: 10.4171/JEMS/217. Google Scholar
A. Farina, Propriétés qualitatives de solutions d'équations et systèmes d'équations non-linéaires, Habilitation à diriger des recherches, Paris Ⅵ, 2002. Google Scholar
A. Farina and E. Valdinoci, The State of the Art for a Conjecture of De Giorgi and Related Problems, Recent Progress on Reaction-Diffusion Systems and Viscosity Solutions, World Scientific, 2008. doi: 10.1142/9789812834744_0004. Google Scholar
D. Jerison and O. Savin, Some remarks on stability of cones for the one-phase free boundary problem, Geom. Funct. Anal., 25 (2015), 1240-1257. doi: 10.1007/s00039-015-0335-6. Google Scholar
Y. Liu, K. Wang and J. Wei, Global minimizers of the Allen–Cahn equation in dimension $ n = 8$, J. Math. Pures Appl., 108 (2017), 818-840. doi: 10.1016/j.matpur.2017.05.006. Google Scholar
Y. Liu, K. Wang and J. Wei, On one phase free boundary problem in $ \mathbb{R}^n$, preprint, arXiv: 1705.07345, (2017). Google Scholar
A. Petrosyan and N. K. Yip, Nonuniqueness in a free boundary problem from combustion, J. Geom. Anal., 18 (2007), 1098-1126. doi: 10.1007/s12220-008-9044-9. Google Scholar
O. Savin, Regularity of flat level sets in phase transitions, Ann. of Math., 169 (2009), 41-78. doi: 10.4007/annals.2009.169.41. Google Scholar
P. Sternberg and K. Zumbrun, Connectivity of phase boundaries in strictly convex domains, Arch. Rational Mech. Anal., 141 (1998), 375-400. doi: 10.1007/s002050050081. Google Scholar
G. S. Weiss, A singular limit arising in combustion theory: Fine properties of the free boundary, Calc. Var. PDE, 17 (2003), 311-340. Google Scholar
Figure 1. Representation of $ \Phi_\varepsilon(t) = \int_0^t \beta_\varepsilon(s)\, ds $
Figure 2. Representation of the cases (ⅰ) $ a > 1 $, (ⅱ) $ a = 1 $, and (ⅲ) $ a < 1 $
Giovanni Gravina, Giovanni Leoni. On the behavior of the free boundary for a one-phase Bernoulli problem with mixed boundary conditions. Communications on Pure & Applied Analysis, 2020, 19 (10) : 4853-4878. doi: 10.3934/cpaa.2020215
Changfeng Gui. On some problems related to de Giorgi's conjecture. Communications on Pure & Applied Analysis, 2003, 2 (1) : 101-106. doi: 10.3934/cpaa.2003.2.101
Chifaa Ghanmi, Saloua Mani Aouadi, Faouzi Triki. Recovering the initial condition in the one-phase Stefan problem. Discrete & Continuous Dynamical Systems - S, 2021 doi: 10.3934/dcdss.2021087
Donatella Danielli, Marianne Korten. On the pointwise jump condition at the free boundary in the 1-phase Stefan problem. Communications on Pure & Applied Analysis, 2005, 4 (2) : 357-366. doi: 10.3934/cpaa.2005.4.357
Claude-Michel Brauner, Luca Lorenzi. Instability of free interfaces in premixed flame propagation. Discrete & Continuous Dynamical Systems - S, 2021, 14 (2) : 575-596. doi: 10.3934/dcdss.2020363
Fabio Camilli, Elisabetta Carlini, Claudio Marchi. A flame propagation model on a network with application to a blocking problem. Discrete & Continuous Dynamical Systems - S, 2018, 11 (5) : 825-843. doi: 10.3934/dcdss.2018051
Norbert Požár, Giang Thi Thu Vu. Long-time behavior of the one-phase Stefan problem in periodic and random media. Discrete & Continuous Dynamical Systems - S, 2018, 11 (5) : 991-1010. doi: 10.3934/dcdss.2018058
Maxime Hauray, Samir Salem. Propagation of chaos for the Vlasov-Poisson-Fokker-Planck system in 1D. Kinetic & Related Models, 2019, 12 (2) : 269-302. doi: 10.3934/krm.2019012
Naoki Sato, Toyohiko Aiki, Yusuke Murase, Ken Shirakawa. A one dimensional free boundary problem for adsorption phenomena. Networks & Heterogeneous Media, 2014, 9 (4) : 655-668. doi: 10.3934/nhm.2014.9.655
Michael L. Frankel, Victor Roytburd. Dynamical structure of one-phase model of solid combustion. Conference Publications, 2005, 2005 (Special) : 287-296. doi: 10.3934/proc.2005.2005.287
Daniela De Silva, Fausto Ferrari, Sandro Salsa. On two phase free boundary problems governed by elliptic equations with distributed sources. Discrete & Continuous Dynamical Systems - S, 2014, 7 (4) : 673-693. doi: 10.3934/dcdss.2014.7.673
Daniela De Silva, Fausto Ferrari, Sandro Salsa. Recent progresses on elliptic two-phase free boundary problems. Discrete & Continuous Dynamical Systems, 2019, 39 (12) : 6961-6978. doi: 10.3934/dcds.2019239
Luis A. Caffarelli, Alexis F. Vasseur. The De Giorgi method for regularity of solutions of elliptic equations and its applications to fluid dynamics. Discrete & Continuous Dynamical Systems - S, 2010, 3 (3) : 409-427. doi: 10.3934/dcdss.2010.3.409
Fabio Paronetto. A Harnack type inequality and a maximum principle for an elliptic-parabolic and forward-backward parabolic De Giorgi class. Discrete & Continuous Dynamical Systems - S, 2017, 10 (4) : 853-866. doi: 10.3934/dcdss.2017043
Tomasz Cieślak, Kentarou Fujie. Global existence in the 1D quasilinear parabolic-elliptic chemotaxis system with critical nonlinearity. Discrete & Continuous Dynamical Systems - S, 2020, 13 (2) : 165-176. doi: 10.3934/dcdss.2020009
Teddy Pichard. A moment closure based on a projection on the boundary of the realizability domain: 1D case. Kinetic & Related Models, 2020, 13 (6) : 1243-1280. doi: 10.3934/krm.2020045
Alexander Zlotnik, Ilya Zlotnik. Finite element method with discrete transparent boundary conditions for the time-dependent 1D Schrödinger equation. Kinetic & Related Models, 2012, 5 (3) : 639-667. doi: 10.3934/krm.2012.5.639
Rachel Clipp, Brooke Steele. An evaluation of dynamic outlet boundary conditions in a 1D fluid dynamics model. Mathematical Biosciences & Engineering, 2012, 9 (1) : 61-74. doi: 10.3934/mbe.2012.9.61
Elena Rossi. Well-posedness of general 1D initial boundary value problems for scalar balance laws. Discrete & Continuous Dynamical Systems, 2019, 39 (6) : 3577-3608. doi: 10.3934/dcds.2019147
Anna Kostianko, Sergey Zelik. Inertial manifolds for 1D reaction-diffusion-advection systems. Part Ⅰ: Dirichlet and Neumann boundary conditions. Communications on Pure & Applied Analysis, 2017, 16 (6) : 2357-2376. doi: 10.3934/cpaa.2017116
|
CommonCrawl
|
Lyndon–Hochschild–Serre spectral sequence for a non-normal subgroup
Is there an analog of the Lyndon–Hochschild–Serre spectral sequence for a non-normal subgroup?
What can you say about it? Can you describe $E^{p, q}_1$? What about $E^{p, q}_2$?
What is the best technique to get the spectral sequence? For me the Grothendieck spectral sequence is much better than the spectral sequence of a filtered complex.
There is a parallel question which is likely easier.
Is there an analog of the Hochschild–Serre spectral sequence for a Lie subalgebra which is not an ideal?
Questions 2 and 3 remain the same.
I already asked a version of this question but got no responses.
https://math.stackexchange.com/questions/1112179/hochschild-serre-spectral-sequence-for-not-normal-subalgebra
homological-algebra group-cohomology spectral-sequences lie-algebra-cohomology
quinque
$\begingroup$ The main difficulty is that, if $H < G$, there is not really a functor that will take the $H$-fixed points and produce the $G$-fixed points unless you include extra data. There are methods to get the cohomology of $G$, but almost all of them will require as extra input the cohomology of intersections of conjugates of $H$. $\endgroup$
– Tyler Lawson
$\begingroup$ It is very interesting. Can you provide references? $\endgroup$
– quinque
$\begingroup$ The shortest way to say it is the following. For a group $G$, the category of $G$-modules is equivalent to the category of quasicoherent (etale) sheaves on the classifying stack $BG$, and the global section functor is the fixed-point functor. There is a faithfully flat cover $BH \to BG$, and so there is a Cech-to-derived / descent spectral sequence. But to compute effectively with it, you need to know the iterated fiber products of $BH$ over $BG$, which correspond to $G$-orbits in $(G/H)^k$. $\endgroup$
$\begingroup$ From the point of view of filtered complexes, this is a little easier to do because there is a nonprojective resolution of $\Bbb Z$ by a complex with terms $\Bbb Z[(G/H)^k]$, with the same boundary operator as on homogeneous chains. You can apply $\Bbb RHom_G(-,M)$ to this resolution and get a filtered complex, and this gives you the spectral sequence too. $\endgroup$
$\begingroup$ (In answer to your explicit question, no, I do not know an immediate reference for this. I do know that this technique, from the topological point of view, appears in calculations of homological stability (I think it's used in Quillen's calculations for number fields). The chain complex I just described is the simplicial chain complex of a "classifying space for the family of subgroups of $H$".) $\endgroup$
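One way to package the construction sketched in these comments (a sketch of the standard isotropy/descent spectral sequence; indexing conventions vary and no specific reference is being quoted) is $$ E_{1}^{p,q} \;=\; \bigoplus_{[\sigma]\,\in\, G\backslash (G/H)^{p+1}} H^{q}(G_{\sigma}; M) \;\Longrightarrow\; H^{p+q}(G; M), $$ where $G_{\sigma}$ is the stabilizer of a representative $\sigma$, that is, an intersection of conjugates of $H$; this is exactly the extra input mentioned in the first comment.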
Sorry for reviving an old question, but it seems that the Kropholler spectral sequence exactly answers the first 3 questions:
Kropholler, P.H., A generalization of the Lyndon-Hochschild-Serre spectral sequence with applications to group cohomology and decompositions of groups., J. Group Theory 9, No. 1, 1-25 (2006). ZBL1115.20042.
Mark Grant
I don't think so. The LHS spectral sequence can be thought of as the Serre spectral sequence associated to the fiber sequence
$$BN \to BG \to B(G/N)$$
where $G$ is a group and $N$ is a normal subgroup of it. If $N$ is not required to be normal then the third term in this fiber sequence no longer exists, so it's unclear to me in what sense we can have a reasonable analogue of the LHS spectral sequence here.
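For comparison, in the normal case this fiber sequence yields the usual Lyndon–Hochschild–Serre spectral sequence $$ E_{2}^{p,q} = H^{p}\big(G/N;\, H^{q}(N; M)\big) \;\Longrightarrow\; H^{p+q}(G; M), $$ and it is precisely the identification of the base with $B(G/N)$ that breaks down when the subgroup is not normal.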
$\begingroup$ What you said is just that this approach does not work. But I mean something different. I even edited my question and wrote "analog of Lyndon–Hochschild–Serre spectral sequence". $\endgroup$
$\begingroup$ math.ru.nl/~solleveld/scrip.pdf Here you can find a definition of relative Lie algebra cohomology. It is done by means of an explicit complex, but it is still a way to make sense of such objects. $\endgroup$
$\begingroup$ @quinque: that notion of relative Lie algebra cohomology is a little different. If you think of the cohomology of Lie algebras as an algebraic model of the de Rham cohomology of compact Lie groups $G$, then relative Lie algebra cohomology should be an algebraic model of the de Rham cohomology of homogeneous spaces $G/H$. Of course these can make sense if $H$ is not normal, but for group cohomology we want to compute the cohomology of the delooping of these spaces, and we just can't deloop homogeneous spaces in general. $\endgroup$
$\begingroup$ Yes, relative cohomology is the cohomology of a homogeneous space! And moreover there is a spectral sequence of a bundle. But I want to get this sequence for an arbitrary Lie algebra. There has to be a purely algebraic approach to this. $\endgroup$
Differentials in the Lyndon-Hochschild spectral sequence
Tracking spectral sequence differentials
stability results for the Atiyah-Hirzebruch spectral sequence
Cohomology of $\mathbb Z_4$ via the Lyndon-Hochschild-Serre spectral sequence
|
CommonCrawl
|
How does $e^{i x}$ produce rotation around the imaginary unit circle?
Euler's formula states that $e^{i x} = \cos(x) + i \sin(x)$.
I can see from the Maclaurin expansion that this is indeed true; however, I don't intuitively understand how raising $e$ to the power of $ix$ produces rotation. Can anyone give me an intuitive understanding?
complex-numbers exponential-function
caleb
$\begingroup$ What is your intuitive understanding of the exponential function? $\endgroup$ – Emanuele Paolini Jan 31 '13 at 21:36
$\begingroup$ Possible duplicates: How does e, or the exponential function, relate to rotation?, How to prove Euler's formula: $\exp(it)=\cos(t)+i\sin(t)$? $\endgroup$ – Rahul Jan 31 '13 at 21:38
$\begingroup$ @manu-fatto The exponential function is simply e (2.718...) raised to a power. $\endgroup$ – caleb Jan 31 '13 at 21:38
$\begingroup$ Oops. Yep. Fixed. $\endgroup$ – caleb Jan 31 '13 at 21:45
$\begingroup$ I just happened across this video: youtube.com/watch?v=F_0yfvm0UoU, which helped give me a more intuitive understanding. $\endgroup$ – caleb Feb 13 '17 at 16:36
Consider a particle moving along the path $f(t)=e^{i t}$. Its instantaneous velocity is given by the derivative; treating $i$ as a constant, convince yourself that this derivative is $ie^{it}$. Thus we see
$$\text{Velocity} = i\text{Position} = \text{Position (rotated by} \frac{\pi}{2} \text{radians)}$$
Because $f(0) = 1$, the initial velocity is $i$. Moving the position slightly and updating the velocity accordingly shows that $|f(t)| = 1$ and thus $|\frac{d}{dt}f(t)|=1$. If $t =\theta$, the particle will have traveled $\theta$ radians around the unit circle.
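Here is a quick numerical check of this picture (my addition, not part of the original answer): integrating "velocity $= i\,\times$ position" with a crude Euler step keeps the particle on the unit circle up to a small drift and reproduces $\cos t + i \sin t$.

import numpy as np

dt = 1e-4
t = np.arange(0.0, 2 * np.pi, dt)
f = 1.0 + 0.0j                      # f(0) = 1
trace = []
for _ in t:
    trace.append(f)
    f += 1j * f * dt                # velocity = i * position
trace = np.array(trace)

print(abs(trace[-1]))               # ~1: the particle stays on the unit circle
print(trace[len(trace) // 4])       # ~i after a quarter turn (t = pi/2)
print(np.allclose(trace, np.cos(t) + 1j * np.sin(t), atol=1e-3))   # True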
edited Feb 1 '13 at 1:30
Argon
$\begingroup$ Ok, I think this helps me understand it. As you raise i to integer powers, it ends up rotating around the imaginary unit circle: $i^0=1$, $i^1=i$, $i^2=-1$, $i^3=-i$, and $i^4=1$. These positions (1, i, -1, -i) correspond to the following (x,y) positions: (1,0), (0,1), (-1,0), (0,-1). So it makes sense that multiplying the current position by i would result in a 90 degree ($\pi/2$) rotation. $\endgroup$ – caleb Jan 31 '13 at 22:35
$\begingroup$ @caleb This is correct, yes. And the (principal) corresponding polar forms for these are, respectively, $e^{i\cdot 0}$, $e^{i\frac{\pi}{2}}$, $e^{i\pi}$ and $e^{i\frac{3\pi}{2}}$ $\endgroup$ – Argon Jan 31 '13 at 22:58
$\begingroup$ I think this is the best, most succinct explanation of this phenomenon I've seen. Nice work. You could further expand this by pointing out that the acceleration is perpendicular to velocity, which will result in a circular trajectory. $\endgroup$ – John Moeller Feb 1 '13 at 0:28
$\begingroup$ This is the exact explanation given in the beginning of the book "Visual Complex Analysis", there are deeper explanations later on which I recommend the original poster take a look at. $\endgroup$ – Dider Sep 3 '15 at 20:45
$\begingroup$ This is so great, simple and straightforward! :) I think that I finally get it intuitively, thank you! $\endgroup$ – Isti115 Mar 9 at 18:45
Converted from this article written for sci.math:
Starting with this formulation of $e^x$ $$ e^x=\lim_{n\to\infty}\left(1+\frac xn\right)^n\tag{1} $$ and extending this definition to $e^{ix}$: $$ e^{ix}=\lim_{n\to\infty}\left(1+\frac{ix}{n}\right)^n\tag{2} $$ For a complex number $z$, let $|z|$ be its magnitude and $\arg(z)$ be its angle. If it is not already known, only a small amount of algebra and trigonometry is needed to show that $$ \begin{align} |wz|&=|w|\cdot|z|\tag{3a}\\ \arg(wz)&=\arg(w)+\arg(z)\tag{3b} \end{align} $$ Induction then shows that \begin{align} |z^n|&=|z|^n\tag{4a}\\ \arg(z^n)&=n\arg(z)\tag{4b} \end{align} Let us take a closer look at $1+\dfrac{ix}{n}$. $$ \begin{align} \left|\,1+\frac{ix}{n}\,\right|&=\sqrt{1+\frac{x^2}{n^2}}\tag{5a}\\ \tan\left(\arg\left(1+\frac{ix}{n}\right)\right)&=\frac xn\tag{5b} \end{align} $$ Using $(4a)$, $(5a)$, and $(2)$, we get $$ \begin{align} |e^{ix}| &=\lim_{n\to\infty}\left|\,1+\frac{ix}{n}\,\right|^n\\ &=\lim_{n\to\infty}\left(1+\frac{x^2}{n^2}\right)^{n/2}\\ &=\lim_{n\to\infty}\left(1+\frac{x^2}{n^2}\right)^{\frac{n^2}{2n}}\\ &=\lim_{n\to\infty}e^{\frac{x^2}{2n}}\\[12pt] &=1\tag{6} \end{align} $$ It can be shown that when $x$ is measured in radians $$ \lim_{x\to0}\frac{\tan(x)}{x}=1\tag{7} $$ Using $(4b)$, $(5b)$, and $(7)$, we get $$ \begin{align} \arg(e^{ix}) &=\lim_{n\to\infty}n\arg\left(1+\frac{ix}{n}\right)\\ &=\lim_{n\to\infty}n\arg\left(1+\frac{ix}{n}\right) \frac{\tan\left(\arg\left(1+\frac{ix}{n}\right)\right)}{\arg\left(1+\frac{ix} {n}\right)}\\ &=\lim_{n\to\infty}n\frac xn\\ &=x\tag{8} \end{align} $$ Using $(6)$ and $(8)$, we get that $e^{ix}$ has magnitude $1$ and angle $x$. Thus, converting from polar coordinates: $$ e^{ix} = \cos(x) + i\sin(x)\tag{9} $$ We get the rotational action from $(9)$ and $(3)$.
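A short numerical illustration of the limit in $(2)$ (my own check in Python, not part of the derivation above):

import numpy as np

x = 2.0                                   # an arbitrary angle in radians
for n in (10, 1_000, 100_000):
    approx = (1 + 1j * x / n) ** n
    print(n, approx, abs(approx), np.angle(approx))

# The limit has modulus 1 and argument x, i.e. cos(x) + i sin(x):
print(np.cos(x) + 1j * np.sin(x))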
robjohn♦
I don't know exactly what kind of intuition you're looking for. You're probably thinking about $e^{i\theta}$ the same way you would $2^2$; that is, $e^{i\theta}$ is found by multiplying $e$ by itself $i\theta$ times. While this is useful for introducing exponentiation of the form $n^m$ when $n,m$ are positive integers, it doesn't really make sense to try and apply this kind of reasoning to expressions of the form $a^x$ or $a^z$ for real or complex arguments.
The only sort of intuition I can suggest is the following: what is $e$? It's typically defined by the expression $\sum_{n=0}^{\infty} \frac{1}{n!}$. This isn't an alternate interesting fact but a definition for the number $e$. We also define $e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}$, and we observe that this series converges for all real $x$. This is just what the expression $e^x$ means. Similarly, $e^{i \theta}$ is defined by $\sum_{n=0}^{\infty} \frac{(i \theta)^n}{n!}$, and it just so happens that this converges absolutely for all $\theta$, giving us Euler's formula.
This is why $e^{i\theta}$ is a rotation about the unit circle in $\mathbb{C}$. Because it's defined that way.
anonymous
Let's look at $|e^{ix}|$: it is always constant, since $|\cos x + i \sin x| = \sqrt{ \cos^2 x + \sin^2 x} = 1$. The only thing that changes is $x$. Now if we take the real part ($\cos x$) as the $x$-coordinate and the imaginary part ($\sin x$) as the $y$-coordinate (the imaginary axis), then this is the same as the parametric equation of the unit circle with $x$ as the parameter. As $x$ increases, the path traced by the point is circular.
Santosh Linkha
$\exp(z)$ is the function which is its own derivative. It's natural to introduce coordinates since we're thinking about the circle (a 2D figure).
We can consider its real and imaginary parts: $\exp(iy) = c(y) + is(y)$. Differentiating (and dividing by $i$) gives $\exp(iy) = - i c'(y) + s'(y)$; comparing with the previous expression gives $s'(y) = c(y)$ and $c'(y) = - s(y)$.
From the power series we find that the real part consists of the even powers, so $c(-y) = c(y)$, and the imaginary part of the odd powers, so $s(-y) = -s(y)$; this lets us conclude the Pythagorean identity $c(y)^2 + s(y)^2 = \exp(iy)\exp(-iy) = 1$.
From that we easily deduce that the path $(c(y),s(y))$ lies on the unit circle and is arc-length parametrized. Therefore it returns to its starting point when $y$ reaches $2 \pi$.
How do I interpret Euler's formula?
The proof of the formula $z = |z|e^{i \phi}$
Is $e^{i\theta}$ a circle?
How to prove that $\lim\limits_{x\to0}\frac{\sin x}x=1$?
How to prove Euler's formula: $e^{i\varphi}=\cos(\varphi) +i\sin(\varphi)$?
How do I understand $e^i$ which is so common?
How does e, or the exponential function, relate to rotation?
Why can complex numbers be written in exponential form? $z=r(\cos \theta+i\sin \theta)$ is $z=re^{i\theta}$.
How do complex number exponents actually work?
How fundamental is Euler's identity, really?
Can you explain $(1 + iX/n)^{n}$ without using e, sin, or cos?
Find the imaginary part of this sum
Show that $\sin6\alpha\equiv \sin2\alpha(16\cos^4\alpha-16\cos^2\alpha+3)$
Euler's identity: why is the $e$ in $e^{ix}$? What if it were some other constant like $2^{ix}$?
3D rotation by a complex angle around a complex axis.
How can I visualize the interaction of the imaginary parts of the cosine/sine functions?
How to express $\sin \sqrt{a-ib} \sin \sqrt{a+ib}$ without imaginary unit?
How to understand complex rotation?
Can someone give intuition behind understanding $i^i = e^{\frac{-\pi}{2}}$ and more so on complex powers?
How to find the residue of $\frac{1}{e^{2z}-1}$ and $\frac{z^2}{1-\cos(z)}$?
|
CommonCrawl
|
August 2016, 9(4): 1009-1023. doi: 10.3934/dcdss.2016039
Characterizations of uniform hyperbolicity and spectra of CMV matrices
David Damanik 1, Jake Fillman 2, Milivoje Lukic 3, and William Yessen 1
Department of Mathematics, Rice University, Houston, TX 77005, United States
Department of Mathematics, Virginia Tech, Blacksburg, VA 24061, United States
Department of Mathematics, University of Toronto, Toronto, Ontario M5S 2E4, Canada
Received September 2014 Revised July 2015 Published August 2016
We provide an elementary proof of the equivalence of various notions of uniform hyperbolicity for a class of GL$(2,\mathbb{C})$ cocycles and establish a Johnson-type theorem for extended CMV matrices, relating the spectrum to the set of points on the unit circle for which the associated Szegő cocycle is not uniformly hyperbolic.
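For orientation (one common normalization, e.g. as in Simon's OPUC monograph cited below; sign and ordering conventions differ between papers), the Szegő cocycle over a sequence of Verblunsky coefficients $\alpha_{n} \in \mathbb{D}$ is generated at spectral parameter $z \in \partial\mathbb{D}$ by the transfer matrices $$ A(\alpha_{n}, z) \;=\; \frac{1}{\sqrt{1-|\alpha_{n}|^{2}}} \begin{pmatrix} z & -\overline{\alpha_{n}} \\ -\alpha_{n} z & 1 \end{pmatrix}, $$ and uniform hyperbolicity at $z$ means that the matrix products admit an exponential splitting into uniformly expanding and contracting directions, with constants independent of the base point.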
Keywords: Linear cocycles, generalized eigenfunctions, CMV matrices, uniform hyperbolicity, orthogonal polynomials.
Mathematics Subject Classification: Primary: 37D20, 42C05; Secondary: 37A2.
Citation: David Damanik, Jake Fillman, Milivoje Lukic, William Yessen. Characterizations of uniform hyperbolicity and spectra of CMV matrices. Discrete & Continuous Dynamical Systems - S, 2016, 9 (4) : 1009-1023. doi: 10.3934/dcdss.2016039
Ju. M. Berezanskii, Expansions in Eigenfuncions of Selfadjoint Operators, Amer. Math. Soc., Providence, 1968. Google Scholar
J. Bochi and N. Gourmelon, Some characterizations of domination, Math. Z., 263 (2009), 221-231. doi: 10.1007/s00209-009-0494-y. Google Scholar
D. Damanik, J. Fillman, M. Lukic and W. Yessen, Uniform hyperbolicity for Szegő cocycles and applications to random CMV matrices and the Ising model, Int. Math. Res. Not., 2015 (2015), 7110-7129. doi: 10.1093/imrn/rnu158. Google Scholar
D. Damanik, J. Fillman and D. C. Ong, Spreading estimates for quantum walks on the integer lattice via power-law bounds on transfer matrices, J. Math. Pures Appl., 105 (2016), 293-341. doi: 10.1016/j.matpur.2015.11.002. Google Scholar
J. Geronimo and R. Johnson, Rotation number associated with difference equations satisfied by polynomials orthogonal on the unit circle, J. Differential Equations, 132 (1996), 140-178. doi: 10.1006/jdeq.1996.0175. Google Scholar
F. Gesztesy and M. Zinchenko, Weyl-Titchmarsh theory for CMV operators associated with orthogonal polynomials on the unit circle, J. Approx. Theory, 139 (2006), 172-213. doi: 10.1016/j.jat.2005.08.002. Google Scholar
R. Johnson, Exponential dichotomy, rotation number, and linear differential operators with bounded coefficients, J. Diff. Eq., 61 (1986), 54-78. doi: 10.1016/0022-0396(86)90125-7. Google Scholar
Y. Last and B. Simon, Eigenfunctions, transfer matrices, and absolutely continuous spectrum of one-dimensional Schrödinger operators, Invent. Math., 135 (1999), 329-367. doi: 10.1007/s002220050288. Google Scholar
M. Lukic and D. Ong, Generalized Prüfer variables for perturbations of Jacobi and CMV matrices, J. Math. Anal. Appl. Google Scholar
P. Munger and D. Ong, The Hölder continuity of spectral measures of an extended CMV matrix, J. Math. Phys., 55 (2014), 093507, 10 pp. doi: 10.1063/1.4895762. Google Scholar
D. Ong, Purely singular continuous spectrum for CMV operators generated by subshifts, J. Stat. Phys., 155 (2014), 763-776. doi: 10.1007/s10955-014-0974-2. Google Scholar
M. Reed and B. Simon, Methods of Modern Mathematical Physics, I: Functional Analysis, Academic Press, New York, 1972. Google Scholar
R. Sacker and G. Sell, Existence of dichotomies and invariant splittings for linear differential systems I., J. Diff. Eq., 15 (1974), 429-458. doi: 10.1016/0022-0396(74)90067-9. Google Scholar
R. Sacker and G. Sell, A spectral theory for linear differential systems, J. Diff. Eq., 27 (1978), 320-358. doi: 10.1016/0022-0396(78)90057-8. Google Scholar
J. Selgrade, Isolated invariant sets for flows on vector bundles, Trans. Amer. Math. Soc., 203 (1975), 359-390. doi: 10.1090/S0002-9947-1975-0368080-X. Google Scholar
B. Simon, Orthogonal Polynomials on the Unit Circle. Part 1. Classical Theory, American Mathematical Society Colloquium Publications 54, Part 1, American Mathematical Society, Providence, RI, 2005. Google Scholar
J.-C. Yoccoz, Some questions and remarks about SL$(2,\mathbbR)$ cocycles, Modern Dynamical Systems and Applications, 447-458, Cambridge Univ. Press, Cambridge, 2004. Google Scholar
Z. Zhang, Resolvent set of Schrödinger operators and uniform hyperbolicity, preprint. Google Scholar
He Zhang, John Harlim, Xiantao Li. Estimating linear response statistics using orthogonal polynomials: An RKHS formulation. Foundations of Data Science, 2020, 2 (4) : 443-485. doi: 10.3934/fods.2020021
Leonid Golinskii, Mikhail Kudryavtsev. An inverse spectral theory for finite CMV matrices. Inverse Problems & Imaging, 2010, 4 (1) : 93-110. doi: 10.3934/ipi.2010.4.93
Boris Hasselblatt, Yakov Pesin, Jörg Schmeling. Pointwise hyperbolicity implies uniform hyperbolicity. Discrete & Continuous Dynamical Systems, 2014, 34 (7) : 2819-2827. doi: 10.3934/dcds.2014.34.2819
Mickaël Kourganoff. Uniform hyperbolicity in nonflat billiards. Discrete & Continuous Dynamical Systems, 2018, 38 (3) : 1145-1160. doi: 10.3934/dcds.2018048
Yakov Pesin, Vaughn Climenhaga. Open problems in the theory of non-uniform hyperbolicity. Discrete & Continuous Dynamical Systems, 2010, 27 (2) : 589-607. doi: 10.3934/dcds.2010.27.589
Dean Crnković, Bernardo Gabriel Rodrigues, Sanja Rukavina, Loredana Simčić. Self-orthogonal codes from orbit matrices of 2-designs. Advances in Mathematics of Communications, 2013, 7 (2) : 161-174. doi: 10.3934/amc.2013.7.161
Dean Crnković, Ronan Egan, Andrea Švob. Self-orthogonal codes from orbit matrices of Seidel and Laplacian matrices of strongly regular graphs. Advances in Mathematics of Communications, 2020, 14 (4) : 591-602. doi: 10.3934/amc.2020032
Boris Kalinin, Anatole Katok. Measure rigidity beyond uniform hyperbolicity: invariant measures for cartan actions on tori. Journal of Modern Dynamics, 2007, 1 (1) : 123-146. doi: 10.3934/jmd.2007.1.123
Wilhelm Schlag. Regularity and convergence rates for the Lyapunov exponents of linear cocycles. Journal of Modern Dynamics, 2013, 7 (4) : 619-637. doi: 10.3934/jmd.2013.7.619
Boris Kalinin, Victoria Sadovskaya. Linear cocycles over hyperbolic systems and criteria of conformality. Journal of Modern Dynamics, 2010, 4 (3) : 419-441. doi: 10.3934/jmd.2010.4.419
Darren C. Ong. Orthogonal polynomials on the unit circle with quasiperiodic Verblunsky coefficients have generic purely singular continuous spectrum. Conference Publications, 2013, 2013 (special) : 605-609. doi: 10.3934/proc.2013.2013.605
Mahesh G. Nerurkar, Héctor J. Sussmann. Construction of ergodic cocycles that are fundamental solutions to linear systems of a special form. Journal of Modern Dynamics, 2007, 1 (2) : 205-253. doi: 10.3934/jmd.2007.1.205
Yajuan Zang, Guangzhou Chen, Kejun Chen, Zihong Tian. Further results on 2-uniform states arising from irredundant orthogonal arrays. Advances in Mathematics of Communications, 2020 doi: 10.3934/amc.2020109
Boris Kalinin, Anatole Katok, Federico Rodriguez Hertz. Errata to "Measure rigidity beyond uniform hyperbolicity: Invariant measures for Cartan actions on tori" and "Uniqueness of large invariant measures for $\Zk$ actions with Cartan homotopy data". Journal of Modern Dynamics, 2010, 4 (1) : 207-209. doi: 10.3934/jmd.2010.4.207
Aihua Fan, Jörg Schmeling, Weixiao Shen. $ L^\infty $-estimation of generalized Thue-Morse trigonometric polynomials and ergodic maximization. Discrete & Continuous Dynamical Systems, 2021, 41 (1) : 297-327. doi: 10.3934/dcds.2020363
Marcin Mazur, Jacek Tabor, Piotr Kościelniak. Semi-hyperbolicity and hyperbolicity. Discrete & Continuous Dynamical Systems, 2008, 20 (4) : 1029-1038. doi: 10.3934/dcds.2008.20.1029
Mauricio Poletti. Stably positive Lyapunov exponents for symplectic linear cocycles over partially hyperbolic diffeomorphisms. Discrete & Continuous Dynamical Systems, 2018, 38 (10) : 5163-5188. doi: 10.3934/dcds.2018228
Yuzhou Tian, Yulin Zhao. Global phase portraits and bifurcation diagrams for reversible equivariant Hamiltonian systems of linear plus quartic homogeneous polynomials. Discrete & Continuous Dynamical Systems - B, 2021, 26 (6) : 2941-2956. doi: 10.3934/dcdsb.2020214
Kishan Chand Gupta, Sumit Kumar Pandey, Indranil Ghosh Ray, Susanta Samanta. Cryptographically significant mds matrices over finite fields: A brief survey and some generalized results. Advances in Mathematics of Communications, 2019, 13 (4) : 779-843. doi: 10.3934/amc.2019045
Stéphane Gaubert, Nikolas Stott. A convergent hierarchy of non-linear eigenproblems to compute the joint spectral radius of nonnegative matrices. Mathematical Control & Related Fields, 2020, 10 (3) : 573-590. doi: 10.3934/mcrf.2020011
|
CommonCrawl
|
First measurement of the polarization observable $E$ and helicity-dependent cross sections in single $\pi^{0}$ photoproduction from quasi-free nucleons (1705.07342)
M. Dieterle, L. Witthauer, F. Cividini, S. Abt, P. Achenbach, P. Adlarson, F. Afzal, Z. Ahmed, C.S. Akondi, J.R.M. Annand, H.J. Arends, M. Bashkanov, R. Beck, M. Biroth, N.S. Borisov, A. Braghieri, W.J. Briscoe, S. Costanza, C. Collicott, A. Denig, E.J. Downie, P. Drexler, M.I. Ferretti-Bondy, S. Gardner, S. Garni, D.I. Glazier, D. Glowa, W. Gradl, M. Günther, G.M. Gurevich, D. Hamilton, D. Hornidge, G.M. Huber, A. Käser, V.L. Kashevarov, S. Kay, I. Keshelashvili, R. Kondratiev, M. Korolija, B. Krusche, A.B. Lazarev, J.M. Linturi, V. Lisin, K. Livingston, S. Lutterer, I.J.D. MacGregor, J. Mancell, D.M. Manley, P.P. Martel, V. Metag, W. Meyer, R. Miskimen, E. Mornacchi, A. Mushkarenkov, A.B. Neganov, A. Neiser, M. Oberle, M. Ostrick, P.B. Otte, D. Paudyal, P. Pedroni, A. Polonski, S.N. Prakhov, A. Rajabi, G. Reicherz, G. Ron, T. Rostomyan, A. Sarty, C. Sfienti, M.H. Sikora, V. Sokhoyan, K. Spieker, O. Steffen, I.I. Strakovsky, Th. Strub, I. Supek, A. Thiel, M. Thiel, A. Thomas, M. Unverzagt, Yu.A. Usov, S. Wagner, N.K. Walford D.P. Watts, D. Werthmüller, J. Wettig, M. Wolfes, L. Zana
May 20, 2017 nucl-ex
The double-polarization observable $E$ and the helicity-dependent cross sections $\sigma_{1/2}$ and $\sigma_{3/2}$ have been measured for the first time for single $\pi^{0}$ photoproduction from protons and neutrons bound in the deuteron at the electron accelerator facility MAMI in Mainz, Germany. The experiment used a circularly polarized photon beam and a longitudinally polarized deuterated butanol target. The reaction products, recoil nucleons and decay photons from the $\pi^0$ meson were detected with the Crystal Ball and TAPS electromagnetic calorimeters. Effects from nuclear Fermi motion were removed by a kinematic reconstruction of the $\pi^{0}N$ final state. A comparison to data measured with a free proton target showed that the absolute scale of the cross sections is significantly modified by nuclear final-state interaction (FSI) effects. However, there is no significant effect on the asymmetry $E$ since the $\sigma_{1/2}$ and $\sigma_{3/2}$ components appear to be influenced in a similar way. Thus, the best approximation of the two helicity-dependent cross sections for the free neutron is obtained by combining the asymmetry $E$ measured with quasi-free neutrons and the unpolarized cross section corrected for FSI effects under the assumption that the FSI effects are similar for neutrons and protons.
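For context, the helicity asymmetry $E$ is related to the helicity-dependent cross sections in the standard way (stated here for the reader; the authors' precise conventions should be taken from the paper itself): $$ E \;=\; \frac{\sigma_{1/2} - \sigma_{3/2}}{\sigma_{1/2} + \sigma_{3/2}}, \qquad \sigma_{1/2} = \sigma_{0}\,(1+E), \quad \sigma_{3/2} = \sigma_{0}\,(1-E), $$ where $\sigma_{0}$ denotes the unpolarized cross section.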
Helicity-dependent cross sections and double-polarization observable E in eta photoproduction from quasi-free protons and neutrons (1704.00649)
L. Witthauer, M. Dieterle, S. Abt, P. Achenbach, F. Afzal, Z. Ahmed, C.S. Akondi, J.R.M. Annand, H.J. Arends, M. Bashkanov, R. Beck, M. Biroth, N.S. Borisov, A. Braghieri, W.J. Briscoe, F. Cividini, S. Costanza, C. Collicott, A. Denig, E.J. Downie, P. Drexler, M.I. Ferretti-Bondy, S. Gardner, S. Garni, D.I. Glazier, D. Glowa, W. Gradl, M. Günther, G.M. Gurevich, D. Hamilton, D. Hornidge, G.M. Huber, A. Käser, V.L. Kashevarov, S. Kay, I. Keshelashvili, R. Kondratiev, M. Korolija, B. Krusche, A.B. Lazarev, J.M. Linturi, V. Lisin, K. Livingston, S. Lutterer, I.J.D. MacGregor, J. Mancell, D.M. Manley, P.P. Martel, V. Metag, W. Meyer, R. Miskimen, E. Mornacchi, A. Mushkarenkov, A.B. Neganov, A. Neiser, M. Oberle, M. Ostrick, P.B. Otte, D. Paudyal, P. Pedroni, A. Polonski, S.N. Prakhov, A. Rajabi, G. Reicherz, G. Ron, T. Rostomyan, A. Sarty, C. Sfienti, M.H. Sikora, V. Sokhoyan, K. Spieker, O. Steffen, I.I. Strakovsky, Th. Strub, I. Supek, A. Thiel, M. Thiel, A. Thomas, M. Unverzagt, Yu.A. Usov, S. Wagner, N.K. Walford, D.P. Watts, D. Werthmüller, J. Wettig, M. Wolfes, L. Zana
April 3, 2017 nucl-ex
Precise helicity-dependent cross sections and the double-polarization observable $E$ were measured for $\eta$ photoproduction from quasi-free protons and neutrons bound in the deuteron. The $\eta\rightarrow 2\gamma$ and $\eta\rightarrow 3\pi^0\rightarrow 6\gamma$ decay modes were used to optimize the statistical quality of the data and to estimate systematic uncertainties. The measurement used the A2 detector setup at the tagged photon beam of the electron accelerator MAMI in Mainz. A longitudinally polarized deuterated butanol target was used in combination with a circularly polarized photon beam from bremsstrahlung of a longitudinally polarized electron beam. The reaction products were detected with the electromagnetic calorimeters Crystal Ball and TAPS, which covered 98\% of the full solid angle. The results show that the narrow structure observed earlier in the unpolarized excitation function of $\eta$ photoproduction off the neutron appears only in reactions with antiparallel photon and nucleon spin ($\sigma_{1/2}$). It is absent for reactions with parallel spin orientation ($\sigma_{3/2}$) and thus very probably related to partial waves with total spin 1/2. The behavior of the angular distributions of the helicity-dependent cross sections was analyzed by fitting them with Legendre polynomials. The results are in good agreement with a model from the Bonn-Gatchina group, which uses an interference of $P_{11}$ and $S_{11}$ partial waves to explain the narrow structure.
Insight into the narrow structure in $\eta$-photoproduction on the neutron from helicity-dependent cross sections (1702.01408)
L. Witthauer, M. Dieterle, S. Abt, P. Achenbach, F. Afzal, Z. Ahmed, J.R.M. Annand, H.J. Arends, M. Bashkanov, R. Beck, M. Biroth, N.S. Borisov, A. Braghieri, W.J. Briscoe, F. Cividini, S. Costanza, C. Collicott, A. Denig, E.J. Downie, P. Drexler, M.I. Ferretti-Bondy, S. Gardner, S. Garni, D.I. Glazier, D. Glowa, W. Gradl, M. Günther, G.M. Gurevich, D. Hamilton, D. Hornidge, G.M. Huber, A. Käser, V.L. Kashevarov, S. Kay, I. Keshelashvili, R. Kondratiev, M. Korolija, B. Krusche, A.B. Lazarev, J.M. Linturi, V. Lisin, . Livingston, S. Lutterer, I.J.D. MacGregor, J. Mancell, D.M. Manley, P.P. Martel, V. Metag, W. Meyer, R. Miskimen, E. Mornacchi, A. Mushkarenkov, A.B. Neganov, A. Neiser, M. Oberle, M. Ostrick, P.B. Otte, D. Paudyal, P. Pedroni, A. Polonski, S.N. Prakhov, A. Rajabi, G. Reicherz, G. Ron, T. Rostomyan, A. Sarty, C. Sfienti, M.H. Sikora, V. Sokhoyan, K. Spieker, O. Steffen, I.I. Strakovsky, Th. Strub, I. Supek, A. Thiel, M. Thiel, A. Thomas, M. Unverzagt, Yu.A. Usov, S. Wagner, N.K. Walford, D.P. Watts, D. Werthmüller, J. Wettig, M. Wolfes, L. Zana
Feb. 5, 2017 nucl-ex
The double polarization observable $E$ and the helicity dependent cross sections $\sigma_{1/2}$ and $\sigma_{3/2}$ were measured for $\eta$ photoproduction from quasi-free protons and neutrons. The circularly polarized tagged photon beam of the A2 experiment at the Mainz MAMI accelerator was used in combination with a longitudinally polarized deuterated butanol target. The almost $4\pi$ detector setup of the Crystal Ball and TAPS is ideally suited to detect the recoil nucleons and the decay photons from $\eta\rightarrow 2\gamma$ and $\eta\rightarrow 3\pi^0$. The results show that the narrow structure previously observed in $\eta$ photoproduction from the neutron is only apparent in $\sigma_{1/2}$ and hence, most likely related to a spin-1/2 amplitude. Nucleon resonances that contribute to this partial wave in $\eta$ production are only $N1/2^-$ ($S_{11}$) and $N1/2^+$ ($P_{11}$). Furthermore, the extracted Legendre coefficients of the angular distributions for $\sigma_{1/2}$ are in good agreement with recent reaction model predictions assuming a narrow resonance in the $P_{11}$ wave as the origin of this structure.
Photon asymmetry measurements of $\overrightarrow{\gamma} \mathrm{p} \rightarrow \pi^{0} \mathrm{p}$ for E$_{\gamma}$=320$-$650 MeV (1606.07930)
S. Gardner, D. Howdle, M.H. Sikora, Y. Wunderlich, S. Abt, P. Achenbach, F. Afzal, P. Aguar-Bartolome, Z. Ahmed, J.R.M. Annand, H.J. Arends, K. Bantawa, M. Bashkanov, R. Beck, M. Biroth, N.S. Borisov, A. Braghieri, W.J. Briscoe, S. Cherepnya, F. Cividini, S. Costanza, C. Collicott, B.T. Demissie, A. Denig, M. Dieterle, E.J. Downie, P. Drexler, M.I. Ferretti-Bondy, L.V. Filkov, D.I. Glazier, S. Garni, W. Gradl, M. Günther, G.M. Gurevich, D. Hamilton, E. Heid, D. Hornidge, G.M. Huber, O. Jahn, T.C. Jude, A. Käser, S. Kay, V.L. Kashevarov, I. Keshelashvili, R. Kondratiev, M. Korolija, B. Krusche, J.M. Linturi, V. Lisin, K. Livingston, S. Lutterer, I.J.D. MacGregor, R. Macrae, J. Mancell, D.M. Manley, P.P. Martel, J.C. McGeorge, E.F. McNicoll, D.G. Middleton, R. Miskimen, C. Mullen, A. Mushkarenkov, A.B. Neganov, A. Neiser, A. Nikolaev, M. Oberle, M. Ostrick, R.O. Owens, P.B. Otte, B. Oussena, D. Paudyal, P. Pedroni, A. Polonski, S. Prakhov, A. Rajabi, J. Robinson, G. Rosner, T. Rostomyan, A. Sarty, S. Schumann, V. Sokhoyan, K. Spieker, O. Steffen, C. Sfienti, I.I. Strakovsky, B. Strandberg, Th. Strub, I. Supek, C.M. Tarbert, A. Thiel, M. Thiel, A. Thomas, M. Unverzagt, Yu.A. Usov, D.P. Watts, D. Werthmüller, J. Wettig, M. Wolfes, L. Witthauer, L. Zana
June 25, 2016 nucl-ex
High statistics measurements of the photon asymmetry $\mathrm{\Sigma}$ for the $\overrightarrow{\gamma}$p$\rightarrow\pi^{0}$p reaction have been made in the center of mass energy range W=1214-1450 MeV. The data were measured with the MAMI A2 real photon beam and Crystal Ball/TAPS detector systems in Mainz, Germany. The results significantly improve the existing world data and are shown to be in good agreement with previous measurements, and with the MAID, SAID, and Bonn-Gatchina predictions. We have also combined the photon asymmetry results with recent cross-section measurements from Mainz to calculate the profile functions, $\check{\mathrm{\Sigma}}$ (= $\sigma_{0}\mathrm{\Sigma}$), and perform a moment analysis. Comparison with calculations from the Bonn-Gatchina model shows that the precision of the data is good enough to further constrain the higher partial waves, and there is an indication of interference between the very small $F$-waves and the $N(1520) 3/2^{-}$ and $N(1535) 1/2^{-}$ resonances.
|
CommonCrawl
|